The vast streams of data produced by the use of automated digital services such as social media, email and mobile phones, also known as 'big data', have for some time been leveraged in the private sector to assist in tasks as diverse as logistics, targeted advertising and offering personalised multimedia content. More recently, these same data sources and methodologies have begun to be used to assist humanitarian and development organisations, allowing new ways to use data to implement, monitor and evaluate programmes and policies. The ability of such novel data sources to complement traditional data collection techniques such as household surveys and focus groups is clear: the data is collected passively, without the need for costly and potentially dangerous active data collection, which also avoids inaccuracies due to human error, bias or dishonesty. However, the use of big data for development is still relatively nascent, and questions remain over the ability of such sources to measure or approximate metrics of interest. Invariably, data sources such as social networking applications enjoy deeper penetration in developed economies and rely on expensive technologies such as smart phones and robust communications infrastructure. It has been noted that measurements of human dynamics based on such recent platforms can lead to strong biases, with worse implications for those with limited access to these digital platforms. In this paper we present an analysis of a data source which is undoubtedly 'big' yet represents one of the most established and pervasive long-distance communications networks in the history of mankind. The international postal network (IPN), established in 1874, is administered by a dedicated United Nations specialised agency: the Universal Postal Union (UPU). Due to regulatory reporting requirements and the capabilities of automated data capture technologies such as RFID tags, the records of individual postal items maintained by the UPU represent a rich record of human activity with unparalleled penetration, which can be expected to reflect individual-level behaviour, local, regional and national economic activity, and international economic relations. Network representations have emerged as an extremely powerful and general framework for analysing and modelling systems as diverse as transportation, biological processes, academic authorship and logistics, among others. Network science provides powerful tools for understanding such systems of large sets of coupled components with emergent behaviours, more generally known as complex systems. Previous work has explored flows of both a physical and a digital nature, where physical flows of goods and people and digital flows of information and communication have been studied extensively in order to better understand the way in which they affect the wealth, resilience and function of social systems on global, regional, national and sub-national scales. With our work we aim to address the general question of whether _structural network properties of different flow networks between countries can be used to produce proxy indicators for the socioeconomic profile of a country_.
A natural extension of a network in which edges between pairs of nodes represent a single kind of flow is a _multiplex_ model comprising several qualitatively different kinds of flow, each of which can be understood as a distinct layer. The advantage of a multiplex model is that the aggregation of several different network layers has been shown to be more informative than a single layer. This is particularly true if some layers are partially or wholly unobserved, or if one layer imposes a barrier to entry in the form of a cost for an edge to form. For example, flows of trade and flights rely on bilateral agreements and legal frameworks, as well as predictable demand, to be commercially viable. In contrast, personal communications flows such as email are more readily initiated, requiring only that at least one participant has the email addresses of all the others. Multiplexity, or the presence of multiple layers of interaction between the same entities, has been explored in a wide range of systems, from global air transportation to massive online multiplayer games. In an early study of multiple media usage and social ties in an academic organisation, it was found that multiplex ties (those which use multiple media) indicate a stronger bond. This has recently been evaluated empirically on networks with both geographical and social interactions, where people were found to share a stronger bond when observed to communicate through many different media. These findings support the intuition that a pair of nodes enjoy a stronger relationship if they are better connected across several diverse network layers. The multichannel exchange of information or goods offers a simple and reliable way of estimating tie strength, but it has not been applied to international networks of flows until now. In this work, we explore over four years of daily postal data records between 187 countries by comparing them to other global flow networks: those of a physical nature, such as the trade and migration networks, and digital networks. We show how the network properties of global flow networks can approximate critical socioeconomic indicators, and how network communities formed across physical and digital flow networks can reveal socioeconomic similarities, possibly indicating dependencies within clusters of countries. Real-time measurements of international flow networks can ultimately act as global monitors of wellbeing, with positive implications for international development efforts. Using knowledge about the way in which countries interact through flows of goods, people and information, we apply the principles of multiplexity theory to understand the strength of international ties and the network communities they form. In this section, we detail the methods used to perform our analysis and the various datasets, with a focus on the international postal network (IPN), which has not previously been described. A comprehensive review of multiplex network models can be found in the literature; in this work we apply a simple multiplex model, described next, to capture the multiple flow interactions.
In our model, we consider all six networks in our study as a collection of graphs $\mathcal{G} = \{G_1, \ldots, G_M\}$, where each graph $G_\alpha = (V_\alpha, E_\alpha)$ contains a set of nodes and edges, and $M$ is the total number of networks. This allows us to define the multiplex neighbourhood of a node as the union of its neighbourhoods on each single graph, $\mathcal{N}_i = \bigcup_{\alpha=1}^{M} \mathcal{N}_i^{\alpha}$, where $\mathcal{N}_i^{\alpha}$ is the neighbourhood of nodes to which node $i$ is connected on layer $\alpha$. The cardinality of this set can be considered as the node's global multiplex degree, or in other words the number of countries with which a country has exchanges: $k_i^{\mathrm{global}} = |\mathcal{N}_i|$. From the multiplex neighbourhood, we can also compute the weighted global degree of a node as $w_i^{\mathrm{global}} = \sum_{\alpha=1}^{M} \sum_{j \in \mathcal{N}_i^{\alpha}} w_{ij}^{\alpha}$; this is the sum of the weights of edges in the multiplex neighbourhood over each graph layer they appear on. We add an edge weight $w_{ij}^{\alpha}$ only if both $(i, j) \in E_\alpha$ and $(j, i) \in E_\alpha$ for network $G_\alpha$ in the collection. We only consider edges present in both directions because the global degree is ultimately a measure of tie strength and we want to consider only well-established flows between countries. This is common practice in other contexts where tie strength is of importance, such as in social networks. We then normalise the weighted global degree by the number of possible edges, $(N-1)M$, where $N$ is the total number of nodes and $M$ is the number of networks in the multiplex collection. We plot the cumulative degree distribution of both the weighted and unweighted global degrees in Fig. [fig:gdegs]. Networks are powerful representations of complex systems with a large degree of interdependence. However, in many such systems the network naturally partitions into communities made up of nodes that share dependencies with each other but share fewer with the remaining components. In the present context, communities are composed of groups of countries that share higher connectivity among themselves than with the rest of the network. If two countries appear in the same community in most of the six networks, this can be considered a greater level of interconnectedness, otherwise not visible from the single-network perspective. We formalise this idea as the _community multiplexity_ of a pair of countries, $m_{ij} = \sum_{\alpha=1}^{M} \delta(c_i^{\alpha}, c_j^{\alpha})$, where $c_i^{\alpha}$ is a discrete variable indexing the cluster of which country $i$ is a member on layer $\alpha$. If the two are equivalent for a given network, the level of community multiplexity increases by one; this is captured by the Kronecker delta function $\delta$, which evaluates the membership equivalency of the two nodes.
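The multiplex measures defined above are straightforward to compute once each layer is available as a graph. The sketch below is a minimal, illustrative implementation assuming each of the six flow networks is a weighted, directed networkx graph keyed by a layer name; the function and variable names, and the reading of the normalisation as $(N-1)M$, are our own assumptions rather than the authors' code.

```python
# Illustrative sketch of the multiplex measures defined above. Assumes a dict
# `layers` mapping a layer name (e.g. "post", "trade") to a weighted, directed
# networkx DiGraph whose nodes are country codes. Names are assumptions.
import networkx as nx

def multiplex_neighbourhood(layers, i):
    """Union of country i's neighbourhoods over all layers (either direction)."""
    neigh = set()
    for g in layers.values():
        if i in g:
            neigh |= set(g.successors(i)) | set(g.predecessors(i))
    return neigh

def global_degree(layers, i):
    """Unweighted global multiplex degree: number of partner countries on any layer."""
    return len(multiplex_neighbourhood(layers, i))

def weighted_global_degree(layers, i):
    """Sum of reciprocated edge weights over all layers, normalised by (N-1)*M."""
    n_nodes = len(set().union(*(set(g.nodes) for g in layers.values())))
    total = 0.0
    for g in layers.values():
        if i not in g:
            continue
        for j in g.successors(i):
            if g.has_edge(j, i):  # only count well-established, bidirectional flows
                total += g[i][j].get("weight", 0.0)
    return total / ((n_nodes - 1) * len(layers))

def community_multiplexity(partitions, i, j):
    """Number of layers on which countries i and j fall in the same community.
    `partitions` maps a layer name to a dict {country: community_id}."""
    return sum(1 for part in partitions.values()
               if i in part and j in part and part[i] == part[j])
```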
Having described our multiplex methodology, which has not previously been applied to international networks of flows, we now describe the six networks and the fourteen global socioeconomic indicators used in the core of our analysis. Although postal flows are understood to follow a distance-based gravity model, similarly to other networks describing flows, little is understood about the network properties of the postal network and how they relate to those of other global flow networks. The international postal network (IPN) is constructed using electronic data records of origin and destination for individual items sent between countries, collected by the Universal Postal Union (UPU) from 2010 to the present. Items are recorded on a daily basis, amounting to nearly 14 million records of items sent between countries. As one of the most developed communication networks on a global scale, it is a dense network with 201 countries and autonomous areas and 23k postal connections between them, with 64% of all possible postal connections established. The global volume of post has seasonal peaks, observable in Fig. [fig:vol]. Notably, postal activity has been on the rise since 2010, which can be accounted for by the parallel growth of e-commerce; this positions postal flows as a sustainable indicator of socioeconomic activity. In terms of daily activity, the mean relative number of daily items sent and received by countries during the period is shown in Fig. [fig:dact]. This can be highly dependent on the size of the population of a country, so we have normalised the volume by each country's population, using annual population statistics provided by the World Bank and collected by the United Nations Population Division. From the distribution of volume it becomes clear that the majority of countries send and receive a similar amount of post per capita, with exceptions on both ends where a few countries send and receive an exceptionally low or high number of items. Next we report on the degree distributions of both the weighted and unweighted postal graphs. The unweighted postal graph simply contains all directed edges present in the network, regardless of flow volume. The weighted graph, on the other hand, also includes the weight of connections in the graph. We weight the network by summing the total annual volumes of directed flow between two countries, averaged over years and normalised by the population of the country of origin. We then further normalise by the maximum weight in the network, resulting in a value between 0 and 1 and allowing us to compare values between networks. The weighted adjacency matrix of the top quartile of countries in terms of degree can be seen in Fig. [fig:matrix], with the US and UK having the largest numbers of postal partners. Prominent postal network countries have relatively high interaction with most of their partners, including interactions with lower-ranked countries. This is related to the degree assortativity within the postal network, discussed in the following section.
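The weighting scheme just described (annual directed volumes averaged over the years covered, normalised by the origin country's population and then rescaled by the network-wide maximum into [0, 1]) can be sketched as follows. This is an illustrative reading of the procedure, not the authors' code; the data layout, the column names and the use of a single population figure per country are assumptions.

```python
# Sketch of the per-capita, max-normalised edge weighting described above.
# Assumes a pandas DataFrame `flows` with columns origin, destination, year,
# items, and a dict `population` of country populations. Names are assumptions.
import pandas as pd

def weighted_edges(flows, population):
    annual = flows.groupby(["origin", "destination", "year"])["items"].sum()
    mean_flow = annual.groupby(level=["origin", "destination"]).mean()  # average over years
    weights = {}
    for (o, d), volume in mean_flow.items():
        if o in population and population[o] > 0:
            weights[(o, d)] = volume / population[o]  # per capita in the origin country
    w_max = max(weights.values()) if weights else 1.0
    return {edge: w / w_max for edge, w in weights.items()}  # rescaled into [0, 1]
```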
Both weighted and unweighted degree distributions are shown in Fig. [fig:degs] as the complementary cumulative distribution function (CCDF). We can see in Fig. [fig:degs]b that the in- and out-degrees are relatively balanced in both instances and that about 50% of countries have more than 100 postal partners. The weighted degree in Fig. [fig:degs]a follows a similar pattern, which means that countries tend to interact in proportion to the number of their postal partners. In the following section, we compare the postal network properties to those of other flow networks. This work builds upon previous efforts using global flow networks, presenting novel data sources for international development efforts such as the IPN and offering a holistic view of several distinct flow networks. We consider five networks which have previously been studied independently, along with the IPN. We now describe these networks and compare their network properties in the following section.

The world trade network. The trade network is constructed from records maintained by the UN Statistics Division in the Comtrade database and provided by the Atlas project, and contains the number and value of products traded between countries, classified by commodity class.

The global migration network. This is compiled from bilateral flows between 196 countries as estimated from sequential stock tables. It captures the number of people who changed their country of residence over a five-year period. This reflects _migration transitions_ and not short-term movements. The data is provided by the Global Migration project.

The international flights network. The flights data is collected by 191 national civil aviation administrations and compiled by the International Civil Aviation Organisation (ICAO). These tables detail, for all commercial passenger and freight flights, the country of origin and destination and the number of flights between them.
The IP traceroute network. This city-to-city geocoded dataset is built from traceroutes, in the form of directed IP-to-IP edges, collected in a crowdsourced fashion by volunteers through the DIMES project. The project relies on data from volunteers who have installed the measurement software, which collects the origin, destination and number of IP-level edges discovered daily. We aggregate this data on a country-to-country basis and use it to construct an undirected internet topology network, weighted by the number of IPs discovered and normalised by population, as for all other networks. The data collection methods are described in detail in the founding paper of the project. The global mapping of the internet topology provides insight into international relationships from the perspective of the digital infrastructure layer.

The social media density network. This is constructed from aggregated digital communication data from the Mesh of Civilizations project, where Twitter and Yahoo email data are combined to produce an openly available density measure of the strength of digital communication between nations. This measure is normalised by the population of internet users in each country and is thus well aligned with the rest of the networks we use. It also blends data from two distinct sources and thus provides greater independence from service bias. Because the study considers tie strength, it only includes bi-directed edges in the two platforms, where there has been a reciprocal exchange of information, and therefore this network is undirected.

In the following analysis we compare these networks and use multiplexity theory to extract knowledge about the strength of connectivity across them. We will distinguish between single-layer and multiplex measures, which will allow us to observe in greater depth the international relationships and the potential for using global flow networks to estimate the wellbeing of countries in terms of a number of socioeconomic indicators (summarised in Table [tab:indi]).

[Table [tab:indi]: description and source of the fourteen indicators we try to approximate using flow network measures.]

In order to understand in context the multiplex relationships of countries through flows of information and goods, we first compare all flow networks together. We then present their respective and collective ability to approximate crucial socioeconomic indicators, and finally perform a network community analysis of individual networks and their multiplex communities, where the most socioeconomically similar countries can be found.

Table [tab:nets]: network properties of the six flow networks.

| Network | Weight | Years | Nodes | Edges | Mean degree | Assortativity | Density | Clustering coefficient |
|---|---|---|---|---|---|---|---|---|
| Post | postal items | 2010-15 | 201 | 22,280 | 110.85 | -0.26 | 0.55 | 0.79 |
| Trade | export value | 2007-12 | 228 | 30,235 | 132.6 | -0.39 | 0.58 | 0.84 |
| Migration | migrants | 2005-10 | 193 | 11,431 | 59.22 | -0.33 | 0.31 | 0.68 |
| Flights | flights | 2010-15 | 223 | 6,425 | 28.81 | -0.1 | 0.13 | 0.49 |
| IP | IPs | 2007-11 | 225 | 9,717 | 43.19 | -0.42 | 0.19 | 0.6 |
| SM | density | 2009 | 147 | 10,667 | 145.13 | -0.02 | 0.98 | 0.99 |

Although each of the five networks described above, unlike the international postal network (IPN), has previously been studied separately, there has not been a comparative analysis of all of them.
In Table [tab:nets], we list the network properties of all six networks separately. The number of nodes, or countries, can exceed 195 due to differing lists of member states providing statistics to each authority. The weights, although distinct for each network, are in every case a measure of the volume flowing between areas. While there are small discrepancies between the years covered by each network, most networks cover a five-year period, with the exception of the social media network, which is from a single year. The volume of interaction between two countries is therefore averaged over the number of years for each network. We weight all networks by normalising the raw volume of interaction described above by the population of each respective country of origin and rescaling all weights across networks to the same range [0, 1] by dividing by the maximal weight, as we did for the postal network in the previous section. We compute the out-degree for each network in the standard way, as for the postal network, as well as the degree assortativity (the Pearson correlation between the degrees of two connected countries), the network density and the clustering coefficient. The assortativity coefficient determines to what extent nodes in the network have mixing patterns determined by their degree. Positive assortativity means that nodes with high degree tend to connect to other nodes with high degree, whereas negative assortativity means that nodes with high degree tend to connect with others of lower degree, which is the case for most of the six networks. Although all networks differ in size and average degree, they have relatively high clustering coefficients, reflecting a general tendency for countries to cluster together in global networks. This clustering, however, is not based on the importance of a node (its degree), since the assortativity coefficients for all networks are low or negative, suggesting that global networks are disassortative and therefore higher-degree nodes tend to connect to lower-degree nodes.
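The per-layer statistics reported in Table [tab:nets] correspond to standard graph measures and can be reproduced along the following lines. This is a hedged sketch (the exact conventions used by the authors, for instance directed versus undirected clustering, are not stated), with illustrative names.

```python
# Summary statistics for a single layer, mirroring the columns of Table
# [tab:nets]: nodes, edges, mean degree, degree assortativity, density and
# clustering coefficient. Assumes a directed, weighted networkx DiGraph.
import networkx as nx

def layer_summary(g):
    n = g.number_of_nodes()
    return {
        "nodes": n,
        "edges": g.number_of_edges(),
        "mean_degree": g.number_of_edges() / n,          # directed: E / N
        "assortativity": nx.degree_assortativity_coefficient(g),
        "density": nx.density(g),
        "clustering": nx.average_clustering(g.to_undirected()),
    }
```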
Fig. [fig:comp] presents a comparative analysis of the six networks, which we refer to for short as post, trade, ip, mig, sm and fly. We use the Jaccard coefficient to compute the overlap of edges in Fig. [fig:comp]a, namely the number of edges that exist on both networks over the union of edges on the two networks. The highest Jaccard overlap is between the postal and trade networks, the two densest networks. The remaining networks, however, do not overlap strongly in terms of edges, which implies that each distinct network layer provides a non-trivial and complementary view of how countries connect. The correlation between edge weights in Fig. [fig:comp]b reveals that the volume of flow of goods, people and information is correlated for those edges which exist on both networks. A notable exception is the digital communications network (sm), whose density is entirely uncorrelated with any other network; this means that countries likely connect in unexpected ways on social media and email. When considering the degree of a country as an indicator of its position in the network, we find that there are high correlations between the in and out positions of countries in Fig. [fig:comp]c and Fig. [fig:comp]d. Although lower, the social media network degrees are also correlated with the others. We should note that this is likely due to the smaller overlap between edges, but for the nodes present across networks we find that there is a strong correspondence between their positions in the different networks.
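The edge-overlap comparison in Fig. [fig:comp]a is a plain Jaccard coefficient over the two edge sets. A minimal sketch, treating edges as unordered country pairs (whether the authors compared directed or undirected edge sets is not stated, so this is an assumption):

```python
# Jaccard overlap of the edge sets of two layers: |E_a ∩ E_b| / |E_a ∪ E_b|.
def edge_jaccard(g_a, g_b):
    ea = {frozenset(e) for e in g_a.edges()}
    eb = {frozenset(e) for e in g_b.edges()}
    union = ea | eb
    return len(ea & eb) / len(union) if union else 0.0
```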
Next, we explore how well different degree metrics approximate the socioeconomic indicators described above. Timely statistics on key metrics of socioeconomic status are essential for the provision of services to societies, in particular to marginalised populations. The motivation for this measurement varies from social resilience in the event of natural or man-made disasters to ensuring social rights such as education and access to information. While national governments typically administer their territories and allocate resources in terms of sub-national divisions, international organisations such as the United Nations and the World Bank, as well as regional organisations and blocs such as the Economic Commission for Latin America and the Caribbean and the African Union, invariably partition populations under nation states. In this context, the nation state is the primary geographical entity considered for funding, planning and allocation of resources for development. Despite the importance of accurate statistics to quantify the state of a country and its progress towards favourable socioeconomic outcomes, regular and reliable measurement is difficult and costly, particularly in low-income countries. With this in mind, in this section we compare the positions of countries within the different networks discussed previously to the values of several socioeconomic indicators. Fig. [fig:cormatrix] shows the Spearman rank correlation between the network degrees of the six networks (in- and out-degree, and weighted in- and out-degree) and various socioeconomic indicators: GDP, life expectancy, corruption perception index (CPI), internet penetration rate, happiness index, Gini index, economic complexity index (ECI), literacy, poverty, emissions, fixed phone line penetration, mobile phone users, and the human development index. These indicators and their significance for the international development agenda are described in detail in the data section (see Table [tab:indi]). For each of the six networks, we compute the network degree, defined as the number of neighbours for both incoming and outgoing connections where directed; this reflects how well connected a country is in a particular network. We also take into account the amount of connectivity by computing the weighted incoming and outgoing degrees on each network, defined as the sum of the normalised flows from all neighbours and reflecting the volume of incoming and outgoing flows. In addition to these standard single-layer network metrics, we define and compute the _global degree_ of a country, which takes into account connectivity across all networks. All degrees of single networks and the global degree appear vertically in Fig. [fig:cormatrix] and all indicators appear horizontally. In general, the weighted outgoing degrees perform best among the single networks for the postal, trade, IP and flight networks. An exception among the physical flow networks is the migration network, where the incoming migration degree is more correlated with the various indicators. The best-performing degree overall is the global degree. This suggests that looking at how well connected a country is in the global multiplex can be more indicative of its socioeconomic profile than looking at single networks. GDP per capita and life expectancy are most closely correlated with the global degree, closely followed by the postal, trade and IP weighted degrees. This shows a relationship between national wealth and the flow of goods and information. The corruption perception index (CPI), however, is most positively correlated with the weighted out-degrees of the postal and trade networks, followed by the IP network, but not so strongly with their unweighted out-degrees, similarly to its relationship with the happiness index. This signifies that less corrupt and happier countries have greater outflows in those respects. On the other hand, the Gini index of inequality is distinctly most negatively correlated with the flight network, which means that countries with greater inequality have fewer incoming and outgoing flight connections. The ECI is highly correlated with most network degrees alike, and especially with the global, trade, IP and postal degrees. Literacy, education and mobile phone users per capita were more weakly correlated than other indicators, which means that there may be better predictor variables, beyond the scope of this work, for those indicators. Fixed phone line households, internet penetration and CO2 emissions, however, are positively correlated with the global degree, followed by the postal and IP degrees. This indicates the importance of global connectivity across networks with respect to these factors.
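The correlations in Fig. [fig:cormatrix] are rank correlations computed over the countries present in both a given network and a given indicator. A minimal sketch (dictionary-based, with illustrative names):

```python
# Spearman rank correlation between a degree measure and a socioeconomic
# indicator, over the countries present in both. `degree` and `indicator`
# are dicts keyed by country code; names are illustrative.
from scipy.stats import spearmanr

def degree_indicator_correlation(degree, indicator):
    common = sorted(set(degree) & set(indicator))
    rho, p_value = spearmanr([degree[c] for c in common],
                             [indicator[c] for c in common])
    return rho, p_value, len(common)
```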
Similarly to GDP, the rate of poverty of a country is best represented by the global degree, followed by the postal degree. The negative correlation indicates that the more impoverished a country is, the less well connected it is to the rest of the world. Finally, one of the indicators most strongly correlated with the various degrees is the human development index (HDI): low human development (a high rank) is most strongly negatively correlated with the global degree, followed by the postal, trade and IP degrees. This shows that high human development (a low rank) is associated with high global connectivity and activity in terms of incoming and outgoing flows of information and goods. One notable observation is that the IP, postal and trade weighted out-degrees all have similar correlation patterns with the various indicators; the commonality between these networks is that they express the flow of resources from a country. Another observation is that weighted social media and migration outflows are weak predictors of the explored indicators. Because most indicators are related to each other (e.g., high GDP indicates low poverty, and high HDI indicates happiness), when a degree is a predictor of one, it tends to be a good predictor of the others. In this section we have shown that network science can provide reliable and easy-to-compute approximations of various indices, and that the connectivity between countries determines their position in global flow networks, which relates to their socioeconomic outcomes. Next, we will look at the community structure of countries across networks and evaluate their community multiplexity, to show that countries with similar socioeconomic profiles tend to cluster together, much like in social networks. In the previous section we related network measures to various socioeconomic indicators, showing that metrics such as the network degree can be used to estimate wellbeing at a national level. In this section, we further examine the relationship between countries, the way in which they cluster into communities across networks, and the relationship of those communities to the various socioeconomic indicators. We use the Louvain modularity optimisation method for community detection in each individual network, which takes into account the tie strength of relationships between countries and finds the optimal split of the international network into weakly interconnected groups. This returns between 4 and 6 communities for each network, the geographical distribution of which is shown in Fig. [fig:coms]. Although communities seem to be strongly driven by geography in the physical flow networks, this is not the case in the digital networks, where communities are geographically dispersed. This is an indication of the difference in the way countries connect through post, trade, migration and flights rather than on the IP and social media networks. However, _what does it mean for two countries to be both members of the same network community?_ Common community membership indicates a level of connectedness between two countries which is beyond that expected at random for the network. It is often observed that nodes in the same communities share many similar properties; it can therefore be expected that _pairs of nodes which share multiple communities across networks_ are even more similar.
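The per-layer partitions underlying this analysis can be obtained with any standard modularity-optimisation implementation. The sketch below uses the python-louvain package; the choice of library and the conversion to an undirected weighted graph are our assumptions, since the paper does not specify an implementation.

```python
# Louvain community detection on each layer, producing the per-layer
# partitions used by the community-multiplexity measure defined earlier.
import community as community_louvain  # pip install python-louvain

def detect_layer_communities(layers):
    partitions = {}
    for name, g in layers.items():
        und = g.to_undirected()
        # best_partition maximises modularity and returns {node: community_id}
        partitions[name] = community_louvain.best_partition(und, weight="weight")
    return partitions
```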
In this work, we measure the overlap in pairwise community membership between pairs of countries across our six networks. Our hypothesis is that countries that are paired together in communities across more networks are more likely to be socioeconomically similar. We measure similarity here as the absolute difference between the values of each indicator from the previous section for two countries, and plot this against their community multiplexity. For example, the United States has an average life expectancy of 70 years whereas Afghanistan has an average life expectancy of 50; the absolute difference between the two is 20, which represents low similarity when compared to the United Kingdom's life expectancy of 72 for this indicator. In Fig. [fig:diff], we can observe the variations in similarity for countries with different levels of community multiplexity. What is immediately striking is that countries that share a maximal number of communities, and therefore exhibit the greatest community multiplexity, have the smallest margin of difference across all indicators. This suggests that _countries with the highest community multiplexity have a very similar socioeconomic profile_. This is confirmed by a two-sample Kolmogorov-Smirnov test between the distributions of differences in each indicator for pairs sharing different numbers of communities. Although the KS statistic is lower between the groups sharing 0 and 1 communities (approximately 0.1 for all indicators, p-value < 0.01), it is very high between the groups sharing 1 and 6 communities (0.4 and above, p-value < 0.01), except for mobile phone penetration. Further to this observation, for most indicators the level of community multiplexity is strongly significant: _the higher the community multiplexity between two countries, the smaller the difference between their socioeconomic profiles_. There are notable exceptions to this, such as the mobile phone penetration ratio, where it appears that beyond the highest level of multiplexity all other countries are relatively similar, with low variation even for those pairs of countries which share no communities. For all other indicators, such as GDP, literacy ratio, HDI and internet penetration, there is a dramatic increase in similarity past a community multiplexity of 3. Ultimately, these similarities can be used to estimate the wellbeing of a country for which it is unknown, from that of the countries with which it shares communities.
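The statistical comparison above (absolute indicator differences between country pairs, grouped by community multiplexity and compared with a two-sample Kolmogorov-Smirnov test) can be sketched as follows; the data structures and names are illustrative assumptions.

```python
# Group |indicator_i - indicator_j| for country pairs by their community
# multiplexity, then compare two multiplexity levels with a two-sample KS test.
from scipy.stats import ks_2samp

def differences_by_level(pairs_by_level, indicator):
    return {level: [abs(indicator[a] - indicator[b])
                    for a, b in pairs
                    if a in indicator and b in indicator]
            for level, pairs in pairs_by_level.items()}

def compare_levels(pairs_by_level, indicator, level_a, level_b):
    diffs = differences_by_level(pairs_by_level, indicator)
    stat, p_value = ks_2samp(diffs[level_a], diffs[level_b])
    return stat, p_value
```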
Big data is often associated with real-time data captured through the internet or social networks. However, the digital divide makes access to big data insights for development more challenging in the least developed and in many developing and emerging countries. Can we rely on other networks to overcome these critical data gaps, in view of better measuring and monitoring developmental progress? This is particularly important following the United Nations' adoption of the Sustainable Development Goals (SDGs) in September 2015, made up of 17 goals, 169 targets and almost 200 universal indicators, each of them calling for regular and increasingly disaggregated monitoring in every country during the 2016-30 period. This commitment invites a nuanced discussion on the nature and importance of measurement, inference and triangulation of data sources. This discussion is particularly prescient in the face of complex, intertwined developmental challenges in an age of increased globalisation, economic interdependence and climate change. The work presented above has clearly shown the value of measuring, comparing and combining metrics of global connectivity across six different global networks in order to approximate socioeconomic indicators and to identify network communities with similar connectivity profiles. We have shown how both global digital and physical network flows can contribute to support a better monitoring of SDG indicators, as illustrated by the high correlation between internet and postal flows on the one hand and an extensive list of socioeconomic indicators on the other. We also note the considerable potential, exposed here, for future applications of postal flow data. While we have restricted our analysis to country-level relations, postal flows allow for socioeconomic mapping at a sub-national level, which can inform development programmes on a practical level. An additional dimension to be explored, which is beyond the scope of this paper, is temporal analysis; combined with the multiplex network model presented above, it could provide early warning of economic shocks and their propagation. Interestingly, despite the ease of _digital_ interactions and the subsequent evidence that 'distance is dead', _physical_ networks, particularly the global postal, flight and migration networks, are still stronger candidates for proxy variables in the case of missing data than digital networks such as the internet or social media. These networks not only reach populations excluded from access to digital communications, but are also associated with the highest number of country pairs sharing relatively similar socioeconomic patterns, in turn opening numerous ways of completing missing data with proxy variables. In the digital era, greater granularity and frequency of analysis and monitoring of the SDGs can, paradoxically, be achieved through data from global physical networks. We expect that the value of digital communication networks as proxies will increase as they mature, expand and become more accessible. In the near future, both physical and digital networks will need to be combined to optimise monitoring efforts. In that sense, the emergence of the Internet of Things (IoT) could play a critical role by further blurring the frontiers between the digital and physical worlds.

References:
Cardillo A, Zanin M, Gomez-Gardenes J, Romance M, Garcia del Amo A J and Boccaletti S. Modeling the multi-layer nature of the European air transport network: resilience and passengers re-scheduling under random failures. The European Physical Journal Special Topics 215(1): 23-33, 2013.
Grady D, Brune R, Thiemann R, Theis F and Brockmann D. Modularity maximisation and tree clustering: novel ways to determine effective geographic borders. Optimisation and its Applications: Handbook of Optimisation in Complex Networks (57), 2011.
Guimera R, Mossa S, Turtschi A and Amaral L A N. The worldwide air transport network: anomalous centrality, community structure and cities' global roles. Proceedings of the National Academy of Sciences 102(22): 7794-7799, 2005.
Hristova D, Musolesi M and Mascolo C. Keep your friends close and your Facebook friends closer: a multiplex network approach to the analysis of offline and online social ties. Proceedings of the International Conference on Weblogs and Social Media (ICWSM'14), 2014.
Rutherford A, Cebrian M, Rahwan I, Dsouza S, McInerney J, Naroditskiy V, Venanzi M, Jennings N R, deLara J R, Wahlstedt E and Miller S U. Targeted social mobilisation in a global manhunt. PLoS ONE (8), 2013.
Tufekci Z. Big questions for social media big data: representativeness, validity and other methodological pitfalls. Proceedings of the International Conference on Weblogs and Social Media (ICWSM'14), 2014.
Universal Postal Union, Electronic Postal Services Program. Measuring postal e-services development. Bern, Switzerland, 2012. http://www.upu.int/uploads/tx_sbdownloader/studypostaleservicesen.pdf (retrieved 8 January 2016).
Abstract: The digital exhaust left by flows of physical and digital commodities provides a rich measure of the nature, strength and significance of relationships between countries in the global network. With this work, we examine how these traces and the network structure can reveal the socioeconomic profile of different countries. We take into account multiple international networks of physical and digital flows, including the previously unexplored international postal network. By measuring the position of each country in the trade, postal, migration, international flights, IP and digital communications networks, we are able to build proxies for a number of crucial socioeconomic indicators such as GDP per capita and the human development index ranking, along with twelve other indicators used as benchmarks of national wellbeing by the United Nations and other international organisations. In this context, we also propose and evaluate a global connectivity degree measure, applying multiplex theory across the six networks, that accounts for the strength of relationships between countries. We conclude with a multiplex community analysis of the global flow networks, showing how countries with shared community membership over multiple networks have similar socioeconomic profiles. Combining multiple flow data sources into global multiplex networks can help understand the forces which drive economic activity on a global level. The ability to infer such proxy indicators in a context of incomplete information is extremely timely in light of recent discussions on the measurement of indicators relevant to the Sustainable Development Goals.
* * *
We live in the era of Twitter. From the shenanigans of pop stars and actors to enduring political transformations, everything is being transacted on microblogging services. Nonetheless, fundamental questions remain unanswered. We know, for instance, that discussions around certain topics "go viral" whereas other topics die an early death: the network propagates some ideas, and some make no headway. In view of the enormous influence of online social networks (OSNs), understanding the mechanics of these systems is critical. Characterizing the properties of popular and non-popular topics is of surpassing importance to our understanding of how these complex networks are shaping our world. In this paper we present a large-scale measurement study that attempts to describe and explain the processes that animate microblogging services. We study a large set of popular and non-popular topics derived from a comprehensive data set of tweets and user information taken from Twitter. A key strength of our study is that we observe both popular and not-so-popular topics. This allows us to hypothesize about the temporal and spatial behavior of popular topics and to support our hypotheses by showing that non-popular topics display contrary behavior. Note that we use the more general term _popular_ rather than the more specific term _viral_. This is to make a clear distinction between those topics that achieve popularity because of processes and situations that lie _outside_ the network and those whose popularity can be attributed to the dynamics that take place within the network. We reserve the term viral for those topics whose vast popularity is a product of the social network's internal dynamics; these topics could not have gained popularity in the pre-OSN era unless traditional news media decided to promote them. Our study does not focus on these kinds of topics in particular, because we intend to study the entire ecosystem. Our work emphasizes the structural aspects of topic spread. We give the semantic aspect its due importance in the process of topic identification, and then proceed to study the fundamental temporal and spatial aspects of the spread of topics. In particular, we study topic movement over two interrelated spatial dimensions: the topology of the Twitter network as formed by "follower" and "following" relationships, and the geospatial embedding of that network in the map of the world. Our study spans several aspects of spatial diffusion, but our primary focus is on characterizing the temporal and spatial underpinnings of popularity. We focus on three important aspects, as described in the sequel. First, in Section [sec:initiators], we study how topic initiators influence the popularity of a topic, and make the following observations: *Hypothesis 1.* Twitter is a partially democratic medium in the sense that popular topics are generally started by users with high numbers of followers (we call them celebrities); however, for a topic to become popular it must be taken up by non-celebrity users. *Corollary 1a.* Regions with large user bases or with large numbers of heavily followed celebrities and news sources dominate Twitter. Second, in Section [sec:topology], we study the effect of topology and the dynamics of topic spread on popularity. The primary objects of study to this end are the subnetworks formed by the users discussing each topic.
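A minimal sketch of the kind of topic subgraph studied in the following sections: the subgraph of the follower network induced by the users who tweeted about a topic on a given day, together with the sizes of its connected components. The graph representation and the names used here are illustrative assumptions, not the authors' code.

```python
# Induce the daily "topic subgraph" from the follower network and list the
# sizes of its (weakly) connected components, largest first.
import networkx as nx

def topic_subgraph(follower_graph, active_users):
    """Subgraph of the follower DiGraph induced by users tweeting on the topic."""
    return follower_graph.subgraph(active_users)

def component_sizes(follower_graph, active_users):
    sub = topic_subgraph(follower_graph, active_users)
    return sorted((len(c) for c in nx.weakly_connected_components(sub)),
                  reverse=True)
```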
While it is known that the Twitter network, like most large OSNs, contains a giant connected component, a key finding is that the subgraph of users talking about a popular topic on a particular day always contains a giant connected component comprising most of the nodes (users) of the subgraph, whereas the subgraphs of non-popular topics tend to be highly disconnected. To summarize, we make the following observations: *Hypothesis 2.* Most of the people talking about a popular topic on a given day tend to form a large connected subgraph (giant component), while unpopular topics are discussed in disconnected clusters. *Hypothesis 2a.* The giant component forms when many tightly clustered sets of users discussing the topic merge. Finally, we study the impact of geography on popularity by partitioning the Twitter network according to regional divisions and studying the behavior of popular and non-popular topics. *Hypothesis 3.* Popular topics cross regional boundaries while unpopular topics stay within them. The evidence for this observation is presented in Section [sec:geography]. Apart from the highlights mentioned above, we review related work in Section [sec:related]. We describe the various methodological issues that needed to be surmounted to perform our study in Section [sec:methodology]. Section [sec:conclusions] concludes the paper with a discussion of the implications of our observations for different aspects of the OSN sphere. Leskovec, Backstrom and Kleinberg's seminal work on the evolution of topics in the news sphere was the starting point for this paper. They studied how the growth of one topic affects the growth of other topics in the blogosphere; they identified and tracked a small number of popular threads, and showed that the growth of the number of posts on a thread negatively impacts the growth of other threads. The basic question that arose on reading that work was this: can the nuances of the temporal evolution of topics be explained by a more thorough study of their spatial evolution? Working with a data set taken from Twitter, we were able to extract the high level of structural and geographical information about the actors of the process that has allowed us to answer this question in the affirmative. This allows us to challenge the line of research that studies only the temporal evolution of topics, or seeks to explain this evolution on the basis of content. Following the paper cited above, there has been growing interest in understanding how information and ideas propagate on OSNs. A pioneering study of these phenomena on Twitter was conducted by Kwak et al., where several aspects of topic diffusion were studied. Of particular relevance to our work was their study of the topological properties of retweet trees. Since our data set is built on the data set they used (cf. Section [sec:methodology] for details), our work can easily be compared. Our major contribution is that we work with a more general notion of a topic and that we work with an ecosystem of topics. Also, our work views the diffusion of topics through the lens of what we call "topic graphs" (cf. Section [sec:topology]), which are a significant generalization of retweet trees. Retweet cascades have also been studied specifically for the case of tweets with URLs in them, by Galuba et al. and by Rodrigues et al.
There is a line of work that seeks to uncover the structural processes behind topic diffusion by studying cascade models (e.g. Ghosh and Lerman, and Sadikov et al.), but we feel this is a limited view of the effect of topology, and we try to view the network structure in a more complex way. In another work relevant to ours, Sousa et al. investigated whether user interactions on Twitter are based on social ties or on topics, by tracking replies and message exchange on Twitter; their study is focused on only three topics, namely sports, religion, and politics. More recently, Romero et al. studied topic diffusion mechanisms on Twitter by focusing on topics identifiable by hashtags. They study the probability of topic adoption based on repeated exposure, provide quantitative evidence of a contagion phenomenon made more complex than normal studies of virus-like phenomena by the existence of multiple topics, and briefly report on the graph structure of topic networks. One major limitation of this work, we found, is that only a very small fraction (approx. 10%) of tweets are tagged with hashtags (see Table [table:dataset] in Section [sec:methodology]). Our methodology of using a natural language processor (OpenCalais) allows us to study topic diffusion on a much larger scale than in that work, since our topic choices are not limited to hashtags. On the geographical front, Yardi et al. examine information spread along the social network and across geographic regions by analyzing tweets related to two specific events happening at two different geographic locations. As an aside, we mention that Krishnamurthy et al. characterized the geographical properties of the Twitter user base in 2008. On a more general level, we note that it is implicitly assumed that the attention of users on a platform like Twitter is elastic but bounded (see e.g.), and hence the diffusion process is essentially a competitive one, even if it is not explicitly adversarial. The study of competitive diffusion has largely revolved around the application domain of viral marketing, where there is competition between different products. Budak et al. consider the problem of diffusion of misinformation, where opposing ideas are competing and propagating in a social network. The study of processes by which rumor spread may be combated is another example of competitive diffusion. Our work provides an important input into this area of study, articulating the properties of a complex system that requires extensive study to model correctly and comprehensively. We used a portion of the 'tweet7' data set crawled by Yang et al. This data set contains 467 million tweets, collected over a period of seven months, from June to December 2009. The tweets emanated from over 17 million users and are estimated to constitute about 20-30% of all tweets posted during that time period. For our analysis, we used the first three months' tweets of this data set.

[Table [table:dataset]: data set summary.]

In order to establish the hypothesis, we investigated a geographical property of the cumulative evolving graphs defined in Section [sec:topology].
For each topic we determined the fraction of edges in the cumulative evolving graph that went from one region to another; that is, we studied the fraction of edges $(u, v)$ such that $u$ belongs to one region and $v$ is a user from another region. The evolution of this fraction for three topics, one highly popular, one with a medium level of popularity and one with a low level of popularity (as defined in Section [sec:topology]), is shown in Figure [fig:geoproperties]. We observe that the highly popular topic "barack" shows a high fraction of edges crossing regional boundaries throughout its evolution, ranging between 0.74 and 0.81. On the other hand, the topic with medium popularity, "cambridge", has a low fraction of edges crossing regions. It is noteworthy that an increase in the popularity of the topic "cambridge" is accompanied by an increase in the fraction of edges crossing regional boundaries; this further supports Hypothesis 3. The topic "hamburg", which has low popularity, shows a very small fraction of edges crossing regional boundaries. To examine this phenomenon at an aggregate level, we took 40 topics from each category (as we had done in Section [sec:topology]) and computed the mean and median of the fraction of edges crossing regional boundaries over the entire period in our window during which the topic is tweeted about. We plotted a histogram using five different ranges for this fraction (see Figure [fig:geomeanmedian]). This histogram clearly shows that the most popular topics tend to have a very large fraction of edges crossing regional boundaries, while the least popular topics have cumulative graphs that generally evolve within regional boundaries, with small fractions of edges going to other regions.
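The geographic measure used above, the fraction of edges in a topic graph whose endpoints lie in different regions, reduces to a simple count. A sketch with illustrative names (the handling of users with unknown location is our assumption):

```python
# Fraction of edges in a topic's cumulative evolving graph that cross regional
# boundaries. `region` maps a user id to a region label; edges with an
# endpoint of unknown region are skipped.
def cross_region_fraction(topic_graph, region):
    counted = crossing = 0
    for u, v in topic_graph.edges():
        if u in region and v in region:
            counted += 1
            if region[u] != region[v]:
                crossing += 1
    return crossing / counted if counted else 0.0
```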
The studies we have presented in this paper have wide-ranging implications, some of which, we hope, will be discovered in the future. For now we present a brief discussion of the areas on which we feel our results may have an impact. Perhaps the most important implication pertains to the role and impact of highly influential users (and consequently of highly influential geographies). The rise of OSNs has been accompanied by a triumphal narrative of the democratization of communication through technology, and while it is true that Twitter and other OSN platforms have played an important role in giving voice to individuals who might otherwise find it difficult to speak to an audience beyond their immediate geography, our study shows that traditional holders of power and influence have not been unseated. Our hypothesis on how a giant component forms on Twitter by the merging of smaller, tightly clustered sets of users is an important input into the sociology of how information is transacted on a social network. There is reason to believe that, despite the fact that OSN platforms bring the world closer, older notions of proximity and community continue to contribute significantly to popularity in the way described. Our study is broad in nature and captures a coarse phenomenon that we hope will excite sociologists and invite them to tease out the finer nuances that lie within such phenomena. From an engineering standpoint, issues of content distribution and caching can be addressed by observing that highly popular topics cross national boundaries; a closer study of which national boundaries are crossed more often than others could underpin efficient content placement methods. Our results could also be of great interest to those involved in using the vast reach of media like Twitter to advertise their products and services. The notions of trust and reputation inherent in OSNs have already been leveraged to a great extent for marketing purposes. Our study could help advertisers and marketers figure out how best to use these platforms for efficient and well-targeted marketing.

References:
S. Bharathi, D. Kempe, and M. Salek. Competitive influence maximization in social networks. In Proceedings of the 3rd International Conference on Internet and Network Economics, WINE '07, pages 306-311, San Diego, CA, USA, 2007. Springer-Verlag.
C. Budak, D. Agrawal, and A. El Abbadi. Limiting the spread of misinformation in social networks. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 665-674, Hyderabad, India, 2011.
T. Carnes, C. Nagarajan, S. M. Wild, and A. van Zuylen. Maximizing influence in a competitive social network: a follower's perspective. In Proceedings of the Ninth International Conference on Electronic Commerce, ICEC '07, pages 351-360, Minneapolis, MN, USA, 2007.
W. Galuba, K. Aberer, D. Chakraborty, Z. Despotovic, and W. Kellerer. Outtweeting the Twitterers - predicting information cascades in microblogs. In Proceedings of the 3rd Conference on Online Social Networks, WOSN '10, 2010.
R. Ghosh and K. Lerman. A framework for quantitative analysis of cascades on networks. In Proceedings of the 4th ACM International Conference on Web Search and Data Mining, WSDM '11, 2011. Full version at http://arxiv.org/abs/1011.3571.
J. Leskovec, L. Backstrom, and J. Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '09, pages 497-506. ACM, 2009.
D. M. Romero, B. Meeder, and J. Kleinberg. Differences in the mechanics of information diffusion across topics: idioms, political hashtags, and complex contagion on Twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 695-704, Hyderabad, India, 2011.
A. Ruhela, R. M. Tripathy, S. Triukose, A. S., A. Bagchi, and A. Seth. In Proceedings of the Fifth International Conference on Advanced Networks and Telecommunication Systems, ANTS '11, Bangalore, India, 2011. IEEE.
E. Sadikov, M. Medina, J. Leskovec, and H. Garcia-Molina. Correcting for missing data in information cascades. In Proceedings of the 4th International Conference on Web Search and Data Mining, WSDM '11, pages 55-64, 2011.
D. Sousa, L. Sarmento, and E. Mendes Rodrigues. Characterization of the Twitter network: are user ties social or topical? In Proceedings of the 2nd International Workshop on Search and Mining User-Generated Contents, SMUC '10, pages 63-70, Toronto, ON, Canada, 2010.
R. M. Tripathy, A. Bagchi, and S. Mehta. A study of rumor control strategies on social networks. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM '10, pages 1817-1820, Toronto, ON, Canada, 2010. ACM.
S. Yardi and D. Boyd. Tweeting from the town square: measuring geographic local networks. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media. The AAAI Press, 2010.
Abstract: We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 4,000 topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of all the tweets posted by these users between June 2009 and August 2009 (approximately 200 million tweets), we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively.
|
quantum teleportation , whereby a quantum state of a system can be transferred by a sender alice to a remote receiver bob through the use of classical communication and a shared entanglement resource , is a remarkable demonstration of how non - local correlations in quantum mechanics can be used to advantage .more than simply a novelty , quantum teleportation is useful for quantum information processing ; for example , it can be used as a universal primitive for quantum computation , as a fundamental component to quantum computation with linear optics , and as a means to implement non - local quantum transformations .one realization is the quantum teleportation of continuous variables ( cv ) , which teleports states of dynamical variables with continuous spectra ; such a realization allows for the teleportation of quantum states of light .any realistic ( imperfect ) quantum teleportation device can be characterized by a figure of merit to quantify its ability to perform successful teleportation .the _ fidelity _ is useful as a measure of the distinguishability of the output state from the input state , although the threshold for demonstrating genuine quantum teleportation is debatable .one such threshold demonstrates that an entanglement resource was used during the experiment .another threshold is that alice teleports the state to bob without learning about the state in question .for teleportation of a distribution of states in a set , one can define the average fidelity of the device as the average of the fidelity over .the average fidelity provides a figure of merit for the device to teleport states in this set .however , any quality measure for a useful quantum teleportation apparatus must be based on its intended application .one important application is that the device allows for the teleportation of entanglement , i.e. , that entanglement is preserved between the teleported system and another ( unteleported ) one .consider a user , victor , who acts to verify that the device functions as advertised .alice and bob claim that their device teleports coherent states with a particular average fidelity , as in experimental quantum teleportation .victor wishes to test the ability of this device to teleport entanglement , but is restricted to supplying alice with coherent states according to the advertised capability ( that is , alice must receive states in the specified set ) .one option is for victor to employ a two - mode entangled coherent state ( ecs ) . 
by supplying alice with only one mode, victor can conceal from her and bob that they are teleporting only one portion of a two - mode entangled state .bob returns the state to victor so that he can test whether the ecs is reconstructed with high fidelity or not , and thus whether the teleportation has preserved the entanglement .alice and bob specified that the advertised applies only to coherent states .however , if they decide to check , they will indeed see that they are receiving a mixture of coherent states from victor , so the supply of states from victor does not violate the specification that these states are drawn from a distribution of coherent states .thus , victor uses these ecss to quantify the capability of this quantum teleportation to preserve entanglement .provided that these states are entanglements of coherent states that are nearly orthogonal , alice and bob can not detect that victor is using ecss to verify the efficacy of the scheme .we show in this paper that ecss are useful for testing the ability of a device to teleport entanglement .moreover , we show that this entanglement fidelity does not depend on using these ecss , but applies generally to the teleportation of entanglement for a device that teleports coherent states .we note that quantum teleportation of ecss has been studied , but in entirely different contexts .one such investigation is the teleportation of ecss in their entirety , and another consideration has been to use an ecs as a substitute for the standard entanglement resource provided by the two - mode squeezed vacuum state .our study is quite different from these two cases ; in our investigation , victor employs ecss to replicate the conditions that alice and bob experience in the experiment of furusawa _ et al _ , and victor uses a second entangled mode to verify that quantum teleportation of entanglement is taking place .the paper is constructed as follows . in sec .[ sec : quantum ] , we develop the theory of quantum teleportation of coherent states according to a formalism that is useful for subsequent sections . in sec .[ sec : ecs ] , we discuss the quantum teleportation using ecss as a means of verifying the capability of teleporting entanglement ; we include a discussion of the entanglement fidelity as a measure of this capability .we define a noisy quantum teleportation scheme in sec . [sec : noisy ] , and present the result that the entanglement fidelity for the noisy quantum teleportation of ecss is extremely sensitive to very small errors in alice s measurement .we conclude with sec .[ sec : conc ] .quantum teleportation was proposed as a means by which a quantum state can be transferred from a system a ( alice ) to a remote system b ( bob ) by employing only classical communication and a shared entanglement resource .let alice hold an arbitrary quantum state that she wishes to send to bob . in cv quantum teleportation, is an infinite dimensional hilbert space ; typically , the hilbert space for a harmonic oscillator is used .in addition to this quantum system , alice holds a second quantum system with hilbert space , and bob holds a third with hilbert space .these two systems are in the two - mode squeezed state with the -boson fock state . in the limit that , alice and bob share a maximally entangled einstein - podolsky - rosen ( epr ) state . 
to perform quantum teleportation of the unknown state , alice begins by performing a joint projective measurement on of the form where is the displacement operator on system defined by a measurement result is a complex number .such a measurement can be implemented by mixing the two states on a beamsplitter and performing balanced homodyne detection on each of the two output modes .alice sends this measurement result via a classical channel to bob , who performs a displacement operation ( realizable by mixing with a strong coherent field with adjustable amplitude and phase ) on his system .bob s system is now in the state , which is identical to the initial state received by alice in the limit . in an experiment ,various constraints ( including finite squeezing and imperfect projective measurements ) limit the performance of the quantum teleportation .one must then define a figure of merit to describe how well a physical quantum teleportation device approximates the ideal case .one such measure is the _average fidelity _ of the process .consider a pure input state , which is imperfectly teleported such that bob receives the generally mixed state .the fidelity of the output state compared to the input state is given by ( note that some authors define the fidelity to be the square root of this quantity . ) for a given distribution of input states to be teleported , the average fidelity is defined to be the weighted average of the fidelity over . in the experiment by furusawa _ , a distribution of coherent states with fixed amplitude varied over all phases was chosen to test the teleportation .it is also possible to employ a distribution over both amplitude and phase .the average fidelity serves well as a figure of merit for certain applications .however , any measure quantifying the performance of a quantum teleportation device must be placed in the context of its intended use . in particular, one may ask how well a device performs at the important task of teleporting entanglement .victor , who wishes to test alice and bob s quantum teleportation device , supplies alice with a ( possibly mixed ) quantum state , and after the teleportation , bob returns a state to victor .victor can then perform measurements on to determine the success or failure of the quantum teleportation . in offering their services , alice and bobcan be expected to quote victor a measure of performance of their quantum teleportation device ( such as the average fidelity ) , as well as a restriction on the type of states that they can quantum teleport .for example , they may advertise an average fidelity for quantum teleportation of coherent states sampled from some distribution . depending on his intended use of the quantum teleportation device , these particular measures of performance may not be adequate .for example , victor may wish to teleport one component of an entangled state , and ensure that the final state returned by bob is still entangled with the system he kept . if alice and bob advertise that they can only teleport distributions of coherent states , it is important that the state that victor supplies to alice is indeed in the allowed set .consider the two - mode ecs with the normalization .this state is not separable , and thus possesses entanglement between modes and .the reduced density matrix for mode is for the case where the overlap is negligibly small , this reduced density matrix is indistinguishable from i.e. 
, a mixture of two coherent states .thus , a quantum teleportation device that functions for coherent state inputs should function equally well for such a state .note that the issues of overlaps can be avoided if victor instead uses a two - level system for ( such as a two - level atom ) , and entangles coherent states of mode to orthogonal states in .states of this form have been experimentally realized .the average fidelity may not be a good indicator of the quality of quantum teleportation .in particular , it overstates the capability to teleport entanglement ._ entanglement fidelity _ is a superior measure to quantify the ability of a process to preserve entanglement , and reduces to the standard fidelity for the case of pure states ( e.g. , coherent states ) .consider a process ( quantum channel ) described by a superoperator and a generally mixed state as input .let be the corresponding output state .we introduce as a purification of , i.e. , a pure state obtained by introducing an ancilla system such that .the entanglement fidelity has many exceptional properties ; see .it is independent of the choice of purification , meaning that it depends only on the reduced density matrix supplied to alice .this property will be useful in testing quantum teleportation , as victor can choose any purification ( any choice of entanglement between the state he supplies alice and the state he retains ) and achieve the same measure .also , if is a pure state , the entanglement fidelity reduces to the standard fidelity .thus , the results of tests employing the entanglement fidelity can be directly compared to the fidelity of teleporting pure coherent states .in this section , we consider a noisy quantum teleportation device that includes finite squeezing of the entanglement resource , propagation errors , imperfect detectors , and other errors that introduce a stochastic error given by a gaussian distribution .we show that , whereas the average fidelity for quantum teleportation of coherent states is quite robust against such errors , the entanglement fidelity for the quantum teleportation of distributions of coherent states of the form ( [ eq : mixedcs ] ) drops off very rapidly even with highly squeezed states and small errors . in quantum teleportation , alice must measure a complex amplitude via a joint measurement of the form ( [ eq : idealpovm ] ) and report this measurement result to bob .bob then performs a displacement on his state conditioned on this result . if there is a measurement error, however , alice may send to bob the values , where is a complex error ( an error in both position and momentum ) sampled from some ensemble with corresponding probabilities .bob then performs the displacement rather than .( equivalently , bob s displacement operation could be subject to a similar error . )the result is that the quantum teleportation process will no longer be ideal ; it becomes a noisy process described by some superoperator .the probability distribution we employ is a gaussian distribution with variance , defined such that vacuum noise has a variance .( note that has the units of the square of the coherent state complex amplitude . ) using a perfect teleportation scheme ( involving ideal epr states of the form of eq .( [ eq : eprstate ] ) and ideal projective measurements given by eq .( [ eq : idealpovm ] ) ) , but with a gaussian - distributed error , the teleported state will be related to the input state by one can view as the transfer superoperator for this ( noisy ) process . 
in , braunstein and kimble considered the effects of finite squeezing ( ) and imperfect detectors .both of these effects lead to gaussian noise , described by the same superoperator as given by eq .( [ eq : noisytransfer ] ) .also , propagation losses can be compensated for by linear amplification , which introduces an associated gaussian noise described similarly .thus , the variance for the total error is given by the sum of the variances for the individual errors as where is the variance for the noise introduced by linear amplification with gain , describes the effect of finite squeezing , describes the noise due to finite homodyne detection efficiency , and describes other sources of gaussian noise .again , all of these variances are defined such that is the level of vacuum noise .thus , an effective describes the cummulative effects of a wide variety of noise and errors in the quantum teleportation process . in a classical picture , without employing a squeezed resource ( ) , we find that , i.e. , that the output state acquires two units of vacuum noise .this noise is what braunstein and kimble refer to as `` quantum duty '' , or quduty : one unit of vacuum noise is acquired by each pass across the quantum / classical border ( one by alice s measurement and one by bob s reconstruction ) . for an input state given by a pure coherent state , it is straightforward to calculate the entanglement fidelity of the operation .( as the state is pure , the entanglement fidelity will equal the standard fidelity . )the entanglement fidelity for the coherent state , is independent of .the lower bounds on fidelity for quantum teleportation discussed in the introduction are clear from this equation .the bound of by braunstein and kimble can only be satisfied if ; i.e. , the variance must be less than twice the vacuum noise , verifying the use of an entangled resource .the more stringent bound , set by grosshans and grangier using an argument of no - cloning , requires or up to one unit of vacuum noise .if the state to be teleported is one mode of an ecs , the calculation of the entanglement fidelity is more involved .fortunately , a purification of the state ( [ eq : reducedecs ] ) is already provided : it is the ecs itself . as noted earlier , the entanglement fidelity is independent of the choice of purification .the entanglement fidelity for the noisy quantum teleportation of the mode is given by the expression is complicated by the nonorthogonality of coherent states , but the overlap drops rapidly as the coherent states are increasingly separated in phase space . to simplify this expression , we assume that is sufficiently large that we can ignore terms bounded above by .again , we note that these overlaps can be avoided if victor chooses to couple coherent states in to orthogonal modes in , and that this choice of purification gives identical results for the entanglement fidelity . using this assumption , where .the entanglement fidelity for the noisy quantum teleportation of one mode of an ecs differs from that of a pure coherent state ( eq . ( [ eq : entfofcs ] ) ) due to a term that drops exponentially in .as this term becomes negligibly small ( for even small errors described by ) , the entanglement fidelity for the teleportation of ecss approaches half the value for that of a pure coherent state . in fig .[ fig : fidelity ] , we compare the standard fidelity for teleportation of a pure coherent state to the entanglement fidelity for teleporting the ecs with for two values , and . 
for the errors in noisy quantum teleportation .the entanglement fidelity is plotted for a pure coherent state ( cs ) , and for entangled coherent states ( ecs ) of the form for ( a ) and ( b ) .note that a variance of corresponds to two units of vacuum noise , i.e. , two `` quduties''.,width=312 ] one key featureto notice is that the entanglement fidelity for all cases is reduced significantly for a variance on the order of the vacuum noise .thus , the precision of quadrature phase measurements must be very good on the scale of the standard quantum limit . for the ecss ,the rapid decrease of the entanglement fidelity to approximately half that of the pure coherent states is clearly evident .consider ecss of the form with mean photon number . in order to maintain a constant entanglement fidelity , the variance of the errors must scale as .thus , quantum teleportation of a single mode of an ecs with high entanglement fidelity becomes increasingly difficult as the mean photon number of the state is increased .note that , for any distribution of coherent states that leads to an average fidelity for quantum teleportation , one can also calculate an average entanglement fidelity both for pure coherent states and ecss sampled from the same distribution . in the experiment of furusawa _ et al _ , an average fidelity of has been obtained for the set , where is the coherent state with the mean photon flux and is the phase of the coherent state .this phase is uniformly distributed over the domain .the experimental result can be compared against our calculation for teleportation of ecss of the form with , presented in fig .[ fig : fidelity ] . whereas the entanglement fidelity for the pure coherent state is very high for small , the entanglement fidelity for the ecs is less than even for ( corresponding to a total error around 2% of the vacuum noise ) .note that with otherwise perfect conditions ( no propagation loss , detector noise , etc . ) , at least 8.5 db of squeezing is required to achieve an entanglement fidelity of greater than for this state ; this amount of squeezing represents a target for high entanglement fidelity quantum teleportation of distributions of coherent states .we have shown that the average fidelity is not necessarily a good measure of successful quantum teleportation , and in particular is shown to be a poor indicator for the capability of quantum teleportation to preserve entanglement . on the other hand , the entanglement fidelity provides a useful figure of merit for the quantum teleportation of entanglement .we demonstrate that entanglement with other systems can be used to test claims of quantum teleportation , even for a restricted set of allowed input states . in particular , ecss can be used to test quantum teleportation devices that advertise only teleportation of mixtures of coherent states .we show that the entanglement fidelity of distributions of coherent states is extremely fragile , and can be drastically reduced from the fidelity of the pure coherent states by the effects of finite squeezing , imperfect detection , propagation errors , or small stochastic errors in alice s measurements ( or bob s transformations ) .an important application of teleporting coherent states is in a distributed quantum network that employs only gaussian states and gaussian - preserving operations , i.e. , linear optics . 
in such a network ,the appropriate figure of merit for the teleportation of entanglement between nodes is clearly the entanglement fidelity .this project has been supported by an australian research council grant .tjj has been financially supported by the california institute of technology , and appreciates the hospitality of macquarie university where this project was undertaken .sdb acknowledges the support of macquarie university .we acknowledge useful discussions with t. c. ralph and t. tyc .99 c. h. bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , * 70 * , 1895 ( 1993 ) .d. gottesman and i. l. chuang , nature ( london ) * 402 * , 390 ( 1999 ) .e. knill , r. laflamme , and g. j. milburn , nature ( london ) * 409 * , 46 ( 2001 ) .j. eisert , k. jacobs , p. papadopoulos , and m. b. plenio , * 62 * , 052317 ( 2000 ) . s. l. braunstein and h. j. kimble , * 80 * , 869 ( 1998 ) .f. grosshans and p. grangier , * 64 * , 010301(r ) ( 2001 ) . s. m. tan , * 60 * , 2752 ( 1999 ) .a. furusawa , j. l. srensen , s. l. braunstein , c. a. fuchs , h. j. kimble and e. s. polzik , science * 282 * , 706 ( 1998 ) .b. c. sanders , * 45 * , 7746 ( 1992 ) ; * 46 * , 2966 ; b. c. sanders and d. a. rice , * 61 * , 013805 ( 2000 ) .x. wang , * 64 * , 022302 ( 2001 ) .h. jeong , m. s. kim , and j. lee , * 64 * , 052308 ( 2001 ) .s. j. van enk and o. hirota , * 64 * , 022313 ( 2001 ) .s. j. van enk , * 60 * , 5095 ( 1999 ) .w. p. bowen _et al _ , ` quant - ph/0207179 ` ( 2002 ) .a. mann , b. c. sanders , and w. j. munro , * 51 * , 989 ( 1995 ) .et al _ , * 77 * , 4887 ( 1996 ) . b. schumacher ,* 54 * , 2614 ( 1996 ) .m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , 2000 ) . c. m. caves ,* 23 * , 1693 ( 1981 ) .t. c. ralph , private communication .s. d. bartlett , b. c. sanders , s. l. braunstein , and k. nemoto , * 88 * , 097904 ( 2002 ) .
|
Entangled coherent states can be used to determine the entanglement fidelity for a device that is designed to teleport coherent states. This entanglement fidelity is universal, in that the calculation is independent of the use of entangled coherent states and applies generally to the teleportation of entanglement using coherent states. The average fidelity is shown to be a poor indicator of the capability of teleporting entanglement; i.e., very high average fidelity for the quantum teleportation apparatus can still result in low entanglement fidelity for one mode of the two-mode entangled coherent state.
|
there has been much recent interest in the study of causality based on graphs , e.g. . a most common scenario studiedis when the observer collects data from a system and wants to make inferences about what would happen were she to control the system , for example by imposing a new treatment regime . to make prediction with such data she needs to hypothesize a certain causal mechanism which not only describes the data generating process , but also governs what might happenwere she to control the system .pioneering work by two different groups of authors have used a graphical framework called a causal bayesian network ( cbn ) .their work is based on bayesian networks ( bn ) which is a compact framework for representing certain collections of conditional independence statements .algebraic geometry and computational commutative algebra have been successfully employed to address identifiability issues and to understand the properties of the learning mechanisms behind bn s .a key point was the understanding that collections of conditional independence relations on discrete random variables expressed in a suitable parametrization are polynomials and have a close link with toric varieties .further related work showed that pairwise independence and global independence are expressed through toric ideals and that gaussian bn s are related to classical constructions in algebraic geometry e.g. .in this paper we observe that when model representations and causal hypotheses are expressed as a set of maps from one semi - algebraic space to another , then ideas of causality are separated from the classes of graphical models .this allows us to generalise straightforwardly concepts of graphical causality as defined in e.g. ( * ? ? ?* definition 3.2.1 ) to non - graphical model classes .many classes of models including context specific bn s , bayes linear constraint models ( blc s ) and chain event graphs ( ceg s ) are special cases of this algebraic formulation .causal hypotheses are most naturally expressed in terms of two types of hypotheses .the first type concerns when and how circumstances might unfold .this provides us with a hypothesized partial order which can be reflected by the parametrization of the joint probability mass function of the idle system .the second type of hypotheses concerns structural assertions about the uncontrolled system that , we assume , also apply in the controlled system .these are usually expressible as semi - algebraic constraints in the given parametrization . 
under these two types of hypothesesthe mass function of the manipulated system is defined as a projection of the mass function of the uncontrolled system , in total analogy to cbn s .the combination of the partial order and of these constraint equations and inequalities enables the use of various useful algebraic methodologies for the investigation of the properties of large classes of discrete inferential models and of their causal extensions .the main observation of the paper is that a ( discrete ) causal model can be redefined directly and very flexibly using an algebraic representation starting from a finite set of unfolding events and a description of a way they succeed one another .this is shown through model classes of increasing generality .first in section [ section2 ] we review the popular class of discrete bn models , our simplest class , their related factorization formulae under a preferred parametrisation , and their causal extensions .then we extrapolate the algebraic features of bn and give their formalisation in section [ section3 ] in a rather general context . in section [ section4 ]we show how this formalization can apply to more general classes of models than bn s , so that identifiability and feasibility issues can be addressed . herewe describe causal models based on trees in section [ sectiontree ] and the most general model class we consider is in section [ extreme ] .the issues are illustrated throughout by a typical albeit simple model for the study of the causal effects of violence of men who might watch a violent movie , introduced in section [ section2violent ] to outline some limitations of the framework of the bn for examining causal hypotheses , which , we believe , currently is the best framework to represent causal hypotheses . in section [ section4.1 ]we are able to express these limitations within an algebraic setting .the discrete bn is a powerful framework to describe hypotheses an observer might make about a particular system .it consists of a directed acyclic graph with nodes and of a set of probabilistic statements .it implicitly assumes that the features of main interest in a statistical model can be expressed in the following terms . * the observer s beliefs as expressed through the graph concern statements about relationships between a prescribed set of measurements taking values in a product space , where is a random variable that takes values in , . for be the cardinality of , be finite and take value on the set of integers , henceforth indicated as ] , marginalisation over and gives a linear map which is a projection of convex polytopes .namely , where ,i\in j^{c}}p(x_{1},\ldots , x_{n}) ] and define and =\mathbb x_1 \times \ldots \times \mathbb x_{i-1 } \times \mathbb x_{i+1 } \times\ldots \times \mathbb x_n ] . 
for ]then , the operation of conditioning returns a ratio of polynomial forms of the type where stand for joint mass function values .this has been implemented in computer algebra softwares by various researchers , as an application of elimination theory .a basic algorithm considers indeterminates with for the domain space and with ] , the set of polynomials in the with rational coefficients .its projection onto the face can be computed by elimination as follows by adjoining a dummy indeterminate and viewing as an ideal in ,l] ] and assume this partition compatible with a causal order on , that is if then is not affected by the intervention on .if the probabilistic structure on is a bn then we consider a regular bn .we consider the parametrization for which a probability is seen as a point in - 1}\times \delta _ { r_{i}-1}^{\prod_{j=1}^{i-1}r_{j}}\times \delta _ { \prod_{j = i+1}^{n}r_{j}-1}^{\prod_{j=1}^{i}r_{j } } .\ ] ] the intervention or manipulation operation is defined only for image points for which and returns a point in - 1}\times \delta _ { \prod_{j = i+1}^{n}r_{j}-1}^{\prod_{j=1}^{i-1}r_{j}}\ ] ] namely the point with coordinates for and .note that this map is naturally defined over the boundary .in contrast there is no unique map extendible to the boundary of the probability space in . for binary random variablesit is the orthogonal projection from onto the face which is identified with the hypercube . in general , for a regular bn this is an orthogonal projection in the associated conditional parametrisation , which then seems the best parametrization in which to perform computations .the post manipulation joint mass function on is then which , factorised in primitive probabilities , gives a monomial of degree , one less than in equation ( [ bnfact ] ) . in this sense , under the conditional parametrization , the effect of a manipulation or control gives a much simpler algebraic map than the effect of conditioning .its formal definition depends only on the causal order , the second bullet point in section [ section2bn ] , and not on the probabilistic structured of the bn .in particular it does not depend on the homogeneity of the factorization of the joint mass function on across all settings .this observation allowed us to extend this notion to larger classes of discrete causal models .see and section [ section4 ] .identification problems associated with the estimation of some probabilities after manipulation from passive observations ( manifest variables measured in the idle system ) have been formulated as an elimination problem in computational commutative algebra .for example in the case of bn the case study in , giving a graphical application of the back - door theorem , has been replicated algebraically by matthias drton using the parametrization in primitive probabilities .ignacio ojeda addresses from an algebraic view point a different and more unusual identification problem in a causal bn with four nodes .he uses the parameters and the description of the bn as a toric ideal .both are personal communications at the workshop to which this volume is dedicated . in general, a systematic implementation of these problems in computer algebra softwares will be slow to run . 
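As a concrete, deliberately tiny illustration of "identification as elimination", the following sketch (a hypothetical toy, not taken from the cited case studies) uses a Groebner basis with a lexicographic order to eliminate the primitive probabilities of a two-node binary BN X1 -> X2 and recover the algebraic relation linking the interventional quantity q = p(X2=1 | do(X1=1)) to the observable joint probabilities.

```python
import sympy as sp

# primitive probabilities: t1 = p(X1=1), t21 = p(X2=1 | X1=1)
t1, t21 = sp.symbols('t1 t21')
# observables and causal target: p11 = p(X1=1, X2=1), p10 = p(X1=1, X2=0)
p11, p10, q = sp.symbols('p11 p10 q')

ideal = [p11 - t1 * t21,            # observational model in the primitives
         p10 - t1 * (1 - t21),
         q - t21]                   # target: post-intervention probability

# lex order with the primitives listed first, so basis elements free of them
# generate the elimination ideal in the observables and the target
G = sp.groebner(ideal, t1, t21, p11, p10, q, order='lex')
relations = [g for g in G.exprs if not ({t1, t21} & g.free_symbols)]
print(relations)   # expect (a multiple of) q*(p10 + p11) - p11, i.e. q = p11/(p10 + p11)
```

In this toy case the recovered relation is just Bayes' rule for a marginal, but the same elimination mechanics carry over when some primitives are attached to hidden quantities, which is where back-door and front-door style identification computations of the kind mentioned above arise.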
at timessome pre - processing can be performed in order to exploit the symmetries and invariances to various group action for certain classes of statistical models .other times a re - parametrisation in terms of non - central moments loses an order of magnitude effect on the speed of computation and hence can be useful .nevertheless in this algebraic framework many non - graphically based symmetries which appear in common models are much easier to exploit than in a graphical setting .this suggests that the algebraic representation of causality is a promising way of computing the identifiability of a causal effect in much wider classes of models than bn .to recap : 1 .[ order ] a total order on and an associated multiplication rule as in equation ( [ multiplicationrule ] ) are fundamental .these determine a set of primitive probabilities ; 2 .[ studeny ] a discrete bn can be described through a set of linear equations equating primitive probabilities , equations ( [ bneqs ] ) , together with inequalities to express non negativity of probabilities and linear equations for the sum - to - one constraints ; 3 .a bn is based on the assumption that the factorization in equation ( [ bnfact ] ) holds across all values of in a cross product sample space .recall that in it is shown that identification depends on the sample space structure , in particular on the number of levels a variable takes ; 4 . within a graphical framework subsets of whole variables in considered manifest or hidden ; 5 . [ causalcontrol ]mainly the causal controls being studied in e.g. correspond to setting subsets of variables in to take particular values and often the effect of a cause is expressed as a polynomial function of the primitive probabilities , in particular the probability of a suitable marginal ; 6 .[ identpoint ] identification problems formulated in the graphical framework of a bn and intended as the writing of an effect of a cause in terms of manifest variables are basically elimination problems .hence they can be addressed using elimination theory from computational commutative algebra . 
in particular theorems like the front - door theorem and the back - door theoremare proved using clever algebraic eliminations , see .the above scheme can be modified in many directions to include non - graphical models and causal functions not expressible in a graphical framework , like those in section [ sectioncausal ] .identification problems can still be addressed with algebraic methods as in item [ identpoint ] above .an indispensable point for a causal interpretation of a model is a partial order either on or on , where the sample space may be generalised to be not of product form .a first generalisation is in where the authors substitute the binomials in item [ studeny ] above with linear equations and the inequalities in item [ order ] with inequalities between linear functions in the primitive probabilities .if there exists at least a probability distribution over satisfying this set of equations and inequalities then the model is called a feasible bayesian linear constraint model .of course a mere algebraic representation of a model will lose the expressiveness and interpretability associated with the compact topology of most graphical structures and hence to dispense completely with the graphical constraints might not always be advisable .but a combined use of a graphical representation and an algebraic one will certainly allow the formulation of more general model classes and will allow causality to benefit of computational and interpretative techniques of algebraic geometry as currently happens in computational biology . a causal model structure based on a single rooted tree and amenable of an algebraic formulationis studied in . in there , following the focus of the causal model is shifted from the factors in to the actual circumstances .each node of the tree represents a `` situation '' in the case of a bn a possible setting of the vector and the partial order intrinsic to the tree is consistent with the order in which we believe things can happen .this approach has many advantages , freeing us from the sorts of ambiguity discussed in section [ section2violent ] and allowing us to define simple causal controls that enact a particular policy _ only _ when conditions might require that control .assume a single rooted tree with vertex set and edge set .let be a generic edge from to and associate to a possibly unknown transition probabilities ] where are such that and are in the same chain , say , and there is no such that . that is , there is a chain to which both and belong and precedes immediately in the chain .we call primitive probabilities , collect them in a vector and note that they can be given as labels to the edges of the hasse diagram . moreover , we require that if belongs to more than one chain , then the sum of the transition probabilities is equal to one , i.e. .this defines the domain space of as a product of the simplices in total analogy to the cases of bn s and trees .the probability of a chain is now defined as , in analogy to equations ( [ treefact ] ) and ( [ multiplicationrule ] ) .thus , we have determined a saturated model parametrised with and given by the sum - to - one constraints and the non - negative conditions .a sub - model , say , can be defined by adjoining equalities and inequalities between polynomials or ratios of polynomials in the primitive probabilities , say and , where and are polynomials or ratios of polynomials . 
of course one must ensure that there is at least one solution to the obtained system of equalities and inequalities ;that is , that the model is feasible .sub - models can also be defined through a refinement of the partial order .next , causality can be defined implicitly by considering a set of edges of the hasse diagram and for adjoining to a new set of primitive probabilities and some equations where is a polynomial .collect the new parameters in the list , where .identifiability problems are now formulated as in previous sections .suppose we observe some polynomial equalities of the primitive probabilities , , and even some inequalities , where is a vector of polynomials .then we are interested in checking whether a total cause , , is identifiable from and compatible with the given observation. this computation could be done by using techniques of algebraic geometry in total analogy to bn s and trees as discussed in item [ identpoint ] .the top - down scheme in table [ summary ] summarises all this . in the top cellwe have a semi - algebraic set - up involving equalities and inequalities in the parameters involving polynomials or ratios of polynomials .we must ensure that the set of values of which solve this system of equalities and inequalities is not empty , i.e. the model is feasible . in the next two cells we add two sets of indeterminates : and , and some equalities and inequalities of polynomials in the .then the effect is uniquely identified if there is a value of satisfying the system and .[ summary ] .summary of section [ extreme ] [ cols= " < , < " , ] all the models considered in this paper fall within this framework and within the class of algebraic statistical models . in particular in ceg models circumstances are defined as sets of vertices of a tree and the partial order is inherited from the tree order .ceg s in a causal context have been studied in and they have been applied to the study of biological regulation models .we conjecture that there are many other classes of causal models that have an algebraic formulation of this type and are useful in practical applications .we end this paper by a short discussion of how the identifiablity issues associated with the non - graphical example of section [ section2violent ] can be addressed algebraically . for the example in section [ section2violent ] assume conditions ( [ nomovie ] ) and ( [ movietable ] ) .hence , for the non - zero probabilities associated with not viewing the movie are and whilst the probabilities associated with viewing it are given in table [ malaimpaginazione ] .[ malaimpaginazione ] consider the two controls described in the bullets in section [ sectioncausal ] .the first , banning the film , gives non - zero probabilities for satisfying the equations and .the second , the fixing of testosterone levels to low for all time , gives manipulated probabilities now consider three experiments .experiment 1 of section [ sectionmanifest ] exposes men to the movie , measuring their testosterone levels before and after viewing the film .this obviously provides us with estimates of , for and . 
under experiment 2 of section [ sectionmanifest ] a large random large sample is taken over the relevant population providing good estimates of the probability of the margin of each pair of and the level of testosterone on those who fought , , but only the probability of not fighting otherwise .so you can estimate the values of and sample for and the last probability is redundant since it is one minus the sum of those given above .finally experiment 3 is a survey that informs us about the proportion of people watching the movie on any night , i.e tells us .now suppose we are interested in the total cause of fighting if forced not to watch .clearly this is identified from an experiment that includes experiments 2 and 3 by summing and division by , but by no other combination of experiments .similarly , the probability a man with testosterone levels held low watches the movie and fights , is identified from obtained from experiment 1 and 2 by division .the movie example falls within the general scheme of section [ section4 ] .of course a graphical representation of the movie example , e.g. over a tree or even a bn , is possible and useful .but one of the point of this paper is to show that when discussing causal modelling the first step does not need to be the elicitation of a graphical structure whose geometry can then be examined through its underlying algebra .rather an algebraic formulation based on the identification of the circumstances of interest , e.g. the set , and the elicitation of a causal order , e.g. the partial order on , is a more naturally starting point . clearly in such framework on one hand the graphical type of symmetries embedded and easily visualised on e.g. a bn are not immediately available but they can be retrieved ( for an example involving ceg and bn see ) . on the other hand algebraic type of symmetries might be easily spotted and be exploited in the relevant computations. in this example computation was simple algebraic operation while in more complex case we might need to recur to a computer .of course the usual difficulties of using current computer code for elimination problems of this kind remain , because inequality constraints are not currently integrated into software and because of the high number of primitive probabilities involved .caveats in section [ section3 ] for bn s , like the advantages of ad - hoc parametrizations , apply to these structures based on trees and/or defined algebraically .this work benefits from many discussions with various colleagues .in particular we acknowledge gratefully professor david mond for helpful discussion on the material in section [ section3 ] and an anonymous referee of a related paper for a version of our main example . , _ the geometry of causal probability trees that are algebraically constrained _ , search for optimality in design and statistics : algebraic and dynamical system methods ( l pronzato and a a zigljavsky eds . )springer - verlag , 95129 ( to appear ) .
|
The relationship between algebraic geometry and the inferential framework of Bayesian networks with hidden variables has now been fruitfully explored and exploited by a number of authors. More recently the algebraic formulation of causal Bayesian networks has also been investigated in this context. After reviewing these newer relationships, we proceed to demonstrate that many of the ideas embodied in the concept of a "causal model" can be more generally expressed directly in terms of a partial order and a family of polynomial maps. The more conventional graphical constructions, when available, remain a powerful tool.

Keywords: Bayesian networks, causality, computational commutative algebra.
|
the radon transform , which was introduced by johann radon in 1917 , is the integral transform consisting of the integral of a function over either hyperplanes or straight lines .a key application for the radon transform is tomography , which is a technique for reconstructing the interior structure of an object from a series of projections of this object and is based on deep pure mathematics and numerical analysis as well as science and engineering . in general, we will follow the notation in and .the radon transform in is defined by where is a unit vector , , and is the arc - length measure on the line with the usual inner product .the radon transform can be expressed as where is the lebesgue measure on and is the indicator function . + .[ range ] for each the radon transform satisfies the following condition : for the integral can be written as a degree homogeneous polynomial in .we denote the unit vector in direction as with and .thus , the radon transform of can be expressed as a function of : note that since the pairs and give the same line , satisfies the _ evenness _ condition : .[ radon : cont ] the radon transform is a bounded linear operator from to \times { \mathbb{r}}) ] .see .along with the transform we define the dual radon transform of \times { \mathbb{r}}) ] , as the operator with fourier multiplier (see ) : [ thm : inversionf ] let . then note that this theorem is true on a larger domain than .however , can be a distribution rather than a function .a function is said to be in the _ schwartz space _ if and only if and for each polynomial and each integer , where is the norm of .a function is said to be in the schwartz space \times { \mathbb{r}}) ] be even .then , there exists such that if and only if see , theorem 2.4 .the _ hamburger moment sequence _ of a _ positive measure _ on is defined by assuming the integrals converge absolutely .let be the algebra of all polynomial functions on with complex coefficients .a linear map is _ positive semi - definite _ if for all .we assume .a function is a _ moment _ if there exists a positive measure on such that in this case , the measure is called a _ representing measure _ for .a solution of a moment problem is called _ determined _ if the corresponding representing measure is unique .a 2-sequence is said to be _ positive semi - definite _ ( _ moment _ ) if there exists a positive semi - definite linear map ( a moment function ) such that for , respectively .it is trivial that every moment sequence is positive semi - definite .hamburger established that a positive semi - definite 1-sequence is a moment sequence ( see , theorem 2.1.1 . 
) .however , in , there are positive semi - definite sequences which are not moment sequences ( see and ) .the _ _ hamburger moment problem in _ _ is stated as : + given a 2-sequence of real numbers , find a positive borel measure on such that a necessary and sufficient condition for existence of such a positive borel measure on is the following : [ thm : vacilescu ] a 2-sequence is a moment 2-sequence if and only if there exists a positive semi - definite 4-sequence satisfying the following properties : + ; + ; + for all .+ in this case , the 2-sequence has a uniquely determined representing measure in if and only if the 4-sequence is unique .a _ partial sequence _ is a sequence in which some terms are specified , while the remaining terms are unspecified and may be treated as free real variables .the _ hamburger moment completion _ of a partial sequence is a specific choice of values for the unspecified terms resulting in the hamburger moment sequence on .the _ pattern _ of a partial sequence is the set of positions of the specified entries .denote the pattern of a partial sequence by the set of ordered pairs of nonnegative integers we say that a pattern is _ hamburger moment completable _ if every partial moment sequence with pattern has a hamburger moment completion . for more information on the trigonometric multidimensional momentcompletion we refer to .note that bakonyi and naevdal deal with truncated fourier coefficient problem and show the following theorem .a positive borel measure on is called a _ positive extension _ of if ^d } e^{-i\langle k , t \rangle } d\mu(t),~k\in\gamma -\gamma.\ ] ] a finite subset has the _ extension property _ if every sequence which is positive with respect to admits a positive extension .a finite subset possesses the extension property if and only if it is an arithmetic progression .in practice it is often required to reconstruct images from partial radon transform data function to be reconstructed is the density of the object .so , we assume that is nonnegative .it is clear that if , then so is .denote the set of moment 2-sequences by .let be the set of moment 2-sequences with a positive borel measures such that the linear subspace spanned by is dense in .it it trivial that .then , the following theorem holds .[ main : thm1 ] let \times { \mathbb{r}}}) ] since \times { \mathbb{r}}) ] .therefore , a.e .the following is how to check whether is a moment 2-sequence : + suppose that for each , where is a 2-sequence .+ fix .then , ( [ eq : homopoly ] ) can be expressed as the following where let be distinct angles .then , system ( [ axb1 ] ) can be written in matrix form as follows : where the determinant of the matrix can be expressed as : where {1\leq i , j \leq k+1} ] be nonnegative , even and be a hamburger completable pattern . if for each , for some 2-sequence .then , there exists nonnegative such that a.e .note that is not required to be a moment sequence .the completion theorem guarantees there exists a moment completion .[ defn : molifier ] if is a smooth function on , satisfying the following four requirements : + ( 1 ) it is compactly supported , + ( 2 ) , + ( 3 ) , + ( 4 ) , for all .+ then is called a * positive mollifier*. + furthermore , if + ( 5 ) for some infinitely differentiable function , then it is called a * symmetric mollifier*. 
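The moment conditions discussed above are easy to probe numerically. The sketch below is a minimal illustration only (the test image, grid sizes and the choice k = 2 are arbitrary, and it assumes scikit-image is available): it computes a discrete Radon transform of an off-centre disc, forms the k-th moment of each projection in the radial variable, and fits it by a homogeneous degree-k polynomial in (cos theta, sin theta), as the range theorem predicts.

```python
import numpy as np
from skimage.transform import radon

# synthetic nonnegative test image: an off-centre disc inside the unit circle
n = 129
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
img = (((xx - 0.2) ** 2 + (yy + 0.1) ** 2) < 0.15 ** 2).astype(float)

theta_deg = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = radon(img, theta=theta_deg, circle=True)   # rows: radial bins, cols: angles
p = np.linspace(-1.0, 1.0, sino.shape[0])          # radial coordinate (rescaled)
dp = p[1] - p[0]

k = 2                                              # order of the moment condition
mom = (sino * p[:, None] ** k).sum(axis=0) * dp    # approximate int Rf(theta, p) p^k dp

# fit a homogeneous degree-2 polynomial a*c^2 + b*c*s + d*s^2 in (cos, sin)
c, s = np.cos(np.deg2rad(theta_deg)), np.sin(np.deg2rad(theta_deg))
design = np.stack([c ** 2, c * s, s ** 2], axis=1)
coef, *_ = np.linalg.lstsq(design, mom, rcond=None)
resid = mom - design @ coef
print("relative fit residual:", np.linalg.norm(resid) / np.linalg.norm(mom))
```

A small relative residual (limited only by discretisation) is the numerical counterpart of the statement that the k-th moment of the Radon data is a degree-k homogeneous polynomial in the direction vector.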
[ boundedl1 ] the modified radon transform is a bounded linear operator from to \times { \mathbb{r}}) ] .using proposition [ modified : form2 ] , one finds that \times { \mathbb{r } } ) } & = \int_{\theta=0}^{2\pi } \int_{p=-\infty}^{\infty } \big|\widehat{r } f(\theta , p)\big|dp d\theta\\ & = \int_{\theta=0}^{2\pi } \int_{p=-\infty}^{\infty } \bigg|\int_{\tau=-\infty}^{\infty } rf(\theta , p+\tau ) \varphi ( \tau ) d\tau \bigg| dp d\theta\\ & \leq \int_{\theta=0}^{2\pi } \int_{p=-\infty}^{\infty } \int_{\tau=-\infty}^{\infty } \big|rf(\theta , p+\tau)\big| \big|\varphi ( \tau)\big| d\tau dp d\theta\\ & = \int_{\tau=-\infty}^{\infty } \big|\varphi(\tau)\big| \bigg ( \int_{\theta=0}^{2\pi } \int_{p=-\infty}^{\infty } \big|rf(\theta , p+\tau)\big| dp d\theta \bigg ) d\tau.\end{aligned}\ ] ] by theorem [ radon : cont ] and definition [ defn : molifier ] , it follows that \times { \mathbb{r } } ) } & \leq \int_{\tau=-\infty}^{\infty } \big|\varphi(\tau)\big| \big ( 2\pi \|f\|_{l^1({\mathbb{r}}^2 ) } \big ) d\tau\\ & = 2\pi \|f\|_{l^1({\mathbb{r}}^2)}.\end{aligned}\ ] ] by theorem [ modified : form2 ] and the convolution theorem , since , by theorem [ thm : pojection - slice ] , it follows that applying the fourier inversion formula and proposition [ modified : form2 ] , one shows that [ modified : range1 ] let \times { \mathbb{r}}})$ ] be nonnegative and even .then , there exists nonnegative such that a.e . if and only if for each , where and for each .suppose that a.e .for some . by lemma[ modified : form1 ] , d\tau\\ & = \int_{\tau=-\infty}^\infty \varphi ( \tau ) \bigg(\int_{{\mathbb{r}}^2 } ( \langle x,\omega\rangle -\tau)^k f(x)dx \bigg ) d\tau.\\\end{aligned}\ ] ] use the following polynomial expansion the result follows .the proof of the converse part is similar to the proof of theorem [ main : thm1 ] . c. berg and j. p. r. christensen , suites completment dfinies positives , majoration de schur et le problme des moments de stieltjes en dimension , c. r. acad . sc .paris , serie i , * 297 * ( 1983 ) , 4548 , mr * 84k:*44013 e. t. quinto , _ an introduction to x - ray tomography and radon transform _ , the radon transform , inverse problems , and tomography , 123 , proc ., * 63 * , amer . math .soc . , providence , ri , 2006 .
|
The Radon transform is one of the most powerful tools for reconstructing an object from a series of projections. Reconstruction from Radon transform data with missing entries is closely related to reconstruction of a function from moment sequences with missing terms. A new range theorem is established for the Radon transform, based on the Hamburger moment problem in two variables, and the sparse moment problem is converted into a Radon transform problem with missing data and vice versa. A modified Radon transform for missing data is constructed and an inversion formula is established.
|
A pandemic influenza outbreak has the potential to place a significant burden upon healthcare systems. Therefore, the capacity to monitor and predict the evolution of an epidemic as data progressively accumulate is a key component of preparedness strategies for a prompt public health response. Statistical inferential approaches have been used in a real-time monitoring context for a number of infectious diseases. Examples include: prediction of swine fever cases in a classical framework; online estimation of a time-evolving effective reproduction number for SARS and for a generic emerging disease; and Bayesian inference on the transmission dynamics of avian influenza in the UK poultry industry. These models rely on the availability of direct data on the number of new cases of an infectious disease over time. In practice, as illustrated by the 2009 outbreak of pandemic A/H1N1pdm influenza in the United Kingdom (UK), direct data are seldom available. More likely, multiple sources of data exist, each indirectly informing the epidemic evolution and each subject to possible sources of bias. This calls for more complex modelling, requiring the synthesis of information from a range of data sources in real time. In this paper we tackle the problem of online inference on an influenza pandemic in this more realistic situation. To address this problem we develop earlier work that retrospectively reconstructed the A/H1N1 pandemic in a Bayesian framework using multiple data streams collected over the course of the pandemic. There, posterior distributions of relevant epidemic parameters and related quantities are derived through Markov chain Monte Carlo (MCMC) methods which, if used in real time, pose important computational challenges. MCMC is notoriously inefficient for online inference, as it requires repeated browsing of the full history of the data as new data accrue. This motivates a more efficient algorithm. Potential alternatives include refinements of MCMC and Bayesian emulation, where the model is replaced by an easily evaluated approximation that can be readily prepared in advance of the data assimilation process. Here, we explore sequential Monte Carlo (SMC) methods as an alternative to the expensive MCMC simulations. As batches of data arrive, SMC techniques allow computationally efficient online inference by combining the current posterior distribution with the incoming batch of data to obtain an updated estimate. The use of SMC in the real-time monitoring of an emerging epidemic is not new.
Several studies provide examples of real-time estimation and prediction for deterministic and stochastic epidemic systems describing the dynamics of influenza and Ebola epidemics. Their models, however, also only include a single source of information that has either been pre-smoothed or is free of any sudden or systematic changes. In what follows we advance the existing literature in two ways: we include a number of data streams, realistically mimicking the 2009 pandemic in the UK; and we consider the situation where a public health intervention introduces a shock to the system, critically disrupting the ability to track the posterior distribution over time. The paper is organised as follows: in Section [sec:recon] the model is reviewed, focusing on the data available and the computational limitations of the MCMC algorithm in a real-time context; in Section [sec:smc] the idea of SMC is introduced and an algorithm based on earlier work is described; in Sections [sec:sim] and [sec:analysis] results are presented from the application of a naive SMC algorithm to data simulated to mimic the 2009 outbreak, illustrating the challenges posed by the presence of the informative observations induced by system shocks; in Sections [sec:inf.theory] and [sec:informative] adapted SMC approaches that address such challenges are assessed; we conclude with Section [sec:discussion], in which the ideas explored in the paper are critically reviewed and outstanding issues discussed.

The model describes the transmission of a novel influenza virus among a fixed population stratified into age groups and the subsequent reporting of infections. This is achieved using a deterministic age-structured susceptible (S), exposed (E), infectious (I), recovered (R) transmission model, with the E and I states each split into two sub-states.
at time the vector gives the number of individuals in age group in each model state .the dynamics of the system are governed by a set of difference equations , such that for suitably small increments : where and are the mean latent and the mean infectious periods respectively .transmission is driven by the time- and age - varying force of infection , , the rate at which susceptible individuals become infected : here , is the basic reproduction number , the expected number of secondary infections caused by a single primary infection in a fully susceptible population , parameterised in terms of the epidemic growth rate .the pattern of transmission between age groups is determined by time - varying mixing matrices , with giving relative rates of effective contacts between individuals of each pair of age groups .these matrices are scaled to have elements where is the dominant eigenvalue of the time-0 next generation matrix whose entry is , with being the population size of the age stratum .the initial conditions of the system are determined by : parameter , the total number of infectious individuals across all age groups at time ; an assumed equilibrium distribution of infections over the age groups ; and an assumption of initial exponential growth that determines the relationship between the numbers in the four disease states .for ease of implementation , a reparameterisation is made from to a parameter denoted , the details of which can be found in the supplementary information to .denote by the vector of transmission dynamics parameters where parameterise the mixing matrices , defining any time variation .note , parameter is notoriously difficult to estimate and is therefore assumed fixed at two days .figure [ fig : link_schema ] illustrates how surveillance data from multiple sources relate to the age - structured seir transmission model , allowing estimation of the transmission dynamics parameters .the transmission process is unobserved .however , there are a number of surveillance sources informing aspects of this process .as system dynamics are assumed to be deterministic , there is no system error and outputs ( see equations - ) are deterministic functions of , e.g. .the available surveillance data are ` imperfect observations on a perfect system ' and are linked to the system s outputs via observational models as follows . the number of susceptibles in age group at the end of the time - step , , is informed directly by a series of cross - sectional survey data on the presence of immunity - conferring antibodies in the general population . denoting by the number of blood sera samples tested in time interval , it is assumed that the number of new age - specific infections in interval expressed as are indirectly related to surveillance data on health - care burden . a proportion, ( see figure [ fig : link_schema ] ) of these new infections will develop symptoms .of those symptomatic , a proportion will be virologically confirmed through admission to hospital and/or to an intensive care unit ( icu ) .alternatively a proportion will choose to contact primary care practitioners and will be reported as consultations for influenza - like illness ( ili ) together with individuals attending for non - pandemic pathogen ili . 
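For readers who prefer code to prose, the following sketch gives one possible discretisation of an age-structured S-E1-E2-I1-I2-R system of the kind described above. It is a generic mass-action form rather than the paper's exact parameterisation: the R0-based scaling of the mixing matrices via the next-generation matrix is omitted and a raw transmission rate `beta` is used instead, and all rates, population sizes and the time step in the toy run are purely illustrative.

```python
import numpy as np

def seir_step(state, beta, M, N, d_L, d_I, dt):
    """One Euler step of a deterministic age-structured SEEIIR model.

    state : dict of arrays S, E1, E2, I1, I2, R, one entry per age group
    beta  : scalar transmission rate (illustrative; not the paper's R0-based scaling)
    M     : age-mixing matrix, M[i, j] = relative contact rate of group i with group j
    N     : population sizes per age group
    d_L, d_I : mean latent and infectious periods (each split over two sub-states)
    """
    S, E1, E2, I1, I2, R = [state[k] for k in ("S", "E1", "E2", "I1", "I2", "R")]
    lam = beta * M @ ((I1 + I2) / N)       # force of infection per age group
    new_inf = lam * S * dt                 # new infections in (t, t + dt]
    rl, ri = 2.0 / d_L, 2.0 / d_I          # stage progression rates
    out = {
        "S":  S - new_inf,
        "E1": E1 + new_inf - rl * E1 * dt,
        "E2": E2 + rl * E1 * dt - rl * E2 * dt,
        "I1": I1 + rl * E2 * dt - ri * I1 * dt,
        "I2": I2 + ri * I1 * dt - ri * I2 * dt,
        "R":  R + ri * I2 * dt,
    }
    return out, new_inf

# toy run with two age groups
N = np.array([1.0e6, 2.0e6])
state = {"S": N - 10.0, "E1": np.full(2, 10.0), "E2": np.zeros(2),
         "I1": np.zeros(2), "I2": np.zeros(2), "R": np.zeros(2)}
M = np.array([[1.0, 0.5], [0.5, 1.0]])
incidence = []
for _ in range(int(100 / 0.5)):            # 100 days with dt = 0.5
    state, ninf = seir_step(state, beta=0.4, M=M, N=N, d_L=2.0, d_I=3.0, dt=0.5)
    incidence.append(ninf)
```

The per-step `new_inf` array plays the role of the new age-specific infections that feed the observational models described next.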
as a result, primary consultation data will be contaminated by a background consultation component strongly influenced by the public s volatile sensitivity to governmental advice .the consultation data are , therefore , less directly related to the severity and incidence of infection than the confirmed cases . to identify the consultations attributable to the pandemic strain , complementary data from a sub - sample of swabbed ili patients provide information on the proportion of consultations with pandemic virus . using a generic to denote counts of confirmed cases or primary care consultations , the model in outputs quantities of the type representing the number of surveillance counts in the interval attributable to the pandemic .expression results from the process of becoming infected and subsequently experiencing a delay ( of mean , variance with discretised probability mass function ) , which includes the time from infection to symptoms ( the incubation period ) , the time from symptoms to the healthcare event , and the time from diagnosis to the report of the healthcare event of interest . note that in the parametric dependence of output quantities has been omitted for ease of notation and will be done throughout .the count data are assumed to have negative binomial distribution expressed here in mean - dispersion parameterisation , such that if , then , and so , for the confirmed cases : and for the primary care consultations , which include contamination by a non - pandemic ili background component : where the contamination is appropriately parameterised in terms of parameters , and the signal is identified by virological data from sub - samples of size of the primary care consultations .the number of swabs testing positive for the presence of the pandemic strain in each sample is assumed to be distributed as : let denote the vector of all free parameters_ i.e. _ . develop a bayesian approach and use a markov chain - monte carlo ( mcmc ) algorithm to derive the posterior distribution of on the basis of days of primary care consultation and swab positivity data , confirmed case and cross - sectional serological data .the mcmc algorithm is a naively adaptive random walk metropolis algorithm , requiring iterations , taking over four hours .mcmc is not easily adapted for parallelised computation , although a small speed up can be achieved by parallelising the likelihood component of the posterior distribution of over a small number of cpus . in total , this required in excess of evaluations of the transmission model and/or convolutions of the kind in equation .implementation of mcmc in an online fashion , as new data arrive involves the re - analysis of the entire dataset , requiring time for multiple markov chains to converge .although , the runtime might not be prohibitive for real - time inference , the current implementation leaves little margin to consider multiple code runs or alternative model formulations . in a future pandemicthere will be a greater wealth of data facilitating a greater degree of stratification of the population . with increasing model complexity comes rapidly increasing mcmc run - times , which can be efficiently addressed through use of smc methods .smc is commonly used for inference from models that can be cast in a state - space formulation , where expressions of the form : govern the evolution of the latent state vector and the its relation to the observed data for . 
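a hedged sketch of the observational layer might look as follows : model incidence is convolved with a discretised delay distribution , thinned by a detection proportion , and compared to counts through a negative binomial likelihood , with a binomial likelihood for the swab positivity data . the gamma form of the delay , the variance form mu + mu**2 / eta , and the function names are choices made here for illustration ; the paper s exact parameterisations are not reproduced .

```python
import numpy as np
from scipy import stats

def discretised_delay_pmf(mean, var, max_lag=30):
    """Discretised gamma pmf for the delay from infection to a reported event
    (incubation + time to healthcare + reporting delay). The gamma form is an
    assumption; only the mean and variance of the delay are specified in the text."""
    shape, scale = mean**2 / var, var / mean
    cdf = stats.gamma.cdf(np.arange(max_lag + 1), a=shape, scale=scale)
    pmf = np.diff(cdf)
    return pmf / pmf.sum()

def expected_counts(new_infections, detect_prop, delay_pmf):
    """Convolve new infections with the delay distribution and thin by the
    proportion of infections that end up in this surveillance stream."""
    full = np.convolve(new_infections, delay_pmf)[: len(new_infections)]
    return detect_prop * full

def negbin_loglik(y, mu, eta):
    """Negative binomial log-likelihood in a mean-dispersion form with
    var = mu + mu**2 / eta (an assumed parameterisation)."""
    p = eta / (eta + mu)
    return stats.nbinom.logpmf(y, n=eta, p=p).sum()

def swab_loglik(n_pos, n_swabbed, pandemic_consult, background_consult):
    """Binomial likelihood for the number of swabs testing positive, with
    success probability given by the pandemic share of consultations."""
    prop = pandemic_consult / (pandemic_consult + background_consult)
    return stats.binom.logpmf(n_pos, n_swabbed, prop).sum()
```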
here the are conditionally independent given knowledge of the .on observing batches of data , at times , the main interest in this set - up is the filtering problem i.e. estimating the state vector through posterior distributions .note the conditioning on the parameters , which are typically assumed to be fixed and known , although methods for the estimation of static are very much an active area of research ( for example * ? ? ?the model in section [ sec : recon ] is a deterministic model , designed for use at a time in a pandemic when stochastic effects are uninfluential . in this case with data being imperfect observations distributed around model outputs .the inferential focus is here on .online inference involves the sequential estimation of posterior distributions , where indicates the prior for .estimation of any epidemic feature , _e.g. _ the assessment of the current state of the epidemic or prediction of its future course , follows from estimating ( or components thereof ) .suppose at time a set of particles , where each particle carries a weight , approximates a sample from the target distribution .on the arrival of the next batch of data , is then used as an importance sampling distribution to sample from . in practice , this involves a re - weighting of the particle set . from the conditional independence assumption of , the particles are reweighted according to the importance ratio : which reduces to the likelihood of the incoming data batch . eventually , many particles will begin to carry relatively very low weight , leading to sample degeneracy as progressively fewer particles contribute meaningfully to the estimation of . a measure of this degeneracy is the effective sample size ( ess ) , values for the ess that are small in comparison to are indicative of degeneracy or impoverishment of the current particle set .this degeneracy can be tackled in different ways . introduced a resampling step , removing low weight particles and re - setting particle weights , and proposed jittering the particles .this jittering step was later formalised by with the introduction of metropolis - hastings ( mh ) steps to rejuvenate the sample . and provide more general treatises of this sequential monte carlo method , with labelling the algorithm ` iterated batch importance sampling ' .this approach has since been extended by who unify the static estimation with the filtering problem ( estimation of ) .here we adapt the resample - move algorithm of and investigate its potential efficiency saving when compared to successive use of mcmc .mh steps provide the computational bottle - neck in resample - move as they require the browsing of the whole history of the data to evaluate the full likelihood , not just the latest batch of observations . to achieve fast inference , it is preferable to limit the number of such steps , without introducing monte carlo error through having a degenerate sample .the algorithm is laid out in full below .it is presumed that it is straightforward to sample prior distribution . 1 .* set * .draw a sample from the prior distribution , , set the weights .* set*[pt : init ] .observe a new batch of data .re - weighted the particles so that the particle now has weight 3 .* calculate the effective sample size*. set .if set , , and return to point ( [ pt : init ] ) , else go next .4 . * resample*. 
choose and sample from the set of particles with corresponding probabilities .here , we have used residual resampling .re - set .[ pt : rejuvenation ] 5 .* move * : for each , move from to via a mh kernel .if , return to point ( [ pt : init ] ) .* end*. there are a number of algorithmic choices to be made , including tuning the parameters of the mh kernel ( above ) or the rejuvenation threshold , . in a real - time setting , it may not be possible to tune an algorithm `` on the fly '' , so the system has to be able to work `` out of the box '' , either through prior tuning or through adaptation . in what follows we set and we focus on the key factor affecting the performance of the algorithm in real - time , _ i.e. _ the mh kernel . [ [ correlated - random - walk ] ] correlated random walk + + + + + + + + + + + + + + + + + + + + + + a correlated random walk proposes values : in the neighbourhood of the current particle , where is the sample variance - covariance matrix for the weighted sample . the parameter can be tuned _ a priori _ to guarantee a reasonable acceptance rate , or , alternatively , asymptotic results for the optimal scaling of covariance matrices can be used .localised moves keep acceptance rates high and will quickly restore the value of the ess . however , if after re - sampling there are few unique particles then the rejuvenation will result in a highly clustered sample , providing an inaccurate representation of the target distribution . [[ approximate - gibbs ] ] approximate gibbs + + + + + + + + + + + + + + + + + + an independence sampler that proposes : where is the sample mean for the . here ,moves are proposed to a region of the sample space only weakly dependent on the current position and proposals are drawn from a distribution chosen to approximate the target distribution .an accept - reject step is still required to correct for this approximation , so it is perhaps more accurate to refer to this proposal kernel as approximate metropolis - within - gibbs. 
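the resample - move scheme and the two proposal kernels can be sketched as follows . `log_lik_batch` and `log_post` are placeholders for the log - likelihood of the newest data batch and the full log - posterior built from the model above , the 2.38**2 / d random - walk scaling follows the standard optimal - scaling result , and the ess threshold and number of mh steps are arbitrary choices .

```python
import numpy as np

def ess(log_w):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def residual_resample(w, rng):
    n = len(w)
    counts = np.floor(n * w).astype(int)          # deterministic part
    residual = n * w - counts
    n_left = n - counts.sum()
    if n_left > 0:
        residual /= residual.sum()
        counts += rng.multinomial(n_left, residual)
    return np.repeat(np.arange(n), counts)

def smc_step(theta, log_w, log_lik_batch, log_post, rng, thresh=0.5, n_mh=5):
    """One resample-move update on the arrival of a new data batch.
    theta: (N, d) particles; log_w: (N,) log-weights."""
    n, d = theta.shape
    log_w = log_w + np.array([log_lik_batch(th) for th in theta])   # re-weight
    if ess(log_w) >= thresh * n:
        return theta, log_w                       # no rejuvenation needed
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    mean = w @ theta                              # weighted moments for proposals
    cov = np.cov(theta.T, aweights=w) + 1e-10 * np.eye(d)
    theta = theta[residual_resample(w, rng)]      # resample
    log_w = np.zeros(n)                           # reset weights
    lp = np.array([log_post(th) for th in theta])
    for _ in range(n_mh):                         # move: MH rejuvenation
        # correlated random walk proposal around each particle ...
        prop = theta + rng.multivariate_normal(np.zeros(d), (2.38**2 / d) * cov, size=n)
        lp_prop = np.array([log_post(th) for th in prop])
        accept = np.log(rng.uniform(size=n)) < lp_prop - lp
        # ... or an independence ("approximate Gibbs") proposal:
        # prop = rng.multivariate_normal(mean, cov, size=n)
        theta[accept] = prop[accept]
        lp[accept] = lp_prop[accept]
    return theta, log_w
```

the commented - out independence proposal corresponds to the approximate gibbs kernel ; with that kernel the acceptance ratio must also include the gaussian proposal density evaluated at the current and proposed values .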
the quality of the approximation depends on being well represented by the current particle set , there being sufficient richness in the particle weights after the re - weighting step and the target density being sufficiently near - gaussian .assuming that the multivariate normal approximation to the target is adequate ( and it should be increasingly so as more data are acquired ) this type of proposal allows for more rapid exploration of the sample space .if the multivariate normal approximation is not good , particles of high posterior or low proposal density will not be easily moved by this kernel , and , as acceptance rates can not be adapted to ensure a minimum level of acceptance , there is no guarantee that the ess will be restored above the level at which rejuvenations are required ( see section [ sec : analysis ] ) .both the correlated random walk and the approximate gibbs methods will be used , both as block updates where a new value for the entire parameter vector is proposed at once , and component - wise updates where individual or small groups of parameter components are proposed in turn , using the appropriate conditional distributions derived from and .the smc algorithm s performance against the gold - standard mcmc is evaluated via simulation , through its application to data arising from an epidemic simulated to mimic the timing and dynamics of the 2009 a / h1n1 pandemic in england .anomalously , this epidemic started with an initial burst of infection in spring , so we assume that the starting date is the may .the epidemic occurs in two waves of infection , the first reaches a peak immediately prior to the summer school holidays . after the holiday ,the growth of the epidemic is far slower , reaching a second peak in the autumn .we consider two scenarios . in the first scenariowe have direct information on confirmed cases , as might arrive in the surveillance of severe disease ( e.g. hospitalisation , icu admissions ) . in the second scenario we observe ili consultations in primary care which are noisy and contaminated by non - pandemic infections ( see section [ sec : intro.inference ] ) . both confirmed case and consultation dataare assumed to exist alongside serological data measuring the overall level of cumulative infection over the course of the pandemic to date . in the second scenario, we also assume the existence of a companion dataset of virological swabbing data from a sub - sample of the noisy data . in both scenarios ,observations are assumed to be made on 245 consecutive days and the underlying epidemic curve is characterised by the same parameters , so both confirmed case and primary care consultation data are subject to similar trends and shocks .one such shock arises from an assumed sudden change in the way case counts , whether they are confirmed cases or gp consultations , are reported .this could occur due to some public health intervention , as happened in 2009 with the launch of the national pandemic flu service ( npfs ) , designed to alleviate the burden placed on primary care services .table [ tbl : parameters ] presents the model parameters common to both scenarios and the values used for simulation . 
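for concreteness , the kind of surveillance counts with a reporting shock described here can be simulated as in the sketch below . the two - wave incidence curve , the shock day , and the sizes of the drops in the reporting proportion and in the dispersion are invented for illustration and are not the values of table [ tbl : parameters ] .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_counts(pandemic_signal, shock_day=84,
                    p_before=0.10, p_after=0.03,
                    eta_before=5.0, eta_after=1.0,
                    background=None):
    """Generate daily surveillance counts from a pandemic incidence signal.

    A public-health intervention at `shock_day` lowers the proportion of
    cases appearing in the data and changes the overdispersion; an optional
    background series adds non-pandemic contamination (scenario 2)."""
    t = np.arange(len(pandemic_signal))
    prop = np.where(t < shock_day, p_before, p_after)
    eta = np.where(t < shock_day, eta_before, eta_after)
    mu = prop * pandemic_signal
    if background is not None:
        mu = mu + background
    # negative binomial with mean mu and var mu + mu**2/eta (assumed form)
    p = eta / (eta + mu)
    return rng.negative_binomial(eta, p)

days = np.arange(245)
# illustrative two-wave incidence and a log-linear background with a break at day 84
pandemic_signal = 2000 * np.exp(-((days - 60) / 20.0) ** 2) \
                + 3000 * np.exp(-((days - 160) / 25.0) ** 2)
log_bg = np.where(days < 84, 3.0 + 0.004 * days, 2.2 + 0.002 * (days - 84))
counts = simulate_counts(pandemic_signal, background=np.exp(log_bg))
```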
for a given set of parameters ,the number of confirmed cases , , in interval is given by equation .count data are then generated as negative binomially distributed with mean and variance .the degree of overdispersion , defined by the dispersion parameters and ( table [ tbl : parameters ] ) , is piecewise constant over time , with a breakpoint at the time of the system shock , taken to be .these data contain a significant amount of contamination . as with the confirmed case data , the number of consultations due to the pandemic strain is calculated via the convolution equation to give .the contamination component is added by assuming ` background ' consultation rates that follow a log - linear spline with a discontinuity at , with additional age effects to generate separate consultation rates for children ( year - olds ) and adults .the background rates over the interval , , depend on spline parameters , such that , for a suitable design matrix where is a suitably vectorised collection of the .aggregated over ages , the log - linear spline used for simulation is plotted in figure [ fig : gp.data](d ) . in this example , is a 9-dimensional parameter .as already anticipated above , these counts will also drop markedly due to an intervention to reduce the burden on primary care services , resulting in a sudden change in the parameter , the proportion of symptomatic cases that seek consultation . in reality , this parameter will show more heterogeneity over time than its analog for the confirmed case data as it depends on behavioural factors and is not a property of the virus .however , in the examples presented here , and are parameterised similarly ( see table [ tbl : parameters ] ) . [ cols= " < , > " , ]this paper addresses the substantive real world problem of online tracking of an emergent epidemic , assimilating multiple sources of information through the development of a suitable smc algorithm .when incoming data are stable , this process can be automated using standard smc algorithms , confirming current knowledge ( _ e.g. _ * ? ? ?* ; * ? ? ?however , in the likely presence of interventions or any other event that may provide a system shock , it is necessary to adapt the algorithm appropriately . on observing the impact that a new batch of data has on the ess of a particle set , tailoring of the mh - kernel and selection of suitable thresholdscan ensure efficient performance .however , as we have seen , given that not all prior distributions are well chosen and not all models well conceived this might necessitate some careful , yet ad hoc tinkering . the end result is an algorithm that is a hybrid of particle filter and population mcmc . having simulated an epidemic where a public health intervention provides a sudden change to the pattern of case reporting , we have constructed a more robust smc algorithm by tailoring 1 . the choice of rejuvenation times through tempering ; 2 . the choice of the mh - kernel by hybridising local random walk and gibbs proposals ; and 3 . 
by introducing the use of the intra - class correlation to provide a stopping rule for the mcmc steps to limit the number of mcmc steps within each rejuvenation .our experience suggests , real time epidemic tracking will involve switching between a simple , automated , smc to an smc specifically tailored to the nature of any impending shock .throughout we have inevitably made pragmatic choices and alternative strategies could have been adopted .we reflect on these , lessons learned and outstanding questions in what follows . in the motivating example ,a system `` shock '' occurred at .this shock represents a systematic change in the way the data are generated , affecting a number of parameters that , at this time , have a step - change in their values . the first few observations in the new parametric regime after the shock typically cause the greatest disturbance to the marginal posterior distribution for these parameters .posterior is no longer a good importance distribution to sample from and proposal kernels based on a reweighted sample from may not be useful .this will be reflected in a severe drop in the ess .a low value for the ess is always indicative of depletion , whereas a high value does not guarantee that the sample is adequate .section [ sec : analysis ] illustrated how the ess can be artificially rejuvenated even when the particle set is not . for the ess to be useful , it is essential that previous rejuvenation steps result in a sufficiently independent set of values for the margins of interest .after resampling , many mh - steps may be needed to remove any clustering .this motivates the use of the analysis of variance intra - class correlation coefficient , , to define a stopping rule for the mh - steps .currently this rule relies on two algorithmic choices : the choice of a univariate function of interest , ( see equation ) , and the choice of the threshold , the largest acceptable value for at the end of the rejuvenation process .the function should depend on model outputs of particular relevance .the predicted attack rate of an epidemic is a quantity that will be reported to public health policymakers throughout an epidemic and is dependent on all the transmission parameters .however , when the parameter vector is high - dimensional , as in this case , is it reasonable to condense this into a univariate summary to use as a basis for a stopping rule ?convergence of mcmc is typically diagnosed by looking at marginal distributions , so should we be doing something similar here ?does this necessitate the use of multivariate analogs for the intra - class correlation coefficient ( for example , see * ? ? ?* ; * ? ? ?it is felt here that the univariate is adequate as the parameters introduced at the ` shock ' time are largely nuisance parameters not strongly correlated with the transmission parameters that influence .once has been suitably defined , a suitable stopping threshold , has to be chosen . given the antecendent prescription for defining clusters used here , then truly is a measure of how well the particles have collectively ` forgotten ' their starting points . 
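a sketch of this stopping diagnostic : particles are grouped by the ancestor they were resampled from , a scalar summary ( for example a predicted attack rate ) is computed for each particle , and a one - way anova estimate of the intra - class correlation is returned ; `icc_threshold` in the usage comment is a placeholder , since the numerical threshold is a tuning choice .

```python
import numpy as np

def intra_class_correlation(f_values, ancestor_ids):
    """One-way ANOVA estimate of the intra-class correlation of a scalar
    summary f over clusters of particles sharing the same resampling ancestor.
    Values near zero suggest the particles have 'forgotten' their starting points."""
    f_values = np.asarray(f_values, dtype=float)
    ancestor_ids = np.asarray(ancestor_ids)
    clusters = [f_values[ancestor_ids == c] for c in np.unique(ancestor_ids)]
    k, n = len(clusters), len(f_values)
    if k < 2:                      # everything descends from one ancestor
        return 1.0
    if n == k:                     # every particle is its own cluster
        return 0.0
    grand = f_values.mean()
    ss_b = sum(len(c) * (c.mean() - grand) ** 2 for c in clusters)
    ss_w = sum(((c - c.mean()) ** 2).sum() for c in clusters)
    ms_b = ss_b / (k - 1)
    ms_w = ss_w / (n - k)
    n0 = (n - sum(len(c) ** 2 for c in clusters) / n) / (k - 1)   # average cluster size
    denom = ms_b + (n0 - 1) * ms_w
    return 0.0 if denom == 0 else max(0.0, (ms_b - ms_w) / denom)

# sketch of the stopping rule inside the rejuvenation step:
# while intra_class_correlation(attack_rate(theta), ancestors) > icc_threshold:
#     theta = mh_move(theta)
```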
in situationswhere the target posterior is well - matched by its gaussian approximation , we could use a higher threshold than when starting from a poor estimate for the target distribution .a value of is a sufficiently small threshold except for extreme cases of departure between two successive distributions .a possible alternative to is an extension of the sampling variance in where gives the number of common ancestors of particles and within the interval , and is the function of interest , with an estimate of . was initially proposed to identify suitable rejuvenation times , but it is not clear how this can be done prospectively .it could , however , provide a stopping rule for the mh - sampler .as the clusters are defined by starting position , running mh - steps until there are no longer any cluster effects will minimise this variation .this has been borne out by calculations based on the simulations of section [ sec : informative ] .therefore , one could run the mh - sampler until is suitably small .the alternative to running long mcmc chains within each particle when there are new parameters in the model such as those introduced by the ` shock ' at , is to expand the particle set by cloning each of the particles a number of times , each cloned particle having a fresh draw from the prior for each of the new parameter components . upon observing the next batch of data , the expanded particle setcould then be reduced down to a more manageable size . however, it is not clear _ a priori _ how many cloned copies of each particle to take and if the number of clones required exceeds the length of the parallel mcmc chains , then this does not represent a computational efficiency . furthermore , this would not solve the problem in scenario 2 where some parameters are not immediately identifiable .a hybrid mh - kernel is introduced in section 6 .first , long - range , low - acceptance proposals are made , followed by short - range high - acceptance componentwise proposals . 
in many instances ,this hybrid is replaced by a mixture distribution , a mixture of similar short and long - range moves .the adaptive proposal distributions of fearnhead and taylor ( 2013 ) might take this a step further , tuning the mixture probabilities so that the moves that have the largest expected jumps are proposed more often .this would be an attractive extension to this case , but through monitoring intra - class correlation it is clear that full block approximate gibbs proposals maximise this expected jump size amongst the kernels we consider .however , we would still suggest moving at least a proportion of the particles according to random walk proposals , to guard against being a degenerate approximation for .a further problem of having step - changes in parameter values is the potential for a lack of identifiability .scenario 2 provides two such examples .firstly , we consider parameter , the dispersion in the immediate aftermath of .the prior for is a distribution chosen independently of ( the dispersion preceding the shock ) .the sheer number of new parameters introduced in the aftermath of the shock ensures that the data on days 84 , 85 , 86 are ( over-)fitted with very little error .the combination of this over - fitting and the unbounded nature of the prior close to zero pushes the initial posterior distributions for very close to zero .as data accumulate , the posterior mass gradually moves towards the value used to generate the data .this movement of posterior mass is difficult for sequential algorithms to track , particularly so because of the non - gaussian nature of the prior , even on the log - scale .when performing real - time inference , therefore , the choice of a prior distribution more robust to this initial over - fitting may be preferred .alternatively , a flat prior would need to be bounded to ensure that it can be sampled from . from a practical point of view , in the example of this paper , the choice of the distribution is meaningful as it attaches significant probability to the data being poisson , rather than negative binomial , distributed .the second example is the case of the background consultation parameters .the background rate of non - pandemic consultation is modelled using a log - linear spline , taking separate value for adults and children , with knots at 84 , 128 , 176 , and 245 days .the value of the spline at these knots is given by with linear interpolation giving the value of the spline at the intervening points .this results in background consultation rates for days 84 , 85 , and 86 respectively of the form ( neglecting the age effects ) : so , over this period there is very little identifiability of parameters and .this parameterisation , as shown in figure [ fig : bij.evo ] , can induce convergence problems for mcmc but not for smc .jasra et al ( 2011 ) claim that , for their example , smc may well be superior to mcmc and this is one case where this is certainly true .the population mcmc carried out in the rejuvenation stage achieves good coverage of the sample space , without the individual chains having to do likewise . to improve the mcmc mixing, this lack of identifiability would require a reparameterisation , which becomes unnecessary when using smc . 
throughout ,we have compared candidate mh - kernels via the kl - like statistics measuring the divergence between smc posteriors from posteriors generated by the `` gold - standard '' mcmc .we have also constructed a reference distribution for the kl statistic to assess informally the significance of the observed divergences .this , however , rather presumes that the mcmc is the gold - standard .this superiority is , however , called into question by the better performance of the smc algorithm particularly in the presence of the unidentifiability around shock times as discussed above . from a computational efficiency point of view , the smc algorithm , because of its highly parallel nature , is , at its worst , no slower than the full mcmc analysis .however , this may be an unfair comparison as the mcmc algorithm is based on `` plain vanilla '' random - walk metropolis updates and could benefit from significant tuning itself .more sophisticated mcmc algorithms could be used , as exemplified in an epidemic context by jewell et al .the use of differential geometric mcmc ( girolami & calderhead , 2011 ) or advances in the parallelisation of mcmc ( banterle et al , 2015 ) , for example , could assist with improving mcmc run times . on the other hand , as mcmc steps are the main computational overhead of the smc algorithm , any development of the mcmc algorithm may lead to a similar improvement to the smc algorithm also .up to now the discussion has centred on algorithmic development and the availability of all data sources in a timely manner has been assumed .particularly crucial to the feasibility of real - time modelling is the role of the serology data .this is shown in figure [ fig : gradeds ] , where epidemic projections have been sequentially made using only noisy primary care consultation data in the absence of serological data .a clear and realistic picture of the epidemic is not available until the epidemic has almost entirely been observed .this poses some key questions : are serological samples going to be available in a timely manner , in sufficient quantity and quality , and in the right format ?in reality , serological data can be slow to come online .a test has to be developed to identify the antibodies of a ( probably ) novel virus in blood sera ; and there needs to be sufficient time to test samples and report results according to a protocol that ensures unbiased data collection and analysis .+ from a computational point of view , under the assumption that all data become immediately available , each particle , in addition to its likelihood , weight and parameter value , stores a matrix representing the current state of the seir transmission model and a sub - history of values , long enough to evaluate equation at all current and future times . 
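one possible construction of such a kl - like statistic , not necessarily the one used in the paper , is to fit gaussian approximations to the weighted smc sample and to the mcmc sample and evaluate the closed - form divergence between them :

```python
import numpy as np

def gaussian_kl(mcmc_sample, smc_sample, smc_weights):
    """KL divergence between Gaussian approximations of the 'gold standard'
    MCMC posterior and the weighted SMC posterior, KL( N_mcmc || N_smc )."""
    w = np.asarray(smc_weights, dtype=float)
    w = w / w.sum()
    mu_q = w @ smc_sample
    cov_q = np.cov(smc_sample.T, aweights=w)
    mu_p = mcmc_sample.mean(axis=0)
    cov_p = np.cov(mcmc_sample.T)
    d = len(mu_p)
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_q = np.linalg.slogdet(cov_q)
    _, logdet_p = np.linalg.slogdet(cov_p)
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff
                  - d + logdet_q - logdet_p)
```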
in the more realistic setting to accommodate the ` slow ' serological data , particles will have to store the full historical values of in addition to the current state of the epidemic .finally , should external information that can not be incorporated directly into the model become available at any time , it can easily be assimilated through appropriate adaptation of the prior distributions : particles would be reweighted according to the ratio of the new to old prior ; and depending on the ess , resampling and moving steps could follow .this provides a clear advantage of smc over mcmc where the entire dataset would have to be re - analysed .the analyses in this paper have neglected the first fifty days of the epidemic , concentrating on a period when there is substantial transmission in the population and appropriate data are becoming available . as a result, a deterministic system can adequately describe the future evolution of the pandemic .stochastic effects are significant and need to be incorporated into the model if monitoring is needed in the earlier stages . amongst others , provide a prescription for particle learning in the presence of ` shocks ' in such a setting .alternatively , to improve the robustness of the inferences , the piecewise linear quantities describing population reporting behaviour ( ) could be described by linked stochastic noise processes .this has the potential to reduce the sensitivity of estimates to the presence of changepoints that are not , for whatever reason , foreseeable . in answer tothe question initially posed , we have provided a recipe for online tracking of an emergent epidemic using imperfect data from multiple sources .we have discussed many of the challenges to efficient inference , with particular focus on scenarios where the available information is rapidly evolving and is subject to sudden shocks .we have focused on an epidemic scenario likely to arise in the uk .nevertheless , our approach addresses modelling concerns common globally ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) and can form a flexible basis for real - time modelling strategies elsewhere .real - time modelling is , however , more than just a computational problem. it does require the timely availability of relevant data , but also needs a sound understanding of any likely biases , and effective interaction with experts . in any country ,only interdisciplinary collaboration between statisticians , epidemiologists and database managers can turn cutting edge methodology into a critical support tool for public health policy .paul birrell was supported by the national institute for health research ( hta project:11/46/03 ) the uk medical research council ( unit programme numbers u105260566 and mc_up_1302/3 ) and public health england .
|
a prompt public health response to a new epidemic relies on the ability to monitor and predict its evolution in real time as data accumulate . the 2009 a / h1n1 outbreak in the uk revealed pandemic data as noisy , contaminated , potentially biased , and originating from multiple sources , seriously questioning the capacity for real - time monitoring . here we assess the feasibility of real - time inference based on such data by constructing an analytic tool combining an age - stratified seir transmission model with various observation models describing the data generation mechanisms . as batches of data become available , a sequential monte carlo algorithm is developed to synthesise multiple imperfect data streams and iterate epidemic inferences amidst rapidly evolving epidemic environments , heuristically minimising computation time to ensure timely delivery of real - time epidemic assessments . keywords : sequential monte - carlo , resample - move , real - time inference , pandemic influenza , seir transmission model _ biostatistics unit , institute of public health , university forvie site , robinson way , cambridge cb2 0sr , uk _ + _ health england , london , uk _ + _ for research in statistical methodology , university of warwick , coventry , uk _ + e - mail for correspondence : daniela.deangelis-bsu.cam.ac.uk
|
in 1695 , leibniz laid the foundations of fractional - order derivative which means the extension of integer - order derivative concept , but the first systematic studies seem to have been made at the beginning and middle of the nineteenth century by liouville , riemann , and holmgren .liouville has expanded functions in series of exponentials and defined the - order derivative of such a series by operating term - by - term as though were a positive integer .riemann proposed a different definition that involved a definite integral and was applicable to power series with non - integer exponents .it was gr and krug who first unified the results of liouville and riemann .gr , by returning to the original sources and adopting as starting point the definition of a derivative as the limit of a difference quotient and arriving at definite - integral formulas for the - order derivative .krug , working through cauchy s integral formula for ordinary derivatives , showed that riemann s definite integral had to be interpreted as having a finite lower limit while liouville s definition corresponded to a lower limit .it turns out that the riemann - liouville derivatives have certain disadvantages when trying to model real - world phenomena with fractional differential equations . in the second half of the twentieth century caputoproposed a modified concept of a fractional derivative , by inverting the composite order of integer - order derivative operator and integral operator arising in the riemann - liouville definition , the resulting definition is better suited to such tasks .it has been found that many systems in interdisciplinary fields can be described by the fractional differential equations , such as viscoelastic systems , dielectric polarization , electrode - electrolyte polarization , electromagnetic waves , and quantum evolution of complex systems . due to the growing interest of fractional - order derivatives to be applied in different areas, it seems analysis of this type of derivative is of great importance .the existence of periodic solutions is often a desired property in dynamical systems , constituting one of the most important research directions in the theory of dynamical systems , with applications ranging from celestial mechanics to biology and finance . in chaos theory ,the idea of chaos control is based on the fact that chaotic attractors have a skeleton made of an infinite number of unstable periodic orbits which are subjected for stabilization .thus for chaos control in fractional order systems , it is important to show that periodic solutions exist in such systems .tavazoei have proved that the fractional - order derivatives of a periodic function with a specific period can not be a periodic function with the same period , as a consequence , the periodic solution can not be detected in fractional - order systems , under any circumstances .this property limits the applicability areas of fractional - order systems and makes it unfavorable , for a wide range of periodic real phenomena . 
therefore enlarging the applicability of fractional - order systems to such real areais an important research topic .the most important contribution of this paper is to this end .lets , the gr - letnikov order fractional derivative of function with respect to and the terminal value is given by {c}% h\rightarrow0\\ nh = x - a \end{array } } % h^{-\alpha}\sum\limits_{k=0}^{n}(-1)^{k}\left ( \begin{array } [ c]{c}% \alpha\\ k \end{array } \right ) f(x - kh),\ ] ] where {c}% \alpha\\ k \end{array } \right)=\dfrac{\alpha(\alpha-1)(\alpha-2) ... (\alpha - k+1)}{k!}=\dfrac{\gamma(\alpha+1)}{k!\gamma(\alpha - k+1)},\ ] ] where is the gamma function defined by the euler limit expression where . or the so - called euler integral definition : the following theorem gives negative answer for question of preservation of the periodicity suppose that is -times continuously differentiable and is bounded. if is a non - constant periodic function with period ,then the functions , where and is the first integer greater than , can not be periodic functions with period .the derivative of the sine function is given by where is the two - parameter mittag - laffler function defined by figure [ fg1 ] illustrate numerical approximation of which is not periodic but it converges to the periodic function .as a consequence of the above theorem , the periodic solution can not be detected in fractional - order systems , under any circumstances . a differential equation of fractional - order in the form where , can not have any non - constant smooth periodic solution .this property makes fractional systems not applicable for a wide range of real periodic phenomena .in this section we introduce a new definition of fractional derivative based on the gr - litnikov definition and we call it the gr - letnikov fractional derivative with fixed memory length .[ df1 ] let , , an integer such that and an integrable function in the interval ] we have and our aim is to evaluate the limit where noting then we have it follows that where we denote applying the property ( [ e12 ] ) of binomial coefficients repeated times , starting from * ( [ e13 ] ) * we obtain : let us evaluate the limit of the term in the sum ( [ e14 ] ) because we have and from ( [ e0 ] ) we have to evaluate the limit of the sum ( [ e16 ] ) let us write it in the form taking the notation using ( [ e0 ] ) we obtain furthermore , if , then taking into account ( [ e18 ] ) and ( [ e19 ] ) and applying theorem ( 2.1 ) in we obtain finally using ( [ e17 ] ) and ( [ e20 ] ) we conclude the limit ( [ e6 ] ) . [ p2 ] ( fractional derivative of a power function ) + let , , an integer such that and , then we have if , then , substituting in ( [ e6 ] ) yields the relation ( [ e9 ] ) .if , then from ( [ e6 ] ) and using successive integration by part we obtain the relation ( [ e9 ] ) .[ c1 ] ( fractional derivative of a constant function ) + if is a constant function ( i.e. for all ] ) .furthermore we have suppose that is a constant function ( for all ] .then for all ] such that exists .if is a periodic function with period ( i.e. 
for all ] + with and .+ the vector function is an exact -periodic solution of ( [ e24 ] ) , namely we have then from ( [ e26 ] ) and ( [ e27 ] ) we get then is an exact -periodic solution of ( [ e24 ] ) .[ ptb ] fdsin.pdf [ ptb ] mfdsin1.pdf [ ptb ] lfdsin1.pdfalthough the idea of fixed memory length is inspired from the short - memory principle introduced by i.podlubny ( 1999 ) for numerical needs , there is a significantly difference between this two ideas near the starting point .namely for the calculation of the fractional derivative of a certain function , on the interval ] , as previously mentioned , but using the second idea , if $ ] then the lower terminal is fixed at , and the memory length is a function of ( ) and this do not allow the preservation of periodicity .the modified definition of gr - letnikov fractional - order derivative reported in this paper posses two useful properties , the first is the preservation of periodicity as it is demonstrated and the second one is the short memory , which reduces considerably the cost of numerical computations .we have proven that contrary to fractional autonomous systems in term of classical fractional derivative the fractional autonomous systems in term of the modified fractional derivative can generate exact periodic solutions .now , this related question may be raised . if a differential equation is described based on the fractional operator proposed in this paper .what hypothesis on the equation are required for guaranteeing the existence and uniqueness of the solution ? and what type of initial conditions are required ? finding the answer of this question and investigate other property of the proposed fractional derivative , can be an interesting topic for future research works .bagley , ra .calico , fractional order state equations for the control of viscoelastically damped structures .j guid control dyn , 14 ( 1991 ) 30411 .abdelwahab , b. onaral .linear approximation of transfer function with a pole of fractional order .ieee trans auto contr , 29 ( 1984 ) 4414. m. ichise , y. nagayanagi , t. kojima , an analog simulation of noninteger order transfer functions for analysis of electrode process .j electroanal chem , 33 ( 1971 ) 25365 .o. heaviside , electromagnetic theory , chelsea , new york , 1971 .d. kusnezov , a. bulgac , gd .dang , quantum levy processes and fractional kinetics , phys rev lett , 82 ( 1999 ) 11369 .e. kaslik , s. sivasundaram , non - existence of periodic solutions in fractional - order dynamical systems and a remarkable difference between integer and fractional - order derivatives of periodic functions , nonlinear analysis : real world applications , 13 ( 2012 ) 14891497 .
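as a closing numerical illustration of the fixed - memory definition introduced above , the sketch below truncates the grünwald - letnikov sum to a window of length l that slides with t and evaluates the resulting derivative of sin at times one period apart ; the step size , memory length and evaluation times are arbitrary choices of ours .

```python
import math
import numpy as np

def gl_weights(alpha, n):
    """Coefficients (-1)**k * C(alpha, k) of the Grunwald-Letnikov sum,
    computed via the recursion w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fixed_memory(f, t, alpha, L, h=1e-3):
    """Fractional derivative of order alpha at time t computed from the
    fixed memory window [t - L, t]: the lower terminal slides with t."""
    n = int(round(L / h))
    w = gl_weights(alpha, n)
    k = np.arange(n + 1)
    return h ** (-alpha) * np.sum(w * f(t - k * h))

# the fixed-memory derivative of a periodic function is itself periodic:
alpha, L, T = 0.5, 5.0, 2.0 * math.pi
for t in (10.0, 12.5):
    d1 = gl_fixed_memory(np.sin, t, alpha, L)
    d2 = gl_fixed_memory(np.sin, t + T, alpha, L)
    print(t, d1, d2)   # the two values coincide: the window sees the same data
```

with the classical definition the sum would instead run back to a fixed lower terminal , so the number of terms grows with t and the periodicity of the output is lost .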
|
contrary to the integer - order derivative , the fractional - order derivative of a non - constant periodic function is not a periodic function with the same period . as a consequence of this property , a time - invariant fractional - order system does not have any non - constant periodic solution unless the lower terminal of the derivative is $-\infty$ , which is not practical . this property limits the applicability of fractional derivatives and makes them unfavorable for a wide range of periodic real phenomena . enlarging the applicability of fractional - order systems to such areas is therefore an important research topic . in this paper we propose a solution to this problem by imposing a simple modification on the grünwald - letnikov definition of the fractional derivative : the memory length is fixed while the lower terminal of the derivative varies with time . it is shown that the proposed definition preserves periodicity . keywords : fractional derivative , memory length , periodic function
|
bak , tang , and wissenfeld ( btw ) introduced the concept of self - organized criticality ( soc ) and the so - called btw sandpile model as a description of power spacial and temporal correlations observed in a wide range of natural phenomenon . during the past two decades , more sandpile models were introduced by different variations of the main paradigm of soc , the btw model , in order to gain more realistic models .these models differ in some properties such as : discrete or continuous height variable , abelian or non abelian toppling rule , stochastic or deterministic toppling rule , directed or non directed ( or even non directed on average ) toppling current , sticky or non - sticky grains , and etc .we now have a large number of different sandpile - like models each model having it s own set of critical exponents .the number of these models are exceeding but after about two decades , a little is known about their universal classification . although some numerical studies are done to address the universality of the critical behavior of different sandpile models but these studies are in contradiction with each other .for example manna s classification is in contrast with ben - hur and biham s one , and the latter is in contradiction with chessa _ classification .also more recently fixed energy sandpiles ( fes ) are introduced in order to study the critical behavior of soc and classifying sandpile models which has not gained any serious success yet and even it does not seem to be a successful career ( for example see ) .the first classification was done by manna .he put his model and the btw model in the same universality class ascribing the observed difference in critical exponents to finite size effects .this result was verified by grassberger and manna .another effort was done by daz - guilera and corral based on renormalization group which resulted in classifying btw and zhang model in the same class ( which confirmed zhang s conjecture ) .ben - hur and biham studied the most complete set of critical exponents , based on the evolution of conditional expectation values ( see christensen _ et al . _ ) .these exponents are related to : ( s ) the size of an avalanche , ( a ) the area of an avalanche , ( t ) the time duration of an avalanche , ( d ) the maximal distance between the origin and the sites that an avalanche cluster touches , ( r ) the radius of gyration of an avalanche cluster , and ( p ) the perimeter of an avalanche cluster .they showed that sandpile models are classified in three groups of _ non directed _ , _ non directed on average _ , and _ directed _ models . 
as a result of their study , btw andzhang models belong to the same universality class ( non directed ) which manna model ( as a non directed on average model ) does not and directed models belong to another class .made some systematic corrections on ben - hur and biham s method and claimed that both stochastic and deterministic sandpile models belong to the same universality class .on the other hand , bak and chen had investigated the chaotic behavior of a block - spring model ( which was introduced for simulating earthquake dynamics ) .they have shown that although the largest lyapanuv exponent of this model is zero but nearby configurations separate in a power - law manner and they called it `` weak chaos '' .based on this study , bak , tang , and chen ( btc ) conjectured that soc takes the system to the border of chaos .they also argued that this behavior is not because of exponential sensitivity to initial conditions but the critical fluctuations of the system .they conjectured that in this manner , `` weak chaos '' is another aspect of the criticality of the attractor so the `` weak chaos '' exponent _ is _ a characteristic exponent of the system .although vieira and lichtenberg found a counter example for btc s conjecture , we have numerically checked it for different sandpile models .we have shown that btc s conjecture is truly verified in btw ( also in cbtw ) and zhang models but it s not true in manna stochastic model .dhar - ramaswamy directed model behaves more complicated with at least two different regimes . in this paperwe first define some sandpile models and discuss the time evolution of nearby points in their configuration space .we will finally discuss the same behavior in the off critical regime .we have studied sandpile models on two dimensional square , triangular , and honeycomb lattices .a height variable is assigned to each site of the lattice which can be discrete or continuous depending on the model .this height variable could be interpreted as `` energy '' . at each time step ,sand is added to a randomly chosen site .whenever the height of a site , exceeds the critical height , the site would relax through the related toppling rule .relaxation of a site would cause other sites to become unstable so they would topple and a chain reaction called an avalanche continues until all sites become stable .the rate of energy injection is so slow that an unstable configuration will relax before next grain is added . for btw model on three different lattices on log scale for and 10000 samples .the down - right , up - left , and the main graphs correspond to square , honeycomb , and triangular lattices correspondingly . lattice effects which slows down in the first few time steps is mostly seen on square lattice . ]+ * btw model : * the height variable of this model is discrete and its critical height is 4 , 6 , and 3 on square , triangular , and honeycomb lattices correspondingly . when the site relaxes : which means the nearest neighbors. the boundary condition of this model is chosen to be open on all boundaries .+ * zhang model : * the height variable in this model is continuous and its critical height can always be taken to be 1 . at each timestep a random amount of sand , which we take it to be in the set , is added to a randomly chosen site . if this site topples by : is the number of nearest neighbors .boundary conditions in this model is also open on all boundaries . 
+* continuous btw model ( cbtw ) : * this model is a continuous version of btw model .again as its height variable is continuous , the critical height can be taken to be 1 . at each time stepan amount of sand , with , is added to a randomly chosen site , and the toppling rule is given via is again the number of nearest neighbors .+ * manna model : * in this stochastic model the critical height is taken to be 2 on square and triangular lattices .on the square lattice when an unstable site is going to topple , it either gives its left and right or its up and down neighbors one unit of sand each with equal probability . on the triangular latticewhen a site topples , it gives one grain of sand to each facing neighbors with a probability of .the boundary conditions on this model is chosen to be open on all boundaries . +* directed model : * dhar - ramaswamy model is referred as the directed model .this model is defined on a square lattice in the direction .the critical height is 2 and when a site topples it gives one sand to each two _ down _ neighbors .the vertical boundary condition is cylindrical , sands are added to the system from the most top row and leave it from the lowest row . for zhang model on three different lattices on log scale for and 10000 samples .the main , down - right , and the up - left graphs correspond to square , triangular , and honeycomb lattices correspondingly .lattice effects which slows down in the first few time steps is mostly seen on honeycomb lattice . ]to study chaos in different sandpile models we have monitored the time evolution of the distance of two nearby configurations .the hamming distance of two configurations at time which is defined by ^{2}\big\ } ^{\frac{1}{2}}\ ] ] is used as the distance of two sandpiles .we have first provided a configuration which its height variables are denoted by , then we manipulate the heights of about of the whole sites chosen randomly , to prepare a nearby configuration with heights . in each timestep an amount of sand is added to a _similar _ randomly chosen site of and and is calculated after relaxation of both sandpiles . for manna model on square and triangular lattices on log scale for and 5000 samples .the main and up - left graphs correspond to square and triangular lattices correspondingly .the down - right graph shows a logarithmic fit for this model on square lattice . ]+ * btw model : * figure [ fig : btw ] shows the evolution of for this model on square , triangular , and honeycomb lattices . comparing on different lattices shows that ( ignoring a few first time steps which is attributed to lattice effects ) with , , and .simulations of this model is done for and for the exponents are independent of ( for smaller sandpiles the exponents increase slightly by system size ) .as it s seen in fig [ fig : btw ] and table [ tbl ] , the `` weak chaos '' exponent does not depend on lattice geometry . this is a good evidence for for considering it as universal property of this model .the behavior of cbtw model is exactly the same as this model as expected .+ * zhang model : * figure [ fig : zhang ] shows the evolution of for this model again on square , triangular , and honeycomb lattices .comparing these three shows that ( ignoring the first few points ) zhang model obeys btc s conjecture with exponents , , and .again the exponents are not lattice dependent . 
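the divergence experiment described in this section can be sketched for the btw model on a square lattice as follows ; the lattice size , the fraction of perturbed sites and the number of driving steps are illustrative , and the distance is taken as the root of the summed squared height differences .

```python
import numpy as np

rng = np.random.default_rng(0)

def relax_btw(h, zc=4):
    """Topple every unstable site of a square-lattice BTW sandpile until the
    configuration is stable; grains go to the four neighbours and are lost
    at the open boundaries."""
    while True:
        unstable = np.argwhere(h >= zc)
        if len(unstable) == 0:
            return h
        for i, j in unstable:
            h[i, j] -= zc
            if i > 0: h[i - 1, j] += 1
            if i < h.shape[0] - 1: h[i + 1, j] += 1
            if j > 0: h[i, j - 1] += 1
            if j < h.shape[1] - 1: h[i, j + 1] += 1

L = 50
a = rng.integers(0, 4, size=(L, L))          # a stable reference configuration
b = a.copy()
for _ in range(int(0.01 * L * L)):           # perturb ~1% of the sites
    i, j = rng.integers(0, L, size=2)
    b[i, j] = rng.integers(0, 4)
b = relax_btw(b)

distances = []
for t in range(1000):
    i, j = rng.integers(0, L, size=2)        # same driving site for both piles
    a[i, j] += 1
    b[i, j] += 1
    a, b = relax_btw(a), relax_btw(b)
    distances.append(np.sqrt(np.sum((a - b) ** 2)))
# a log-log plot of `distances` against time would show the power-law growth
```

in practice one would first drive the pile for many steps to reach the stationary self - organized state before recording distances , and average over many sample pairs as in the figures above .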
and 2000 samples .this model shows a more complicated behavior .there are two power - law regimes with and correspondingly ] .weak chaos exponent for btw and zhang models on square , triangular , and honeycomb lattices.since lattice effects can not be separated from power - law regime in a standard way , the errors are reported on the base of simulating each model five times each time containing 5000 samples . [ cols="^,^,^,^",options="header " , ] + * manna model : * as it s seen in figure [ fig : manna ] the evolution of in this model does not obey eq . [eq : weakchaos ] on square lattice rather it s something like this _ very weak chaos_ behavior , if one calls that so , is also confirmed on triangular ( fig [ fig : manna ] ) and rhombic lattices which shows that manna s attractor is not alike btw one , and they may not be classified in the same universality class . + * dhar - ramaswamy model : * figure shows the evolution of for this directed model .the few first points are attributed to lattice effects ( as it s seen in other models on different lattices too ) and there is two different regimes of power behavior after which saturates because of finite size effects ; the exponents are and correspondingly .the intermediate regime is unknown to us .we have also studied a mixed manna - zhang model . in this model when a site exceeds the critical height ,either the left and right or up and down neighbors are each given half of its energy by equal probability .deviation from `` weak chaos '' behavior is also seen in this model .it seems that the stochastic property is responsible for transition from `` weak chaos '' to `` very weak chaos '' behavior .since `` weak chaos '' is not observed in all sandpile models , particularly in manna model , self - organized criticality is not necessarily accompanied by `` weak chaos '' .therefore btc s conjecture seems not to be correct in general .since the `` weak chaos '' exponent of btw and zhang deterministic models do not depend on the lattice geometry , it may be considered as a universal property of these models and it may be viewed as a characteristic exponent ( table [ tbl ] ) .if so , since btw sweak chaos exponent is and zhang s weak chaos exponent is these two models ( as an abelian model and a non abelian model ) ca nt be classified in the same universality class . in this mannerour simulation based results do not agree with zhang s conjecture about the unification of btw model and his model in the thermodynamic limit , ben - hur and biham s classification , and daz - guilera and corral s classification .it should be noted that these classifications are all based on _static _ considerations , where we have considered a _dynamic _ property of the attractor which might cause this disagreement . on the other hand , although the root of this behavior of manna model is not known , this classification based on the `` weak chaos '' exponent may put manna model in a different class from btw and zhang models which is consistent from this aspect with ben - hur and biham s classification but inconsistent with manna s and chessa _ et al . _ results . for dissipative btw model no square lattice with for 5000 samples .at large times saturates because of finite size effects .the blue graph represents btw without bulk dissipation .the green and red graphs correspond to and dissipative sites with .the up - left graph shows how decreases rapidly by increase of dissipation amount ( for fixed ) .the asymptotic value of is . 
]all we have reported above is related to sandpile models at the critical point .what happens to this behavior getting off the critical point ?do they still show weak chaos behavior ? ( it should be noted that since we have not studied fixed energy sandpile models , off critical states here means those states which their mean energy satisfies , where is the critical mean energy . ) adding bulk dissipation to sandpile models can be done using different methods . for discrete models the toppling rule is changed in some randomly chosen sites called _ dissipative sites_. if the site is a dissipative one , it topples when where is a positive integer . in this method dissipationis controlled by two parameters , the number of dissipative sites and . in continuous modelsthere is no need of selected dissipative sites and we can impose the so called new toppling rule to all sites where this time is a real number , therefore dissipation is controlled only by . in discrete models we can also use their continuous version and add arbitrary dissipation to all sites not to impose another stochastic parameter in the system .we have used the first method for the discrete btw model and the second method for ( continuous ) zhang model .although bulk dissipation does not differ in principle from boundary dissipation , its importance is because it imposes a characteristic length in the system which destroys criticality ( for example see ) . since bulk dissipation delays some toppling events we expect the rate of divergence of nearby configurations to decrease when dissipation increases .so we do not expect off critical btw and zhang models to show `` weak chaos '' behavior . for dissipative zhang model on square lattice for and 10000 samples .the critical height is one and each site dissipates unit of sand . 
in the main graph and . the up - left and down - right graphs show the rapid decrease of both and with increasing dissipation . the asymptotic values are and respectively ] fig [ fig : disbtw ] shows that in the btw model , as dissipation increases , the exponent of the weak chaos region decreases rapidly , although the interval of lattice effects lasts longer . this is exactly what we expect , because the characteristic length rapidly becomes comparable with the system size and therefore the `` weak chaos '' behavior decays . the up - left graph of figure [ fig : disbtw ] shows the rapid decrease of the exponent with dissipation magnitude in the btw model . fig [ fig : diszhang ] shows that the zhang model does not behave like btw . the main graph shows the evolution of for sand unit dissipation at each site . ascribing the first few steps to lattice effects , there are two regimes of power - law behavior in this model , and both exponents and decrease rapidly as the magnitude of dissipation ( the characteristic length of the system ) increases . these are shown in the up - left and down - right graphs of fig [ fig : diszhang ] . why two different regimes appear is still unknown to us . our simulation results show that both the btw and zhang models obey btc s conjecture , apart from some lattice effects , whereas the manna model does not because of its stochastic property . the dhar - ramaswamy directed model shows a more complicated behavior , with two independent power - law regimes and an unknown regime in between . so our results contain both examples and counter - examples of btc s conjecture . although the weak chaos exponent does not seem to be a general characteristic of sandpile models , we have found good evidence for considering it as a test for different universal classifications of these models . by means of this test , the ( abelian ) btw , ( non - abelian ) zhang , stochastic manna , and directed dhar - ramaswamy models all seem to belong to different universality classes ( and none of the proposed classifications is in complete accordance with our results ) . we have also shown that as we move to off critical states , the weak chaos behavior seems to disappear , while in the zhang model an unknown split into two power - law regimes is seen . we would like to thank s. rouhani for his helpful comments and careful reading of the manuscript .
|
we have investigated the "weak chaos" exponent to see whether it can be considered a classification parameter of different sandpile models. simulation results show that the "weak chaos" exponent may be one of the characteristic exponents of the attractor of _deterministic_ models. we have shown that the (abelian) btw sandpile model and the (non-abelian) zhang model possess different "weak chaos" exponents, so they may belong to different universality classes. we have also shown that _stochasticity_ destroys the effectiveness of the "weak chaos" exponent, in that it slows down the divergence of nearby configurations. finally, we show that moving off the critical point destroys this behavior of the deterministic models.
|
tagging is a popular methodology for many user-driven document organisation applications such as social bookmarking and publication sharing websites. on websites such as citeulike, delicious and bibsonomy, tags provide an unstructured organization method where each user has the liberty of choosing or making up any string of characters to be used as a tag for a document. the automatic generation of tag recommendations aids the social tagging process by reducing the effort required by the user and by making them aware of which existing tags are relevant to the document they are tagging. tag recommenders encourage the use of the system and lead to a more homogeneous document organisation overall. the task of tag recommendation is to automatically suggest a set of tags to a user for a document that he is in the process of tagging. the data contained in social tagging systems is often described as a folksonomy. a folksonomy is a tuple $(U, D, T, Y)$ where $U$ is the set of users, $D$ is the set of documents, $T$ is the set of tags and $Y$ is the set of tag assignments. a tag assignment is a triplet $(u, d, t)$ and indicates that user $u$ has assigned tag $t$ to document $d$. thus a folksonomy can be modelled as a hyper-graph with an adjacency tensor given by a 3-dimensional binary matrix of size $|U| \times |D| \times |T|$.
consider a tag that is connected to the query only through long paths in this graph. if any deduction is made about the relevance of such a tag to the query, it should be that the graph indicates a negative relationship, and the weight of the tag with regard to the query should be reduced rather than increased. the counter-argument to utilising even longer paths, which lead to folkrank ranking the one tag above the other, is the highly personal tagging behaviour of users in (broad) folksonomies. folkrank uses such a path to deduce that one tag is more relevant than another to the query consisting of a user and a document. however, this deduction is based on the fact that the tag was used by a different user for a different document, and the only link to the query is that this other user has tagged a document which has also been tagged by the query user. the shared document is taken as an indication that the two users have similar interests and that the query user should give some authority to all of the other opinions and tag assignments made by the other user. prior work argues that tag assignments cannot be transferred as easily across users and provides evidence for the highly personalised tagging behaviour of users in broad folksonomies: users who have tagged the same documents rarely assigned the same tags to these documents. even though the users' areas of interest are similar due to the shared documents, only a small overlap can be observed in their tag vocabularies, which indicates that the users' views of the documents are highly personal.
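as a concrete illustration of the folksonomy model above, the sketch below builds the binary adjacency tensor and the grouping of tag assignments into posts from a handful of made-up tag assignments; the user, document and tag identifiers are example data only.

```python
import numpy as np

# toy tag assignments (user, document, tag) -- hypothetical example data
Y = [("u1", "d1", "web"), ("u1", "d1", "python"),
     ("u2", "d1", "web"), ("u2", "d2", "search")]

users = sorted({u for u, _, _ in Y})
docs = sorted({d for _, d, _ in Y})
tags = sorted({t for _, _, t in Y})
u_idx = {u: i for i, u in enumerate(users)}
d_idx = {d: i for i, d in enumerate(docs)}
t_idx = {t: i for i, t in enumerate(tags)}

# binary adjacency tensor of size |U| x |D| x |T|
A = np.zeros((len(users), len(docs), len(tags)), dtype=np.int8)
for u, d, t in Y:
    A[u_idx[u], d_idx[d], t_idx[t]] = 1

# a post groups all tags that one user assigned to one document
posts = {}
for u, d, t in Y:
    posts.setdefault((u, d), set()).add(t)
```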
in the following paragraphs we analyse the iterative weight spreading method of folkrank in detail and address issues which we believe hinder or distort its ability to effectively utilise the information contained in the deeper graph. an important preliminary observation about folkrank is that the impact of each preference node on the final weights in the graph is independent of the influence of the other preference nodes. having multiple nodes with preference weight in the preference vector of folkrank is virtually equivalent to performing the weight spreading computation for each of the preference nodes separately and then taking a linear combination of the resulting weight vectors to give the final ranking. as long as the end condition of the weight-spreading iterations is set sufficiently small, the only nodes which could end up with a different weight are the ones at the very bottom of the ranking. this observation can be used to speed up the prediction time of folkrank in a live tag recommendation scenario: for each user in the system, the tag scores can be pre-calculated offline and stored, and the same can be done for each document. during the online tag recommendation phase, the pre-calculated tag scores for the query user and query document are then retrieved and a weighted average of the scores is taken per tag in order to quickly create tag recommendations. for the following discussion we assume that this method of performing a separate weight spreading computation for each preference node is used, so that each individual weight spreading run has only one node in the preference vector.
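because each preference node can be handled independently, the offline/online split described above boils down to a simple weighted combination of pre-computed score vectors. the sketch below illustrates this; the score values, tag names and the balance parameter `b` are made-up examples, not output of the actual system.

```python
# hypothetical pre-computed personalised tag scores, e.g. produced by running
# the weight spreading once per user node and once per document node offline
user_tag_scores = {"u1": {"web": 0.40, "python": 0.30, "search": 0.05}}
doc_tag_scores = {"d1": {"web": 0.30, "search": 0.20}}

def recommend(user, doc, n=5, b=0.5):
    """combine the two pre-computed score vectors with a weighted average;
    b balances the query user against the query document."""
    u_scores = user_tag_scores.get(user, {})
    d_scores = doc_tag_scores.get(doc, {})
    combined = {}
    for tag in set(u_scores) | set(d_scores):
        combined[tag] = b * u_scores.get(tag, 0.0) + (1.0 - b) * d_scores.get(tag, 0.0)
    return sorted(combined, key=combined.get, reverse=True)[:n]

print(recommend("u1", "d1"))   # ['web', 'python', 'search']
```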
swash-back problem: a problem of folkrank, as discussed in previous work, is "swash-back" of weights. since the graph is undirected, weight is spread from a node $a$ to a connected node $b$ in one iteration and then spread back from $b$ to $a$ in the next iteration. this means that the weight of $a$, the node from which the weight spreading originates, is adjusted in the second iteration based on the number and weight of the edges of $b$, which is neither intuitive nor desirable. we illustrate the consequences of this in figure [fig:folkrank_swashback]. user $u_1$ has tagged documents $d_1$ and $d_2$ with tags $t_1$ and $t_2$ respectively. tag $t_1$ is also used by a very active user $u_2$ who is connected to a large number of other nodes, while $t_2$ is also used by a user $u_3$ who has only one tag assignment. if we want to recommend tags for $u_1$ and activate that node in the graph, $t_1$ and $t_2$ would get the same weight in the first iteration. in the second iteration $t_1$ spreads weight to $u_2$ and $t_2$ spreads weight to $u_3$ (as well as to all of their other connected nodes), where the weight received by $u_2$ and $u_3$ is equal. the third iteration is when the swash-back with regard to $t_1$ and $t_2$ occurs, denoted by the empty arrows in the figure. tag $t_2$ gets half of the weight of $u_3$ (times the dampening factor) back, since $u_3$ is connected to only two nodes. however, $u_2$ is connected to many other nodes, so the weight spread back from $u_2$ to $t_1$ would be much less than the weight spread back from $u_3$ to $t_2$. in the final tag predictions for user $u_1$, tag $t_2$ would have a higher score than $t_1$ due to the behaviour of users $u_2$ and $u_3$. this is contrary to our intuition that the weights should be equal up to this point, since the query user has used both tags with equal frequency in the past.
in the final ranking, when the node weights of the query user are combined with the node weights produced by the query document, the weights of $t_1$ and $t_2$ are expected to change to reflect the influence of the deeper graph. however, in this weight spreading operation for the query user only, the only source of preference weight is $u_1$. the change in weights due to swash-back might outweigh the later influence of other preference nodes and prevent folkrank from utilising the information contained in the deeper graph.
triangle-spreading problem: another issue, which we refer to as triangle-spreading of weights, is illustrated in figure [fig:folkrank_triangle_spreading]. user $u_1$ has tagged documents $d_1$ and $d_2$ with tags $t_1$ and $t_2$ respectively. document $d_1$ is a popular document tagged by many other users, whereas $d_2$ has only been tagged by $u_1$. if we activate $u_1$ in order to recommend tags for this user, tags $t_1$ and $t_2$ would get the same weight in the first iteration. in the second iteration, $d_2$ would spread half of its weight to $t_2$ (times the dampening factor); however, $d_1$ would spread less of its weight to $t_1$, since $d_1$ is also connected to several other nodes. this would mean that in the tag weights for query user $u_1$, tag $t_2$ would get a higher weight than $t_1$ even though the user has used both tags with equal frequency. a similar problem would arise with regard to the weights of documents $d_1$ and $d_2$ if one of the tags were very popular. because the graph is undirected and the folksonomy consists of triplet relationships (user, document, tag), if two nodes $a$ and $b$ are connected, there is always at least one indirect path from $a$ to $b$ via a third node $c$. the weight spread from $a$ to $b$ over the indirect path via $c$ is influenced by the number and weight of the edges of $c$. this is undesirable, since the weight of the direct edge from $a$ to $b$ already completely describes the relationship between $a$ and $b$. moreover, the influence of triangle-spreading is likely to distort the effect of the deeper graph on the final tag weights, since the indirect path along which the undesired spread happens has a length of only two hops. in order to address the swash-back and triangle-spreading problems, we present our adapted weight-spreading approach for undirected folksonomy graphs, which we call pathrank.
rather than doing iterative weight spreading, pathrank assigns scores to each node in the graph based on the shortest path(s) from the preference nodes. the weight spreading computation works in a similar manner to a breadth-first search, where edges which were already explored in previous iterations are not re-visited. pathrank is akin to spreading activation, which is usually applied to directed graphs and where nodes also spread their weight only once. however, pathrank is used on the undirected folksonomy graph and gives the edges a personalised direction starting from the query nodes, where the edge direction can be different for each query; pathrank can thus be described as activation-directed weight spreading. in contrast to the original iterative weight spreading approach of folkrank, we set the initial weight of all nodes in the graph to zero instead of initialising nodes with random starting weights. pathrank thus only uses personalised weights originating from the preference nodes, and there are no general importance weights in the graph (which makes the dampening factor parameter obsolete). we compute node weights separately for each preference node and then take a weighted average of the resulting weights for each node in the graph to produce the final node weights, taking all preference nodes into account. because of the separate calculation per preference node and the setting of all starting weights in the graph to zero, each individual weight-spreading calculation has only one node, the preference node, from which all of the weight in the graph originates. the swash-back and triangle-spreading of weights can then be prevented by adapting the iterative weight spreading algorithm with a simple rule: _if the weight of a node has been updated in a previous iteration (i.e. is not equal to zero), then do not re-calculate the node's weight in subsequent iterations_. thus, the pathrank weight spreading algorithm is in effect not an iterative update calculation like in pagerank/folkrank, but rather assigns a weight to each node based on the edges of the shortest path(s) from each of the preference nodes to that node. a maximum path length parameter specifies how far from the preference nodes pathrank explores. the end condition of pathrank weight spreading is that either the maximum path length has been reached or all nodes in the graph have been explored and assigned a weight greater than zero. the benefits of pathrank are that the problems of swash-back and triangle-spreading of weights are removed, which allows the algorithm to fully utilise the information contained in the deeper graph. since there are no general importance weights, these also cannot interfere with or distort the influence of the weight spread through the deeper graph. intuitively we would assume that weights spread from the preference nodes through the deeper graph result in a better re-ranking of the tag nodes than using general importance weights, since the general importance of nodes is not personalised and is constant across all query posts. setting different values for the maximum path length also allows for a direct evaluation of the value of including the deeper graph in the recommendation process; in our evaluation in section [sec:tuning_pl] we address the question of how much value there is in exploring each step deeper into the graph when calculating tag predictions.
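the following is a minimal sketch of this activation-directed spreading, assuming a plain adjacency-map representation of the undirected graph and assuming that a node divides its outgoing weight over its not-yet-scored neighbours in proportion to edge weight; the exact normalisation and data structures of the real implementation may differ.

```python
from collections import defaultdict

def spread_from(graph, source, max_path_length):
    """score every node reachable from a single preference node based on the
    shortest path(s) to it; once a node has a score it is never re-scored
    and it spreads its weight only once."""
    scores = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(max_path_length):
        nxt = defaultdict(float)
        for node, w in frontier.items():
            # only spread along edges leading to nodes that are still unscored
            nbrs = {n: ew for n, ew in graph.get(node, {}).items() if n not in scores}
            norm = sum(nbrs.values())
            for n, ew in nbrs.items():
                nxt[n] += w * ew / norm
        if not nxt:
            break           # the whole reachable graph has been explored
        scores.update(nxt)  # these weights are now fixed
        frontier = nxt
    return scores

def pathrank(graph, preference, max_path_length=3):
    """preference: {node: preference_weight}; one spreading run per preference
    node, combined as a weighted sum over the individual score vectors."""
    combined = defaultdict(float)
    for node, pw in preference.items():
        for n, s in spread_from(graph, node, max_path_length).items():
            combined[n] += pw * s
    return combined
```

after the combined scores are computed, the tag nodes (or, in the post graph model, the post nodes they are attached to) are extracted from the result and ranked to produce the recommendation list.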
regarding runtime, as long as there is only one preference node, the complexity of weight spreading is greatly reduced in pathrank compared to folkrank, since once a node's score is set it does not need to be re-calculated in every subsequent iteration. for the same graph, let $i$ denote the total number of iterations and $e$ denote the number of edges in the graph; folkrank's iterative weight spreading has a complexity of $O(i \cdot e)$, since in each iteration weight is spread in both directions along each edge, partly because the nodes are initialised with random starting weights. pathrank has a worst-case complexity of $O(e)$ if the weight spreading is performed until all nodes in the graph are explored: weight is only spread once along each edge, in one direction. however, in the case that there are several preference nodes, pathrank needs a separate weight-spreading calculation for each of them, so the complexity becomes $O(p \cdot e)$, where $p$ is the number of nodes with weight in the preference vector, whereas the runtime of folkrank's iterative algorithm would not change. for the expensive folkrank algorithm to be applicable in practice, the individual tag scores per user and per document have to be pre-calculated offline and then combined in the online recommendation phase to quickly generate predictions. in this scenario, where each of the pre-calculation runs has only one node in the preference vector, pathrank is guaranteed to outperform folkrank regarding runtime. moreover, by limiting the maximum path length, the runtime can be further reduced; as we later show in the evaluation, the maximum path length can be set to almost minimal values without a decrease in prediction accuracy. here we present our methods for extending folkrank with content data. these content-aware recommenders include the textual content of documents in the recommendation process as well as utilising the full folksonomy graph. this allows us to relate new, unseen documents to already tagged (different) documents in the system and make recommendations based on the tag assignments related to those known documents. we can thus overcome the new document problem and make the solely folksonomy-based recommenders applicable to full real-world datasets. for test posts where the query user is new as well, we have to default to the most popular tags found to be related to the content of the query document and cannot personalise these to the user, which is acceptable since the user does not have a tagging profile yet. in the following sections we first describe the document content model we use and then present our content-aware graph recommenders.
for including data from the content of documents in the tag recommendation algorithms, we consider two sources of content words: the document title and the fulltext content. we convert all words to lower case, remove stop-words as well as all words which have a length of less than 3 or more than 20 characters, and use the remaining words without stemming. each document is represented by a bag-of-words vector of content words with tf-idf scores for each word. tf-idf stands for _term frequency - inverse document frequency_ and we compute it as $\mathrm{tfidf}(w,d) = tc(w,d) \cdot \log(|D| / dc(w))$, where $D$ is the set of all documents, $tc(w,d)$ is the term count, equal to the number of occurrences of word $w$ in document $d$, and $dc(w)$ is the document count, equal to the number of documents in the collection containing word $w$. we normalise the tf-idf scores to sum to 1 per document. a factor to consider is that the content data of websites can change over time: the title, content and meta-data of a bookmarked website can be updated and differ from one post to the next. this presents a problem, as well as additional data for analysis. the fulltext content of the bookmarked website itself is only available in the current version at the time of retrieval; however, the bibsonomy dataset provides different versions of metadata for a document at the time of each post. where available, we concatenate the title variations of a document from all its posts and treat the resulting text string as the single document title. this makes the term count measure in our tf-idf calculation more powerful, as words which persist over several title variations end up with a higher score than words which only appear in a few of the variations. contentfolkrank (which we first presented in earlier work) includes the content of documents directly in the graph. we adapt the original folksonomy graph of folkrank to model (user, word, tag) triplets instead of (user, document, tag) triplets. each tag assignment $(u, d, t)$ in the training data is converted to a set of tag assignments with words instead of documents, $(u, w_1, t), (u, w_2, t), \ldots$, where each of the words $w_i$ is in the content of $d$. figure [fig:folkrank_document_vs_word_nodes] shows the standard folkrank graph with document nodes on the left and the contentfolkrank graph modelling the same data on the right, where each document is represented by the set of word nodes occurring in its content. in contentfolkrank we use custom rules for setting the weights of the different types of edges, namely user-word edges, word-tag edges and user-tag edges. to prevent the content length of a document, and thus the number of word nodes representing the document in the graph, from influencing the recommendation process, we utilise tf-idf scores when setting the edge weights. the tf-idf scores are normalised to sum to 1 per document. this provides suitable weights for the edges of the word nodes representing the document in the graph and ensures that the number of words itself does not impact the generated recommendations. additionally, the tf-idf scores provide appropriate weights for capturing the varying importance of different content words to the document, and have been shown to be beneficial in previous work.
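a minimal sketch of this document model is given below, assuming the simple tokenisation rules described above; the tiny stop-word list is an illustrative stand-in rather than the list actually used.

```python
import math
import re
from collections import Counter

STOP = {"the", "and", "for", "with", "this", "that"}   # illustrative stop list

def tokenize(text):
    words = re.findall(r"[a-z0-9]+", text.lower())
    return [w for w in words if w not in STOP and 3 <= len(w) <= 20]

def build_tfidf(doc_texts):
    """doc_texts: {doc_id: raw text}. returns {doc_id: {word: score}} with
    tf-idf scores normalised to sum to 1 per document."""
    term_counts = {d: Counter(tokenize(t)) for d, t in doc_texts.items()}
    n_docs = len(doc_texts)
    doc_count = Counter(w for tc in term_counts.values() for w in tc)
    vectors = {}
    for d, tc in term_counts.items():
        scores = {w: c * math.log(n_docs / doc_count[w]) for w, c in tc.items()}
        total = sum(scores.values())
        vectors[d] = {w: s / total for w, s in scores.items()} if total else {}
    return vectors
```

the normalised per-document vectors produced here are the scores that the edge-weight sums described next build on.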
since several documents tagged by the same user can contain the same word, the weight of the edge between a user node and a word node is set to the sum of the normalised tf-idf scores of that word over those documents. the same holds for edges between word and tag nodes. the final edge weights thus take the content importance of words as well as tagging co-occurrence in the folksonomy into account. the weights of the edges between user and tag nodes are based solely on co-occurrence, since only complete documents (and not individual words) can be tagged by users, and are set to the number of posts in which the user has used the tag. the formulae for calculating the weights of the different types of edges are the following. for user-word edges, the edge weight is given by $w(u, w) = \sum_{p \in P_{u,w}} \widehat{\mathrm{tfidf}}(w, d_p)$, where $P_{u,w}$ is the set of posts by user $u$ in which the document contained word $w$ and $\widehat{\mathrm{tfidf}}$ denotes the per-document normalised score. similarly, for word-tag edges we calculate the weight using $w(w, t) = \sum_{p \in P_{w,t}} \widehat{\mathrm{tfidf}}(w, d_p)$, where $P_{w,t}$ is the set of posts tagged with $t$ (by any user) in which the document contained word $w$. for user-tag edges there is no need to include tf-idf scores, as complete documents are tagged by users and words cannot be tagged on their own, so we use $w(u, t) = |P_{u,t}|$, where $P_{u,t}$ is the set of posts in which user $u$ used tag $t$. the preference vector for each test post contains the query user $u_q$ and each word $w$ in the content of the query document $d_q$. the preference weight for each word is set proportional to its tf-idf score in $d_q$, normalised to sum to 1 per document and scaled by the parameter that sets the balance in preference weight between the query user and the query document and by the total preference weight. the preference weight of the query user is the same as before without content. our second approach to including content in the recommendation process is to utilise a content-based document similarity measure and include content information implicitly rather than introducing words directly into the graph. the graph model of simfolkrank does not contain content data itself and documents are represented by document nodes, using either the original folksonomy graph (simfolkrank) or the post graph model (simfolkrank_pg). however, for each test post we construct the preference vector to include not only the query document (if it already exists in the graph) but also a predefined number of training documents most similar in content to the query document. in our experiments we evaluate the effects of including different numbers of most similar documents in the preference vector. the similarity between documents is calculated based on the words in either the title or the fulltext of the documents; the metric we use is the cosine similarity of the bag-of-words document vectors with normalised tf-idf scores. due to the problem that document content data can vary over time (as discussed in section [sec:doc_model]), it can be the case that a query document which also exists in the graph as a training document ends up with a low content similarity score with itself.
to overcome this issue, we include an additional step in which we set the similarity of a query document with itself to 1, provided that it appears as a training document as well. once the cosine similarity of a query document to each training document is calculated, we normalise the similarity scores to sum to 1 for the query document. this ensures that the number of similar training documents with cosine similarity greater than zero does not affect the recommendation process. the preference weight of each training document included in the preference vector is a function of its similarity to the query document: it is proportional to the content-based similarity between that document and the query document, normalised to sum to 1 over all documents similar to the query document, and scaled by the parameter that sets the balance in preference weight between the query user and the document side and by the total preference weight. the preference weight of the query user is the same as before. we apply and evaluate this approach of including content with iterative weight spreading (simfolkrank, simfolkrank_pg) as well as with our pathrank weight spreading algorithm (simpathrank, simpathrank_pg).
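a minimal sketch of how such a preference vector can be assembled is given below; the split of the total preference weight between the query user and the similar documents (the `b` parameter) and all function names are assumptions for illustration, not the exact formula used in the experiments.

```python
import math

def cosine(v1, v2):
    """cosine similarity of two sparse bag-of-words vectors (dicts)."""
    dot = sum(s * v2.get(w, 0.0) for w, s in v1.items())
    n1 = math.sqrt(sum(s * s for s in v1.values()))
    n2 = math.sqrt(sum(s * s for s in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def build_preference(query_user, query_vec, query_doc, train_vectors,
                     n_sim=100, b=0.5, total_weight=1.0):
    """preference vector for one test post: the query user plus the n_sim
    training documents most similar in content to the query document."""
    sims = {d: cosine(query_vec, v) for d, v in train_vectors.items()}
    if query_doc in train_vectors:
        sims[query_doc] = 1.0   # a known query document is fully similar to itself
    top = dict(sorted(sims.items(), key=lambda kv: kv[1], reverse=True)[:n_sim])
    norm = sum(top.values())
    preference = {query_user: b * total_weight}
    if norm:
        for d, s in top.items():
            preference[d] = (1.0 - b) * total_weight * s / norm
    return preference
```

the resulting dictionary can be passed as the preference vector of the pathrank sketch above, or used to seed folkrank's preference vector.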
our datasets consist of tagging data from the social bookmarking websites citeulike, delicious and bibsonomy, together with additionally downloaded content data for our content-aware recommenders. official snapshots of citeulike and bibsonomy are available on their respective websites; we use the citeulike 2012-05-01 snapshot and the bibsonomy 2012-07-01 snapshot. the bibsonomy social bookmarking website and dataset is split into two separate sections: bibsonomy bookmark, which contains website bookmarks, and bibsonomy bibtex, which contains publication bookmarks. we treat these two subsets of bibsonomy as separate datasets. delicious does not provide snapshots of their data; here we use a dataset that was obtained by crawling the delicious website in 2005. additionally, we downloaded all of the available pages from the urls in the delicious and bibsonomy bookmark datasets, and all of the bibtex entries for citeulike. for our content-aware recommenders the two content data sources are the title and fulltext for websites, and the title and abstract for publications. our delicious crawl and the bibsonomy bookmark and bibsonomy bibtex snapshots provide the titles of documents; for citeulike we extracted the titles from the downloaded bibtex entries. the fulltext content for delicious and bibsonomy bookmark is the page text of the bookmarked websites, which we extracted from the downloaded pages. for citeulike and bibsonomy bibtex, where the bookmarked documents are publications, we use the abstracts from the bibtex entries as the fulltext content. we pre-processed all of the datasets by casting all tags to lower case, removing duplicate tag assignments that might occur as a result of this, and removing posts which have no tags. additionally, for citeulike there are some automatically generated tags which occur very frequently. in order to clean the dataset of these tags, we removed all tag assignments where the tag equals "no-tag" or "bibtex-import", or matches the regular expressions "*file-import*" or "*import-*". for bibsonomy bibtex we removed all tag assignments where the tag is "jabrefnokeywordassigned" or "myown", since these occur disproportionately frequently and can single-handedly skew results. we use recall@$k$, precision@$k$ and f1@$k$ as our success measures, where $k$ is the predefined number of tags to be recommended. recall measures the ratio of correct recommendations to the number of true tags of a test post, whereas precision measures the ratio of correct to false recommendations made. recall and precision are given by $\mathrm{recall} = tp / (tp + fn)$ and $\mathrm{precision} = tp / (tp + fp)$, where $tp$ (true positives) is the number of correct tags recommended, $fp$ (false positives) is the number of wrong recommendations and $fn$ (false negatives) is the number of true tags which were not recommended. f1 is a combination of recall and precision and is given by $\mathrm{f1} = 2 \cdot \mathrm{precision} \cdot \mathrm{recall} / (\mathrm{precision} + \mathrm{recall})$. since we believe recall to be more important than precision in the context of tag recommendation, as long as $k$ is kept reasonably low, we use recall in the evaluation phase to identify the best recommenders and configurations; we then give recall as well as f1 for the final results. to construct a training and test set for the experiments on the full/unpruned tagging data, we use the following date-split approach for each of the datasets. the test set consists of all posts in the most recent two months of the data, which provides us with a large enough test set size. the resulting numbers of test posts are 76,491 for citeulike, 1.7m for delicious, 9,506 for bibsonomy bookmark and 2,843 for bibsonomy bibtex. the training set is a sample of the data prior to the two test months. we use a sample and not the complete historical data for our training set since the folkrank-type algorithms have a high computational complexity and expense. note that we only apply sampling to the training dataset, while the test set includes all posts made in the test time-frame. the aim of our sampling methodology for the training set is to achieve a small enough sample size for our models to generate recommendations within a reasonable time while introducing as little bias into the models as possible. to create the training sample we start by selecting the 12 months of data prior to the test months. social tagging datasets have been shown to be time-sensitive, with popular post topics as well as users' interests changing over time, and we believe that posts which are older than a year from the test period provide less predictive data for generating recommendations. we then take a stratified sample of documents, where the stratification is based on the number of posts that the documents appear in. finally, we retrieve all posts related to the sampled documents to create our training post sample. this approach ensures that our training sample contains documents which are tagged frequently as well as documents which are tagged infrequently, and reduces the bias towards documents which are only tagged once that would exist if the documents were sampled uniformly at random. the resulting sample has the same distribution of documents over number of posts as the full dataset. we employ this approach of first sampling documents and then retrieving the related posts since the number of documents, and the resulting size of the content data, is the limiting factor which impacts recommendation speed the most in our content-based approaches. moreover, documents do not suffer from other issues that exist when sampling users or tags and then retrieving all related posts.
with users, the number of posts per user varies much more than the number of posts per document, partly because some users use bulk imports and automatic post submission plug-ins which make them much more frequent users of the system than others. with tags, there is also much more variance in the number of posts per tag than with documents, where the issue is that tags such as "toread", which hold no collaborative value, have a high number of related posts. we aim to find a sample size which strikes a good balance between improving the recommendation speed of the algorithm and reducing sample bias. we want to select a sample size at which we achieve a low variation in prediction quality for different samples of the same size, and at which the increase to a larger sample is not justified by a significant increase in prediction quality. to find an appropriate sample size, we create 5 different training samples of the same size and evaluate models built on them against the test set to find the amount of variation in prediction results. we then increase the sample size to a larger value and repeat the same process, until we are confident that the sample size gives a low variation across different samples of the same size and the move to a larger sample does not significantly increase results. we start at a sample size of 100,000 posts and increase the number of posts by 50,000 until we are satisfied with the resulting samples. figures [fig:sample_sizes_citeulike] and [fig:sample_sizes_delicious] give the results with standard folkrank for each of the examined training sample sizes on the citeulike and delicious datasets respectively. the left side shows the recall graph of each of the individual sample runs, where runs of the same sample size are plotted in the same line style; the box plot on the right gives the average recall per sample size. the more we increase the sample size, the less variation there is in results on samples of the same size, and the improvement in average recall is also smaller. as an outcome of this process we have found that a sample size of (roughly) 250,000 posts gives acceptable results. for bibsonomy bookmark and bibsonomy bibtex we do not sample the training data and use all posts from the year previous to the test time-frame, as the number of posts in these datasets is sufficiently small. the statistics of the final training and test sets used in our experiments are given in table [tab:pcore1_train_test].
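to make the sampling procedure concrete, the following is a minimal sketch of the document-stratified sampling described above; it targets a number of documents (distributed over the strata in proportion to their size) rather than an exact number of posts, and all names are illustrative.

```python
import random
from collections import defaultdict

def sample_training_posts(posts, target_docs, seed=0):
    """posts: iterable of (user, doc, tags) made in the 12 months before the
    test period. documents are sampled stratified by their number of posts,
    then all posts of the sampled documents are kept."""
    rng = random.Random(seed)
    by_doc = defaultdict(list)
    for p in posts:
        by_doc[p[1]].append(p)
    # strata: documents grouped by how many posts they appear in
    strata = defaultdict(list)
    for doc, doc_posts in by_doc.items():
        strata[len(doc_posts)].append(doc)
    total_docs = len(by_doc)
    sampled_docs = []
    for n_posts, docs in strata.items():
        k = max(1, round(len(docs) / total_docs * target_docs))
        sampled_docs.extend(rng.sample(docs, min(k, len(docs))))
    return [p for d in sampled_docs for p in by_doc[d]]
```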
the sampling does have the effect that some of the tags in the test sets of citeulike and delicious will not be present in their respective training sets, and thus cannot be recommended successfully by any of the evaluated recommenders. in figure [fig:max_possible] we show the theoretical maximum possible recall that could be achieved on the test set of each dataset when recommending tags that exist in the training set. for citeulike and delicious the maximum recall is given both for our training set sample and for the full training data. the theoretical maximum when recommending $k$ tags is calculated by assuming that for each test post $m$ correct tags are recommended at each value of $k$, where $m$ is the minimum of $k$ and the number of true tags of the test post which also exist in the training data. the extent of the problem of not including all training data tags in the samples is not too great, and we do not believe that this will impact the validity of our conclusions, as all of the evaluated recommenders suffer from this problem to the same degree. in addition to the training-test split, we create a separate evaluation split for each dataset that we use for the comparison of individual methods and for parameter tuning. the evaluation test and training sets are created from the data prior to the two months of real test data, in the same fashion as described above. for completeness we also evaluate our approaches on each of the datasets at post-core level 2. post-cores at level $p$ have the constraint that each user, document and tag has to appear in at least $p$ posts. for each dataset, we create the post-core by iteratively removing posts where the user, the document or one of the tags does not satisfy the condition of appearing in at least two posts. we then use a leave-one-out per user split to create the training and test sets by selecting the most recent post for each user and placing it in the test set. for parameter tuning we create an additional evaluation split from the resulting training data. in our evaluation we aim to find the best combination of our proposed approaches by answering the research questions given below. in order to achieve this we run experiments on the evaluation set where we set default values for the dampening factor and the balance in query preference weight. having identified the best strategies, we evaluate the remaining parameters, and finally give results on the real test set with tuned parameters in section [sec:results].
* content inclusion
  * how should content be included: directly into the graph at the word level, or indirectly at the document level?
  * what is the most predictive source of content: document title or fulltext content?
  * how much content should be included?
* folksonomy graph model
  * which of the examined graph models provides the most accurate representation of the tagging data?
* deep folksonomy graph
  * is iterative weight spreading worth the computational expense?
  * does exploring the deeper folksonomy graph provide an improvement to tag predictions?
in figure [fig:content_inclusion] we show the results of evaluating our two methods for including content into folkrank.
on all datasets, the indirect content inclusion method of adding similar documents to the preference vector (simfolkrank) gives better results than incorporating the document content directly into the graph (contentfolkrank). the biggest difference is on the bibsonomy datasets, while for citeulike the results are almost identical, with simfolkrank performing slightly better. we assume that contentfolkrank gives worse results because the word nodes in the graph are connected to many more tags than the document nodes in the standard folksonomy graph used by simfolkrank. the same individual word can appear in a variety of documents from different domains and thus be connected to many tags which are themselves unrelated; to accurately capture the query document, several words are required in combination. the predictions generated by contentfolkrank can be influenced by the edge configuration of individual words, which might be most strongly connected to tags from a different domain than the query document whilst being connected to appropriate tags with less edge weight. in simfolkrank, the similarities to training documents are calculated based on the whole representation of the query document, and in the graph each of the similar documents is likely to be connected to tags from only one or a few domains. in the larger datasets of citeulike and delicious the difference between the two approaches is smaller: contentfolkrank comes close in results to simfolkrank, but does not outperform it. this suggests that with more data the weighting methods used in contentfolkrank, which are based on tf-idf scores and include a co-occurrence element, can more accurately model the query document as well as the edge weights of words in the graph, so that with sufficient data the outcome of contentfolkrank is very similar to simfolkrank. however, in addition to producing better results, simfolkrank is also computationally less expensive than contentfolkrank, since the contentfolkrank graph is much larger due to the many word nodes. when comparing the title and fulltext content of documents as potential document representations (figure [fig:content_sources]), the title performs better in most cases. the biggest difference is on the delicious and bibsonomy bookmark datasets, as here the fulltext content is the crawled page content of the bookmarked websites. in citeulike and bibsonomy bibtex, the fulltext representation is given by the abstract of the bookmarked research papers, which we expect to be a more accurate document description; on bibsonomy bibtex, the fulltext content actually performs slightly better than the title. to evaluate the amount of content to be included, we vary the number of similar documents in the preference vector of simfolkrank and give the results in figure [fig:simfolkrank_numsimdocs]. the content source in these experiments is the document title. the x-axis gives the number of most similar documents included in the preference vector and the y-axis is recall when recommending five tags. the left-most point and the horizontal line in each graph give the recall without including content. the results indicate that prediction results improve the more content is added, where the biggest gain is achieved by the most similar documents. the only exception is bibsonomy bibtex, where including content does not give a significant gain.
except for bibsonomy bibtex, the shape of the plots and the fact that the results do not decrease at higher numbers of similar documents also confirm that normalised cosine similarity is an appropriate metric for measuring document similarity in our scenario. before comparing the graph construction methods, we first evaluate the two alternative score retrieval methods of the post graph model described in section [sec:postrank]. the approach of retrieving post node weights from the graph and then calculating the tag scores based on these gives slightly better results than retrieving the tag node weights directly from the graph, although it does not seem to make a significant difference. since it also makes sense that the number of tags in each post should not influence the scores of the tags they contain, we use the strategy of calculating tag scores from post nodes for all approaches using the post graph model in the subsequent experiments. as shown in figure [fig:graph_construction], without content data there is no real difference in results between the different models, and the folksonomy graph (folkrank), adapted graph (folkrank_ag) and post graph (folkrank_pg) give almost identical results. however, when including content data, the post graph model performs consistently better than the folksonomy graph, indicated by simfolkrank_pg performing better than simfolkrank across all datasets. we believe the improved results to be due to the more accurate data representation of the post graph model, as discussed in section [sec:graph_construction]: with more nodes in the preference vector, the implicit assumptions of the folksonomy model have a relatively greater impact on tag prediction scores, and the post graph proves to be the more robust model. we compare the iterative spreading algorithm of folkrank to our pathrank weight spreading approach on the folksonomy graph (figure [fig:weight_spreading_folkrank]) and the post graph model (figure [fig:weight_spreading_postrank]). the two weight spreading methods produce very similar results on both models across all datasets. however, pathrank is a much quicker weight spreading algorithm: it does not adjust the weight of each node over several iterations to find the optimal distribution of weights reflecting the overall edge connections in the graph. in other words, it does not consider the general (non-personal) importance weight of nodes which is implied by the graph structure itself. this suggests that the general importance (or authority) of nodes in the graph does not provide a significant benefit to the tag predictions, and the expensive iterative spreading of the non-personalised weights can be omitted to speed up the recommendation process. our evaluation of the dampening factor in section [sec:tuning_d] further confirms this conclusion, as the best results with folkrank's iterative weight spreading are achieved at the lowest setting for the dampening factor, which translates to giving the least relevance to general importance weights. to examine the value of including the general node weights in the recommendation process, we evaluate different settings for the dampening factor and give our results in figure [fig:parameter_tuning_d].
without the inclusion of content data there is not much impact on the results for the examined values of the dampening factor. this is because without content the whole preference weight is given to a maximum of two preference nodes, the query user and document, which means that there is a huge difference in weight between the preference nodes and any other node in the graph. non-preference nodes, and thus general importance weights, do not have a chance to impact the predictions except for extreme values of the dampening factor such as 0.9, at which setting we observe a very slight decrease in results. with content data the preference weight is distributed among a maximum of 101 preference nodes, which include the query user and potentially 100 training documents similar to the query document. here the impact of the general non-personalised weights can be observed at lower values of the dampening factor. in all cases, the best results are achieved with the dampening factor set to the lowest examined value of 0.1. this indicates that the general weights in the graph do not provide a benefit to the accuracy of tag predictions, and in fact have a negative impact when given too much relevance. we conclude that to maximise the tag prediction accuracy, the dampening factor should be set to the lowest value, in effect ignoring the general/non-personalised weights of nodes in the graph. with the lowest examined setting, the general weights can still act as tie-breakers for tags in the candidate set which have otherwise equal personalised weights. however, our results in the comparison with pathrank weight spreading, which does not utilise general weights, suggest that there is no significant improvement over randomly ranking tags which have equal weights; this comparison is made in the previous section [sec:eval_iterative_vs_pathrank] and in the evaluation on the real test set with tuned parameters in section [sec:results].
[figure [fig:parameter_tuning_d]: effect of the dampening factor on recall.]
the maximum path length parameter of our pathrank weight spreading approach is especially interesting, since it allows us to examine the value of exploring the graph beyond the immediate neighbourhood of the query user and the nodes related to the query document. we show the outcome of setting different values for the maximum path length in figure [fig:parameter_tuning_pl] for the folksonomy graph and post graph models. the x-axis gives the maximum path length and the y-axis is recall. with the lowest setting only the immediate neighbourhood is explored, whereas as we move to the right of the x-axis longer paths are also traversed by the weight spreading algorithm. with the post graph model we retrieve tag scores as the sum of the weights of the post nodes they are connected to; here, the next posts, and thus additional tags beyond the immediate neighbourhood (of path length 1), are encountered at a path length of 3. overall, our results suggest that there is actually not much value in considering the graph beyond the immediate neighbourhood of the preference nodes. there is a small difference that can be observed between path lengths 1 and 3, which we explore in detail below.
in general, the conclusion that no significant increase is achieved is in line with our previously published results, where our less expensive co-occurrence recommender (exploring only the immediate neighbourhood of the query user and document) performed equally well to folkrank. moreover, with the pathrank weight spreading algorithm we have now removed the other influences on the weight spreading calculation which could have distorted or reduced the impact of the deeper graph. even without swash-back, triangle-spreading of weights and general importance scores, the weights spread through long paths in the deep graph do not provide a significant improvement. the results indicate that the deeper graph does not provide a beneficial re-ranking of the existing candidate tags in the immediate user or document neighbourhood. with the setting where only the immediate neighbourhood is considered, tag nodes which have equal weight are ranked randomly in the final predictions; utilising the deep graph to re-rank these tags does not significantly improve results over this random ranking. our first detailed observation is that after the first influence of the deeper graph at path length 3, we cannot observe any significant impact, positive or negative, caused by exploring longer paths. in line with prior work, this suggests that users of (broad) folksonomies have a highly personal tagging behaviour: it is thus very difficult to traverse more than a few edges in the graph and still weigh the encountered nodes in a manner relevant to the preference node at which the path started. the only small change that can be observed is up to a path length of three. as a side note, the post graph model gives better results than the folksonomy graph at a path length of one. at this setting the only difference in the tag score calculation between the two models is that for the post graph the tag scores are given as the sum of the weights of the post nodes they are connected to. as discussed in section [sec:graph_construction], this follows from the post graph model's assumption that the number of tags of each post should not influence tag scores, whereas the plain folksonomy graph assumes that if there are many tags in a post then each of them is less important. this again suggests that the assumptions made by the post graph model provide a more accurate representation of the underlying social bookmarking data. another interesting observation in figure [fig:parameter_tuning_pl] can be made from the results with the folksonomy graph model on the delicious dataset. in this case there is a small improvement at a path length of 3; what is interesting here is that the increase does not occur at a path length of 2 but at a path length of 3. in the folksonomy graph, the tags found at a path length of 2 are reached via paths through a document or user node from the user preference node or the document's preference node(s) respectively. including these additional tags is conceptually similar to tag expansion via the document or user nodes related to the preference node. at a path length of 3, paths that run through a tag node and on to a further tag (e.g. from the query user via one of his tags and a document carrying that tag to another of that document's tags) are also included, which is conceptually similar to performing tag expansion using a tag-tag co-occurrence measure. the small improvement in prediction accuracy thus seems to be due to using tag-tag co-occurrence, rather than to giving weight to tags which are related to non-tag nodes from the preference node's immediate neighbourhood.
on the bibsonomy bookmark dataset we can observe a small decrease at a path length of 2 when including content with simpathrank. with the post graph model and content (simpathrank_pg), there is also a decrease on bibsonomy as well as citeulike when going from a path length of 1 to a path length of 3. as there are no paths of length 2 leading to additional tags in the post graph, the influence of tag expansion both via non-tag nodes and via tag-tag co-occurrence is included at the same time at a path length of 3. it seems to be the case that tag expansion via non-tag nodes decreases results. in line with our discussion in section [sec:weight_spreading_discussion], this suggests that tags found to be related to non-tag nodes of the preference node, but not directly connected to the preference node itself, should not be given an increased weight; as they seem to worsen results, it might be appropriate to decrease their weight instead. this suggests that negative feedback could potentially be extracted via a more complex analysis of the graph, which we intend to investigate in the future. overall, we conclude that spreading weight into the deeper graph does not provide a significant benefit to tag recommendations and can in some cases even harm prediction scores. the only increase in scores is given by spreading weight from tags to further tag nodes, essentially performing a tag set expansion via tag-tag co-occurrence. given the complete graph model, this is very difficult to separate from expanding the tag set via non-tag nodes, which seems to decrease prediction accuracy. to still utilise the tag-tag co-occurrence data, we believe that separate approaches which directly model the tag-tag relationships would be more appropriate and would produce better results. however, even though the assumptions made by conventional positive-reinforcement weight spreading methods do not seem to hold for the social bookmarking domain, some useful information could potentially be gained from the deep folksonomy graph by different approaches: a rule-driven analysis of small subsections of the graph could be used to make deductions about implied negative feedback, either to aid the recommendation process directly or to improve the accuracy of a tag-tag similarity metric by including negative scores.
[figure [fig:parameter_tuning_b]: effect of the balance in preference weight between query user and query document.]
in figure [fig:parameter_tuning_b] we present the results for different settings of the parameter which determines the balance in preference weight between the query user and the query document. once again there is not much difference in results without including content data (folkrank, folkrank_pg, pathrank_pg). since most of the query documents in the test sets are new, the preference vector without content will only include the query user in the majority of cases. for the cases where the document does exist in the graph, and thus will be included in the preference vector, each of the tags connected to the query document will usually receive more weight than each of the tags connected to the query user, since users are usually connected to many more tags than documents are. the tags connected to the query user only have a chance to outweigh the tags connected to the query document for high values of the balance parameter, at which settings we see a slight decrease in results. however, with content data (simfolkrank, simfolkrank_pg, simpathrank_pg) the preference vector contains the query user as well as several documents related to the query document, and we can clearly observe the impact of the balance parameter. the results confirm that there is value in introducing a parameter to explicitly set this balance instead of using the preference-weight strategy of the original folkrank algorithm, which results in balance values lower than 0.1 for all of the datasets except delicious, where it would be 0.2. the best results are achieved with the balance set to 0.5 for citeulike, 0.3 for delicious, and 0.6 for both bibsonomy bookmark and bibsonomy bibtex. here we present our final results with our best approaches and with tuned parameters on the test set of each of the datasets. the content source in all of the content-aware approaches is the document title. for approaches using folkrank's iterative weight spreading the dampening factor is set to 0.1, and for approaches using pathrank the maximum path length is set to the best value found during tuning. the balance in preference weight is set per dataset to the best value that was found in the parameter tuning runs. figures [fig:results_test_set] and [fig:results_test_set_f1] show the recall and f1 respectively on the test set for each of the datasets with tuned parameters. including content in the recommendation process provides a significant increase in results, and the results on the test set are in line with our previous conclusions on the evaluation set. simfolkrank_pg produces better results than simfolkrank over all datasets, suggesting that the post graph is a more accurate model of the tagging data than the folksonomy graph. furthermore, pathrank_pg and simpathrank_pg give almost equivalent results to folkrank_pg and simfolkrank_pg respectively, which suggests that the iterative computation and general importance weights in folkrank's weight spreading approach do not provide a significant benefit to tag predictions. while producing comparable results, the pathrank weight spreading method is much less computationally expensive. furthermore, the results with simpathrank_pg, which is among the best recommenders across all datasets, are achieved with the minimal path length setting, at which only the immediate neighbourhood of the preference nodes is considered. none of the approaches improve results by utilising the deep graph over simpathrank_pg with this minimal setting, which is essentially a user-tag and document-tag co-occurrence recommender at this setting.
the results on bibsonomy bookmark without including content data are due to the fact that a large portion of the test posts in bibsonomy bookmark contain new users as well as new documents. for these test posts the algorithms which do not include content data (folkrank, folkrank_pg and pathrank_pg) default to recommending the overall highly-ranked tags in the graph without personalisation; the three approaches have different rankings for the top three tags in these general recommendations, which leads to the results shown. for completeness, we show the recall and f1 for each of the datasets' post-cores at level 2 in figures [fig:results_test_set_pc2] and [fig:results_test_set_pc2_f1] respectively. the results with all methods are very similar except on the smaller bibsonomy datasets. however, there is no improvement from including content data; in fact, including content data makes results worse in most cases. since (almost) all of the documents in the test set for post-cores also exist in the training data, they all have previously assigned tags available which can be recommended. there is no need to additionally include similar documents in the preference vector, since the exact query document exists in the graph, and adding the content in this case has a negative effect on tag predictions. this is an interesting result and suggests that the best strategy for the future might be to only include the content if the query document does not exist in the training data, for experiments on post-core 2 as well as on unpruned datasets. in this paper we have presented novel adaptations and extensions to folkrank and conducted an in-depth analysis of the accuracy of the folksonomy graph model, the iterative weight spreading algorithm of folkrank and the value of exploring the deep folksonomy graph. the extension of folkrank with content data resulted in a significant increase in tag recommendation accuracy and addressed the new item problem in tag recommendation, as well as providing further insight into the folkrank algorithm. as part of our examination of the folksonomy graph structure, we have proposed an improved model which captures the tagging data more accurately and produces better tag recommendation results. in our analysis of the iterative weight spreading method of folkrank, we have shown that the general, non-personalised node weights do not have a positive impact on tag recommendations and, if given too much relevance, hurt the accuracy of the algorithm.
since the general node weights are one of the main reasons for folkrank's high complexity, we think it is an important finding that they can be safely omitted. furthermore, we have shown that a simpler weight spreading algorithm, pathrank, which works in a similar manner to breadth-first search, produces comparable results to the much more complex iterative weight spreading algorithm employed by folkrank while being computationally less expensive. the most intriguing result of our analysis was that even though both folkrank's iterative weight spreading and our simpler pathrank spreading algorithm have the potential to utilise the deep folksonomy graph, they do not benefit from doing so in practice. moreover, we have presented an in-depth discussion as well as a direct evaluation of the value of exploring the deep folksonomy graph. we conclude that exploring the graph beyond the immediate neighbourhood of the query nodes with conventional weight spreading methods does not provide a significant increase in tag recommendation accuracy and can in some cases even hurt recommendations. the assumption that closeness in the graph always implies a positive relationship does not hold beyond the immediate neighbourhood of nodes in social tagging graphs. this suggests that the foundation of graph-based recommenders (and to a lesser extent collaborative filtering), which are traditionally applied to two-dimensional datasets, does not apply to the three-dimensional user-document-tag relationships found in social tagging data. in summary, our main conclusions are as follows.
* content inclusion in tag recommendation
  * including content in the recommendation process addresses the new document problem and significantly increases results on full/unpruned datasets.
  * the title of documents is a better content source and provides a more accurate description of documents than the fulltext content.
  * including content at the document level produces a more accurate recommender than including content at the word level and constructing user-word and word-tag relationships, especially for smaller social tagging datasets.
* folksonomy graph model
  * explicitly including post-membership information in the graph provides a model which makes more accurate assumptions about the relationships in the tagging data and produces improved results over the traditional folksonomy model.
* deep graph exploration
  * general importance/authority scores, which make iterative weight spreading computationally expensive, do not provide an improvement to the accuracy of tag recommendations and can be omitted to reduce complexity.
  * the expensive exploration of the deep tagging data graph with conventional weight spreading methods does not provide an improvement to tag recommendations and can in some cases decrease results.
  * the assumption that closeness in the graph always implies a positive relationship does not hold in social tagging datasets beyond the immediate neighbourhood of nodes.
in the future we plan to further explore methods to leverage the potential benefit of including the information contained in the deep folksonomy graph for tag recommendation. we think that by using rule-based methods which analyse smaller subgraphs of the folksonomy, implicit negative feedback could be extracted.
this could be used to include negative scores in user - tag and especially document - tag relationships in order to reduce the scores of tags which are likely to be incorrect for a specific user or document .moreover , the negative feedback could be incorporated into tag - tag similarity measures to make these more accurate .another interesting research direction are the sampling methods used in tag recommendation . as social bookmarking websites and tagging datasetsget larger , it is becoming infeasible to build models on and analyse all of the training data , especially with methods which examine complex relationships in the data .we plan to further explore this problem and evaluate different sampling methods in their ability to produce unbiased and predictive samples of training data .gemmell2009 jonathan gemmell , thomas schimoler , maryam ramezani , and bamshad mobasher . 2009 . .in _ proceedings of the 7th workshop on intelligent techniques for web personalization and recommender systems_. heymann2008 paul heymann , daniel ramage , and hector garcia - molina . 2008 . .in _ sigir 08 : proceedings of the 31st annual international acm sigir conference on research and development in information retrieval_. acm , new york , ny , usa , 531538 .andreas hotho , robert jschke , christoph schmitz , and gerd stumme .2006 . . in _ the semantic web : research and applications_ ( lecture notes in computer science ) _ , vol . 4011 .springer , 411426 .jaeschke2007 robert jschke , leandro balby marinho , andreas hotho , lars schmidt - thieme , and gerd stumme .in _ knowledge discovery in databases : pkdd 2007 , 11th european conference on principles and practice of knowledge discovery in databases_. 506514 .landia2012 nikolas landia , sarabjot singh anand , andreas hotho , robert jschke , stephan doerfel , and folke mitzlaff .2012 . . in _ proceedings of the 4th acm recsys workshop on recommender systems and the social web_ ( rsweb 12)_. acm , new york , ny , usa , 18 . liu2009 feifan liu , deana pennell , fei liu , and yang liu . 2009 . .naacl 09 : proceedings of human language technologies : the 2009 annual conference of the north american chapter of the association for computational linguistics_. association for computational linguistics , morristown , nj , usa , 620628 .rendle2009 steffen rendle , leandro balby marinho , alexandros nanopoulos , and lars schmidt - thieme .2009 . . in _ proceedings of the 15th acmsigkdd international conference on knowledge discovery and data mining _ _( kdd 09)_. new york , ny , usa , 727736 .renz2003 ingrid renz , andrea ficzay , and holger hitzler .nldb 2003 : natural language processing and information systems , 8th international conference on applications of natural language to information systems , june 2003 , burg ( spreewald ) , germany_. 228234 .song2008 yang song , ziming zhuang , huajing li , qiankun zhao , jia li , wang - chien lee , and c. lee giles .in _ sigir 08 : proceedings of the 31st annual international acm sigir conference on research and development in information retrieval_. 515522 .symeonidis2008 panagiotis symeonidis , alexandros nanopoulos , and yannis manolopoulos .2008 . . in _ proceedings of the 2008 acm conference on recommender systems __ ( recsys 08)_. acm , new york , ny , usa , 4350 .wetzker2010 robert wetzker , carsten zimmermann , christian bauckhage , and sahin albayrak . 2010 . .in _ proceedings of the third acm international conference on web search and data mining _ _ ( wsdm 10)_. acm , new york , ny , usa , 7180 . 
witten1999 ian h. witten , gordon w. paynter , eibe frank , carl gutwin , and craig g. nevill - manning .1999 . . in _ proceedings of the 4th acm conference on digital libraries , august 11 - 14 , 1999 , berkeley , ca , usa_. acm , 254255
|
the information contained in social tagging systems is often modelled as a graph of connections between users , items and tags . recommendation algorithms such as folkrank , have the potential to leverage complex relationships in the data , corresponding to multiple hops in the graph . we present an in - depth analysis and evaluation of graph models for social tagging data and propose novel adaptations and extensions of folkrank to improve tag recommendations . we highlight implicit assumptions made by the widely used folksonomy model , and propose an alternative and more accurate graph - representation of the data . our extensions of folkrank address the new item problem by incorporating content data into the algorithm , and significantly improve prediction results on unpruned datasets . our adaptations address issues in the iterative weight spreading calculation that potentially hinder folkrank s ability to leverage the deep graph as an information source . moreover , we evaluate the benefit of considering each deeper level of the graph , and present important insights regarding the characteristics of social tagging data in general . our results suggest that the base assumption made by conventional weight propagation methods , that closeness in the graph always implies a positive relationship , does not hold for the social tagging domain . [ information filtering ] author s addresses : n. landia ( n.landia.ac.uk ) , s. s. anand ( sarabjot.singh.com ) and n. griffiths ( nathan.griffiths.ac.uk ) , department of computer science , university of warwick , coventry cv4 7al , uk ; s. doerfel ( doerfel.uni-kassel.de ) , faculty of electrical engineering and computer science , university of kassel , wilhelmshher allee 73 , 34121 kassel , germany ; r. jschke ( jaeschke.de ) , l3s research center , appelstrasse 9a , 30167 hannover , germany ; a. hotho ( hotho.uni-wuerzburg.de ) , department of computer science , university of wrzburg , am hubland , wrzburg , germany .
|
path tracking is the task of tracing out a 1 real dimensional solution curve described implicitly by a system of equations , typically equations in variables , given an initial point on , or close to , the path .this can arise in many ways , but our motivation is the solution of systems of polynomials via homotopy continuation ( see ) . in this method , to find the isolated solutions of the system for given polynomials , one constructs a homotopy , , such that is the target system to be solved while is a starting system whose isolated solutions are known .there is a well - developed theory on how to construct such homotopies to guarantee , with probability one , that every isolated solution of is the endpoint in the limit as of at least one smooth path , where on ] is rank , there is a well - defined tangent direction and tracking may proceed .the predictor adds a constraint on the length of the step along the tangent , whereas corrector steps are constrained to move transverse to the tangent .the extra constraints are particularly simple in the case where is rank , for then the path progresses monotonically in , and the step can be controlled via the advance of .accordingly , one has a linear system to be solved for : {{\delta z}}= -\left ( h(z_1,t_1 ) + \frac{\partial h}{\partial t}(z_1,t_1){{\delta t}}\right).\ ] ] for prediction , we set , the current step size , and for correction , we set .since the neglected terms are quadratic , the prediction error is order .thus , in the case of a failed step , cutting the step size from to reduces the prediction error by a factor of . in this way , cuts in the step size quickly reduce the prediction error until it is within the convergence region of the corrector . with a order predictor ,the prediction error scales as , potentially allowing larger step sizes . in any case, the adaptive approach quickly settles to a step size just small enough so that the corrector converges , while the next larger step of fails . with and ,the step size adapts to within a factor of 2 of its optimum , with an approximate overhead of 20% spent checking if a larger step size is feasible .failure of path tracking with an adaptive step size can be understood from the discussion of newton s method in [ sec : newton ] . for small enough initial error and infinite - precision arithmetic, the newton corrector gives quadratic convergence to a nonsingular root .near a singularity , is large , which can lead to a small quadratic convergence zone and a slower rate of quadratic convergence .inexact arithmetic can further shrink the convergence zone , degrade the convergence rate from quadratic to linear , and introduce error into the final answer . from these considerations , we see that there are two ways for the adaptive step size path tracker to halt prematurely near a singularity . 1 . the predictor is limited to a tiny step size to keep the initial guess within the convergence zone of the corrector .if this is too small , we may exceed the allotted computation time for the path .2 . the path may approach a point where the final error of the corrector is as large as the requested path tracking tolerance .the first mode of failure can occur even with infinite precision , but degradation of the convergence properties with too low a precision increases the occurrence of this failure .the second mode of failure is entirely a consequence of lack of precision . 
By allocating enough precision, we can eliminate the second mode of failure and reduce the occurrence of the first mode. It is important to note that in some applications there is flexibility in the definition of the homotopy, which can be used to enlarge convergence zones and thereby speed up path tracking. For example, re-scaling of the equations and variables can sometimes help. However, such maneuvers are beyond the scope of this paper, which concentrates only on tracking the path of a given homotopy. The use of high precision can largely eliminate both types of path tracking failure identified above. However, high precision arithmetic is expensive, so it must be employed judiciously. One might be tempted to ratchet precision up or down in response to step failures, as in the adaptive step size algorithm. This presents the difficulty that there is just one stimulus, step failure, and two possible responses: cut the step size or increase precision. In the following paragraphs, we outline two possible algorithms for adapting both step size and precision.

The simplest approach to adapting precision, shown in Figure 1, is to run the entire path in a fixed precision with adaptive re-runs. That is, if the path tracking fails, one re-runs it in successively higher precision until the whole path is tracked successfully or until limits in computing resources force termination. The advantage of this approach is that adaptation is completely external to the core path tracking routine. Thus, this strategy can be applied to any path tracker that enables requests for higher precision. For example, in the polynomial domain, the package PHC offers multiple precision, although the precision must be set when calling the program. The adaptation algorithm of Figure 1 has two main disadvantages. First, when too low a precision is specified, the tracker may waste a lot of computation near the point of failure before giving up and initiating a re-run in higher precision. Second, the whole path is computed in high precision when it may be needed only in a small section of the path, often near the end in the approach to a singular solution. A slightly more sophisticated treatment can avoid re-computing the segment of the path leading up to the failure point by requesting the tracker to return its last successful path point. The re-run in higher precision can then be initiated from that point on.

[Figure 1 (fig:fixed_flow): flowchart of path tracking in a fixed precision with adaptive re-runs at successively higher precision.]

Instead of waiting for the adaptive step size method to fail before initiating higher precision, we propose to
continuously monitor the conditioning of the homotopy to judge the level of precision needed at each step . in this way , the computational burden of higher precision is incurred only as needed , adjusting up and down as the tracker proceeds , while obtaining superior reliability . to decide how much precision is needed, we turn to the analysis of newton s method from [ sec : newton ] .we wish to ensure that the achievable accuracy is within the specified tolerance and that convergence is fast enough . in what follows ,we need to evaluate and .these do not need to be very accurate , as we will always include safety margins in the formulas that use them . is readily available in the max norm , where we use the maximum magnitude of any of its entries . is more difficult , as we do not wish to compute the full inverse of the matrix .this issue has been widely studied in terms of estimating the condition number .a relatively inexpensive method , suggested in and elsewhere , is to choose a unit vector at random and solve for .then , we use the estimate .although this underestimates , tests of matrices up to size show the approximation to be reliably within a factor of 10 of the true value , which is easily absorbed into our safety margins .one requirement is that should be small enough to ensure that the error - perturbed jacobian is nonsingular .minimally , we require , but by requiring it to be a bit smaller , say for some , we force .this removes the growth of as one possible source of failure .suppose that the error function in eq .[ eq : errorebound ] is of the form .then , our first rule is to require using decimal digits of arithmetic results in precision , so we may restate this rule as .\ ] ] a second requirement is that the corrector must converge within iterations , where we keep small as in the usual adaptive step size algorithm , typically 2 or 3 .let us say that the tolerance for convergence is . recall thatin each step of newton s method , we compute and take the step . the best estimate available of the accuracy is , so we declare success when .suppose that after iterations this is not yet satisfied .we still have iterations to meet the tolerance , and we would like to be sure that a lack of precision does not prevent success .pessimistically , we assume that the linear factor in in eq .[ eq : dbar ] dominates the quadratic one and that the rate of convergence does not improve with subsequent iterations .we force , and we have . including the same safety margin as before , , the requirement becomes as before , let s assume .taking logarithms , the number of decimal digits of precision must satisfy since we only apply this formula when the tolerance is not yet satisfied , we have , or equivalently , .this implies that between corrector iterations , requirement [ eq : setconvergencelog ] is always more stringent than eq .[ eq : setklog ] . however , we still use eq .[ eq : setklog ] outside the corrector , because is not then available .our third requirement is that the precision must be high enough to ensure that the final accuracy of the corrector is within the tolerance at full convergence . for this , eq .[ eq : finalerror ] is binding , so including a safety margin of and using the norm of the current approximate solution , , as the best available estimate of , we require suppose the error in evaluating the homotopy function is given by . if the function is evaluated in the same precision as the rest of the calculations , i.e. 
, we have the requirement stated as rule C; if instead we evaluate the function to higher precision, we have the dual criteria of rule C′. The effect of adding the two errors is absorbed into the safety factor. Conditions A, B, and C (or C′) allow one to adjust the precision as necessary without waiting for the adaptive step size to fail. If necessary, the precision can even be increased between corrector iterations. An algorithm using these criteria is described by the flowchart in Figure 2. In this flowchart, ``failure'' in the predictor or corrector steps means that the linear solve of Eq. [eq:basicstep] has aborted early due to singularity. Using the magnitude of the largest entry of the matrix as a reference scale, Gaussian elimination with row pivoting may declare such a failure when the magnitude of the largest available pivot falls below the estimated error level, for then the answer is meaningless. This is more efficient than completing the linear solve and checking condition A or B, as these are sure to fail.

[Figure 2 (fig:stepwiseflowchart): flowchart of the path tracker that adapts both the step size and the precision at every step.]
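The core of the precision test can be sketched in a few lines. The function below is a deliberately simplified, hypothetical stand-in for rules A-C: it estimates the norm of the inverse Jacobian from a single linear solve with a random unit right-hand side, as suggested above, and then asks for enough decimal digits that the estimated error sits safely below the tracking tolerance. The safety margin and the way the rules are collapsed into one inequality are assumptions of this sketch rather than the precise criteria derived above.

import numpy as np

def inverse_norm_estimate(J, rng):
    """Estimate ||J^{-1}|| from one solve against a random unit right-hand side."""
    b = rng.standard_normal(J.shape[0])
    b /= np.linalg.norm(b)
    return np.linalg.norm(np.linalg.solve(J, b))    # usually within a factor ~10

def decimal_digits_needed(J, tol=1e-10, safety=4, rng=None):
    """Crude single-inequality stand-in for the adaptive-precision rules."""
    if rng is None:
        rng = np.random.default_rng(0)
    inv_norm = inverse_norm_estimate(J, rng)
    # Require ||J^{-1}|| * 10^(-digits) to be a factor 10^safety below the tolerance.
    digits = safety + np.log10(max(inv_norm, 1.0)) + np.log10(1.0 / tol)
    return int(np.ceil(digits))

print(decimal_digits_needed(np.eye(2)))                                  # 14: double precision suffices
print(decimal_digits_needed(np.array([[1.0, 1.0], [1.0, 1.0 + 1e-9]])))  # far more digits requested

In an actual tracker this test would be evaluated before the predictor and between corrector iterations, as in the flowchart above, with the working precision raised or lowered accordingly.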
The algorithm does not attempt corrections at t = 0. This is because in our applications the target system often has singular solutions. It is safer to sample the incoming path while it is still nonsingular and predict to t = 0 based on these samples. In this situation, it helps to employ a more sophisticated predictor than Euler's method. For example, endgames that estimate the winding number of the root and use it to compute a fractional power series can be very effective.

To use the foregoing procedures, we need the function evaluation error and the errors entering the bound on the Newton step, namely the error in evaluating the Jacobian and the stability factor of the linear solve. There is a trade-off between using rigorously safe bounds for highest reliability or using less stringent figures reflecting typical behavior to avoid the overuse of high precision. Rough figures are acceptable, as this is just a means of setting the precision. Also, a user of the path tracker will not usually wish to expend a lot of effort in developing error bounds. A rigorous and automated way of establishing error bounds is to use interval arithmetic. Following that approach, one may wish to go all the way and use interval techniques to obtain a path tracker with fully rigorous step length control. However, this can be expensive, due partially to the cost of interval arithmetic but more significantly due to the cost of overconservative error bounds, which slow the algorithm's progress by driving the step size smaller than necessary. Still, when rigorous results are desired, it may be worth the cost. That approach does not explicitly include adaptive precision, so something along the lines discussed here could be useful in modifying it.

Instead of using interval methods, we may approximate errors by accumulating their effects across successive operations. Suppose the program to evaluate the homotopy has been parsed into a straight-line program, that is, a sequence of unary and binary operations free of branches or loops. Suppose that at some intermediate stage of computation we have computed a floating-point value a for a real number known to lie between a - α and a + α. (For a floating-point complex number, this applies to both the real and imaginary parts.) Let us use the shorthand a ± α to mean this entire interval. If the inputs to an operation are a ± α and b ± β, the product, computed in floating point with unit round-off u, has an absolute error bounded approximately by |a|β + |b|α + αβ + u|ab|; similarly, the computed sum has an absolute error bounded approximately by α + β + u|a + b|. A small sketch of this bookkeeping, carried out in parallel with the evaluation, is given below.
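The sketch below is a minimal illustration of that bookkeeping, not Bertini's implementation: each intermediate quantity is stored as a value together with an absolute error bound, and the bounds for sums and products are updated using the relations above, with the unit round-off of IEEE doubles standing in for the working precision. A careful implementation would also round the bounds themselves upward.

U = 2.0 ** -52          # unit roundoff for IEEE double precision

class Approx:
    """A value carried together with an absolute error bound."""
    def __init__(self, value, err=0.0):
        self.value, self.err = value, err

    def __add__(self, other):
        other = other if isinstance(other, Approx) else Approx(other)
        v = self.value + other.value
        # |error| <= err_a + err_b + roundoff of the computed sum
        return Approx(v, self.err + other.err + abs(v) * U)

    def __mul__(self, other):
        other = other if isinstance(other, Approx) else Approx(other)
        v = self.value * other.value
        # |error| <= |a| err_b + |b| err_a + err_a err_b + roundoff of the product
        e = (abs(self.value) * other.err + abs(other.value) * self.err
             + self.err * other.err + abs(v) * U)
        return Approx(v, e)

# Evaluate p(x) = 3x^2 + x - 1 as a straight-line program at x known to within 1e-12.
x = Approx(0.7, 1e-12)
p = Approx(3.0) * x * x + x + Approx(-1.0)
print(p.value, p.err)   # computed value together with the accumulated error bound

Running the example prints the computed value of p(x) together with a bound that reflects both the uncertainty in x and the rounding of each operation.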
using just these relations , an error bound for any straight - line polynomial function can be calculated in parallel with the function evaluation itself .similar relations can be developed for any smooth elementary function , such as the basic trigonometric functions . assuming that the inputs to the function , including both the input variables and any internal parameters of the function , are all known either exactly or with relative round - off error , the output of the error analysis is such that the error in the computed value is .when the result is rounded off to a possibly lower precision , the total error becomes the form shown in eq .[ eq : errorf ] .it is important to note that the error in the function depends on the error in its parameters .for example , consider the simple function .if this is rounded off to before we use high precision to solve , we will obtain an accurate value of but we will never get an accurate value of . although this is an obvious observation , it can easily be forgotten in passing a homotopy function from some application to the adaptive precision path tracking algorithm .if coefficients in the function are frozen at fixed precision , the algorithm tracks the solutions of the frozen function , not the exact problem that was intended . whether the difference is significant depends on the nature of the application and the sensitivity of the function . while and concern the errors in evaluating the function and its jacobian , the factor concerns the stability of the linear solve .round - off errors can accumulate through each stage of elimination .when gaussian elimination with partial pivoting is used , the worst - case error bound grows as for solving an system .however , as indicated in , rarely exceeds with the average case around or .setting should therefore be sufficient for almost all cases . to avoid program complexity and save computation time , it is preferable not to perform a full error analysis of the type just described .in many cases a rough analysis is sufficient and easily derived .this is indeed possible for the case of most interest to us : polynomial systems .suppose is a degree homogeneous polynomial where is just an index set for the coefficients . since is homogeneous , , so if , then also .consequently , the solution set of a system of homogeneous polynomials can be said to lie in projective space , the set of lines through the origin in .similarly , the solutions of multihomogeneous polynomials lie in a cross product of projective spaces , see . any inhomogeneous polynomial can be easily homogenized to obtain a related function , with hence , for any solution of there is a corresponding solution of .one advantage of homogenization is that we can re - scale any solution of to make , which often helps numerical conditioning .error bounds for homogeneous polynomials can be estimated easily . 
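Homogenization itself is mechanical. The following small sketch, with an arbitrary example polynomial and hypothetical helper names, stores a polynomial as a map from exponent tuples to coefficients, adds the extra variable so that every monomial reaches the total degree, and rescales a solution so that its largest coordinate has magnitude 1, as suggested above.

def homogenize(poly):
    """poly: {(e1,...,en): coeff}.  Returns {(e0,e1,...,en): coeff}, homogeneous."""
    d = max(sum(exps) for exps in poly)                 # total degree
    return {(d - sum(exps),) + exps: c for exps, c in poly.items()}

def evaluate(poly, point):
    total = 0.0
    for exps, c in poly.items():
        term = c
        for x, e in zip(point, exps):
            term *= x ** e
        total += term
    return total

def rescale(point):
    """Scale a projective point so that its largest coordinate has magnitude 1."""
    m = max(abs(x) for x in point)
    return tuple(x / m for x in point)

# f(x, y) = x^2 + 3xy - 2y + 5   becomes   F(x0, x, y) = x^2 + 3xy - 2*x0*y + 5*x0^2
f = {(2, 0): 1.0, (1, 1): 3.0, (0, 1): -2.0, (0, 0): 5.0}
F = homogenize(f)
print(F)
# Homogeneity check: F(lambda * p) = lambda^d * F(p) with d = 2 here.
p = (1.0, 2.0, -3.0)
print(evaluate(F, tuple(2.0 * x for x in p)), 2.0 ** 2 * evaluate(F, p))
print(rescale(p))

The degree-scaling check in the last lines is exactly the homogeneity property exploited above when re-scaling solutions.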
If we rescale so that the maximum entry of the solution vector has magnitude 1, then the error in evaluating the degree-d homogeneous polynomial as in Eq. [eq:homogh] is easy to bound approximately in terms of the coefficients and the unit round-off; similarly, the derivatives satisfy an approximate error bound of the same form. At first glance, it may seem that errors can be reduced by simply scaling the functions, and thereby scaling their coefficients, by some small factor. But the norm of the inverse Jacobian will scale oppositely, so the error predicted by Eq. [eq:finalerror] is unchanged.

This section contains a brief discussion of the implementation details for multiprecision arithmetic and for evaluating the rules for adapting precision. Then we discuss the results of applying the adaptive precision path tracker to three example polynomial systems. Bertini is a software package for computation in numerical algebraic geometry currently under development by the authors, with some early work by Christopher Monico. Bertini is written in the C programming language and makes use of straight-line programs for the representation, evaluation, and differentiation of polynomials. All the examples discussed here were run using an unreleased version of Bertini on an Opteron 250 processor running Linux.

The adaptation rules A, B, and C (or C′) leave some choices open to the final implementation. For the runs reported here, we chose to evaluate function residuals to the same precision as the computation of Newton corrections, so rule C applied, not rule C′. Also, in rules A and B, we chose a growth factor proportional to the number of variables, which is somewhat conservative for typical cases but underestimates the worst pathological cases. (See Section [sec:errorestimates] for more on this issue.) The rules require formulas for evaluating the error bounds on the function and Jacobian evaluations. These are problem dependent, so we report our choices for each of the example problems below. To adaptively change precision, Bertini relies on the open-source MPFR library for multiprecision support. Bertini has data types and functions for regular precision (based on the IEEE ``double'' standard) and higher precision (using MPFR). Although the program would be simpler if MPFR data types and functions were used exclusively, the standard double-precision types and functions in C are more efficient, so Bertini uses these whenever the adaptation rules indicate that double precision is sufficient. Additional details regarding the use of multiple precision may be found using links from the Bertini website. Since the use of adaptive precision variables is highly implementation-specific, no other details are described here. MPFR requires the addition of precision to the mantissa in packets of 32 bits. Since the discussion of the examples below involves both binary and decimal digits, Table [tab:bits2digits] shows how to convert between the two.

[Table [tab:bits2digits]: number of digits for mantissas at various levels of precision.]

The level of precision used by the step-adaptive precision path tracking algorithm developed above was degree- and path-dependent for the Chebyshev polynomials, although all paths for a given degree needed approximately the same level of precision. In each degree that was considered, the path ending nearest the edge of the root cluster was one of the paths needing the highest level of precision for that degree. (This occurs because the spacing between the roots is smallest near the edge .
) the levels of precision used for that path are displayed in figure 4 .it should be noted that for every degree considered , a complete solution set with all solutions correct to at least 10 digits was found .[ fig : chebyprec ] we note that to solve the high degree chebyshev polynomials , a small initial step size was required to get the path tracker started . with too large an initial step , the predicted point was so far from the path that the adaptive precision rules increased precision to an unreasonable level without ever exiting the corrector loop .as diagrammed in figure 2 , the algorithm must exit the corrector loop before a decrease in step length can be triggered .various ad hoc schemes could detect and recover from this type of error , but we would prefer a step size control method based on an analysis of the predictor . for the moment , we defer this for future work .on some problems , endgames can speed convergence to the point that singular endpoints can be estimated accurately in double precision .it should be noted that is not generally the case . without enough precision , the `` endgame operating zone '' is empty .likewise , endgames based on deflating the system to derive a related nonsingular one may need higher than double precision to make a correct decision on the rank of the jacobian at each stage of deflation . moreover , if some other sort of singularity is encountered during path tracking , away from , endgames will not be useful while adaptive precision will be . in the case of tight final tolerances or endpoints of paths having high multiplicity, endgames will again need assistance from higher ( and therefore adaptive ) precision methods .conversely , high precision is expensive and floating point precision can never be made truly infinite , so to get the most out of whatever precision one uses , endgames are indispensable .the theory going into this new adaptive precision method revolves around newton s method or corrector methods in general .however , corrector methods are only one half of basic path tracking . a careful study of predictor methods is certainly warranted .the use of different predictor schemes , e.g. , adams - bashforth rather than euler , is well worth considering .a careful analysis of the predictor might be combined with the convergence criteria of the corrector to automatically determine a safe step length in place of the trial - and - error step length adaptation method we have used here .this might give an efficient alternative to , which presents a rigorous step length control algorithm based on interval arithmetic .allgower and k. georg . , volume 13 of _ springer ser . in comput ._ springer verlag , berlin heidelberg new york , 1990 . reprinted in 2003 by siam as volume 45 in the classics in applied mathematics series .a. leykin , j. verschelde , and a. zhao .evaluation of jacobian matrices for newton s method with deflation for isolated singularities of polynomial systems . in _snc 2005 proceedings .international workshop on symbolic - numeric computation ._ xian , china , july 19 - 21 , 2005 . edited by dongming wang and lihong zhi .19 - 28 .li . numerical solution of polynomial systems by homotopy continuation methods . in _ handbook of numerical analysis .volume xi .special volume : foundations of computational mathematics _ , edited by f. cucker , pp . 209304 , 2003 .
|
a path tracking algorithm that adaptively adjusts precision is presented . by adjusting the level of precision in accordance with the numerical conditioning of the path , the algorithm achieves high reliability with less computational cost than would be incurred by raising precision across the board . we develop simple rules for adjusting precision and show how to integrate these into an algorithm that also adaptively adjusts the step size . the behavior of the method is illustrated on several examples arising as homotopies for solving systems of polynomial equations . * 2000 mathematics subject classification . * primary 65h10 ; secondary 65h20 , 65g50 , 14q99 . * key words and phrases . * homotopy continuation , numerical algebraic geometry , polynomial systems . homotopy continuation , numerical algebraic geometry , polynomial systems
|
since 1997 the da of ug ( daug ) is housed at 2200 m above sea level , overlooking the city of guanajuato , close to the `` geometrical center '' of mexico , and declared as `` heritage of mankind '' by unesco .the research staff of eight astronomers participates in undergraduate teaching within the physics and other programs at ug , and hopes to offer a postgraduate program in astrophysics soon .the da maintains the _ observatorio la luz _ with a 57-cm optical reflector , used for public outreach purposes and being prepared as a student laboratory .the two founder - members of the da had already collected ( by donations ) a few decades of _ apj , aj _ , and _ mnras _ , but other major journals ( like _ a&a _ ) had been subscribed only since 1994 . since my arrival ,in late 1996 , i volunteered as `` provisional '' librarian , given the lack of a professional department librarian hired by ug .the central library of ug was too far away to manage efficiently the library of the da .as a matter of fact this situation continues until now . after preparing the first inventory of our holdings in early 1997 , i posted an inquiry to astrolib for donations to fill the holes of our journal coveragethis caused a wave of offers from professional librarians all over the world .hundreds of kilos of journals were received over the following months . in june 1997the da moved to its present building with only 6 offices for 10 people , and no library room .however , our neighbor , the maths research center `` cimat '' , maintained by the mexican science foundation conacyt , generously offered of their library `` provisionally '' for the da .cimat is m away ( m vertical ! ) , _ but _ it _ closes _ during nights and weekends which never made it attractive for a `` leisure visit '' .after five years nothing has changed !as it was not practical to store all our holdings `` up there '' at cimat , we decided to use all possibly available space in the offices and corridors for the most recent journals ( year back ) and modern books ( for teaching and research ) . until now our hopes to obtain a dedicated library were not fulfilled , despite a written promise and money allocation by ug in spring 2001 .thus the journals have grown hopelessly beyond their initially assigned growth space .maintaining the alphabetic order of journals would now imply a full rearrangement of our holdings which is beyond our available manpower .moreover , in the meantime most of our journal holdings have become available freely from ads , making a visit to the physical library each time less attractive to our staff .however , downloading of articles is often limited to weekends given our slow internet line . 
In the absence of a librarian, or other suitable personnel, I only seal the books with a DA stamp but cannot afford to assign them a catalog number, so they have no ``reproducible'' shelf location yet. Moreover, in mid-1999 a heavy rain and a leaky window affected part of our shelves at CIMAT, and about 15% of our book holdings, many of them of historic interest, were waterlogged. Most of the book holdings had to be removed in a hurry (in my absence) and since then only a very limited ``order'' has been re-established. I visit the library at most once a month to accommodate what does not fit any more in our office building.

Since its foundation the DA has received the six major astronomy journals by personal subscription. The idea was that we would gradually be able to pay institutional rates, but with a 2002 budget of USD 3000 we still cannot afford a single journal at the institutional rate. Thanks to the generous permission of some publishers to continue with the personal rate, we manage to subscribe to _ApJ, ApJS, AJ, MNRAS, A&A, Nature, PASA, PASJ, PASP, BAAS, S&T_, and _Mercury_. These regular funds also allow us to buy very few conference proceedings (mainly from the rather economic _ASP Conf. Series_) and IAU symposia. The budget for books and other proceedings depends on the allocation of special funds from the federal Secretary of Public Education (SEP) and on personal research projects (usually from CONACyT), and fluctuates between 0 and 8000 USD/yr. Delays in either the invoices of journal publishers, or in the payment by our central library, usually cause an interruption of our subscriptions during the first few months of each year (does this sound familiar to you...?). Altogether, we are far from the ideal situation described in my earlier wishlist (Andernach 1998).

Thanks to frequent offers of duplicate items from professional astronomy librarians (e.g. via Astrolib) we acquired an impressive amount of (mostly older) journals and monographs. Naturally many of these items are of interest to either the bibliophile or historians of astronomy and physics. While filling me with pride, it is a shame to store these books some 200 m away from our offices, where hardly any of us finds the leisure to go and browse the shelves. I maintain the inventory as a single ASCII file of modest size, readable by all members of the DA. It saves me from having to learn dedicated library software and allows an easy search of items using Unix's grep command. For journals I use bibcode style, and free format for the ``rest'' (monographs, theses, manuals, etc.). A small excerpt follows:

  1834MmRAS 7    |  1879MNRAS 39 - #2 suppl. (p. 489-560)
  1843MmRAS 13   |  1891MNRAS 52 - #2 suppl. (p. 67-121)
  1843MmRAS 14   |  1933MNRAS 93 - 4-6, 8, 9
  1847MmRAS 17   |  1942MNRAS 101 - 8 only
  1849MmRAS 19   |  1942MNRAS 102
  ...
  1879MNRAS 39 - no. 2 suppl. (p. 489-560)
  1891MNRAS 52 - no. 2 suppl. (p. 67-121)
  Annals of the Cape Observatory VI, Darling & Son, 1897.
  P.S. Barrera, E. Castro, J.R. Garza, J.J. Martinez, R. Aguirre: Memorias del gran eclipse del Sol,
      Montemorelos, Nuevo León, 28 mayo 1900, Universidad Autónoma de Nuevo León, 114 pp.
  M. de Broglie: X-rays, transl. by J.R. Clarke, E.P. Dutton & Co. Publishers, 1922, 204 pp.
  D.S. DeYoung: The Physics of Extragalactic Radio Sources, Univ. Chicago Press, 2002, 558 pp.

There are now eight places in Mexico where professional astronomers work (Phillips et al., 2002).
A rough map of their location is shown in Fig. 1. Only IA-UNAM, INAOE, and OAN (#1-3) have long histories and stable budgets for library and librarian. All others (#4-8) were established during the last decade. The following table gives an overview of the library situation at these five ``new'' places. The last column gives a comparison with the OAN library at Ensenada, maintained by UNAM. Numbers in brackets are either uncertain or very variable. The budget is listed for journals and books.

[Table: overview of the library situation at the five ``new'' places, with the OAN library at Ensenada in the last column for comparison.]

Astronomy acquisitions for IAM-UdG and Monterrey are made by their central libraries, causing a lack of transparency and communication between these and the astronomers. At DAUG we have no budget for binding journals. However, in 2001 we used some left-over monies for binding a few years of _ApJ_, to find out that it was considerably cheaper to bind them in Mexico City than in Guanajuato. The changes in the job market and the internet have radically affected not only the way astronomers work, but also how an astronomy library is run, especially at small and ``poor'' places. Today small groups of astronomers are established independently of favorable sky conditions and rely mainly on an adequate internet connection, but often have to work without a professional librarian. While this may work ``well'', i.e. with little effect on research output as in our case, it certainly relies heavily on the services provided by a few professional librarians thinking far beyond their own institution. I see a dangerous trend towards a future two-class system of astronomy institutions: those with professional librarians working almost ``behind the scenes'' and those which have to survive without a local librarian altogether.

P. Phillips, L.F. Rodríguez, A. Sánchez-Ibarra, P. Valdés Sada, and M.-E. Jiménez kindly provided data on their research centres. Thanks to U. Grothkopf and S. Stevens-Rayburn for comments, to A. Roy and N. Loiseau for printing the poster, and to L. Hdz. Mendieta for help with Figure 1. My attendance of LISA IV was financed by CONACyT grant E-27602.
|
virtually every `` serious '' place where professional astronomy is done has a librarian , even if shared with the physics or math department . since its creation in 1994 of _ departamento de astronoma _ ( da ) of universidad de guanajuato ( ug ) it was neither provided with a librarian , nor with proper space for its holdings , nor with a budget allowing institutional journal subscriptions . i describe my experience of now five years as `` amateur '' librarian , and present information on other small astronomy institutions in mexico in a similar situation . # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in
|
wireless communication channels are characterized by their time - varying fading nature that has significant effect on the performance of wireless networks .various algorithms have been proposed to design efficient resource allocation schemes that optimize the system performance over fading channels , e.g. , by minimizing the transmission power , minimizing the delay , or maximizing the system throughput .resource allocation over fading channels has been studied for point - to point communication in different contexts , e.g. , . in the expected shannon capacity for fading channelswas obtained when the channel state information ( csi ) is known causally at the transmitter and the receiver .furthermore , it has been shown that the `` water - filling '' algorithm achieves the maximum expected capacity .the authors of considered the problem of minimizing the expected energy to transmit a single packet over a fading channel subject to a hard deadline . in , a dynamic program formulation was proposed to maximize a general throughput function under constraints on the delay and the amount of energy available at the transmitter . in ,the work of was extended to energy harvesting systems where the transmitter has causal csi .the capacity region of the multiple access channel ( mac ) has been studied in various settings , see for example . in ,the capacity region of the gaussian multiple - input multiple - output ( mimo ) mac was characterized .the authors of proposed an iterative water - filling algorithm to obtain the optimal transmit covariance matrices of the users that maximize the weighted sum capacity . in the capacity region of the fading mac was characterized by tse and hanly .furthermore , the power allocation policy that maximizes the long - term achievable rates subject to average power constraints for each user was introduced . in ,hanly and tse introduced an information - theoretic characterization of the capacity region of the fading mac with delay constraints .in addition , they provided the optimal power allocation policy that achieves the delay - limited capacity . in ,wang developed the optimal energy allocation strategy for the fading mac with energy harvesting nodes by assuming that the csi is _ non - causally _ known before the beginning of transmission . in , the capacity region of the fading mac with power constraint on each codeword was investigated .however , the authors of focused their work on the low signal - to - noise ratio ( snr ) regime where they showed that the one - shot power allocation policy is asymptotically optimal . in this paper , we consider a system composed of multiple users transmitting to a single base station ( bs ) over a fading mac .the transmission occurs over a limited time duration in which each user has a fixed amount of energy .some motivating scenarios and applications for this system model are introduced in , e.g. 
, satellites , remote sensors , and cellular phones with limited amount of energy transmitting delay - sensitive data to a single receiver .we develop energy allocation strategies to maximize the expected sum - throughput of the fading mac subject to hard deadline and energy constrains .first , we consider the offline allocation problem in which the channel states are known a priori to the bs .we show that the optimal solution of this problem can be obtained via the iterative water filling algorithm .next , a dynamic program formulation is introduced to obtain the optimal online allocation policy when only causal csi is available at the bs . since the computational complexity of the optimal online policy increases exponentially with the number of users ,we develop a suboptimal solution for the online allocation problem by exploiting the proposed offline allocation policy .moreover , we investigate numerically the performance of the proposed policies and compare them with the equal - energy allocation and the one - shot energy allocation policy of . the rest of the paper is organized as follows . in section [ system ] , we present the system model and formulate the maximum sum - throughput optimization problem .the offline energy allocation is introduced in section [ offline ] .we study the online allocation in section [ online ] , where dynamic programming is utilized to obtain the optimal policy and a suboptimal policy with reduced computational complexity is proposed . in section [ results ] , we present our numerical results and compare the performance of different policies in various scenarios . finally , we conclude the paper in section [ conclusion ] .we consider a discrete - time mac as shown in fig .[ f1:system ] , where users communicate with a single bs in a slotted wireless network .we assume a flat - fading channel model in which the channel gain of each user is constant over the duration of the time slot and changes independently from time slot to another according to a known continuous distribution .thus , the received signal by the bs at time slot is given by where is a zero - mean white gaussian noise with variance , and is the transmitted signal of user at time slot .the channel gain between the user and the bs at time slot is denoted by , where the channel gains of each user , , are independent identically distributed with the cumulative distribution function ( cdf ) .let denote the maximum amount of energy that can be expended by user during time slots , where denotes the transmission window in which each user must transmit his data .let denote the set of users communicating with the bs , and denote the set of the time slots during which communication occurs .our goal is to maximize the sum - throughput of the mac over the transmission window under constraints on the available energy for each user .let denote the consumed energy by the user at time slot .hence , the maximum achievable sum - throughput of the mac at time slot , when the channel gains of all users at time slot are known , is given by where and are the channel bandwidth , and the time slot duration , respectively , and is the noise power in watts . in , ] are the channel gains vector and the consumed energy vector of all users at time slot , respectively .let be the available energy for user at time slot .thus , the evolution of the energy queue of the user is given by where the initial state of the energy queue is .in addition , the energy vector ] , is also obtained via the water - filling algorithm . 
however , in this case , the interference signals of the other users at each time slot are considered as noise .hence , the energy allocation policy of the user is significantly affected by the energy allocation policy of the other users , where the allocated energy for the user in time slot depends on which represents the ratio between the interference - plus - noise power and the channel gain of the user at time slot .* initialization : * , let , , note that a closed - form expression for the optimal solution introduced in theorem [ th1 ] can not be found .nevertheless , the optimal solution can be obtained by applying the iterative water filling algorithm ( iwf ) described in algorithm [ iwf ] to iteratively solve equations where is the maximum number of iterations . in each iteration, the iwf algorithm successively updates the optimal energy allocation of each user using the water - filling algorithm while assuming that the allocation policy of the other users are fixed .hence , at each iteration the algorithm tries to maximize the objective function of the problem by adapting the energy allocation of a single user while considering the signals of the other users as noise .since the objective function is monotonically increasing in the energy allocation policy of each user , the objective function can not decrease after any iteration . as a result, the iwf solution approaches the optimal solution of problem as the number of iterations increases where determines the error tolerance .the iwf algorithm was applied in to find the optimal transmit covariance matrices of the users that achieve the boundary of the gaussian mimo - mac capacity . in a similar manner to , we can assume the channel gains of the user over the time window ( ) as effective channel gains of transmit antennas of the user .therefore the results of the iwf algorithm obtained in can be applied here . for a finite number of iterations , the iwf algorithm described in algorithm [ iwf ] converges to the optimal allocation policy which is the solution of the optimization problem in .furthermore , the iwf algorithm achieves a sum - throughput lower than the optimal within nats after a single iteration .see theorem and theorem in .in this section , we assume that the channel vector is causally known to the bs and the users at the beginning of time slot while future channel states are not known . let denote the state of the system which is comprised of the channel gains and the energy levels of all users at time slot .we aim to obtain the energy allocation policy ] .* optimization step * : after the certainty step , the recursive optimization problem in at time slot can be reformulated as the following deterministic optimization where the solution of the optimization problem in is obtained in a similar way to the offline allocation problem introduced in section [ offline ] by applying the iwf algorithm in algorithm [ iwf ] over slots with an amount of energy available at each user for .* allocation step * : we set and compute the energy levels of all users at using equation [ energylevel ] .then , we go to the next time slot .and .,width=9,height=7 ] in this section , we numerically evaluate the performance of various energy allocation policies introduced throughout the paper . 
for comparison , we consider a simple energy allocation policy namely the equal - energy allocation , where each user allocates an equal amount of energy for each time slot of the transmission window regardless the effect of the channel fading and the allocation policy of the other users , i.e. , notice that this policy is optimal in case of time - invariant channels , where the channel gain of each user is constant over the deadline . and ,width=9,height=7 ] .5 for db and db , title="fig:",width=9,height=7 ] .5 for db and db , title="fig:",width=9,height=7 ] for simplicity , we consider a symmetrical case , where all users are equipped with an equal amount of energy , i.e. , , , and the channel gains of all users are i.i.d . , where the channel gains are generated according to the exponential distribution with parameter , i.e. , , . also , we consider the following parameters : the bandwidth mhz , the noise power watts , and the slot length seconds , and hence , the transmit snr of each user , .we use the performance of the offline allocation policy as an upper bound on the performance of online policies . in the following figures ,the performance of the optimal offline , suboptimal , one - shot , and equal - energy policies are obtained by averaging over randomly generated channel realizations , while the performance of the optimal online policy is obtained by using the discretization method .figs [ fig1 ] and [ fig2 ] show the average sum - throughput of the mac versus the transmit snr of each user for a system composed of users and transmission window length equal to time slots .[ fig1 ] focuses on the low snr regime where the snr is varied from db to db .it is clear that the performance of the proposed suboptimal and the one - shot policies is close to the optimal one , although , the proposed suboptimal policy performs better when the snr approaches db . moreover , the equal - energy allocation policy has the worst performance . in fig .[ fig2 ] , the snr varies from db to db to investigate the performance of the different policies in the medium and high snr regimes .we can see from this figure that the one - shot policy deviates from the optimal solution , since the linear approximation of the throughput function is no longer valid at high snr . however , the performance of the proposed suboptimal policy is still very close to that of the optimal solution .next , we investigate the effect of the number of users on the performance of different policies .[ fig3:1 ] and fig .[ fig3:2 ] show the average sum throughput of the system ( for slots ) at snr db and snr= db , respectively .we can see from fig .[ fig3:1 ] that both the proposed suboptimal policy and the one - shot policy almost have the same performance for any number of users in the low snr regime .when the number of users is much larger than the time slots of the transmission window , i.e. , , each time slot of the transmission widow would be shared with a lot of users .in other words , each user would suffer from high interference signals at each time slot of the transmission window .therefore the best choice is to allocate the available energy of each user to a single time slot of the transmission window that has a favorable channel gain .hence fig .[ fig3:2 ] shows that the one - shot policy converges to the proposed suboptimal policy in the high snr regime for .however , the equal - energy allocation policy has better performance than the one - shot policy when the number of users is small . 
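To give a feel for the comparison being made in these figures, the following is a rough, self-contained Monte Carlo sketch of the equal-energy baseline against an offline iterative water-filling benchmark. Everything is in normalized units (unit noise power, unit-mean exponential fading, rates in nats), the water level is found by bisection, and the number of users, slots, energy budget and trial count are arbitrary illustrative choices; none of these match the simulation settings or the exact algorithm reported in the paper.

import numpy as np

def waterfill(noise, budget, iters=60):
    """Allocate `budget` over slots with effective noise levels `noise`."""
    lo, hi = 0.0, np.max(noise) + budget
    for _ in range(iters):                               # bisection on the water level
        mu = 0.5 * (lo + hi)
        alloc = np.maximum(mu - noise, 0.0)
        lo, hi = (mu, hi) if alloc.sum() < budget else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

def sum_throughput(h, e):
    return np.sum(np.log(1.0 + np.sum(h * e, axis=0)))   # nats over the window

def iterative_waterfilling(h, budget, rounds=20):
    K, T = h.shape
    e = np.full((K, T), budget / T)
    for _ in range(rounds):
        for k in range(K):                               # others' signals treated as noise
            interference = np.sum(h * e, axis=0) - h[k] * e[k]
            e[k] = waterfill((1.0 + interference) / h[k], budget)
    return e

rng = np.random.default_rng(1)
K, T, budget, trials = 3, 10, 5.0, 200
gain_iwf = gain_eq = 0.0
for _ in range(trials):
    h = rng.exponential(1.0, size=(K, T))                # unit-mean Rayleigh-type fading
    e_eq = np.full((K, T), budget / T)                   # equal-energy baseline
    gain_eq += sum_throughput(h, e_eq) / trials
    gain_iwf += sum_throughput(h, iterative_waterfilling(h, budget)) / trials
print("equal energy:", gain_eq, " offline IWF:", gain_iwf)

With these arbitrary settings the water-filling benchmark comes out ahead of equal energy, mirroring the qualitative gap discussed above; the one-shot and online policies are omitted to keep the sketch short.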
on the other hand , fig .[ fig3:1 ] and fig .[ fig3:2 ] show that the gap between the equal - energy allocation policy and the suboptimal policy increases as the number of users increases since the competition on the available resources ( the time slots of the transmission window ) increases as the number of users increases .in this paper , we have proposed energy allocation strategies for the -user fading mac with delay and energy constraints under two different assumptions on the channel states information . in the offline allocation ,a convex optimization problem is formulated with the objective of maximizing the sum - throughput of the fading mac within the transmission window where the optimal solution is obtained by applying the iterative water filling algorithm . in the online allocation ,the problem is formulated via dynamic programming , and the optimal solution is obtained numerically by using the discretization method when the number of users is small .in addition , we have proposed a suboptimal solution with reduced computational complexity that can be used when the number of users is large .numerical results have been provided to show the superiority of the proposed algorithms compared to the equal - energy allocation and the one - shot allocation algorithms .the optimal offline transmission policy is obtained by solving the optimization problem .since the objective function of is the sum of concave functions , and the constrains are affine functions , then the optimization problem is a convex optimization problem that can be solved using lagrange method .the lagrangian is given by where is the lagrange multiplier associated with the equality constraint in , and is the lagrange multiplier associated with the inequality constraint in .slater s condition is satisfied for this problem , and hence , the karush - kuhn - tucker ( kkt ) conditions are necessary and sufficient for optimality .the kkt conditions are given by a. fu , e. modiano , and j. n. tsitsik , `` optimal transmission scheduling over a fading channel with energy and deadline constraints , '' _ ieee transactions on wireless communications _, vol . 5 , no . 3 , pp .630641 , 2006 .d. n. tse and s. v. hanly , `` multiaccess fading channels .i. polymatroid structure , optimal resource allocation and throughput capacities , '' _ ieee transactions on information theory _ , vol .44 , no . 7 , pp . 27962815 , 1998 .z. rezki and m .- s .alouini , `` on the capacity of multiple access and broadcast fading channels with full channel state information at low snr , '' _ ieee transactions on wireless communications _ , vol . 13 , no . 1 , pp . 464475 , 2014 .r. devassy , g. durisi , j. ostman , w. yang , t. eftimov , and z. utkovski , `` finite - snr bounds on the sum - rate capacity of rayleigh block - fading multiple - access channels with no a priori csi , '' _ ieee transactions on communications _ , vol .63 , no .10 , pp . 36213632 , 2015 .z. wang , v. aggarwal , and x. wang , `` iterative dynamic water - filling for fading multiple - access channels with energy harvesting , '' _ ieee journal on selected areas in communications _ ,33 , no . 3 , pp .382395 , 2015 .
|
in this paper, we consider a multiple-access fading channel where users transmit to a single base station (bs) within a limited number of time slots. we assume that each user has a fixed amount of energy available to be consumed over the transmission window. we derive the optimal energy allocation policy for each user that maximizes the total system throughput under two different assumptions on the channel state information. first, we consider the offline allocation problem, where the channel states are known a priori before transmission. we solve a convex optimization problem to maximize the sum-throughput under energy and delay constraints. next, we consider the online allocation problem, where the channel states are causally known to the bs, and we obtain the optimal energy allocation via dynamic programming when the number of users is small. we also develop a suboptimal resource allocation algorithm whose performance is close to the optimal one. numerical results are presented showing the superiority of the proposed algorithms over baseline algorithms in various scenarios. keywords: resource allocation, multiple-access channels, fading, dynamic programming.
|
the symmetric simple exclusion process (to which we sometimes refer simply as _the simple exclusion_) is one of the simplest particle systems with local interactions. it can be considered as a toy model for the relaxation of a gas of particles and was introduced by spitzer in . since then, it has been the object of a large number of studies by mathematicians and theoretical physicists, who investigated many of its properties: they studied the evolution rules for the particle density, tried to derive fick's law from microscopic dynamics, and studied the motion of an individual tagged particle (see for reviews on the subject and references therein). more recently, interest has developed in the convergence to equilibrium of the process on a finite graph in terms of mixing time, which is the object of our study. we consider the discrete circle with sites and place particles on it, with _at most_ one particle per site. with a slight abuse of notation, we sometimes use elements of to refer to elements of . the simple exclusion is a dynamical evolution of the particle system which can be described informally as follows: each particle tries to jump independently onto its neighbors with the given transition rates, but the jump is cancelled if a particle tries to jump onto a site which is already occupied (see figure [partisys] in section [fluctuat] for a graphical representation). more formally, our state space is defined by ; given , we let denote the configuration obtained by exchanging the contents of sites and , and the exclusion process with particles is the continuous-time markov process on whose generator is given by . the unique probability measure left invariant by is the uniform probability measure on , which we denote by . given , we let denote the trajectory of the markov chain starting from . we want to know how long we must wait to reach the equilibrium state of the particle system, for which all configurations are equally likely. we measure the distance to equilibrium in terms of the total variation distance. if and are two probability measures on , the total variation distance between and is defined to be , where is the positive part of . it measures how well one can couple two variables with laws and . we define the worst-case distance to equilibrium at time as follows , and similarly we define the typical distance from equilibrium at time as . for a given we define the -mixing-time (or simply the mixing time when ) to be the time needed for the system to be at distance from equilibrium. let us mention that the convergence to equilibrium has also been studied in terms of asymptotic rates: it has been known for a long time (see e.g. (* ? ?
?* corollary 12.6 ) ) that for any reversible markov chain exists and that is the smallest nonzero eigenvalue of , usually referred to as the spectral gap .note that the knowledge of the spectral - gap also give an information on for finite , as we have ( cf .[theorem 12.3 ] ) the exclusion process can in fact be defined on an arbitrary graph and its mixing property have been the object of a large number of works .let us mention a few of them here .let us start with the mean - field case : in , the study of the exclusion on the complete graph with particles is reduced to the study of the birth and death chain and a sharp asymptotic for the mixing time is given using a purely algebraic approach ( see also for a probabilistic approach of the problem for arbitrary ) .the problem on the lattice is much more delicate .let us mention a few results that were obtained on the torus : in ( and also independently in ) , comparisons with the mean - field model were used to prove that , and thus via that there exists a constant such that in , the related problem of the _ log - sobolev constant _ for the process was studied . in particular , in ,a sharp bound ( up to multiplicative constant ) on the log - sobolev constant was proved for the exclusion process on the grid which allowed to improve into in , using the _ chameleon process _ , this upper bound is improved in the case of small by showing that ( see also where the technique is extended to obtain estimates on the mixing time for arbitrary graphs in terms of the mixing time of a single particle ) . in another direction : in , it is shown that the spectral gap for the simple - exclusion on any graph is equal to that of the underlying simple random walk ( e.g. in our case ) . finally concerning the case of dimension : in , the mixing time of the exclusion process on the segment is proved to be larger than and smaller than , with the conjecture that the lower bound is sharp .this conjecture was proved in .the first result of this paper is a sharp asymptotic for the mixing time of the exclusion process on the circle . for a fixed ,when and goes to infinity we are able to identify the asymptotic behavior of .we obtain that when ( which by symmetry is not a restriction ) note that here the dependence in is not present in the asymptotic equivalent .this means that on a time window which is the distance to equilibrium drops abruptly from to .this sudden collapse to equilibrium for a markov chain was first observed by diaconis and shahshahani in the case of the ( mean - field ) transposition shuffle ( see also for the random walk on the hypercube ) .the term cutoff itself was coined in and the phenomenon has since been proved to hold for a diversity of markov chains ( see e.g. for some recent celebrated papers proving cutoffs ) .it is believed that cutoff holds with some generality for reversible markov chains as soon as the mixing time is much larger than the inverse of the spectral gap , but this remains a very challenging conjecture ( see ( * ? ? ?* chapter 18 ) for an introduction to cutoff and some counterexamples and for recent progress on that conjecture ) .a natural question is then of course : `` on what time scale does decrease from , say , to ? 
'' .this is what is called the cutoff window .we are able to show it is equal to .let us mention that , this is result is , to our knowledge , the first sharp derivation of a cutoff window for a lattice interacting particle system .[ mainres ] for any sequence satisfying and tending to infinity .we have for every more precisely we have and the window is optimal in the sense that for any the result above can be reformulated in the following manner : the second line states that the cutoff window is at most while the third implies not only that this is sharp , but also that one the time scale , the `` cutoff profile '' has infinite support in both directions. our result does not cover the case of a bounded number of particles .in this case there is no cutoff and the mixing time is of order for every with a pre - factor which depends on ( a behavior very similar to the random - walk : case ) .we also show that on the other - hand that starting from a typical configuration , the relaxation to equilibrium is not abrupt and occurs on the time - scale .[ secondres ] for any sequence satisfying and tending to infinity , we have for all and note that we will not prove that which would complete the picture by showing that the system does not mix at all before the time - scale. however we point out to the interested reader that such a result can be obtained combining ingredients of section [ lowerbounds ] together with ( * ? ? ?* lemma 3.1 ) which asserts that the fluctuations of defined in are asymptotically gaussian .the convergence of when tends to infinity remains an open question .let us , in this section , briefly sketch the proof or at least recall the ingredients used in to derive the cutoff for the exclusion on the segment .the state - space of particle configuration on the segment comes with a natural order we set it turns out that not only this order is preserved by the dynamics in a certain sense ( see but also ) , and but it has additional properties : if denotes the maximal configuration for then is an increasing function ( for any ) ; we also have positive correlation of increasing events ( a.k.a the fkg inequality after ) .these monotonicity properties are first used to show that after time of , starting the dynamics from , we can couple a finite dimensional ( _ i.e. _ whose dimension remains bounded when grows ) projection of together with the corresponding equilibrium distribution .the monotonicity is then used again to check that the peres - winckler censorship inequality ( * ? ? ?* theorem 1.1 ) is valid in our context .the latter result establishes that , if one starts from the maximal configuration for the order described in , ignoring some of the updates in the dynamics only makes the mixing slower ( as shown in , this can fail to be true if there is no monotonicity ) .we use this statement to show that the system mixes in a time much smaller than once a finite projection is close to equilibrium .this method establishes a sharp upper bound ( at first order ) for the mixing time starting from and some additional work is necessary to show that this is indeed the worse initial condition ( we refer to the introduction for more details ) . 
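before turning to the approach developed here, a small, hedged simulation sketch of the dynamics just described may help fix ideas. it tracks a fourier-type observable of the kind used for the lower bound later in the paper; the circle size, particle number, time horizon, number of runs and the rate convention (one exchange clock per edge) are illustrative choices, not those of the paper.

```python
import numpy as np

# hedged sketch: event-driven simulation of the symmetric simple exclusion on the
# discrete circle, realised as exchanges across uniformly chosen edges (rate 1 per
# edge, one of several equivalent conventions).  we track the observable
#     a(eta) = sum_x eta(x) * cos(2*pi*x/n),
# whose mean decays as exp(-lambda*t) with lambda = 2*(1 - cos(2*pi/n)) under this
# rate convention; observables of this kind drive the lower bound discussed later.

rng = np.random.default_rng(1)
n, k, t_end, runs = 60, 15, 200.0, 100
lam = 2.0 * (1.0 - np.cos(2.0 * np.pi / n))
cosx = np.cos(2.0 * np.pi * np.arange(n) / n)

def simulate_once():
    eta = np.zeros(n, dtype=int)
    eta[:k] = 1                          # "segment" initial condition
    t = 0.0
    while True:
        t += rng.exponential(1.0 / n)    # total exchange rate is n (one per edge)
        if t > t_end:
            return float(eta @ cosx)
        x = rng.integers(n)              # edge {x, x+1 mod n}
        y = (x + 1) % n
        eta[x], eta[y] = eta[y], eta[x]  # exchange; a no-op unless exactly one site is occupied

a0 = float(cosx[:k].sum())
avg = np.mean([simulate_once() for _ in range(runs)])
print("initial a(eta_0)           :", round(a0, 3))
print("empirical mean of a(eta_T) :", round(avg, 3))
print("predicted exp(-lam*T)*a0   :", round(a0 * np.exp(-lam * t_end), 3))
```

the empirical average of the observable matches the predicted exponential decay up to fluctuations of order the square root of the particle number, which is precisely the comparison exploited in the lower-bound argument below.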
in the view of the previous section ,the proof in for the mixing time for the exclusion on the segment heavily relies on monotonicity arguments in every step of the process .the drawback of this approach is that it is not very robust , and can not be used for either higher dimension graphs ( for instance with either free or periodic boundary condition ) .it even breaks down completely if one allows jump between site and . with this in mind , our idea when studying the exclusion on the circle is also to develop an approach to the problem which is more flexible , and could provide a step towards the rigorous identification of the cutoff threshold in higher dimensions ( see section [ higherdef ] for conjectures and rigorous lower - bounds ) .this goal is only partially achieved as , even if we do not require monotonicity , a part of our proof relies on the interface representation of the process ( see section [ fluctuat ] ) which is a purely one - dimensional feature .however let us mention that a dimensional generalization of proposition [ smallfluctu ] can be shown to remain valid for .a missing ingredient in higher dimension is thus a coupling which allows to couple particle configuration with typical fluctuation with equilibrium in a time of order .another positive point is that by relying much less on monotonicity , we are able to prove statements about the mixing time starting from an arbitrary position ( cf . theorem [ secondres ] ) instead of focusing only on the extremal ones . finally note that the method developed in this paper gives more precise results than the one in as we identify exactly the width of the cutoff window ( and it also extends to the segment ) .however , we could not extract from it the asymptotic mixing time for the adjacent transposition shuffle , which seem to require novel ideas . in , by combining the technique of the present paper with some aditionnal new ideas ,the author improved theorem [ mainres ] by describing the full cutoff profile , that is , identified the limit of .proposition [ smallfluctu ] as well as the multiscale analysis used in section [ multis ] play a crucial role in the proof . in section [ lowerbounds ]we prove the part of the results which corresponds to lower - bounds for the distance to equilibrium , that is to say , the first lines of and .the proof of this statement is very similar to the one proposed by wilson in , the only significant difference is that we have to work directly with the particle configuration instead of the height - function .doing things in this manner underlines that the proof in fact does not rely much on the dimension ( see section [ higherdef ] ) . while the proof does not present much novelty , we prefer to mention it in full as it is relatively short and it improves the best existing bound in the literature ( see ) . the main novelty in the paper is the strategy to prove upper - bound results ( second lines of and ) . in section [ pubdecompo ]we explain how the proof is decomposed . 
in section [ arratia ] ,we use a comparison inequality of liggett to control the ( random ) fluctuations of the local density of particle after time .finally we conclude by showing that configuration which have reasonable fluctuations couples with equilibrium within time , using interface representation for the particle system , and a coupling based on the graphical construction .the construction is detailed in section [ fluctuat ] , and the proof is performed using a multi - scale analysis in section [ multis ] .the aim of this section is to prove the some lower bounds on the distance to equilibrium . following the method of ( * ? ? ?* theorem 4 ) , we achieve such a bound by controlling the first two moments of the first fourier coefficient of .[ dastate ] for any sequence satisfying and tending to infinity .we have and for any moreover we have for any the main idea is to look at the first " fourier coefficient ( a coefficient corresponding to the smallest eigenvalue of the discrete laplacian on ) , of . for , we define it is an eigenfunction of the generator , ( the reason for this being that each particle performs a diffusion for which is an eigenfunction ) , associated to the eigenvalue where the function is an eigenfunction of the generator with eigenvalue , and as a consequence , for any initial condition is a martingale for the filtration defined by in particular we have =e^{-t{\lambda}_1}a_1(\chi).\ ] ] furthermore one can find a constant such that for all {{\;\leqslant\;}}2k.\ ] ] using the notation and set .we have where the second equality comes from reindexing the sum and the last one from the identity from the markov property , we have for every positive , |_{s=0}= { \lambda}_1 m_{t}+ e^{t{\lambda}_1}({{\ensuremath{\mathcal l } } } a_1)(\eta^\chi_t)=0.\ ] ] which implies that it is a martingale . in particular we have = e^{-{\lambda}_1 t}a_1(\chi).\ ] ] now let us try to estimate the variance of : for the process with particle , the maximal transition rate is ( each of the particles can jump in an most directions independently with rate one ) .if a transition occurs at time , the value of varies at most by an amount with this in mind we can obtain a bound on the bracket of ( that is : the predictable process such that is a martingale ) then using the fact that ] is much larger than then is much larger than with large probability which implies has to be large .we need to use this reasoning for a which maximizes . using ( * ? ? ?* proposition 7.8 ) ( obtained from the cauchy schwartz inequality ) and the estimates - , we have \right)^{2}}{\left({{\ensuremath{\mathbb e } } } \left[a_1(\eta_t^{\chi})\right]\right)^{2}+ 2\left[{{\rm var}}\left ( a_1(\eta_t^{\chi})\right)+{{\rm var}}_{\mu } \left ( a_1(\eta ) \right ) \right]}\\ & { { \;\geqslant\;}}\frac{1}{1 + 8k \exp(2{\lambda}_1 t ) a_1(\chi)^{-2}}. \end{split}\ ] ] consider being the configuration which minimizes . it is rather straight - forward to check that for any , , we have . thus using the above inequality for ( the reader can check that which implies both and .to prove we need to use for the set we have + \frac{\mu ( a_{\delta})}{1 + 8 \delta^{-2}\exp(2{\lambda}_1n^2 s ) } .\ ] ] to conclude the proof , it is sufficient to prove that for some small .this can be done e.g. 
by showing that {{\;\leqslant\;}}ck^2 ] to be the smallest ( for the inclusion ) subset of which contains and which satisfies let be a function defined on we use the notation } f(z).\ ] ] we define the _ length _ of the interval ( which we write ] .given , we can decompose in base as follows where .we set and , for .+ using the triangle inequality again we have and hence thus , the proof of proposition [ smallfluctu ] can be reduced to show the following .[ fixeddev ] let us define for every we have {{\;\leqslant\;}}2e^{-cs^2}\ ] ] indeed from and the reasoning taking place before , one has we have by union bound {{\;\leqslant\;}}\sum_{q = q_0}^p \sum_{y=1}^{\lfloor n 2^{-q } \rfloor } { { \ensuremath{\mathbb p } } } \left [ |s_{2^{q}(y-1 ) , 2^{q}y}(t)| { { \;\geqslant\;}}\left(\frac{3}{4}\right)^{p - q } s\sqrt{k } \right ] \\ { { \;\leqslant\;}}\sum_{q=0}^p 2^{p+1-q } \max_{y } { { \ensuremath{\mathbb p } } } \left [ |s_{2^{q}(y-1 ) , 2^{q}y}(t)| { { \;\geqslant\;}}\left(\frac{3}{4}\right)^{p - q } s\sqrt{k } \right].\end{gathered}\ ] ] thus we have to find a bound on \ ] ] which is uniform in and is such that the sum in the second line of is smaller than . for what follows we can , without loss of generality consider only the case , as all the estimates we use are invariant by translation on .using lemma [ laplacetrans ] and the markov inequality , we have for any positive { { \;\leqslant\;}}\exp\left ( 2^{q+1}\alpha^{2}\frac{k}{n}-\alpha\sqrt{k}\left(\frac{3}{4}\right)^{p - q } s\right).\ ] ] we can check that the right - hand side is minimized for note that for all , ( recall ) one has this ascertains the validity of , and hence { { \;\leqslant\;}}e^{-s^{2}2^{q-3}n\left(\frac{3}{4}\right)^{2(p - q)}}.\ ] ] using the fact that we have { { \;\leqslant\;}}e^{-\frac{s^{2}}{8}\left(\frac{9}{8}\right)^{(p - q)}}.\ ] ] using this in allows us to conclude ( choosing appropriately ) .we use a result of liggett which provides a way to compare the simple exclusion with a simpler process composed of independent random walkers . if is a symmetric function on and we set where the above relation defines modulo permutation which is sufficient for the definition .we say that a function defined on is positive definite if and only if for all such that , we have we say that a function defined on is positive definite if all its two dimensional marginals are . note in particular a function that can be written in the form where is a positive constant and is a non - negative function on , is definite positive .given , let denote a set of independent random walk on , starting from initial condition which satisfies of course defines only modulo permutation but this has no importance for what we are doing ( e.g. we can fix to be minimal for the lexicographical order ) .[ ligett ] if is a symmetric definite positive then we have for all {{\;\leqslant\;}}{{\ensuremath{\mathbb e } } } [ f({\bf x}^{\chi}_t)]\ ] ] the proof in the case is detailed in ( * ? ? ?* proof of lemma 2.7 ) perfectly adapts to the case of general .we include it here for the sake of completeness .we let denote the generator of independent random walks and the associated semigroup .we define the action of and on functions as follows as is invariant by permutations of the labels , is also is a symmetric function and we can also consider for ( cf . 
) using the standard property of markov semi - group , we have (\chi)= \int^t_0 \partial_s \left [ p_{t - s } q_s(f)(\chi)\right ] { \,\text{\rm d}}s\\= \int_{0}^t p_{t - s } \left ( { { \ensuremath{\mathcal l } } } ' -{{\ensuremath{\mathcal l } } } \right ) q_{s } f(\chi){\,\text{\rm d}}s.\ ] ] and our conclusion follows if we can prove that for all ( as composition by preserves positivity ) .first note that is definite positive .indeed if we have where is the discrete heat kernel on the circle .and satisfies .thus the l.h.s of is non - negative as a sum of non - negative terms .then , notice that the generator includes all the transition of but also allows particle to jump on a neighboring site even if it is occupied .hence we have , for all with distinct coordinates where in the right - hand side , only the -th and -th coordinate appears in the argument of .note that each term in the r.h.s . of is of the form with ( recall that is symmetric ) and thus is positive .we want to apply proposition [ ligett ] to the function \}}-{{\ensuremath{\mathbb p } } } \left[x_i(t)\in[1,y ] \right ] \right ) } \ ] ] for note that it is of the form and thus is definite positive .[ laplacetrans2 ] for all sufficiently large for all , and for all , we have \}}-{{\ensuremath{\mathbb p } } } \left[x_i(t)\in[0,y ] \right]\right)}\right ] { { \;\leqslant\;}}\exp\left ( \frac{2ky}{n } \alpha^2\right).\ ] ] to deduce from we just have to remark that \right]=\sum_{z=1}^y \sum_{i=1}^k p_t(x^0_i , z)= \sum_{z=1}^y { { \ensuremath{\mathbb p } } } [ \eta_t=1].\ ] ] we using the inequality for the variable }-{{\ensuremath{\mathbb p } } } [ x_1(t)\in [ 1,y]\right) ] and from lemma [ bound ] we have for all , {{\;\leqslant\;}}{{\ensuremath{\mathbb p } } } \left[x_1(t)\in[1,y ] \right ] { { \;\leqslant\;}}2y / n,\ ] ] the integrated inequality gives \}}-{{\ensuremath{\mathbb p } } } [ x_1\in [ 1,y]\right ) } \right]{{\;\leqslant\;}}1+\frac{2\alpha^2y}{n } { { \;\leqslant\;}}\exp(2\alpha^2 ( y / n)).\ ] ] the independence of the is then sufficient to conclude .in this section we present the main tool which we use to prove proposition [ csdf ] : the corner - flip dynamics . the idea is to associate to each an height function , and consider the dynamics associated with this rate function instead of the original one and use monotonicity properties of this latter dynamics . this idea is is already present in the seminal paper of rost investigating the asymetric exclusion on the line and became since a classical tool in the study of particle system .in particular it is used e.g. in to obtain bounds on the mixing time for the exclusion on the line .it has also been used as a powerful tool for the study of mixing of monotone surfaces starting with , and more recently in .let us stress however that in , the interface represention is used mostly as graphical tool in order to have a better intuition on an order that can be defined directly on . in the present we use the interface representation to construct a coupling which can not be constructed considering only the original chain . in particular , note that our coupling is markovian for the corner - flip dynamics but not for the underlying particle system .[ fluctuat ] let us consider the set of height functions of the circle . 
given in , we define as and we let be the irreducible markov chain on whose transition rates are given by we call this dynamics the corner - flip dynamics , as the transition corresponds to flipping either a local maximum of ( a `` corner '' for the graph of ) to a local minimum _ e vice versa_. it is , of course , not positive recurrent , as the state space is infinite and the dynamics is left invariant by vertical translation. however it is irreducible and recurrent .the reader can check ( see also figure [ partisys ] ) that is mapped onto , by the transformation defined by and that the image of the corner - flip dynamics under this transformation is the simple exclusion , down and up - flips corresponding to jumps and of the particles respectively . there is a natural order on the set defined by which has the property of being preserved by the dynamics in certain sense ( see section [ grafff ] for more details ) .= 9.5 cm given , we define to be a process with transitions starting from initial condition it follows from the above remark that fora all we have =p^\chi_{t}.\ ] ] our idea is to construct another dynamic which starts from a stationary condition ( the gradient is distributed according to ) and to try to couple it with within time .the difficulty here lies in finding the right coupling .in fact we define not one but two stationary dynamics and , satisfying ={{\ensuremath{\mathbb p } } } \left[{\nabla}\xi^2_t\in \cdot\right]=\mu.\ ] ] as is invariant for the dynamics , is satisfied for all as soon as it is satisfied for .as we wish to use monotonicity as a tool , we want to have then our strategy to couple with equilibrium is in fact to couple with and remark that if holds then we first have to construct the initial condition and which satisfies let us start with variable which has law .we want to construct and which satisfies somehow , we also want the vertical distance between and to be as small as possible .we set for arbitrary , or finally set we set and note that with this choice , is satisfied for ( see figure [ highfunk ] ) .= 8.5 cm now we present a coupling which satisfies .note that in this case if and only if the area between the two paths , defined by equals zero .the idea is then to find among the order - preserving possible one for which the `` volatility '' of is the largest possible , so that it reaches zero faster .we want to make the corner - flips of and _ as independent as possible _( of course making them completely independent is not an option since would not hold ) we introduce now our coupling of the , which is also a grand - coupling on , in the sense that it allows us to construct starting from all initial condition on the same probability space .the evolution of the is completely determined by auxiliary poisson processes which we call clock processes .set and set and to be two independent rate - one clock processes indexed by ( and are two independent poisson processes of intensity one of each ) .the trajectory of given is given by the following construction * is a cd - lg , and does not jump until one of the clocks indexed by , .* if rings at time and is a local maximum for , then . *if rings at time and is a local minimum for , then .the coupling of , and is obtained by using the same clock process for all of them .the reader can check that with this coupling is a consequence of .in fact , the corner flip dynamics which is considered here , is in one to one correspondence with the zero - temperature stochastic ising model on an infinite cylinder . 
with this in mindthe coupling we have constructed just corresponds to the the graphical construction of this spin flip dynamics .see e.g. ( * ? ? ?* section 2.3 and figure 3 ) for more details for the dynamics on a rectangle with mixed boundary condition .let us stress here that the coupling we use here is not the one the one of or for which the updates of pair of neighbors are done simultaneously for the coupled chains .in particular it is not a markovian coupling for the particle system ( as the height function is not encoded in the particle configuration ) .this is a crucial point here as this is what allows the coupling time to be much shorter .recall in particular in ( see table 1 ) , it is shown that with the usual markovian coupling , the coupling time is twice as large as the mixing time . to prove proposition [ csdf ]it is sufficient to prove that and typically merge within a time .more precisely , [ nahnou ] for all , given there exists such that for all sufficiently large , {{\;\leqslant\;}}{\varepsilon}.\ ] ] similarly for all , there exists such that {{\;\leqslant\;}}1-c(s , u).\ ] ] given consider , , constructed as above .then given , we construct the dynamics starting from the initial condition and using the same clock process as and . by definition of ,is satisfied and thus so is from the graphical construction . recalling and we have {{\;\leqslant\;}}{{\ensuremath{\mathbb p } } } \left [ \xi^0_{t}\ne \xi^1_{t}\right ] { { \;\leqslant\;}}{{\ensuremath{\mathbb p } } } \left[\xi^1_{t}\ne \xi^2_{t}\right].\ ] ] for any , where the last inequality is a consequence of . according tothe above inequality , proposition [ csdf ] obviously is a consequence of proposition [ nahnou ] .in order to facilitate the exposition of the proof , we choose to present it in the case first .we also chose to focus on . the necessary modifications to prove and to adapt the proofs for general explained at the end of the section .we are interested in bounding ( recall ) with our construction , , the area between the two curves is a valued martingale which only makes nearest neighbor jumps ( corners flip one at a time ) .hence is just a time changed symmetric nearest neighbor walk on which is absorbed at zero . in order to get a bound for the time at which hits zero , we need to have reasonable control on the jump rate which depends on the particular configuration the system sits on .the jump rate is given by the number corners of and that can flip separately .more precisely , set the jump rate of is given by for let us define by construction , the process defined by is a continuous time random walk on which jumps up and down with rate .we have from the definition of the which is of order and hence needs a time of order to reach .what we used to estimate is the following bound which can be derived from and the definition of , ={{\ensuremath{\mathbb p } } } \left[h_0{{\;\geqslant\;}}(s+r)n^{1/2 } \right ] { { \;\leqslant\;}}{{\ensuremath{\mathbb p } } } \left[h(\eta_0){{\;\geqslant\;}}rn^{1/2 } \right ] { { \;\leqslant\;}}2 e^{-cr^2}.\ ] ] if were of order for all this would be sufficient to conclude that reaches zero within time .this is however not the case : the closer and get , the smaller becomes .a way out of this is to introduce a multi - scale analysis where the bound we require on depends on how small already is .we construct a sequence of intermediate stopping time as follows . we are interested in for with note that a number of can be equal to zero if .we set for convenience . 
to bound the value of , our aim is to control each increments for as well .a first step is to get estimates for the equivalent of the for the time rescaled process ( recall [ defx ] ) .we set for as is diffusive , is typically of order , and is of order . with this in mind , it is not too hard to believe that [ cromican ] given there exists a constant such that {{\;\leqslant\;}}{\varepsilon}.\ ] ] let denote a nearest neighbor walk on starting from and the first time reaches .it is rather standard that there exists a constant such that for every and every {{\;\leqslant\;}}c_1 u^{-1/2}.\ ] ] note that for , ignoring the effect of integer rounding , has the same law as with and thus applying we obtain that {{\;\leqslant\;}}c_1 u^{-1/2 } ( 3/4)^{i/2}.\ ] ] in the same manner we have {{\;\leqslant\;}}c_1 u^{-1/2}.\ ] ] we can then choose large enough in a way that concerning , from , one can find such that {{\;\leqslant\;}}{\varepsilon}/4.\ ] ] conditionally on the event , is stochastically dominated by with and hence using and fixing large enough ( depending on , and ) we obtain {{\;\leqslant\;}}{{\ensuremath{\mathbb p } } } \left[a(0){{\;\geqslant\;}}c_2 n^{3/2}\right]+{{\ensuremath{\mathbb p } } } \left [ { { \ensuremath{\mathcal t } } } _ 0 { { \;\geqslant\;}}u a(0)^2\right]{{\;\geqslant\;}}{\varepsilon}/2.\ ] ] then we conclude by taking .what we have to check then is that the value of is not too small in the time interval for all .what we want to use is that for any , is at equilibrium so that has to present a `` density of flippable corners '' .we introduce an event which is aimed to materialize this fact . given and in we set \ | \\xi(z ) \text { is a local extremum } \}.\ ] ] we have }{2(n-1)}.\ ] ] we define {{\;\geqslant\;}}n^{1/4}\rightarrow j(x , y,\xi^1_t){{\;\leqslant\;}}\frac{1}{3}\#[x , y ] \big\},\ ] ] the event that a large " interval with an anomalously low density of corner does not appear before time .[ smalla ] for all sufficiently large { { \;\leqslant\;}}\frac{1}{n}.\ ] ] note that for any given time , is distributed according to ( because this is the case for and is the equilibrium measure for the dynamics ) .now let us estimate the probability of {{\;\geqslant\;}}n^{1/4}\rightarrow j(x , y,\eta){{\;\leqslant\;}}\frac{1}{3}\#[x , y]\right\},\ ] ] under the measure where is defined like its counter part for the height function replacing is a local extremum `` by ' ' " .we consider an alternative measure on , under which the are i.i.d .bernoulli random variable with parameter . by the local central limit theorem for the randomwalk we have let us now estimate first we remark that we can replace {{\;\geqslant\;}}n^{1/4} ] " . indeed , by dichotomy , if the proportion of local maxima is smaller than on a long interval , it has to be smaller than on a subinterval whose length belong to ] .then , summing over all intervals and using , we deduce that there exists such that , for all sufficiently large now , we set to be the times where the chain makes a transition . the chain is a discrete time markov chain with equilibrium probability and hence by union bound { { \;\leqslant\;}}i e^{-c_3n^{1/4}}.\ ] ] this implies = { { \ensuremath{\mathbb p } } } \left[\exists t{{\;\leqslant\;}}n^3 { \nabla}\xi^1_t(t ) \notin { { \ensuremath{\mathcal e } } } \right ] { { \;\leqslant\;}}i e^{-c_3n^{1/4}}+{{\ensuremath{\mathbb p } } } \left [ t_i{{\;\leqslant\;}}n^3\right].\ ] ] as the transitions occur with a rate which is at most , the second term is exponentially small e.g. 
for and this concludes the proof. then when holds , we can derive an efficient lower bound on which just depend on .recall [ fromzea ] when holds we have for all if and have no contact with each other , then is equal to the total number of flippable corners in and . if holds , this number is larger than , which , by definition of , is a lower bound for the number of corners on alone .when there exists such that , we consider the set of active coordinates note that when one of the ( or both ) have a local maximum at then when the corresponding corner flips , f it changes the value of .our idea is to find a way to bound from below the number of in for which has a flippable corner , using the assumption that holds .let us decompose into connected components ( for the graph ) which are intervals as defined in .assume that ] is attained ) .if { { \;\leqslant\;}}n^{1/4} ] ) we can use the fact that hold .first , let us control the area : as and are in contact we have and hence }(t){{\;\leqslant\;}}\#[a , b ] ( h(\xi^1_t)+h(\xi^2_t)).\ ] ] by the definition of there are at least /3 ] .thus }(t){{\;\geqslant\;}}\frac{a_{a , b}(t)}{3 ( h(\xi^1_t)+h(\xi^2_t))}.\ ] ] and we can deduce ( recall ) that for any value ] , we are going to use a classical estimate of first hitting time of a given level for a simple symmetric random walk : the there exists a constant such that for all , for all one has ( using the notation of ) {{\;\geqslant\;}}e^{-c_1 \max(v^{1/2},v)},\ ] ] ( it is sufficient to check the estimate when is large as for close to zero it is just equivalent to ) .the time is stochastically dominated by and thus {{\;\geqslant\;}}e^{-c_1 s}.\ ] ] neglecting the effect of integer rounding , for , is equal in law to .hence from we have {{\;\geqslant\;}}e^{-c_1 \max((4/3)^{-i/2}s^{7/2},(4/3)^{-i}s^{7}))}.\ ] ] then note that there exists a constant such that for all for , depends on initial value of the area .now let us note that from we have for sufficiently large {{\;\geqslant\;}}1- 2e^{-cs^2}{{\;\geqslant\;}}1/2.\ ] ] then conditioned on the event , is dominated by and hence {{\;\geqslant\;}}\exp\left(-16 c_1 s^{9}\right).\ ] ] combined with , this gives us {{\;\geqslant\;}}e^{-16c_1 s^{9}}/2.\ ] ] using the independence and multiplying the inequalities , and ( and using ) we obtain the result for some appropriate .we also need an estimate on the probability that ( recall ) is too large which slightly differs from lemma [ coniduam ] [ coniduam2 ] recall .there exists a constant such that for all sufficiently large and all { { \;\leqslant\;}}e^{-c s^{10}}.\ ] ] as for lemma [ coniduam ] we just use lemma [ cromeski ] and the markov inequality .set and then from lemma [ cromican ] , [ smalla ] , and [ coniduam ] we have for sufficiently large , for large enough ( depending on ) {{\;\geqslant\;}}\exp(-c_1 s^9)/2.\ ] ] what remains to prove is that on the event .first , we notice that from the definition of , readily implies that hence to conclude we need to show that assume the statement is false and let be the smallest index such that it is not satisfied .using lemma [ fromzea ] we have then one has from the definition of hence from and thus ( recall ) for an explicit .this brings a contradiction to if is chosen sufficiently large .the overall strategy is roughly the same , except that we start with an area which is of order .hence most of the modifications in the proof can be performed just writing instead of .however lemma [ smalla ] does not hold for small values of and one needs a deeper 
change there .we define the as follows ( ) and we set note that a number of can be equal to zero if .the time changed version , of are defined as in .we first write down how lemma [ cromican]-[cromican3 ] and [ coniduam]-[coniduam2 ] can be reformulated in the context of particles ( the proofs are exactly the same and thus are not included ) .[ cromican2 ] given there exists a constant such that {{\;\leqslant\;}}{\varepsilon}.\ ] ] and a constant independent of the parameters {{\;\geqslant\;}}\exp\left(-c_2s^{8}\right).\ ] ] [ coniduam3 ] recall for any , exists a constant such that for any { { \;\leqslant\;}}{\varepsilon}.\ ] ] moreover there exist a constant such that for any , {\,\text{\rm d}}t { { \;\leqslant\;}}e^{-c_2 s^{10}}.\ ] ] a significant modification is however needed for lemma [ smalla ] , as for small , we can not define an event similar to which holds with high probability . set ( recall ) {{\;\geqslant\;}}n / k ( \log k)^2 \rightarrow j(x , y,\xi){{\;\geqslant\;}}\frac{k}{10n}\#[x , y]\right\}\\ \cup \left\ { \xi \ | \ \#[x , y]{{\;\leqslant\;}}n / k ( \log k)^2 \rightarrow |\xi(x)-\xi(y)|{{\;\leqslant\;}}(\log k)^4\right\}=:\mathfrak{a}_1\cup \mathfrak{a}_2.\end{gathered}\ ] ] note that is invariant by vertical translation of and thus only depends on .hence we can ( improperly ) consider it as a subset of .[ smalla2 ] we have as a consequence one has for every {{\;\leqslant\;}}1/k\ ] ] in this proof it is somehow easier to work with the particle system , hence we let be the uniform measure on .we consider an alternative measure on , under which the are i.i.d .bernoulli random variable with parameter . from the local central limit theorem ( which in this simple case can be proved using the stirling formula ), there exists a constant such that for all choices of and hence for any event , we have hence to prove the result , we just have to prove a slightly stronger upper - bound for the probability .we start proving that in terms of particle , holds any interval of length smaller than contains at most particles . set it is a standard large deviation computation ( computing the laplace transform and using the markov inequality ) to show that there exists a constant such that hence by translation invariance , the probability that there exists an interval of the form ] we have we now prove in terms of particle , having a local extremum at just corresponds to .note that if occurs then , by dichotomy , there exists necessarily an interval of length comprised between and in which the density of extrema is smaller than and hence the total number of extrema in it is smaller than . 
setting interval must include an interval of the type ] with {{\;\leqslant\;}}n / k ( \log k)^2 ] , we have }{n } { { \;\leqslant\;}}\xi^i_t(x){{\;\leqslant\;}}\xi^1_t(b)+\frac { k\#[x+1,b]}{n},\ ] ] and hence } ( \xi^2_t-\xi^1_t)(x){{\;\leqslant\;}}(\xi^1_t(b)- \xi^1_t(a))+\frac { k\#[a , b]}{n}\ ] ] from the definition of , the right - hand side is smaller than and hence }(t){{\;\leqslant\;}}2(\log k)^4\#[a , b]{{\;\leqslant\;}}(n / k ) ( \log k)^6.\ ] ] and hence ( recall , ) }(t){{\;\geqslant\;}}1{{\;\geqslant\;}}\frac{a_{[a , b]}(t ) k}{n ( \log k)^6}.\ ] ] for large bubbles with {{\;\geqslant\;}}\frac{2n } k ( \log k)^2 $ ] }(t){{\;\leqslant\;}}h(t ) \#[a , b]\ ] ] and thus from the definition of }(t ) { { \;\geqslant\;}}\frac{k}{10n } \#[a , b]{{\;\geqslant\;}}\frac{a_{[a , b]}(t)k}{10n h(t)}.\ ] ] the lemma is the proved by summing and over all bubbles .we are now ready to combine the ingredients for the the of proposition [ nahnou ] for arbitrary .we prove only , as can also be obtained in the same manner by adapting the technique used in the case .we fix a constant such that holds for instead of . is chosen so that holds for . where ,\\ t'&:=t+c_2n^2 . \end{split}\ ] ] from our definitions one has { { \;\geqslant\;}}1- { \varepsilon}.\ ] ] then we conclude by doing the same reasoning as in the case that using that for sufficiently large , on the event we have concerning we set and use , and to have for all sufficiently large , { { \;\geqslant\;}}e^{-cs^{-6}}\ ] ] and conclude as in the proof for that using that if ( and thus ) is sufficiently large we have on the event * acknowledgement : * the author is grateful to franois huyveneers and simenhaus for enlightening discussions and in particular for bringing to his knowledge the existence of the comparison inequalities for the exclusion process of lemma [ ligett ] .let denote the heat kernel of on the discrete circle ( corresponds to the probability distribution of a simple random walk starting from at time ) . for fixed ,the function is the solution of the discrete heat equation on without loss of generality one restrict to the case .the solution of the above equation can be found by a decomposition on a base of eigenvalues the discrete laplacian .we set for all to be the eigenvalue associated to the normalized eigenfunction , , the factor in front being instead of for .we have to add also sine eigenfunctions to have a base but the projection of on these eigenfunction is equal to zero . by fourier decomposition , we have where in the last inequality we just used . when , we have and hence we have p. caputo , f. martinelli , f. simenhaus and f. toninelli , _ zero temperature stochastic 3d ising model and dimer covering fluctuations : a first step towards interface mean curvature motion , _ comm .pure appl . math . *64 * ( 2011 ) , no .6 , 778 - 831 .a. e. holroyd , _ some circumstances where extra updates can delay mixing ,* 145 * ( 2011 ) 1649 - 1652 .c. kipnis and c. landim , _ scaling limits of interacting particle systems _ , grund .wissen . * 320 * springer ( 1999 ) .
|
in this paper, we investigate the mixing time of the simple exclusion process on the circle with sites, with a number of particles tending to infinity, both from the worst initial condition and from a typical initial condition. we show that the worst-case mixing time is asymptotically equivalent to , while the cutoff window is identified to be . starting from a typical initial condition, we show that there is no cutoff and that the mixing time is of order . keywords: markov chains, mixing time, particle systems, cutoff window
|
in recent years , independent component analysis ( ica ) has seen an explosion in its popularity in diverse fields such as signal processing , machine learning , and medical imaging , to name a few . for a wide - ranging list of algorithms and applications of ica ,see the monograph by . in the ica paradigm, one observes a random vector that can be expressed as a non - singular linear transformation of mutually independent latent factors ; thus where and is a full rank matrix often referred to as the mixing matrix .as such , ica postulates the following model for the probability distribution of : for any borel set in , where is the so - called unmixing matrix , and are the univariate probability distributions of the latent factors respectively . the goal of ica , as in other blind source separation problems , is to infer from a sample of independent observations of , the independent factors , or equivalently the unmixing matrix .this task is typically accomplished by first postulating a certain parametric family for the marginal probability distributions , and then optimising a contrast function involving .the contrast functions are often chosen to represent the mutual information as measured by kullback leibler divergence or maximum entropy ; or non - gaussianity as measured by kurtosis or negentropy .alternatively , in recent years , methods for ica have also been developed which assume have smooth ( log ) densities , e.g. , , and .although more flexible than their aforementioned parametric peers , there remain unsettling questions about what happens if the smoothness assumptions on the marginal densities are violated , which may occur , in particular , when some of the marginal probability distributions have atoms .another issue is that , in common with most other smoothing methods , a choice of tuning parameters is required to balance the fidelity to the observed data and the smoothness of the estimated marginal densities , and it is notoriously difficult to select these tuning parameters appropriately in practice . in this paper , we argue that these assumptions and tuning parameters are unnecessary , and propose a new paradigm for ica , based on the notion of nonparametric maximum likelihood , that is free of these burdens .in fact , we show that the usual nonparametric ( empirical ) likelihood approach does not work in this context , and instead we proceed under the working assumption that the marginal distributions of are log - concave . more specifically , we propose to estimate by maximising over all non - singular matrices , and univariate log - concave densities .remarkably , from the point of view of estimating the unmixing matrix , it turns out that it makes no difference whether or not this hypothesis of log - concavity is correctly specified .the key to understanding how our approach works is to study what we call the log - concave ica projection of a distribution on onto the set of densities that satisfy the ica model with log - concave marginals . in section [ sec : notation ] below , we define this projection carefully , and give necessary and sufficient conditions for it to make sense . in section [ sec : pdica ] , we prove that the log - concave projection of a distribution from the ica model preserves both the ica structure and the unmixing matrix. finally , in section [ sec : pd ] , we derive a continuity property of log - concave ica projections , which turns out to be important for understanding the theoretical properties of our ica procedure . 
our ica estimating procedure uses the log - concave ica projection of the empirical distribution of the data , and is studied in section [ sec : estproc ] . after explaining why the usual empirical likelihood approach can not be used , we prove the consistency of our method .we also present an iterative algorithm for the computation of our estimator .our simulation studies in section [ sec : sim ] confirm our theoretical results and show that the proposed method compares favourably with existing methods .our proposed nonparametric maximum likelihood estimator can be viewed as the projection of the empirical distribution of onto the space of ica distributions with log - concave densities . to understand its behavior , it is useful to study the properties of such projections in general .let be the set of probability distributions on satisfying and for all hyperplanes , i.e. the probability measures in that have finite mean and are not supported in a translate of a lower dimensional linear subspace of . here and throughout, denotes the euclidean norm on , and we will be interested in the cases and .further , let denote the set of non - singular real matrices .we use upper case letters to denote matrices in , and the corresponding lower case letters with subscripts to denote rows : thus is the row of .let denote the class of borel sets on .then the ica model is defined to be the set of of the form for some and . as shown by (* theorem 2.2 ) , the condition is necessary and sufficient for the existence of a unique upper semi - continuous and log - concave density that is the closest to in the kullback leibler sense .more precisely , let denote the class of all upper semi - continuous , log - concave densities with respect to lebesgue measure on .then the projection given by is well - defined and surjective . in what follows, we refer to as the log - concave projection operator and as the log - concave projection of . by a slight abuse of notation, we also use to denote the log - concave projection from to .although the log - concave projection operator does play a role in this paper , our main interest is in a different projection , onto the subset of consisting of those densities that satisfy the ica model .this class is given by note that , in this representation , if has density , then has density .the corresponding log - concave ica projection operator is defined for any distribution on by we also write . [ prop : cases ] 1 . if , then and .2 . if , but for some hyperplane , then and .3 . if , then and defines a non - empty , proper subset of . in view of proposition[ prop : cases ] , and to avoid lengthy discussion of trivial exceptional cases , we henceforth consider as being defined on .in contrast to , which defines a unique element of , the log - concave ica projection operator may not define a unique element of , even for .for instance , consider the situation where is the uniform distribution on the closed unit disk in equipped with the euclidean norm . here, the spherical symmetry means that the choice of is arbitrary .in fact , after a straightforward calculation , it can be shown that consists of those where , in the representation ( [ eq : fdica ] ) , is arbitrary and are given by \}} ] given by .we update by moving along a geodesic in , but need to choose an appropriate skew - symmetric matrix , which ideally should ( at least locally ) give a large increase in the log - likelihood .the key to finding such a direction is proposition [ prop : diff ] below . 
to set the scene for this result , observe that for ] .the top left panel of figure [ fig : uniform ] plots the simulated signal pairs , while the top right panel gives the rotated observations .the bottom left panel plots the recovered signal using the proposed nonparametric maximum likelihood method . also included in the bottom right panel of the figureare the estimated marginal densities of the two sources of signal .figure [ fig : exp ] gives corresponding plots when the marginals have an distribution .we note that both uniform and exponential distributions have log - concave densities and therefore our method not only recovers the mixing matrix but also accurately estimates the marginal densities , as can be seen in figures [ fig : uniform ] and [ fig : exp ] . to investigate the robustness of the proposed method when the marginal components do not have log - concave densities ,we repeated the simulation in two other cases , with the true signal simulated firstly from a -distribution with two degrees of freedom scaled by a factor of and secondly from a mixture of normals distribution . figures [ fig : t2 ] and [ fig : mix ] show that , in both cases , the misspecification of the marginals does not affect the recovery of the signal .also , the estimated marginals represent estimates of the log - concave projection of the true marginals ( a standard laplace density in this case ) , as correctly predicted by our theoretical results .signal : top left panel , top right panel and bottom left panel give the true signal , rotated observations and the reconstructed signal respectively .the bottom right panel gives the estimated marginal densities along with the true marginal ( grey line).,scaledwidth=50.0% ] as discussed before , one of the unique advantages of the proposed method over existing ones is its general applicability .for example , the method can be used even when the marginal distributions of the true signal do not have densities . to demonstrate this property, we now consider simulating signals from a distribution . to the best of our knowledge ,none of the existing ica methods are applicable for these types of problems .the simulation results presented in figure [ fig : bin ] suggest that the method works very well in this case . to further conduct a comparative study , we repeated each of the previous simulations 200 times and computed our estimate along with those produced by the fastica and prodenica methods .fastica is a popular parametric ica method ; prodenica is a nonparametric ica method proposed by , and has been shown to enjoy the best performance among a large collection of existing ica methods .both the fastica and prodenica methods were implemented using the ` r ` package ` prodenica ` . 
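the two-dimensional experiments described above are straightforward to reproduce. the sketch below is a hedged toy version: independent uniform sources are mixed by a rotation and then unmixed with fastica, used here purely as a convenient stand-in for the comparison methods discussed in the next paragraph, not as the log-concave maximum likelihood estimator of this paper; the sample size, rotation angle and source law are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import FastICA

# hedged sketch of the two-dimensional recovery experiment: uniform sources,
# a rotation as mixing matrix, and fastica (a comparison method, not the
# log-concave mle) to recover the sources up to order, sign and scale.

rng = np.random.default_rng(2)
n = 1000
S = rng.uniform(-1.0, 1.0, size=(n, 2))             # independent uniform sources
theta = np.pi / 6                                    # assumed rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # mixing matrix (a rotation)
X = S @ A.T                                          # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                         # recovered sources
W_hat = ica.components_                              # estimated unmixing matrix

print("W_hat @ A (ideally close to a scaled permutation):")
print(W_hat @ A)
```

up to the usual permutation, sign and scale indeterminacies, the product of the estimated unmixing matrix with the true mixing matrix is close to a scaled permutation, which is the sense in which the signal is "recovered" in the figures above.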
to compare the performance of these methods ,we follow convention and compute the amari metric between the true unmixing matrix and its estimates .the amari metric between two matrices is defined as where .boxplots of the amari metric for all three methods are given in figure [ fig : comp ] .it is clear that both nonparametric methods outperform the parametric method .several further observations can also be made on the comparison between the two nonparametric methods .for both uniform and exponential marginals , the proposed method improves upon prodenica .this might be expected since both distributions have log - concave densities .it is , however , interesting to note the robustness of the proposed method on the marginals as it still outperforms prodenica for marginals , and remains competitive for the mixture of normal marginals .the most significant advantage of the proposed method , however , is displayed when the marginals are binomial .recall that prodenica , and perhaps all existing nonparametric methods , assume that the log density ( or density itself ) is smooth .this assumption is not satisfied with the binomial distribution and as a result , prodenica performs rather poorly .in contrast , our proposed method works fairly well in this setting even though the true marginal does not have a log - concave density with respect to lebesgue measure .all these observations confirm our earlier theoretical development .of proposition [ prop : cases ] 1 .suppose that .fix an arbitrary , and find and such that .then thus and .now suppose that , but for some hyperplane , where is a unit vector in and .find such that is an orthonormal basis for .define the family of density functions then , and as .now suppose that .notice that the density belongs to and satisfies moreover , where the second inequality follows from the proof of theorem 2.2 of .we may therefore take a sequence such that let denote the convex support of ; that is , the intersection of all closed , convex sets having -measure 1 . following the arguments in the proof of theorem 2.2 of , there exist and such that for all .moreover , these arguments ( see also the proof of theorem 4 of ) yield the existence of a closed , convex set , a log - concave density with and a subsequence such that since the boundary of has zero lebesgue measure , we deduce from fatou s lemma applied to the non - negative functions that it remains to show that .we can write where and for each and .let be a random vector with density , and let be a random vector with density .we know that as , and that are independent for each .let and .then we have where the matrix has row .moreover , and , so ( [ eq : secondrep ] ) provides an alternative , equivalent representation of the density , in which each row of the unmixing matrix has unit euclidean length . by reducing to a further subsequence if necessary, we may assume that for each , there exists such that as . by slutsky s theorem, it then follows that thus , for any , we conclude that are independent . since for all , we deduce further that is non - singular .moreover , each of has a log - concave density , by theorem 6 of .this shows that , as required . of theorem [ thm : pdica ] suppose that satisfies for some and .consider maximising over .letting and , where , we can equivalently maximise over .but , by theorem 4 of , the unique solution to this maximisation problem is to choose , where .this shows that can be written as since also , we deduce that is also the unique maximiser of over , so . 
of theorem[ thm : identifiability ] suppose that .let , so there exists such that has independent components .writing for the marginal distribution of , note that . by theorem [ thm : pdica ] andthe identifiability result of , it therefore suffices to show that has a gaussian density if and only if is a gaussian density .if has a gaussian density , then since is log - concave , we have .conversely , suppose that does not have a gaussian density .since satisfies ( * ? ? ?* remark 2.3 ) , we may assume without loss of generality that and have mean zero .we consider maximising over all mean zero gaussian densities .writing for the mean zero gaussian density with variance , we have this expression is maximised uniquely in at . but show that the only way a distribution and its log - concave projection can have the same second moment is if has a log - concave density , in which case has density .we therefore conclude that the only way can be a gaussian density is if has a gaussian density , a contradiction . of proposition [ prop : pd ] the proof of this proposition is very similar to the proof of theorem 4.5 of , so we only sketch the argument here .for each , let , and consider an arbitrary subsequence . by reducing to a further subsequence if necessary, we may assume that $ ] .observe that arguments from convex analysis can be used to show that the sequence is uniformly bounded above , and for all .from this it follows that there exist and such that .thus , by reducing to a further subsequence if necessary , we may assume there exists such that note from this that in fact , we can use the argument from the proof of proposition [ prop : cases ] to deduce that .skorokhod s representation theorem and fatou s lemma can then be used to show that .we can obtain the other bound by taking any element of , approximating it from above using lipschitz continuous functions , as in the proof of theorem 4.5 of , and using monotone convergence . from these arguments , we conclude that and .we can see from ( [ eq : aeconv ] ) that , so , by scheff s theorem .thus , given any and any subsequence , we can find and a further subsequence of which converges to in total variation distance .this yields the second part of the proposition . of theorem[ thm : conv ] the first part of the theorem is a special case of proposition [ prop : pd ] .now suppose is identifiable and is represented by and .suppose without loss of generality that for all and let .recall from theorem [ thm : pdica ] that if has density , then has density .suppose for a contradiction that we can find , integers , and such that we can find a subsequence such that , say , as , for all .the argument towards the end of the proof of case 3 of proposition [ prop : cases ] shows that can be used to represent the unmixing matrix of , so by the identifiability result of and the fact that , there exist and a permutation of such that . setting and , we deduce that for . 
now observe that if has density , then by slutsky s theorem , .it therefore follows from proposition 2(c ) of that for .this contradiction establishes that for each .it remains to prove that for sufficiently large , every is identifiable .recall from the identifiability result of and theorem [ thm : identifiability ] that not more than one of is gaussian .let denote the univariate normal density with mean and variance .let denote the index set of the non - gaussian densities among , so the cardinality of is at least , and consider , for each , the problem of minimising over and .observe that is continuous with for all and , that as and as .it follows that attains its infimum , and there exists such that comparing ( [ eq : supinf ] ) and ( [ eq : inf ] ) , we see that , for sufficiently large , whenever and , at most one of the densities can be gaussian .it follows that when is large , every is identifiable . of proposition [ prop : emplike ] it is well - known that for fixed , the nonparametric likelihood defined in ( [ eq : nonlike ] ) is maximised by choosing for , and , let the binary relation if defines an equivalence relation on , so we can let denote a set of indices obtained by choosing one element from each equivalence class .then since are in general position by hypothesis , we have that .it follows that .moreover , for any choice of distinct indices in if we construct the matrix as described just before the statement of proposition [ prop : emplike ] , then . of corollary [ cor : hats ] let denote the empirical distribution of . writing and , note that the covariance matrix corresponding to is observe further that there is a bijection between the set of maximisers of over and , and the set of maximisers of over and via the correspondence and .it follows from the discussion in section [ sec : pre - whiten ] that maximising over and amounts to computing the log - concave ica projection of .existence of a maximiser therefore follows from proposition [ prop : cases ] and the fact that the convex hull of is -dimensional with probability 1 for sufficiently large .now suppose and represent the log - concave ica projection .further , let denote the distribution of , so has identity covariance matrix and suppose .then as , so by theorem [ thm : conv ] , there exist a permutation of and scaling factors such that where . writing , , and noting that , the conclusion of the corollary follows immediately . of proposition [ prop : diff ] for , let , and let denote the row of .notice that as .it follows that for sufficiently small , as . hastie , t. and tibshirani , r. ( 2003 ) independent component analysis through product density estimation . in _ advances in neural information processing systems 15 ( becker , s. and obermayer , k. , eds )_ , mit press , cambridge , ma .pp 649 - 656 .hastie , t. and tibshirani , r. ( 2003 ) ` prodenica ` : product density estimation for ica using tilted gaussian density estimates ` r ` package version 1.0 ` http://cran.r-project.org/web/packages/prodenica/ ` .
|
independent component analysis ( ica ) models are very popular semiparametric models in which we observe independent copies of a random vector X = AS , where A is a non - singular matrix and S has independent components . we propose a new way of estimating the unmixing matrix W = A^{-1} and the marginal distributions of the components of S using nonparametric maximum likelihood . specifically , we study the projection of the empirical distribution onto the subset of ica distributions having log - concave marginals . we show that , from the point of view of estimating the unmixing matrix , it makes no difference whether or not the log - concavity is correctly specified . the approach is further justified by both theoretical results and a simulation study . * keywords * : blind source separation , density estimation , independent component analysis , log - concave projection , nonparametric maximum likelihood estimator .
|
an impressive progress in understanding of specific biological regulatory mechanisms which play an important role in the way numerous molecular components interact have been made recently .this would be a key to control the development and physiology of a whole living organism .nevertheless , the behavior of large gene expression regulatory networks is still far from being understood mainly because of two reasons : first , up to now , a little is known of the global structure of genetic networks .there is still no a common opinion on whether they are organized hierarchically ( say , as a highly inhomogeneous scalable metabolic network as reported in ) or contains a plenty of cross interactions , might be of quite irregular structure , organized in a form of connected sets of small sub - networks .second , the relations between the global structure of networks and their local dynamical properties are also still unclear .a useful approach to the regulatory networks comprising of just a few elements consists of modelling their interactions by boolean equations . in this context , _ feedback circuits _( the circular sequences of interactions ) have been shown to play the key dynamical roles : whereas positive circuits are able to generate multistationarity , negative circuits may generate oscillatory behavior .genetic networks are represented by the fully connected boolean networks where each element interacts with all elements including itself .a feedback circuit can be formally defined as a combination of terms of the jacobian matrix of the system , with indices forming a circular permutation .flexibility in network design is introduced by the use of boolean parameters , one associated with each interaction of group of interactions affecting a given element . within this formalism, a feedback circuit will generate its typical dynamical behavior ( either stationary or oscillating ) only for appropriate values of some of its logical parameters , . most often in biology, the interactions between various molecular components can have a definite sign .for any circuit , one can easily check that each element exerts an indirect effect on itself which has the same sign for all elements of the circuit , leading to the definition of the `` circuit sign '' .in fact , this sign only depends on the parity of the number of negative interactions involved in the circuit : if this number is even , then the circuit is positive ; if this number is odd , then the circuit is negative , .what makes these concepts important is that specific biological and dynamical properties can be associated with each of theses two classes of feedback circuits .the relation between the presence of positive feedback loops and the occurrence of multiple states of gene expression has been at a focus of investigations for several years ( see and references therein ) . 
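the parity rule for circuit signs mentioned above is easy to check computationally ; the sketch below ( a toy signed interaction graph invented for illustration , not one taken from the text ) enumerates the feedback circuits with networkx and classifies each one by the parity of its negative edges .

import networkx as nx

g = nx.DiGraph()
g.add_edge("x", "y", sign=+1)
g.add_edge("y", "x", sign=+1)   # two-element circuit with zero negative edges: positive
g.add_edge("y", "z", sign=+1)
g.add_edge("z", "y", sign=-1)   # two-element circuit with one negative edge: negative
g.add_edge("z", "z", sign=-1)   # negative self-loop

for cycle in nx.simple_cycles(g):
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    n_neg = sum(g[u][v]["sign"] < 0 for u, v in edges)
    kind = "negative" if n_neg % 2 else "positive"
    print(cycle, kind)

positive circuits found this way are the candidates for multistationarity and negative ones for sustained oscillations , in the sense recalled in the next paragraph .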
in particular , it has been proven that the presence of positive loop(s ) is a necessary condition for multistationarity , and the negative circuits ( with two or more elements ) are required for the stable periodicity of behavior , .biologically , this means that positive circuits are required for differentiative decisions and negative circuits are for the homeostasis , - .nevertheless , for the large regulatory networks comprising of many hundreds or even thousands of elements , the detailed logical analysis of possible feedback circuits seems to be impossible now , since the effect that the element can put on itself indirectly , in large regulatory networks , may follow along many different pathways ( indeed , if the interaction network is enough dense ) engaging probably a plenty of distinct feedback structures at once . furthermore, numerical observations over the various large discrete time regulatory networks convinced us that the `` functionable '' circuits and the rest `` passive '' elements are tightly related to each other in a way that they play the role of `` stabilizers '' for the active circuits : a dislocation made to an element of the inactive circuit may stampede a change to some `` functionable '' circuits and even cause they dissolve . in the present paper , we focus our attention on the large gene expression regulatory networks defined on some `` maximal '' graph . to get a qualitative understanding of their dynamical behavior , we consider a statistical ensemble of such regulatory networks , in which the thresholds assigned to each pairwise gene interaction and its sign are considered as the discrete random variables taking their values in accordance to some statistical laws .let us note that at present the values of regulatory parameters driving the behavior of actual gene expression regulatory networks are mostly unknown , so that it would be interesting to test the _ sensitivity _ of local dynamics observed in large regulatory networks to the random change of switching parameters , for the different types of such networks .starting from the fixed initial conditions , in large gene expression regulatory networks , we shuffled these switching parameters randomly and , otherwise , randomized the initial conditions for a fixed layout of thresholds and interaction signs .the long time dynamical behavior observed in such a statistical ensemble of large regulatory networks depends essentially upon the topology of underlying `` maximal '' graph including all possible pairwise interactions in between genes of the given set .short transient processes arisen in such systems at the onset of simulations conclude into the statistically stable behavior , that is , either a stationary configuration or the multi - periodic oscillations occupying up to a half of system size , in the homogeneous regulatory networks having a plenty of negative interactions .in contrast to the spreading of chaotic state over the regular and random arrays of piecewise linear and logistic discrete time coupled maps studied extensively in the last decade - , oscillations arisen in the discrete time regulatory networks do not propagate over the whole system and are bounded merely to the oscillating domains .lack of negative interactions and directed cycles in the networks brings it into one of fixed points which position in the phase space of system depends upon the certain choice of initial conditions and the layouts of switching parameters .the structure of active subgraph of in the homogeneous 
regulatory networks settled at a fixed point , resembles that one of erds - rnyi s random graphs , .the plan of paper is following , in sec.2 , we define the models of large synchronized regulatory networks defined on the both homogeneous and inhomogeneous `` maximal '' graphs where the switching parameters are shuffled randomly . in sec.3 , we present the results of numerical simulations on large regulatory networks . in sec . 4, we introduce the mean field approach to the stochastic ensembles of large discrete time regulatory gene expression networks with randomly shuffled thresholds of type interactions between genes and their interaction signs .then we conclude in the last section .we define the regulatory gene expression network on the directed graph with the set of nodes connected by the edges representing the action of gene onto gene ( the self - action of genes is possible and corresponds to the loops in ) .we call as a _ maximal _graph since it contains _ all _ possible interactions in between genes of the given set .the regulatory principle of gene expression networks is that the protein synthesis rate of a gene is affected ( either _ stimulated _ or _ inhibited _ ) by the proteins synthesized by other genes provided their instantaneous concentrations are below ( or over ) some threshold values .we assign the positive sign to an interaction if stimulates the synthesis of protein in , and the negative sign otherwise .indeed , in the real genome , the rate of protein synthesis varies from pair to pair of interacting genes , however , for a simplicity , in the present paper , we suppose that all interactions between genes are of equal _ effectiveness _ , so that all edges presented in have the equal weight .the maximal graph is specified by its _matrix such that if and otherwise .since the effect of interaction between two genes can be negligible at time ( that is , this interaction is _ switched off _ at time ) , one can define an _ active subgraph _, including all interactions _ efficient _ at time specified by the instantaneous adjacency matrix .following , in the present paper , we consider the synchronized model of gene - gene interactions , that is , time is discrete and the state of system at time is a function of its state at time in the form of a coupled map lattice . for each gene we define two variables : ] the _ exertion _ of gene , that is , an effective action of other genes onto at time which depends upon the their relative concentrations at same time . in the homogeneous regulatory networks ( like those defined on the complete graphs or on the random regular graphs with the fixed connectivity , ) the exertion can be defined as the fraction of active incoming edges ( i.e. , the actions of other genes onto ) at time , the elements of are updated synchronously at each time step , in accordance to the current values of ^n, ] is the threshold value for the action of onto , and is the sign of interaction . in accordance to ( [ th ] ) , the interaction is _ active _ at time if either for or for in the case of , we suppose that the action is active provided that then , the discrete time synchronous coupling , generates the flow in the phase space ^n\times[0,1]^n ] for any requires the factor before the protein synthesis term in ( [ x ] ) . 
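since the explicit update equations are garbled above , the following is only a minimal sketch of one plausible concretisation of the rule just described ( exertion as the fraction of active incoming edges , threshold - and sign - dependent activation , synchronous update with a decay factor ) ; the random `` maximal '' graph , the constants and the precise functional form are assumptions , not the authors' model .

import numpy as np

rng = np.random.default_rng(0)
n, n_steps, decay = 50, 200, 0.2

adj = (rng.random((n, n)) < 0.2).astype(float)          # adj[i, j] = 1 if gene j acts on gene i
np.fill_diagonal(adj, 0.0)
sign = np.where(rng.random((n, n)) < 0.7, 1.0, -1.0)    # 70% of actions positive
theta = rng.random((n, n))                              # interaction thresholds
x = rng.random(n)                                       # protein concentrations in [0, 1]

for _ in range(n_steps):
    # an action j -> i is counted as active when x_j exceeds the threshold for a
    # positive action and lies below it for a negative one (one possible reading)
    active = adj * np.where(sign > 0, x[None, :] > theta, x[None, :] < theta)
    k_in = adj.sum(axis=1)
    exertion = np.divide(active.sum(axis=1), k_in, out=np.zeros(n), where=k_in > 0)
    x = (1.0 - decay) * x + decay * exertion            # synchronous update, keeps x in [0, 1]

print(np.round(x[:10], 3))

running the same loop from several random initial strings and layouts of sign and theta is what the statistical ensemble studied below amounts to .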
in the case of directed scalefree `` maximal '' graph , the behavior of the minimally required extension of model ( [ ex00 ] ) , is not very interesting from the dynamical point of view , since all nodes with are decoupled from the rest of network , and there are also many nodes ( hubs or regulating genes ) with an excessive number of outgoing edges , for which , so that they are also _ effectively _ decoupled from other genes in the network . given a scale free random graph with the probability degree distributions where then the probability to observe the exertion in ( [ exx ] ) scales with like for and decreases rapidly with . as a result ,the most of protein concentrations in the model ( [ x ] ) with the exertion ( [ exx ] ) defined on the directed scale free graphs are driven by their own decays and got fixed fast close to . in particulary , one can not observe oscillations ( as a limiting stable behavior ) in the regulatory gene expression networks ( [ th]-[exx ] ) defined on such graphs with , for any layout of switching parameters and for any assignment of interaction signs .the occurrence of _ bidirectional _ edges ( when both genes can act onto each other simultaneously ) in the highly inhomogeneous scalable graphs can change dramatically the dynamical behavior of large regulatory networks defined on them . to quantify the value of bidirectional edges in the given `` maximal '' graph, we introduce the parameter , that is , the fraction of such edges among all edges of we have performed the numerical simulations for both the homogeneous graphs ( the complete graphs and the `` enough dense '' regular random graphs comprising of nodes ) assuming all edges in them to be bidirectional , with no loops , and the highly inhomogeneous scalable graphs , the directed scale free graphs such that both probabilities that a node has precisely incoming edges and outgoing edges follow the power law , .this graph has been reported in as being typical for the metabolic networks of many living organisms . in the latter case, we have varied the fraction of bidirectional edges in the whole interval .we monitor the system approaching a statistically stable regime by tracking out its `` velocities '' in the phase space , ( the rate of protein synthesis ) and ( the rate of gene exertions ) counting the number of edges triggered between and at time .while studying the model of large regulatory networks ( [ th]-[x ] ) with the random layouts of switching parameters defined on both homogeneous and inhomogeneous graphs consisting of nodes , we have varied the number of distinct threshold values , , from several tens to several hundreds changing by this way the coarse graining of phase space .we choose the threshold values uniformly distributed ( u.d . ) over the interval ] formed by threshold values u.d . over the unit interval ( see fig . [fig1].b ) .the boxes shown in fig . [ fig1].b present the variations of occupancy numbers for different random initial conditions with of negative interactions allowed between genes ( ) , for some fixed layout of switching parameters .the density of possible stationary asymptotic configuration depends upon the certain layouts of switching parameters for any certain initial string that is presented on fig .[ fig2 ] . a patchy structure of graph in fig .[ fig2].a . 
manifests the multistationarity in the system , meanwhile the `` clusters '' formed by the merged patches show that the limiting stationary configurations are sensitive to the layout of switching parameters and can move gradually as these parameters shuffle .shuffling of switching parameters _ mixes up _the orbits of deterministic system intensively , as a result each spreads out fast with time over the whole unit interval . in fig .[ fig2].b , we have sketched the density plot of possible values of ( for in ) vs. time in consequent time steps ( long enough to achieve a fixed point ) starting from the initial value . it is worth to mention that at any fixed point the active subgraph constitutes a random graph ( `` half - dense '' in comparison with ) . in fig. [ fig3 ] , we have shown the probability degree distributions ( the circles are for the incoming degrees and the diamonds are for the outgoing degrees ) for the nodes of active subgraph formed at a fixed point of the model ( [ ex00]-[x ] ) defined on the fully connected graph with of negative interactions allowed between genes . the solid line on fig. [ fig3 ] displays the gaussian probability degree distribution which is typical for the erds and rnyi random graphs , .one can say that in the homogeneous regulatory networks when the positive interaction between genes prevail in the system , its dynamical behavior is dominated by the _ positive feedback circuits _ responsible for a number of asymptotically stable states ( fixed points ) .thereat , the strain of negative feedback circuits related to just a few negative interactions is statistically negligible . with increasing percentage of negative interactions allowed in the system up to approximately , it exhibits a complicated spatiotemporal behavior where the domains of genes with the stationary concentrations of proteins coexist and interleave with those of periodically oscillating concentrations .in contrast to the spatiotemporal intermittency observed in the synchronously updated discrete time extended dynamical systems defined on the various regular arrays and on the regular random graphs , the dynamical state ( oscillations ) does not propagate with time throughout the regulatory network .the oscillating domains arisen in the homogeneous regulatory networks are bounded by the genes whose oscillation amplitudes are insufficient for their protein concentrations to cross the next thresholds . in the large enough homogeneous regulatory networks ,a turnover of nodes engaged into the oscillating domains happens occasionally .the averaged number of nodes joined the oscillating domains at a time increases as the fraction of negative interactions allowed in the model grows up . in fig .[ fig4].a , we have displayed the decreasing and vanishing of fractions of oscillating nodes with in the model defined on the fully connected graph ( with bidirectional edges ) .boxes show the fluctuations of these fractions in the ensemble of different random layouts of switching parameters and random initial conditions .bold points stand for the means of collected data . the solid line in fig . 
[ fig4].a is the gaussian curve , with fitting the data well .the direct logical analysis of low dimensional regulatory networks ( see , for example ) relates oscillations to the dynamical patterns generated by the `` functional '' negative feedback circuits .a feedback circuit exhibits its typical dynamical behavior only for appropriate values of some of the logical parameters .the graph sketched on fig .[ fig4].a demonstrates that the probability to observe a `` functional '' negative feedback circuit , in large homogeneous regulatory networks with the randomly shuffled switching parameters , is close to the normal statistical law with regard to .[ fig4].b displays the distribution of nodes changing their protein concentrations periodically vs. the periods of such changes observed in the large homogeneous regulatory networks .the distribution in fig .[ fig4].b counts all such nodes disregarding for the amplitudes of changes .each node has been counted in the distribution just once under the minimal period of oscillations it exhibits . as usual, boxes represent the fluctuations of numbers of oscillating genes over the ensemble of different random layouts of switching parameters and random initial conditions .the distribution has a maximum at independently upon the initial conditions and the layouts of switching parameters .formation of oscillating domains for ( when the negative interactions present in abundance ) comes along with the synchronization of the rest of system at ( see the profile for the occupancy number in fig .[ fig5].a ) .this synchronization looks essentially insensitive to the initial conditions ( the boxes on the graph are almost imperceptible ) .it gives an impression that when the dynamical behavior is obviously driven by the negative feedback circuits , and oscillations of protein concentrations can occupy up to a half of nodes in the network , just a few of them actually change the instantaneous structure of active subgraph . on fig .[ fig5].b , we have shown the behavior of the phase space velocities , and vs. characterizing the transient processes in the system for the model defined on the fully connected graph , with distinct thresholds u.d . over the unit interval , with ( of interactions are negative ) .when the negative interactions prevail in the system , these velocities decay much slower than the exponentially fast transients shown in fig .[ fig1].a and asymptotically tend to a power law as moreover , they do not extinguish eventually and bring in oscillations in short time .stable patterns of statistical behavior are insensitive to the random layouts of thresholds and assignments of interaction signs . starting from some fixed initial string , we have shuffled the switching parameters in the model defined on ( with bidirectional edges ) for and displayed the data for ( after the stable behavior had been achieved ) for the first nodes ( see fig .[ fig6].a ) .the time evolution of these density distributions indicating oscillations is illustrated by the beatings shown in fig .[ fig6].b in consequent time steps taken over the ensemble of different random layouts of switching parameters .it is important to note that for any value of it is always _ less _ than of nodes ( achieved only if all interactions between genes are negative ) that are engaged into oscillations in the statistically stable regime .herewith , the oscillating protein concentrations for the most of nodes are still bounded within their intervals ] . 
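for the scale - free runs discussed above and in the next paragraph , a `` maximal '' graph with a tunable number of bidirectional edges can be generated , for instance , as follows ; this is only one simple construction , with invented parameters , and is not claimed to be the one used for the figures .

import random
import networkx as nx

random.seed(1)
p_rev = 0.3                                        # probability of adding the reverse of each one-way edge
g = nx.DiGraph(nx.scale_free_graph(500, seed=1))   # collapse the multigraph to a simple digraph
g.remove_edges_from(list(nx.selfloop_edges(g)))

for u, v in list(g.edges()):
    if not g.has_edge(v, u) and random.random() < p_rev:
        g.add_edge(v, u)                           # make this edge bidirectional

reciprocated = sum(g.has_edge(v, u) for u, v in g.edges())
print(g.number_of_nodes(), g.number_of_edges(), reciprocated / g.number_of_edges())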
in fig .[ fig11 ] , we have shown the occupancy numbers of the model defined on the undirected scale free graph with a given configuration of thresholds u.d . over the unit interval for two opposite values of fluctuations shown by the boxesreveal the dependence of the occupancy number upon the certain choice of initial conditions and layouts of switching parameters .it is important to note the high _ error tolerance _ of scalable regulatory network : in its phase space , the intervals exist which stay void ( even though some nodes had initially been settled into these intervals ) and for which is got fixed independently upon the layout of switching parameters and initial conditions . when the negative interactions present in abundance ( ) , the valuable fraction of nodes , in the scalable regulatory network , synchronizes either in the first ( next to ) or in the last ( next to ) intervals of phase space .the nodes demonstrating oscillations of protein concentrations ( about total as ) are scattered in the middle range of phase space .the aim of present section is to introduce a `` mean field '' approach to the large regulatory networks with randomly shuffled switching parameters . in accordance to ( [ th]-[x ] ) , the -th protein synthesis rate depends upon the exertions of all genes acting on it , that is , the fraction of active arcs incident to at time . while shuffling randomly the switching parameters in the large regulatory network , we suppose that the positive sign is selected for the action with the probability , and the interaction threshold value equals to some chosen with some probability , such that for each gene acting on .the distribution of values over the unit interval can be defined by the set of integrable functions \to \mathbb{r}^{+} ] we assume that represents the current performance of a biological network ( say , the protein - protein interaction map ) , while and are the thresholds for outgoing and incoming edges respectively .the network is supposed to be stable until and , and is condemned otherwise .fluctuations of thresholds reflect the changes of an environment .the random process begins on the set of vertices with no edges at time at a chosen vertex .given two fixed numbers ] , the variable is chosen with respect to pdf , is chosen with pdf , and is chosen with pdf , we draw edge outgoing from vertex and entering vertex if and and continue the process to time otherwise , if ( ) , the process moves to other vertices having no outgoing ( incoming ) links yet . at time one of the three eventshappens : * i * ) with probability , the random variable is chosen with pdf but the thresholds and keep their values they had at time . *ii * ) with probability the random variable is chosen with pdf and the thresholds and are chosen with pdf and respectively . *iii * ) with probability the random variable is chosen with pdf , and the threshold is chosen with pdf but the threshold keeps the value it had at time .if , the process stops at vertex and then starts at some other vertex having no outgoing edges yet . if the accepting vertex is blocked and does not admit any more incoming link ( provided it has any ) .if and , the process continues at the same vertex and goes to time it has been shown in that the above model exhibits a multi - variant behavior depending on the probability distribution functions and chosen and values of relative frequencies and . 
in particular , if , both thresholds and have synchronized dynamics , and sliding the value of form 0 to 1 , one can tune the statistics of out - degrees and in - degrees simultaneously out from the pure exponential decay ( for ) to the power laws ( at ) provided , and belong to the class of power law functions .for instance , by choosing the probability distribution functions in the following forms one obtains that for different values of , the exponent of the threshold distribution , one gets all possible power law decays of .notice that the exponent characterizing the decay of is independent of the distribution of the state variable .in the uncorrelated case , the degree distribution functions decays exponentially ( for instance , for ) . for the intermediate values of , the decay rate is mixed .the preference attachment matrix which elements are the probabilities that a vertex with degree is connected to a vertex having degree , for a scale free graph , generated in accordance with the above algorithm depend only of one variable , , expanding the binomial in the above equation , one gets the leading term . 99 e. h. snoussi , r. thomas , bulletin of math . biol .* 55 * ( 5 ) , 973 ( 1993 ) .r. thomas , m. kaufman , chaos * 11 * ( 1 ) , 180 ( 2001 ) .j. d. murray , _ mathematical biology _ ( springer - verlag , berlin , 1993 ) .r. pastor - satorras , a. vespignani , phys .86 * , no 14 , 3200 ( 2001 ) .d. volchenkov , l. volchenkova , ph .blanchard , phys .rev , * e66 * ( 4 ) , 046137 ( 2002 ) ; virt . jour . of biol . phys .res . , * 4*(9 ) ( 2002 ) , available at texttthttp://www.vjbio.org .r. lima , b. fernandez , a. meyroneinc , `` modelling the discrete dynamics of genetic regulation networks via real mappings '' , in preparation ( 2003 ) .h. jeong , b. tombor , r. albert , z. n. oltvai , a .-barabsi , nature * 407 * , 651 ( 2000 ) . h. jeong , s. p. mason , z. n. oltvai , a .-barabsi , nature * 411 * , 41 ( 2001 ) .d. thieffry , _ qualitative analysis of gene networks _ in the mmoire pour lobtention dagrg de leseignement suprieur , univ .libre de bruxelles ( 2000 ) .r. thomas , `` on the relation between the logical structure of systems and their ability to generate multiple steady states or sustained oscillations '' , _springer series in sinergetics _ * 9 * , 180 ( 1981 ) . e. plahte , t. mestl , s. omholt , _ j. biol.syst._ * 3 * , 1 ( 1995 ) .r. thomas , _ ber . brunzenges . phys .* 98 * , 1148 ( 1994 ) .d. thieffry , e.h .snoussi , j. richelle , r. thomas , j. of biological systems , * 3 * ( 2 ) , 457 ( 1995 ) .p. erds , a. rnyi , publ .sci . * 5 * , 17 ( 1960 ) . in the random regular graphs , the fixed connectivity insures the presensce of many hamilton cycles traversing all nodes of the graph , see s. janson , t. uszak , a. rucinski , _ random graphs _ ,john wiley sons , ny ( 2000 ) .e. h. snoussi , _ dyn.stability syst . _* 4 * , 189 ( 1989 ) .m. e. j. newman , arxiv : cond - mat/0104209 ( 2001 ) .d. thieffry , d. romero , biosystems * 50 * , 49 ( 1999 ) . a .-barabsi , r. albert , science * 286 * , 509 ( 1999 ) .d. volchenkov , ph .blanchard , physica a * 315 * , 677 ( 2002 ) .d. volchenkov , e. floriani , r. lima , j.phys.a : math . and gen .* 36 * , 4771 ( 2003 ) .h. chat , p. manneville , chaos * 2 * , 307 ( 1992 ) .h. chat and p. manneville , europhys* 6 * , 591 ( 1988 ) .d. volchenkov , s. sequeira , ph .blanchard , m.g .cosenza , stochastic and dynamics , * 2 * ( 2 ) , 203 ( 2002 ) .[ cols= " > , < , > , < " , ]
|
we consider a model of large regulatory gene expression networks where the thresholds activating the sigmoidal interactions between genes and the signs of these interactions are shuffled randomly . such an approach allows for a qualitative understanding of network dynamics in a lack of empirical data concerning the large genomes of living organisms . local dynamics of network nodes exhibits the multistationarity and oscillations and depends crucially upon the global topology of a `` maximal '' graph ( comprising of all possible interactions between genes in the network ) . the long time behavior observed in the network defined on the homogeneous `` maximal '' graphs is featured by the fraction of positive interactions ( ) allowed between genes . there exists a critical value such that if , the oscillations persist in the system , otherwise , when it tends to a fixed point ( which position in the phase space is determined by the initial conditions and the certain layout of switching parameters ) . in networks defined on the inhomogeneous directed graphs depleted in cycles , no oscillations arise in the system even if the negative interactions in between genes present therein in abundance ( ) . for such networks , the bidirectional edges ( if occur ) influence on the dynamics essentially . in particular , if a number of edges in the `` maximal '' graph is bidirectional , oscillations can arise and persist in the system at any low rate of negative interactions between genes ( ) . local dynamics observed in the inhomogeneous scalable regulatory networks is less sensitive to the choice of initial conditions . the scale free networks demonstrate their high error tolerance . * keywords : * _ gene regulatory networks , mathematical modelling , mathematical biology . _ * properties of dynamical networks attract the close attention due to a plenty of their practical applications . one class of them is constituted by the growing _ evolutionary _ networks representing the _ long time evolution _ of a genome of living spices or such as the world wide web where the dynamics and topology evolve synchronously according to the external principles of informational safety and economic efficiency by the adding of new components at a time . * * another class is represented by the _ regulatory _ networks including a _ fixed _ number of elements interacting with each other sensitively to the actual position of system in its phase space . in the present paper , we study the simplest model of such a regulatory network described by a discrete time synchronously updated array of coupled structural ( topological ) and dynamical variables defined at each node of a large graph , for both the homogeneous and scalable graphs . * * for small gene expression regulatory networks comprising of just a few elements , a direct logical analysis of dynamics is possible , in which their behavior can be understood as to be driven by the positive and negative _ feedback circuits _ ( loops ) - . however , for large gene expression regulatory networks which can consist of thousands of interacting genes , the direct analysis is very difficult because of their complexity . being interested in a qualitative description of dynamical behavior exhibited by such large regulatory networks , we turn this problem in a statistical way . 
we suppose that any layout of switching parameters governing the sigmoidal type interactions between genes as well as any assignment of gene interaction signs ( _ stimulation _ or _ inhibition _ ) can be possible with some probability . * * another important issue discussed in our paper is the influence of global topology onto the local dynamics observed in the large gene expression regulatory networks . such an effect emerges in many coupled dynamical systems defined on the graphs with different topological properties , for instance , in the problem of epidemic spreading . being defined on the regular arrays and homogeneous random networks , the models of virus spreading predict the existence of a critical spreading rate such that the infection spreads and becomes persistent if and dies out fast when . recently , it has been shown in - that a variety of scale free networks is disposed to the spreading and persistence of infections at whatever spreading rate the epidemic agents possess that is compatible with the data from the experimental epidemiology . * * the dynamical behavior demonstrated by the discrete time coupled map lattice used in the present paper as the models of interacting genes is indeed more complicated than that in the probabilistic susceptible - infected - susceptible model discussed usually in epidemiology . we show that it is featured by two order parameters , that are , the fraction of positive interactions allowed in between genes , and the fraction of bidirectional edges presented in the `` maximal '' graph . * * in the large homogeneous regulatory networks , like those defined on the fully connected graphs or the regular random graphs , in which all edges are considered as bidirectional , the critical fraction of positive interactions in between genes at which the oscillations arise and persist in the system is . in the directed inhomogeneous networks , like those defined on the directed scale free graphs , oscillations die out fast even if the negative interactions between genes present therein in abundance ( ) . however , oscillations arise at any low rate of negative interactions between genes ( ) provided the `` maximal '' graph has a number of bidirectional edges . bidirectional edges effectively increase the number of circuits presented in the scale free network that is the source of oscillatory behavior . * * the proposed approach could help in understanding the behavior of large gene expression regulatory networks for lack of actual empirical data concerning the large genomes of living organisms . *
|
quantum computing is a paradigm in which quantum entanglement and interference are exploited for information processing .algorithms have been proposed which can even be exponentially more effective than the best known classical solutions .elements of quantum computing have been demonstrated experimentally using nuclear magnetic resonance and trapped ions .however building systems with many coupled qubits remains a challenge .many demands must be met : reliable qubit storage , preparation and measurement , gate operations with high fidelities and low failure rate , scalability and accurate transportation or teleportation of states . the biggest problem is posed by decoherence which tends to destroy the desired quantum behaviour . in recent yearsmuch has been done to address the problems posed by decoherence .an important step was the realisation that many systems possess a large subspace of decoherence - free states which is well protected from the environment and provides ideal qubits .a great variety of approaches in manipulating the qubits within the decoherence - free subspace has been discussed in the literature .the optimal approach would be to employ only hamiltonians leaving the decoherence - free subspace invariant and therefore not causing transitions into unwanted states .however they are in general hard to identify in physical systems . alternatively ,environment - induced measurements and the quantum zeno effect can be used to avoid decoherence .the idea of _ quantum computing using dissipation _ employs the fact that the presence of spontaneous decay rates can indeed have the same effect as rapidly repeated measurements , whether the system is in a decoherence - free state or not , thus restricting the time evolution onto the decoherence - free subspace . as in linear optics quantum computing , the presence of measurements has the advantage that local operations on the qubits become sufficient for the implementation of universal quantum computation .this allows significant reduction in the experimental effort for the realisation of gate operations .for example , universal quantum gates between atomic qubits can be realised with the help of a single laser pulse .however , obtaining these advantages does not always require the presence of spontaneous decay rates in the system . in some cases , the presence of a strong interaction alone is sufficient to restrict the time evolution of a system onto a subspace of slowly - varying states .an additionally applied interaction then causes an _ adiabatic time evolution _ inside this subspace . if the latter coincides with the decoherence - free subspace of the system , the effect of the strong interaction is effectively the same as the effect of continuous measurements whether the system is in a decoherence - free state or not and weak interactions can be used for the implementation of decoherence - free quantum gates .concrete proposals for the implementation of this idea for ion - trap quantum computing and in atom - cavity systems can be found in refs . . 
in this paperwe discuss the positive role dissipation can play in a situation where quantum gates are implemented with the help of the adiabatic processes described above .such a scheme is in general relatively robust against parameter fluctuations but any attempt to speed up operations results in the population of unwanted states .however , given the unwanted states possess spontaneous decay rates , the time evolution of the system can become a _ dissipation - assisted adiabatic passage_. even if operated rapidly , the system behaves as predicted for adiabatic processes .the reason is that the no - photon time evolution corrects for any errors due to non - adiabaticity . as a concrete example , we describe two - qubit gate operations in atom - cavity systems which can be performed twice as fast in the presence of certain decay rates without sacrificing their high fidelity and robustness . the atom - cavity systems provide a promising technology for quantum computing .the main sources of decoherence are dissipation of cavity photons with rate and spontaneous decay from excited atomic levels with decay rate .some recent proposals for atom - cavity schemes attempt to minimise the population of excited states using strong detunings ; others use dissipation .all these schemes are inherently slow which causes relatively high failure rates .regarding success probabilities , the quantum computing schemes perform much better .we believe that the dissipation - assisted adiabatic passages we describe in this paper contributes to the success of these schemes as well as being a key feature of the original proposal .over the last three decades quantum optical experiments have been performed studying the statistics of photons emitted by laser - driven trapped atoms and effects have been found that would be averaged out in the statistics of photons emitted by a whole ensemble .these experiments suggest that the effect of the environment on the state of the atoms is the same as the effect of rapidly repeated measurements and hence can result in a sudden change of the fluorescence of a single atom . from the assumption of measurements whether a photon has been emitted or not , the quantum jump approach has been derived .this approach is equivalent to the monte carlo wave - function approach and the quantum trajectory approach .suppose a measurement is performed on the free radiation field interacting with a quantum optical system initially prepared in . under the condition of no photon emission andgiven that the free radiation field was initially prepared in its vaccum state , the ( unnormalised ) state of the system at equals , according to the quantum jump approach , the dynamics under the conditional time evolution operator , defined by this equation , can be summarised in a hamiltonian .this hamiltonian is in general non - hermitian and the norm of a state vector developing with decreases in time such that is the probability for no emission in .the non - hermitian terms in the conditional hamiltonian continuously damp away amplitudes of unstable states .this takes into account that the observation of no emission leads to a continuous gain of information about the state of the system .the longer no photons are emitted , the more unlikely it is that the system has population in excited states . 
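as a minimal numerical illustration of this conditional no - photon evolution ( a toy driven two - level atom with arbitrary parameters , not the atom - cavity system considered below ) , the squared norm of the state propagated with the non - hermitian conditional hamiltonian gives the probability of no emission up to time t .

import numpy as np
from scipy.linalg import expm

omega, gamma = 1.0, 0.2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
proj_e = np.array([[0, 0], [0, 1]], dtype=complex)      # projector on the excited state

h = 0.5 * omega * sx                                    # coherent driving
h_cond = h - 0.5j * gamma * proj_e                      # non-hermitian conditional hamiltonian

psi0 = np.array([1, 0], dtype=complex)                  # start in the ground state
for t in (1.0, 5.0, 10.0):
    psi = expm(-1j * h_cond * t) @ psi0                 # unnormalised conditional state
    p0 = np.vdot(psi, psi).real                         # probability of no emission in [0, t]
    print(t, round(p0, 4))

renormalising psi by the square root of p0 gives the state conditioned on not having seen a photon , which is the object used throughout the rest of the section .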
in the following we exploit the conditional no - photon time evolution to implement better gate operations .quantum computing with non - hermitian hamiltonians can be performed with high success rates as long as the system remains to a good approximation in a decoherence - free subspace .a state is decoherence - free if populating it can not lead to a photon emission and for all times .the decoherence - free subspace is therefore spanned by all the eigenvectors of the conditional hamiltonian with real eigenvalues and a time evolution with this hamiltonian leaves the decoherence - free subspace invariant . in generalit is very difficult to find a hamiltonian that keeps the decoherence - free subspace invariant _ and _ can be used for the realisation of gate operations .however , it is always possible to add a weak interaction to the conditional hamiltonian , so generating a new conditional hamiltonian .as long as the additional interaction is weak , the decoherence - free subspace constitutes an invariant subspace of to a very good approximation .this can be exploited to generate an adiabatic time evolution inside the decoherence - free subspace according to the effective hamiltonian this hamiltonian can now be used for the realisation of quantum gates .a drawback of this idea is that the effective evolution , which happens on the time scale given by the weak interaction , is very slow .the essential idea of this paper is to ensure that the same net evolution within the decoherence - free subspace is realised with high fidelity even when the system is operated relatively fast , i.e. outside the adiabatic regime , and despite the occurence of errors at intermediate stages .the form of eq .( [ heff ] ) assures that any error leads to the population of non decoherence - free states .as long as this population is small , there is a very high probability that it will be damped away during the no - photon time evolution .the system behaves effectively as predicted by adiabaticity and the underlying relatively fast process could be called a dissipation - assisted adiabatic passage .whenever a photon emission occurs , the computation fails and the experiment has to be repeated .naively , one might expect that a finite probability for failure of the proposed scheme also implies a decrease of the fidelity of the gate operation .however , the fidelity under the condition of no photon emission remains close to unity for a wide range of experimental parameters .moreover , the non - hermitian terms in inhibit transitions into unwanted states and stabilise the desired time evolution ( [ heff ] ) .let us now consider a concrete system . in the following, each qubit is obtained from two different ground states and of the same atom . 
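before specialising to the concrete atom - cavity system of the next paragraphs , the construction of the effective hamiltonian in eq . ( [ heff ] ) can be illustrated on a toy three - level example ( the model and numbers are assumptions made only for illustration ) : the decoherence - free subspace is spanned by the eigenvectors of the conditional hamiltonian with real eigenvalues , and the weak interaction is projected onto it .

import numpy as np

gamma, weak = 1.0, 0.05
# toy conditional hamiltonian: level 2 is lossy, levels 0 and 1 are stable
h_cond = np.diag([0.0, 0.0, -0.5j * gamma])
# weak hermitian perturbation coupling all three levels
h_weak = weak * np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=complex)

vals, vecs = np.linalg.eig(h_cond)
dfs = vecs[:, np.abs(vals.imag) < 1e-12]    # eigenvectors with real eigenvalues
p_dfs = dfs @ np.linalg.pinv(dfs)           # projector onto the decoherence-free subspace
h_eff = p_dfs @ h_weak @ p_dfs              # effective hamiltonian within the subspace
print(np.round(h_eff.real, 3))

as expected , only the coupling between the two stable levels survives the projection .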
to implement two - qubit gate operations , the two corresponding atoms are moved inside the optical resonator where both see the same coupling constant .suppose , the 1 - 2 transition in each atom couples resonantly to the cavity mode and and are the annihilation and creation operators for a single photon .the conditional hamiltonian in the interaction picture with respect to the interaction - free hamiltonian can then be written as the last two terms are the non - hermitian terms of the hamiltonian .using eq .( [ p0cond ] ) one can easily determine the decoherence - free subspace of the atom - cavity system with respect to leakage of photons through the cavity mirrors .it is spanned by the eigenvectors of the conditional hamiltonian with real eigenvalues assuming and includes all superpositions of the atomic ground states , i.e. the qubits states of the system , and the maximally entangled atomic state . prepared in the state the atomsdo not interact with the cavity mode and therefore can not transfer their excitation into the resonator .thus no photon can leak out through the cavity mirrors .populating only these states and performing the gate relatively fast , thereby reducing the possibility for spontaneous emission from the atoms with rate , should result in relatively high gate success rates .in this section we first neglect the spontaneous decay rates , assume and aim at finding laser configurations and rabi frequencies that result in a time evolution with the effective hamiltonian ( [ heff ] ) with respect to the decoherence - free subspace introduced in section [ sect:2.3 ] . in the following , we consider the level configuration in figure [ fig2 ] and denote the rabi frequency with respect to the - 2 transition of atom as .especially , we look for adiabatic processes where the two different time scales in the system are provided by the atom - cavity constant being a few orders of magnitude larger than the rabi frequencies of the applied laser fields , as concrete examples for gate implementations via dissipation - assisted adiabatic passages we describe possible realisations of a two - qubit phase gate and a swap operation .we then show that the same quantum gates can be operated twice as fast with decay rates , while maintaining fidelities above 0.98 . from the discussion in section [ sect:2 ]we know already that the decoherence - free states of the system are the eigenstates of the conditional hamiltonian ( [ cond ] ) with real eigenvalues in the absence of the weak laser interaction .consequently , they are also eigenstates of the interaction hamiltonian of the system , namely to a very good approximation .it is therefore convenient to consider them in the following as basis states . to obtain a complete basiswe introduce , in addition to eq .( [ a ] ) , the symmetric state in the following , denotes a state with photons in the cavity field and the atoms prepared in .we now write the state of the system as a superposition of the form and first calculate the time evolution of the coefficients of the decoherence - free states , , , and . 
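before turning to the amplitude equations below , the statement that the antisymmetric atomic state can not transfer excitation into the resonator can be checked numerically . the construction here ( three - level atoms , a truncated fock space and a jaynes - cummings type coupling of the 1 - 2 transitions to the cavity ) is an illustrative assumption , since the explicit operators are not reproduced in the text .

import numpy as np

n_ph = 3                                             # fock states 0, 1, 2
id_atom = np.eye(3)
sigma_12 = np.zeros((3, 3)); sigma_12[1, 2] = 1.0    # atomic lowering operator |1><2|
a = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)        # cavity annihilation operator

def kron3(op1, op2, op3):
    return np.kron(np.kron(op1, op2), op3)

g = 1.0
h_int = g * (kron3(sigma_12, id_atom, a.conj().T) +
             kron3(id_atom, sigma_12, a.conj().T))
h_int = h_int + h_int.conj().T                       # coupling of both atoms to the mode

def ket(i, j, n):                                    # product state |i>|j>|n photons>
    e = lambda d, k: np.eye(d)[:, [k]]
    return kron3(e(3, i), e(3, j), e(n_ph, n)).ravel()

dark = (ket(1, 2, 0) - ket(2, 1, 0)) / np.sqrt(2)    # antisymmetric atomic state, empty cavity
print(np.linalg.norm(h_int @ dark))                  # ~0: no excitation is fed into the cavity

the two excitation pathways into the cavity interfere destructively for this state , which is why it belongs to the decoherence - free subspace with respect to cavity decay .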
the hamiltonian ( [ nocond ] ) yields ~ , ~ \nonumber \\ \dot c_{0;01 } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \omega_0^{(1 ) } \big(c_{0;s } - c_{0;a } \big ) + \sqrt{2 } \omega_1^{(2 ) } c_{0;02 } \big ] ~,~ \nonumber \\ \dot c_{0;10 } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \sqrt{2 } \omega_1^{(1 ) } \ , c_{0;20 } + \omega_0^{(2 ) } \big(c_{0;s } + c_{0;a } \big ) \big ] ~,~ \nonumber \\ \dot c_{0;11 } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \big ( \omega_1^{(1 ) } + \omega_1^{(2 ) } \big ) c_{0;s } - \big ( \omega_1^{(1 ) } - \omega_1^{(2 ) } \big ) c_{0;a } \big ] ~,~ \nonumber \\ \dot c_{0;a } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \big ( \omega_1^{(1 ) } - \omega_1^{(2 ) } \big ) \big ( c_{0;22 } - c_{0;11 } \big ) - \omega_0^{(1 ) } \ , c_{0;01 } \nonumber \\ & & + \omega_0^{(2 ) } \ , c_{0;10 } \big ] ~.\end{aligned}\ ] ] we furthermore consider the derivatives of the amplitudes of the states , , and , - { \rm i } g c_{1;01 } ~ , ~ \nonumber \\\dot c_{0;20 } & = & - { \textstyle { { \rm i } \over 2 } } \big [ \omega_0^{(1 ) } c_{0;00 } + \omega_0^{(2 ) } c_{0;22 } + \omega_1^{(1 ) } c_{0;10 } \big ] - { \rm i } g c_{1;10 } ~,~ \nonumber \\\dot c_{0;s } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \big ( \omega_1^{(1 ) } + \omega_1^{(2 ) } \big ) \big ( c_{0;11 } + c_{0;22 } \big ) + \omega_0^{(1 ) } c_{0;01 } \nonumber \\ & & + \omega_0^{(2 ) } c_{0;10 } \big ] - \sqrt{2 }{ \rm i } g c_{1;11 } ~,~ \nonumber \\\dot c_{0;22 } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \big ( \omega_1^{(1 ) } + \omega_1^{(2 ) } \big ) c_{0;s } + \big ( \omega_1^{(1 ) } - \omega_1^{(2 ) } \big ) c_{0;a } \nonumber \\ & & + \sqrt{2 } \omega_0^{(1 ) } c_{0;02 } + \sqrt{2 } \omega_0^{(2 ) } c_{0;20 } \big ] - \sqrt{2 } { \rm i } g c_{1;s } ~,~ \end{aligned}\ ] ] and the states , , , and , \nonumber \\ & & - { \rmi } g c_{0;02 } ~ , ~ \nonumber \\\dot c_{1;10 } & = & - { \textstyle { { \rm i } \over 2 \sqrt{2 } } } \big [ \sqrt{2 } \omega_1^{(1 ) } c_{1;20 } + \omega_0^{(2 ) } \big ( c_{1;s } + c_{1,a } \big ) \big ] \nonumber \\ & & - { \rm i } g c_{0;20 } ~,~ \nonumber \\\dot c_{1;11 } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big [ \omega_1^{(1 ) } \big ( c_{1;s } - c_{1,a } \big ) + \omega_1^{(2 ) } \big ( c_{1;s } + c_{1;a } \big ) \big ] \nonumber \\ & & - \sqrt{2 } { \rm i } g c_{0;s } ~,~ \nonumber \\\dot c_{1;s } & = & - { \textstyle { { \rm i } \over 2\sqrt{2 } } } \big ( \omega_1^{(1 ) } + \omega_1^{(2 ) } \big ) \big ( c_{1;11 } + c_{0;22 } \big ) \nonumber \\ & & - \sqrt{2 } { \rm i } g \big ( c_{0;22 } + \sqrt{2 } c_{2,11 } \big ) ~,~ \nonumber \\\dot c_{2;11 } & = & - { \textstyle { { \rm i } \over 2 } } \big [ \omega_1^{(1 ) } c_{2;21 } + \omega_1^{(2 ) } c_{2;12 } \big ] - 2 { \rm i } g c_{1;s } ~.~\end{aligned}\ ] ] suppose the system is initially prepared in a qubit state and the population of non - decoherence - free states remains of the order , we can easily eliminate all amplitudes that change on the fast time scale defined by the cavity coupling . 
setting their derivatives equal to zero and neglecting all terms of second order in and smaller , the differential equations ( [ slower ] ) yield substituting this result into eq .( [ slowest ] ) and ( [ slow ] ) we obtain further ~,~ \nonumber \\ & & \dot c_{0;22 } = -{\textstyle { { \rm i } \over 2\sqrt{2 } } } \big ( \omega_1^{(1 ) } - \omega_1^{(2 ) } \big ) c_{0;a } ~.~ \end{aligned}\ ] ] this shows that the only way to restrict the time evolution of the system onto the decoherence - free subspace is to choose note that if this is not fulfilled , population leaks directly from the antisymmetric state into the states and . there is no mechanism in the system ( see eq .( [ onkel ] ) ) that forbids the population of these states .the condition ( [ schwester ] ) can easily be implemented with and the realisation of dissipation - assisted gate operations requires only a single laser field with the respective rabi frequencies . as predicted before , the time evolution of the slowly - varying amplitudes of the system ( [ longer ] ) can be summarised in the hamiltonian ~,~ \nonumber \\&&\end{aligned}\ ] ] which coincides with the effective hamiltonian given in eq .( [ heff ] ) .the levels and transitions involved in the generation of the effective time evolution of the system are shown in figure [ daap1 ] .the corrections to the effective time evolution in first order can be obtained by performing another adiabatic elimination of fast varying amplitudes using the differential equations ( [ slow ] ) and setting . during gate operations , a small population , given by accumulates in the states , and .there are _ two _ ways to keep these errors small and we use both of them in the following .first , one should turn the laser field off slowly such that with being the gate operation time .then the system can adapt to the changing parameters and the unwanted amplitudes ( [ colder ] ) vanish at time .second , the presence of a finite cavity decay rate can be used to damp away photon population in the cavity mode during the no - photon time evolution .however , should not be too large in order not to disturb the adiabatic evolution .we should also mention that there is another way to guarantee that the system remains within the decoherence - free subspace .this requires the cavity decay rate to be of the same order as , which suppresses the population of _ all _ non decoherence - free states ( including the states and ) , independent of the choice of the rabi frequencies .indeed , it has been shown that quantum computing using dissipation leads to a realm of new possibilities for the implementations of quantum gate operations . however , the approach we consider here yields schemes with much shorter gate operation times than the ones reported in ref . .a simple gate operation that can easily be implemented with the effective hamiltonian ( [ hopt ] ) is the quantum phase gate with this operation changes the state of the second atom into provided that the first atom is in .suppose there is only one laser field coupling to the 0 - 2 transition of atom 1 such that and ~.~\ ] ] the implementation of the unitary operation ( [ phase ] ) then only requires as one can see from solving the time evolution of the hamiltonian ( [ nocond2 ] ) . 
in the following we choose and . given , the amplitudes of the states and remain unchanged, while a minus phase is added to the state after it undergoes an adiabatic transition to the excited state and back (see figure [daap1]). that this is indeed the case can be seen in figure [daap3], which results from a numerical solution of the no-photon time evolution of the system with the hamiltonian for the initial qubit states and . for , fidelities above are achieved for gate operation times . furthermore, the phase gate can be operated in the presence of decay rates . the presence of decay rates allows the scheme to be run twice as fast while maintaining the same high fidelity. looking at the initial state , it can be seen that with the fidelity dips to just over , but with the fidelity is always one. the reason is that the presence of cavity decay rates damps away all the population in the states and , at the latest at the end of the operation. maintaining a fidelity close to comes at the cost of a significantly lowered success probability, which can be even lower than . for larger values of , the success probability would be even smaller. the fidelity given the initial state is only marginally helped by the non-zero . however, a small can further reduce the errors. the reason is that in the non-adiabatic regime unwanted population accumulates in the excited atomic state, which is then taken care of by . the competition between greater speed, which tends to reduce the probability of a failure due to , and an increased error due to non-adiabaticity leads to a maximum of the gate success rate of at . another quantum gate that can easily be implemented with the effective hamiltonian ([hopt]) is the swap operation with . unlike the controlled phase gate, the swap gate is not universal. nevertheless, this operation can be very useful since it exchanges the states of two qubits without the corresponding atoms having to physically swap their places. to implement the time evolution ([swap]) one should choose , and individual laser addressing of the atoms is not required. the effective hamiltonian becomes in this case and implements a swap operation if . in the following we choose and . figure [fig4] shows the results of a numerical solution of the no-photon time evolution of the system with the hamiltonian for the same parameters as in figure [daap3]. for , fidelities above are achieved as long as . in the presence of the decay rates , and for , the fidelity is well above while . the results for the swap operation are very similar to those for the phase gate. the reason is that the relevant combined level scheme of the system is in both cases about the same (see figure [daap1]). this paper discusses how dissipation can help to increase the performance of quantum gate operations. the focus is on atoms with a -type level configuration trapped in an optical cavity, but the conclusions can be applied more generally. the concrete examples considered here are two-qubit operations including a controlled phase gate and the swap operation. for , we showed that the gate success rate can be nearly as high as while the fidelity is well above . improved results have only been obtained in refs . the main problem of the proposed quantum computing scheme is spontaneous emission from the atoms. nevertheless, it is very simple, as it requires only a single laser field, and it is largely robust against parameter fluctuations (see eqs. ([opa]) and ([oma])).
in the absence of dissipation ,the quantum gates described employ adiabaticity arising from the different time scales set by the laser rabi frequencies and atom - cavity coupling constant .however , in the presence of decay rates , like and , the scheme can be operated about twice as fast .the reason is that the underlying process becomes a dissipation - assisted adiabatic passage .even operated outside the adiabatic regime , the no - photon time evolution corrects for errors and the system behaves as predicted by adiabaticity .the high fidelity comes at a cost of a finite gate failure rate . however , as long as one can detect whether an error occured or not and repeat the computation whenever necessary , this approach can be used to implement high - fidelity quantum computing even in the presence of dissipation .thanks the royal society and the gchq for funding as a james ellis university research fellow .this work was supported in part by the uk engineering and physical sciences research council and the european union through qgates and conquest .shor p w , ed . by s. goldwasser , _ proceedings of the 35th annual symposium on the foundations of computer science _ ,ieee computer society ( 1994 ) , p124 .d. deutsch , proc .a * 400 * , 97 ( 1985 ) .l. k. grover , phys .lett . * 79 * , 325 ( 1997 ) . l. m. k. vandersypen , m. steffen , g. breyta , c. s. yannoni , m. h. sherwood , and i. l. chuang , nature * 414 * , 883 ( 2001 ) . f. schmidt - kaler , h. hffner , m. riebe , s. gulde , g. p. t. lancaster , t. deuschle , c. becher , c. f. roos , j. eschner , and r. blatt , nature * 422 * , 408 ( 2003 ) .d. leibfried , b. demarco , v. meyer , d. lucas , m. barrett , j. britton , w. m. itano , b. jelenkovi , c. langer , t. rosenband , d. j. and wineland , nature * 422 * , 412 ( 2003 ) . g. m. palma , k. a. suominen , and a. k. ekert , proc .london ser .a * 452 * , 567 ( 1996 ) .p. zanardi and m. rasetti , phys79 * , 3306 ( 1997 ) .d. a. lidar , i. l. chuang , and k. b. whaley , phys .* 81 * , 2594 ( 1998 ) .d. bacon , j. kempe , d. a. lidar , and k. b. whaley , phys .lett . * 85 * , 1758 ( 2000 ) .b. misra and e.c.g .sudarshan , j. math . phys . * 18 * , 756 ( 1977 ) .beige , d. braun , b. tregenna , and p. l. knight , phys .lett . * 85 * , 1762 ( 2000 ) ; b. tregenna , a. beige , and p. l. knight , phys .a * 65 * , 032305 ( 2002 ) .a. beige , d. braun , and p. l. knight , new j. phys .* 2 * , 22 ( 2000 ) .j. pachos and h. walther , phys .lett . * 89 * , 18 ( 2002 ) ; j. k. pachos and a. beige , phys .a * 69 * , 033817 ( 2004 ) .e. knill , r. laflamme , and g. j. milburn , nature * 409 * 46 ( 2001 ) .g. g. lapaire , p. kok , j. p. dowling , and j. e. sipe , phys .a * 68 * , 042314 ( 2003 ) .a. beige , phys .a * 69 * , 012303 ( 2004 ) .l. viola and s. lloyd s , phys .a * 58 * , 2733 ( 1998 ) ; l. viola , m. knill , and s. lloyd , phys .lett . * 82 * , 2417 ( 1999 ) .g. s. agarwal , m. o. scully , and h. walther , phys .* 86 * , 4271 ( 2001 ) .p. facchi and s. pascazio , phys .lett . * 89 * , 080401 ( 2002 ) ; p. facchi , d. a. lidar , and s. pascazio , phys .a * 69 * , 032314 ( 2004 ) .a. beige , phys .a * 67 * , 020301(r ) ( 2003 ) . c. marr , a. beige , and g. rempe , phys .a * 68 * , 033817 ( 2003 ) .g. r. guthhrlein , m. keller , k. hayasaka , w. lange , and h. walther , nature * 414 * , 49 ( 2001 ) ; m. keller , b. lange , k. hayasaka , w. lange , and h. walther , appl .b * 76 * , 125 ( 2003 ) . s .-b . zheng and g. c. guo , phys .lett . * 85 * , 2392 ( 2000 ) .e. jan , m. b. plenio , and d. 
jonathan , phys .a * 65 * , 050302 ( 2002 ) .t. pellizzari , s. a. gardiner , j. i. cirac , and p. zoller , phys .lett . * 75 * , 3788 ( 1995 ) .m. s. shahriar , j. a. bowers , b. demsky , p. s. bhatia , s. lloyd , p. r. hemmer , and a. e. craig , opt . comm . * 195 * , 411 ( 2001 ) .x. x. yi , x. h. su , and l. you , phys .90 * , 097902 ( 2003 ) ; l. you , x. x. yi , and x. h. su , phys .a * 67 * , 032308 ( 2003 ) .h. walther , adv .chem . phys . * 122 * , 167 ( 2002 ) .h. g. dehmelt , bull .. soc . * 20 * , 60 ( 1975 ) .g. c. hegerfeldt and d. g. sondermann , quantum semiclass* 8 * , 121 ( 1996 ) .j. dalibard , y. castin , and k. mlmer , phys .lett . * 68 * , 580 ( 1992 ) .h. carmichael , _ lecture notes in physics _, vol * 18 * ( berlin : springer , 1993 ) .r. j. cook , phys .t * 21 * 49 ( 1988 ) .g. c. hegerfeldt , fortschr. phys . * 46 * , 595 ( 1998 ) .
|
it is commonly believed that decoherence is the main obstacle to quantum information processing. in contrast to this, we show how decoherence in the form of dissipation can improve the performance of certain quantum gates. as an example, we consider the realisations of a controlled phase gate and a two-qubit swap operation with the help of a single laser pulse in atom-cavity systems. in the presence of spontaneous decay rates, the speed of the gates can be improved by a factor of 2 without sacrificing high fidelity and robustness against parameter fluctuations. even though this leads to finite gate failure rates, the scheme is comparable with other quantum computing proposals.
|
during its development, humankind has passed through various stages of fundamentally different ecological and technological characteristics. in line with dramatic population growth, an increasing interaction with the biosphere and a domination of ecosystems took place. during the neolithic revolution, around 10,000 bce, hunter-gatherer societies were progressively replaced by agrarian ones, with far-reaching consequences such as the formation of settlements. the industrial revolution is considered the most profound development affecting all areas of human life, coming along with the systematic exploitation of fossil energy sources. from an economic point of view, the increasing significance of services is understood as an additional level of development. in fact, agrarian, industrial, and service sectors are commonly denoted primary, secondary, and tertiary, respectively. however, the production forms do not completely replace each other but are complements, and economies have more or less contributions from each sector. current theories and models on sectoral development are largely influenced by the work of clark, fisher, and fourastié, who developed the `three-sector hypothesis' in the first half of the 20th century, describing development as a process of shifting economic activities from the primary via the secondary to the tertiary sector. their research was mainly based on observed historical shifts of workforce between sectors in today's more developed countries. more recently, approaches have concentrated on describing the relationship between shifts in sectoral labor allocation or gross domestic product (gdp) shares and economic development, often focusing on specific countries or regions. yet, the universality of the three-sector hypothesis has been challenged, since it does not represent labor allocation in today's developing countries well. in contrast to the historical pathways of industrialized countries, shifts of labor force from the primary to the secondary sector have been relatively low. instead, advancement of the tertiary sector appears disproportionately early, which has been related to excessive urbanization and different structural conditions. while existing work has mainly focused on modeling patterns observed in the united states or in western europe, to the best of our knowledge a similar analysis within a universal model does not exist. furthermore, attention has mainly been given to sectoral resource allocation, e.g. labor input, rather than to economic output, e.g. the fractions of gdp. thus, the objective is to develop a parsimonious description of a country's sectoral composition of gdp which is also able to capture the early advancement of the tertiary sector observed in developing countries. we consider a country and its sectoral gdp composition, where the fractions correspond to the agricultural, industrial, and service sector contributions, respectively. the fractions of the three sectors add up to unity. with economic development, i.e. increasing gdp/cap, the shares of the gdp shift between the sectors. we assume the transfer occurs according to a system of ordinary differential equations, where the independent variable is the logarithm of the gdp/cap (the natural logarithm is used in order to compensate for the broad distribution of gdp/cap values) and the coefficients are country-specific parameters.
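since the explicit form of the transfer equations is not reproduced here, the short sketch below uses an illustrative system consistent with the verbal description given above and in the next paragraph: the outflow from the agrarian share (at a rate alpha) is split between industry and services by a branching parameter beta, and industry transfers to services at a rate gamma. the functional form, the parameter names and the values are hypothetical stand-ins, not the paper's actual specification.

```python
# minimal sketch of a three-sector transfer model; the form below is an
# illustrative guess consistent with the verbal description, and alpha,
# beta, gamma and the initial shares are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma = 0.8, 0.6, 0.4        # illustrative country-specific parameters

def transfer(x, g):
    a, i, s = g                            # sector shares, a + i + s = 1
    da = -alpha * a                        # outflow from agriculture
    di = beta * alpha * a - gamma * i      # share beta of the outflow enters industry
    ds = (1.0 - beta) * alpha * a + gamma * i
    return [da, di, ds]

x = np.linspace(5.0, 11.0, 200)            # x = ln(gdp per capita), illustrative range
sol = solve_ivp(transfer, (x[0], x[-1]), [0.7, 0.2, 0.1], t_eval=x)
a, i, s = sol.y
print("shares still sum to one:", bool(np.allclose(a + i + s, 1.0)))
print("industrial share peaks at x =", float(x[np.argmax(i)]))
```

with these illustrative values the agrarian share decreases monotonically, the service share increases, and the industrial share passes through a maximum, which is the qualitative behaviour sketched in figure [fig:illustrative](b).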
, industrial , and service ,are represented by colored circles .the arrows indicate possible transfer paths and their parameters as defined in eqs .( [ eq : dgla])-([eq : dgls ] ) .( b ) typical evolution of sectoral composition as a function of the logarithm of per capita gdp ( here : , , ) . ]an additional parameter , , emerges from the boundary conditions , i.e. from the value of where , and has the character of a shift along the -axis .figure [ fig : illustrative ] illustrates the model and shows schematic trajectories .parameter determines the transfer from the agrarian sector , which , depending on , is split into contributions to the industrial or service sectors .moreover , determines the transfer from industrial to service .e.g. for , , and transfer takes place from to and continuously from to , leading to monotonously decreasing agrarian and increasing service whereas the industrial sector exhibits a maximum ( fig .[ fig : illustrative](b ) ) . except for the trivial case, the model does not have any steady state .types of sectoral gdp transfer as obtained from the fitting parameters of the model eqs .( [ eq : dgla])-([eq : dgls ] ) .the types are defined according to the sign of the parameters and as well as if or .the parameters also specify between which sectors there is a transfer and in which direction . for instead of a transfer from the agriculture to the services sector , a transfer occurs between industry and services , depending on the values of and ( indicated by a ) .if and have opposite signs , the flow is in one direction for all possible values of , , and .if and are positive , flow may occur from to for large , while it is reversed for large ( indicated by ) .vice versa for and negative .only types + have a convergent asymptotic behavior , namely , , for .most countries belong to the types 1 - 3 .the remaining types are type 5 ( guinea - bissau , madagascar , vanuatu ) , type 6 ( ivory coast ) , type 7 ( cameroon ) , and type 8 ( burkina faso , morocco , sudan , venezuela , south africa ) .the special cases , , or did not occur .[ cols= " < , < , < , < , < , < " , ] we fit the model eqs .( [ eq : dgla])-([eq : dgls ] ) with a two step procedure , using global country - level data . in the first step , the logarithmic form of eq .( [ eq : sola ] ) , as introduced later , was used to identify and an initial value of by using a linear regression between and . in the second step , , , and were estimated by using the r - implementation of the shuffled complex evolution algorithm to minimize the sum of the mean squared errors between , , from the model and the corresponding observed values . to obtain more reasonable fits, we restricted the parameter ranges as follows : , , and . for 176 out of 246 countries the available data was sufficient , i.e. data on gdp / cap was available for at least 4 years , and for 137 countries the fitting worked reasonable ( we choose as the threshold for the sum of the mean squared error between the data and the fits of all sectors ) . due to an anomalous decline of after dissolution of the soviet union , disbandment of the warsaw pact , and the breakup of yugoslavia , respectively , the data before 1995 has been disregarded , in the case of the corresponding countries . for the same reason , data from liberia and mongolia prior to 1995 was omitted . )-([eq : dgls ] ) , and dashed lines extrapolations for illustration .the fitted parameters are ( a ) pakistan ( , , , ) , ( b ) finland ( , , , ) , and ( c ) the united states ( , , , ) . 
for finland ,the maximum of the industrial sector , , is indicated by a black arrow in ( b ) . ]typical examples for which the model results were accepted are depicted in fig .[ fig : examples ] together with the obtained fits . in all three examplesthe model agrees reasonably with the data . in the case of pakistan ,the fraction of industry overtakes the fraction of agriculture at ( /cap ) .the service sector is the largest and still increasing for these examples . in the case of the usa ,the agrarian sector has a very low contribution .different parameter ranges imply different behavior , e.g. means that the country transfers economic activity to agriculture with increasing gdp / cap . in totalthere are two different cases for each parameter leading to eight combinations .table [ tab : types ] gives an overview of the corresponding types together with the transfer behavior , i.e. economic transfer from which sector to which , and the frequency of each type .almost half of the considered countries belong to type 1 , the traditional path from the agrarian , via industrial , to the service sector .the second most frequent is type 3 , which includes a transfer from the service to industrial sector .another big group consists of type 2 countries , i.e. with transfer from agrarian to industry and flows between industry and services depending on the development .all other types are less populated , type 4 does not occur at all .the occurrence of types 5 - 8 might be due to noise in the data . only types 1&2 and 7&8 exhibit a maximum of the industrial sector at .the examples from fig .[ fig : examples ] are of type 3 ( pakistan ) , type 1 ( finland ) , and type 2 ( usa ) . on the world map , fig .[ fig : worldfit ] , one can see which country belongs to which type .type 1 consists of big parts of asia and eastern europe , some countries in africa , canada , and mexico .the usa , brazil , other southern american countries , western european countries , japan , and australia belong to type 2 .type 3 is mainly found in africa , middle east , central asia , south - east asia , and a few times in southern america .a strong regionality can be observed and neighboring countries tend to belong to the same types .it is apparent that most developed countries belong to either type 1 or type 2 . 
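the two-step fitting procedure described above can be sketched as follows; scipy's differential evolution is used here as a stand-in for the r implementation of the shuffled complex evolution algorithm, and the model() callback, the parameter bounds and the mapping from the regression coefficients to the model parameters are assumptions made only for illustration.

```python
# sketch of the two-step fitting procedure.  step 1: for an exponentially
# decaying agrarian share, ln g_A is linear in x, so a linear regression
# yields a decay rate and a shift parameter.  step 2: refine the remaining
# parameters by minimising the summed mean squared error of all three
# sectors.  the model() callback and the bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import linregress

def fit_country(x, g_a, g_i, g_s, model, threshold=0.01):
    # step 1: ln g_A ~ intercept + slope * x
    slope, intercept, *_ = linregress(x, np.log(g_a))
    rate, shift = -slope, -intercept / slope          # illustrative mapping to model parameters

    # step 2: global search over the remaining parameters within fixed bounds
    def summed_mse(p):
        a_hat, i_hat, s_hat = model(x, rate, shift, *p)
        return (np.mean((a_hat - g_a) ** 2) + np.mean((i_hat - g_i) ** 2)
                + np.mean((s_hat - g_s) ** 2))

    bounds = [(0.0, 1.0), (-1.0, 1.0)]                # illustrative parameter ranges
    result = differential_evolution(summed_mse, bounds, seed=0)
    accepted = result.fun < threshold                 # reject fits above an error threshold
    return rate, shift, result.x, result.fun, accepted
```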
at this stageit is not clear what the decisive factor is and further analysis including other economic data could help to pinpoint the most relevant influences of the countries economic paths .methods from network theory have been applied to analyze the economic productions of countries , indicating that neighboring countries instead of diversifying tend to compete over the same markets .the inset of fig .[ fig : worldfit ] shows the histograms of ( year 2005 ) for the types 1 - 3 .type 1 and 3 countries are spread over a wide range of gdp / cap , whereas there is a tendency of type 1 countries to higher gdp / cap ( ) compared to the type 3 case .type 2 countries generally tend to larger gdp / cap .surprisingly , many high gdp / cap countries belong to type 2 and not to type 1 .accordingly , their economic growth follows the traditional path but depending on the state of and , the flow between may be increased ( high ) , decreased , or even reversed ( high ) ( see tab .[ tab : types ] ) .type 3 , which is the second most frequent one , also follows the traditional path from the agrarian to the industry and service sector , but comes along with a transfer from the service sector to the industry sector , , .this version seems to be characteristic for many developing economies although not exclusively .) and correlations between agrarian sector and rural population .panels ( a)-(c ) display the transformed values of all countries and all years .( a ) type 1 , ( b ) type 2 , and ( c ) type 3 , see tab . [tab : types ] . for clarity , in ( c ) the signs have been changed and the logarithm taken .the underlying grey data points show the values of all other types .the dashed lines have slope and intercept . for some countries the fitting does not perform well , in particular (a ) singapore ( at approx ., ) and bulgaria ( approx . between , and , ) , ( b ) denmark ( at approx ., ) , and ( c ) saudi arabia( at approx ., ) and the maldives ( at approx ., ) . panel ( d ) shows for the year 2005 versus the fraction of rural population , based on data from the world databank .the open circles represent the values of all countries with available data and exhibit a correlation coefficient of . the examples from fig .[ fig : examples ] are highlighted with filled symbols , i.e. pakistan ( diamond ) , finland ( triangle ) , and usa ( circle ) .the filled squares with error - bars represent averages and standard deviations of gdp / cap quartiles , i.e. lowest ( light , top right ) to highest ( dark , lower left ) . ] in order to test the universality and applicability of the model , we derive a collapsed representation of all data .we start from solutions of eqs .( [ eq : dgla])-([eq : dgls ] ) eliminating in eq .( [ eq : sola ] ) and ( [ eq : soli ] ) one obtains a relation between and allowing to collapse the data of all countries and all years , i.e. independent of . in fig .[ fig : collapse](a)-(c ) we plot the transformed data and separate the three most frequent types for better visibility into panels .the data generally collapses onto the unity diagonal , despite few countries where deviations are partly due to the fact that values before 1995 have been excluded from the fitting ( see above ) but the values are still displayed for completeness .the collapse suggests universality and supports the applicability of the proposed model .for the set of countries with reasonable fitting , some of the parameters are correlated non - linearly , i.e. 
and as well as and .thus , by introducing global parameters , the number of country - specific ones could be reduced .moreover , is weakly correlated with , suggesting that from the ensemble point of view( [ eq : sola ] ) has rather a log - normal shape , which indicates that the system is not ergodic .we would like to note that since is the logarithm of the gdp / cap , decreases as a power - law with the gdp / cap .studying the asymptotic behavior of the types as defined in tab .[ tab : types ] , it turns out , that types 1 and 2 converge to for .the model is confined to specific ranges of , e.g. for negative parameter .thus , it is important to keep in mind that fitting the model only characterizes the transfer as it is included in the data .this means , the obtained parameters only capture the behavior of the past .finally , is plotted versus the fraction of urban population for the year 2005 in fig . [fig : collapse](d ) .the two quantities are correlated with a correlation coefficient of . despite not being completely linear ,the correlations are considerable , implying that low agrarian contribution to the economy s gdp comes along with less rural population . in order to visualize the relation to overall economic output, figure [ fig : collapse](d ) also includes averages and standard deviations of those countries belonging to gdp / cap quartiles , i.e. the quarter of all countries with highest gdp / cap , the second quarter of countries , etc . as one can see , with increasing gdp / cap , rurality and agrarian gdp share decrease . in other words ,a high degree of urbanization comes along with economic development or vice versa .this can be related to the finding that per capita socio - economic quantities such as wages , gdp , number of patents applied , and number of educational and research institutions increase by an approximate factor of with increasing city size .however , as timberlake has pointed out , in the case of developing countries an `` overurbanization '' with fast growing urban populations and excessive employment in the service sector can also hinder economic growth .not without reason most developing countries in our model belong to type 3 , where economic growth is associated with a sectoral transfer from service to industry . in summary ,we propose a system of ordinary differential equations to characterize the development of the sectoral gdp composition . despite being very simple and involving only 4 country - specific parameters( has only the character of shift along ) , the model fits for the majority of the countries in the world .relating agrarian and industrial fractions , we collapse the data of all countries and all years onto a straight line .this could be used as an alternative approach to fit the parameters by means of non - linear techniques .we find that according to the parameter ranges , the countries belong to eight different types .most countries are found in three of them ; the members are distinct in geography and state of economic development .this suggests that countries with low current gdp / cap follow a different path from early developed countries .our results could indicate a relation between transfer patterns and economic development .further analysis of additional socio - economic data could shed light on reasons of economic failure or success . 
as with any model ,our approach is a strong simplification of reality .also , we assume that parameters are fixed over time and countries follow a given development pathway .this may be justified by cultural , bio - climatic , and structural conditions , which have been consistent over the period of observation . on the other hand ,a transition between characteristic pathways is possible . for workforce distribution of 22 countries from the former soviet union and central as well as eastern europe, such a transition analysis has been performed using another simple model based on the three - sector hypothesis .a similar transition analysis could be an extension of the work presented .since it has been found that countries tend to develop goods which are similar to those they currently produce and that economically successful countries are extremely diversified , it could be also of interest , to extend the analysis to the level of products , in order to enable a more detailed analysis .furthermore , the inclusion of a `` quaternary '' sector in our model might provide additional insights , but sufficient data is not ( yet ) available . in this contextwe would also like to note that many developing countries exhibit an informal service sector which is not included in the official figures .similarly , in developed countries the products can be very complex so that the separation between industrial and service sector might be fuzzy .accordingly , already the data analyzed in this study is likely to be affected by inaccuracies .we thank torsten wolpert , flavio pinto siabatto , xavier gabaix , boris prahl , and lynn kaack for useful discussions and comments .the authors acknowledge the financial support from the federal ministry for the environment , nature conservation and nuclear safety of germany who support this work within the international climate protection initiative and the federal ministry for education and research of germany who provided support under the rooftop of the progress initiative ( grant number # 03is2191b ) .
|
we consider the sectoral composition of a country's gdp, i.e. the partitioning into agrarian, industrial, and service sectors. exploring a simple system of differential equations, we characterize the transfer of gdp shares between the sectors in the course of economic development. the model fits the data for the majority of countries, providing 4 country-specific parameters. relating the agrarian to the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. depending on the parameter ranges, country development exhibits different transfer properties. most countries follow 3 of 8 characteristic paths. the types are not random but show distinct geographic and development patterns.
|
surfactant molecules self assemble into mesoscale structures ( characteristic lengths are in the order of hundreds of nanometer ) when their concentration in an aqueous solvent exceeds a threshold value , generally called the critical micelle concentration ( cmc ) . examples of these mesoscale entities include simple structures like a monolayer of surfactants at the air - water / air - oil interface or more complex structures like a micelle and a bilayer of surfactants in the bulk . the stability of a given mesoscale structure is in turn is governed by the geometry and chemistry of the individual surfactant molecules .characteristic energies of a self assembled surfactant interface are comparable to the thermal energy , where is the boltzmann constant and is the equilibrium temperature , and a result the spatial organization of the molecules , which is characterized at the mesoscale by the morphology and topology of the interface , is susceptible to thermal fluctuations in the solvent . a similar but a more complex system that is of importance to cell biology is the lipid bilayer membrane , formed by the self assembly of lipid molecules , which defines the outer boundaries of most mammalian cells and their organelles .lipid molecules are fatty acids synthesized within the cell and like a surfactant molecule they also have a hydrophilic head group and a hydrophobic tail commonly occurring lipids include glycerol based lipids such as dopc , dops and dope , sterol based lipids like cholesterol , and ceramide based lipids like sphingomyelin .the cell membrane is formed by the self assembly of these different types of lipid molecules along with other constituents namely proteins and carbohydrates , and the composition of these building blocks differ across different cell membranes .being the interface of the cell , the lipid membrane plays a dominant role in a number of biophysical processes either by virtue of its surface chemistry at the molecular scale or through modulations in its physical properties at the mesoscale : the most obvious examples of the latter include inter- and intra - cellular trafficking , membrane mediated aggregation of cell signaling molecules and cell motility .hence , it is natural to expect an inherent feedback between the physical properties of the cell membrane and the biophysical processes it mediates .the primary aim of this article is to review theoretical and computational approaches at the mesoscale that can be used to develop an understanding of this feedback . in particular , our focus is to show how thermodynamic free energy methods employed in a variety in a contexts in condensed matter physics can be applied to the theoretical models for membranes at the mesoscale . 
in equilibrium statistical mechanics , the ground state of a systemwhose intensive or extensive variables are coupled to the environment , and hence can exchange for instance heat or area or volume or number with the bath , is governed by its thermodynamic potential which is also called the free energy of the system .the various thermodynamic observables can be determined by measuring the suitable thermodynamic potential that depends on the ensemble in which the system is defined .excellent introduction to the implementations and applications of the various free energy methods for molecular systems is provided by frenkel and smit .the spatial and temporal resolution of the various biophysical processes observed in cell membranes can be classified into two broad classes , namely ( a ) biochemical processes in which the dynamics of the system is primarily determined by the chemistry of the constituent molecules and ( b ) biophysical processes where collective phenomena and macroscopic physics govern the behavior of the membrane .these two class of processes have disparate time and length scales .the large separation in the time and length scales allows one to decouple the slower degrees of freedom from the faster ones and this feature can be exploited in constructing physical models at multiple scales for the cell membrane .molecular scale models such as all - atom or coarse grained molecular dynamics are faithful to the underlying chemistry and are hence more appropriate for investigating membrane processes in the sub cellular length and nanoscopic time scales . in the other limit ,phenomenology based field theoretic models neglect the membrane dynamics at the nanoscale and instead focus on how the collective effects of these molecular motions manifest at length and time scales comparable to those accessed in conventional experiments like light microscopy and mechanical measurements of cells .more rigorous discussions on the formulation of multiscale models for membranes can found in a number of review articles on this topics . in this article, we will use the thermodynamic formalism of membrane biophysics to demonstrate how free energy methods can extended to the study of diverse class of problems involving the cell membrane at the mesoscale .the phenomenology based approach focuses primarily on the conformational states of the bilayer membrane at length scales ( nm ) that are large compared to the thickness of the membrane ( 5 nm ) . 
in this approachthe membrane is treated as a thin elastic sheet of a highly viscous fluid with nearly constant surface area ( the number of lipids under consideration is assumed to be constant ) .this sheet is representative of the neutral surface of a membrane bilayer : it is defined as the cross sectional surface in which the in - plane strains are zero upon a bending transformation , see references for details .the thermodynamic weights of the conformational state of the membrane is governed by the well known canham - helfrich energy functional commonly written as , if and are the principal curvatures at every point on the membrane surface then and are its mean and gaussian curvatures respectively .the elastic moduli and are the isotropic and deviatoric bending moduli .experimental measurements on lipid and cell membranes have estimated their bending stiffness to be in the range , with the lower values corresponding to model membrane structures like giant uni - lamellar vesicles .the deviatoric modulus is normally taken to but the gaussian energy term can be neglected , by virtue of the gauss - bonnet theorem if the topology of the membrane does not change during the analysis .the surface area of the membrane and the volume are coupled to their respective conjugate variables namely the surface tension and osmotic pressure .reported values of membrane surface tension ( combined contributions from both the lipids and the underlying cytoskeleton ) varies between 3300 / m depending on the cell type .the osmotic pressure difference is a function of the difference in the osmolyte concentration between the inside and outside of the cell .the spontaneous curvature denotes an induced curvature which can arise in a number of context such as defects in lipid packing , the presence of intrinsic degrees of freedom in the constituent lipids , interactions of the membrane with non - lipid molecules like proteins or nanoparticles and also due to the coupling of the membrane with the underlying cytoskeleton . since the spontaneous curvature is an important parameter in most of our discussions later , it is important to have a closer look at how its impacts the conformational states of the membrane . when , the energy given by the first term in eqn .is quadratic in the mean curvature and hence the probability of finding a membrane conformation with a given curvature is a gaussian peaked around , with its width being proportional to the bending stiffness . on the other hand ,when the membrane has a non - zero spontaneous curvature the peak of the probability distribution now shifts to a value and as a result highly curved membrane regions are observed with much larger probabilities . 
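the shift of the probability peak described above can be made concrete in a few lines; the sketch assumes the standard canham-helfrich form for the bending energy density of a small patch and uses illustrative values for the patch area, the bending rigidity and the spontaneous curvature.

```python
# boltzmann weight of a small membrane patch as a function of its mean
# curvature H, assuming a bending energy density (kappa/2) * (2H - C0)^2.
# patch area, kappa and C0 are illustrative; energies are in units of k_B T.
import numpy as np

kappa = 20.0                     # bending rigidity, of the order of 20 k_B T as quoted above
area = 1.0                       # patch area in units where curvatures are of order one
H = np.linspace(-1.0, 1.0, 401)  # mean curvature of the patch

def curvature_distribution(c0):
    energy = 0.5 * kappa * (2.0 * H - c0) ** 2 * area
    w = np.exp(-energy)                        # k_B T = 1
    return w / (w.sum() * (H[1] - H[0]))       # normalised probability density in H

p_flat = curvature_distribution(c0=0.0)        # gaussian peaked at H = 0
p_spont = curvature_distribution(c0=0.8)       # peak shifted towards H = c0 / 2
print("most probable H without / with spontaneous curvature:",
      float(H[np.argmax(p_flat)]), float(H[np.argmax(p_spont)]))
```

the sharpness of both distributions is controlled by kappa, and a nonzero c0 moves the most probable curvature away from zero, which is why highly curved conformations become much more likely in the presence of curvature-inducing fields.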
for purposes of computer simulations , a number of discretizations based on eqn .have been introduced in the literature .the free energy methods for membranes presented in the later sections are based on the dynamical triangulation monte carlo technique which has been reviewed in brief below .the two dimensional membrane surface is discretized into an interconnected set of triangles that intersect at vertices ( _ nodes _ ) forming independent links .the values of , , and define the topology of the membrane surface in terms of euler characteristic as .the degrees of freedom of the discretized membrane are the position vectors of the vertices given by ] .the discrete form of the elastic hamiltonian is thus a sum over the curvature energies at every vertex in the triangulated surface given by , the index denotes a vertex on the triangulated surface and and are respectively its principal curvatures , is the local spontaneous curvature , and denotes the surface area associated with the vertex .the principal curvatures are computed using the methods introduced by ramakrishnan et .the spontaneous curvature at a vertex is expressed using the general form : with being the magnitude of the induced curvature and the functional form of the curvature contribution at vertex due to a curvature field at vertex .the various forms of relevant in different contexts have been discussed in references . in this article , we limit our discussions on curvature induced membrane remodeling to protein that have isotropic curvature fields with a gaussian profile : here denotes the range of a curvature field , i.e. a curvature field defined at a vertex can induce a non - zero spontaneous curvature at a far vertex .we denote the set of all protein fields as ] and the various state are sampled using a set of three monte carlo moves : ( i ) a _ vertex move _ in which a randomly chosen vertex is displaced to new location that leads to change in state \rightarrow [ \{\vec{x}^{'}\},\{\mathscr{t}\},\{\boldsymbol{\phi } \}] ] , and ( iii ) a _ field exchange _ move to simulate diffusion of the protein field in which the protein field at vertex is exchanged with that at vertex that leads to a change in state \rightarrow [ \{\vec{x}\},\{\mathscr{t}\},\{\boldsymbol{\phi}^{'}\}]12 & 12#1212_12%12 [1 ] [0 ] _ _ , ed .( , , ) link:\doibase 10.1111/j.1582 - 4934.2008.00281.x [ * * , ( ) ] http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=1972sci...175..720s&link_type=ejournal [ * * , ( ) ] link:\doibase 10.1038/nrm1102 [ * * , ( ) ] link:\doibase 10.1038/nature04394 [ * * , ( ) ] link:\doibase 10.1038/nature01451 [ * * , ( ) ] link:\doibase 10.1146/annurev.biochem.78.081307.110540 [ * * , ( ) ] link:\doibase 10.1101/cshperspect.a004721 [ * * , ( ) ] link:\doibase 10.1039/c2cs15309b [ * * , ( ) ] link:\doibase 10.1038/nrm1838 [ * * , ( ) ] link:\doibase 10.1038/nrm2748 [ * * , ( ) ] link:\doibase 10.1038/35073095 [ * * , ( ) ] http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17589565&retmode=ref&cmd=prlinks [ * * , ( ) ] link:\doibase 10.1007/s00249 - 011 - 0741 - 0 [ * * , ( ) ] http://books.google.co.in/books?id=p9yjnjzr9oic[__ ] ( , ) http://www.worldcat.org/isbn/0122673514[__ ] , ed .( , ) http://gateway.webofknowledge.com/gateway/gateway.cgi?gwversion=2&srcauth=mekentosj&srcapp=papers&destlinktype=fullrecord&destapp=wos&keyut=a1997we91800002 [ * * , ( ) ] http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=9512654&retmode=ref&cmd=prlinks [ * * , ( ) ] link:\doibase 
10.1016/j.physrep.2006.07.006 [ * * , ( ) ]link:\doibase 10.1016/j.semcdb.2009.11.011 [ * * , ( ) ] link:\doibase 10.1016/j.sbi.2012.01.011 [ * * , ( ) ] link:\doibase 10.3390/polym5030890 [ * * , ( ) ] link:\doibase 10.1016/j.physrep.2014.05.001 [ * * , ( ) ] link:\doibase 10.1016/j.chemphyslip.2014.05.001 [ ( ) , 10.1016/j.chemphyslip.2014.05.001] * * , ( ) http://www.ncbi.nlm.nih.gov/pubmed/4273690 [ * * , ( ) ] _ _ ( , , ) link:\doibase 10.1038/ncomms6974 [ * * , ( ) ] link:\doibase 10.1016/j.tcb.2012.09.006 [ * * , ( ) ] http://books.google.com/books?id=fbcmqgnrvjcc&pg=pa323&dq=intitle:statistical+mechanics+of+membranes+and+surfaces&hl=&cd=1&source=gbs_api[__ ] ( , ) link:\doibase 10.1103/physreve.81.041922 [ * * , ( ) ] link:\doibase 10.1371/journal.pcbi.1000926.s008 [ * * , ( ) ] link:\doibase 10.1039/c1ib00036e [ * * , ( ) ] link:\doibase 10.1080/00268976.2012.664661 [ * * , ( ) ] http://journals.aps.org/pre/abstract/10.1103/physreve.90.022717 [ * * , ( ) ] link:\doibase 10.1063/1.1699114 [ * * , ( ) ] link:\doibase 10.1063/1.1734110 [ * * , ( ) ] link:\doibase 10.1016/0021 - 9991(76)90078 - 4 [ * * , ( ) ] http://www.sciencedirect.com/science/article/pii/001046559500053i [ * * , ( ) ] link:\doibase 10.1049/iet - syb.2013.0057 [ * * , ( ) ] link:\doibase 10.1016/s0010 - 4655(00)00215 - 0 [ * * , ( ) ] link:\doibase 10.1073/pnas.1006611107/-/dcsupplemental [ * * , ( ) ] link:\doibase 10.1016/j.bpj.2011.05.063 [ * * , ( ) ]
|
the conformational free energy landscape of a system is a fundamental thermodynamic quantity, of particular importance in the study of soft matter and biological systems, in which the entropic contributions play a dominant role. while computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature remodeling proteins or by the application of external forces. using a triangulated surface as a model for the cell membrane and using the framework of dynamical triangulation monte carlo, we have focused on the methods of widom insertion, thermodynamic integration, the bennett acceptance scheme, and umbrella sampling with weighted histogram analysis. we have demonstrated how these methods can be employed in a variety of problems involving the cell membrane. specifically, we have shown that the chemical potential, computed using widom insertion, and the relative free energies, computed using thermodynamic integration and the bennett acceptance method, are excellent measures to study the transition from curvature-sensing to curvature-inducing behavior of membrane-associated proteins. umbrella sampling and wham analysis have been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. furthermore, we also present a method based on wham and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.
|
the contributions of this study are as follows. first, based on the notion of a stable manifold originating from dynamical systems theory, we propose a method for constructing a sequence of approximate solutions of increasing accuracy to general equilibrium perfect foresight models on nonlocal domains. second, we prove the convergence of the proposed approximate solutions to the true one, pointing out the approximation errors and the domain where the solutions are defined. third, we establish the relations between the approach presented in this paper and the extended path method (ep) proposed by , and show that the approach can be considered as a rigorous proof of convergence of the ep algorithm. a stable manifold is a set of points that approach the saddle point as time tends to infinity. in fact, the set of solutions to a nonlinear rational expectations model determines the stable manifold because each solution must satisfy the stability condition, i.e. convergence to the steady state in the long run. in the economic literature, this set is represented by the graph of a policy function (in other words, a decision function) that maps the state variables into the control variables. since we primarily focus on constructing approximations of stable manifolds (not the manifolds themselves), we shall call the approach proposed in this paper the approximate stable manifolds (asm) method. the existence of stable manifolds in the dynamical systems literature is proved using either the lyapunov-perron method or the graph transformation method of hadamard. the hadamard method constructs the entire stable manifold as the graph of a mapping, whereas the lyapunov-perron method relies on a dynamical characterization of stable manifolds as a set of initial points for solutions decaying exponentially to the steady state. the method developed in this paper has features of both the hadamard and lyapunov-perron approaches. on the one hand, the iteration scheme of the asm method can be treated as an implicit iteration scheme of the hadamard method. on the other hand, the solution obtained by the asm method is a fixed point of the so-called truncated lyapunov-perron operator. we also show the relation between the lyapunov-perron approach and the forward shooting method used for solving nonlinear rational expectations models and, relying on this, explain the computational instability of the latter. initially, the proposed method involves the same steps as perturbation methods: (i) find a steady state; (ii) linearize the model around the steady state; (iii) decompose the jacobian matrix at the steady state into stable and unstable blocks. the next step is to project the original system onto the stable eigenspace (spanned by the stable eigenvectors) and the unstable one (spanned by the unstable eigenvectors). as a result, the system is represented by two subsystems interrelated only through nonlinear terms. these terms are obtained as residuals after subtraction of the linearized system from the original nonlinear one; hence they vanish, together with their first derivatives, at the origin. such a transformation makes the obtained system convenient for the next stage of the method. specifically, the approximate solutions are constructed by employing the convergence of solutions to the steady state and the contraction mapping theorem.
in this waywe obtain a sequence of policy functions of increasing accuracy in a nonlocal domain .the main results of the paper are theorem [ thm1 ] and theorem [ thm2 ] , which are proved in section [ theory ] .theorem [ thm1 ] establishes the existence of a sequence of approximate policy functions .theorem [ thm2 ] estimates the accuracy of the approximate solutions and the following corollaries prove the convergence of the approximate policy functions to the true one in a definite nonlocal domain .thus , the result can be treated as a new proof of the stable manifold theorem .the proposed approach relates to the extended path method .specifically , under the assumption that the state variables are exogenous , at each point in time the solution of the extended path method applied to the transformed system lies on the corresponding asm . in this way the ep method can be easily put in the asm framework . mention that there is no proof that the ep algorithm converges to the true rational expectation solution .the ams method can be considered as a rigorous proof of convergence of the ep algorithm .indeed , theorems [ thm1 ] and [ thm2 ] can be applied directly for proving the convergence of the ep algorithm for the original ( non - transformed ) system .however , the use of linearization and spectral decomposition transforms the system into the form more convenient for verifying the conditions of the contraction mapping theorem .moreover , this gives us an advantage of the asm approach over the ep in terms of the speed of convergence to the true solution .this occurs due to the fact that the transformation makes the lipschitz constant for the mappings involved smaller in a set of neighborhoods of the steady state . exploiting the contraction mapping theoremalso allows for obtaining the a priori and a posteriori approximation errors .the analogy with the ep method suggests that stochastic simulation in the asm approach can be performed much as in .an attractive feature of the method is its capability to handle non - differentiable problems such as occasionally binding constraints , for example , the zero lower bound problem .this feature results from the fact that the contraction mapping theorem requires a less restrictive condition than differentiability ; namely , this condition is the lipschitz continuity .the deterministic neoclassical growth model of , considered as an example to illustrate how the method works , shows that a first few approximations yield very high global accuracy . even within the domain of convergence of the taylor series the low - order asm solutions are more accurate than the high - order taylor series approximations .the rest of the paper is organized as follows .the next section presents the model and its transformation into a convenient form to deal with . in section [ theory ] our main resultsare stated and proved .section [ ep ] establishes the relation between the proposed approach and ep method .the asm method is applied to neoclassical growth model in section [ example ] .conclusions are presented in section [ conclusion ] .this paper is primarily concerned with the deterministic perfect foresight equilibrium of the models in the form : where is an vector of endogenous state variables at time ; is an vector containing -period endogenous variables that are not state variables ; is an vector of exogenous state variables at time ; maps into and is assumed to be at least twice continuously differentiable .all eigenvalues of matrix have modulus less than one . 
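step (i) of the procedure outlined in the introduction, finding the steady state, amounts to solving the system above with all variables held constant and the temporary shocks set to zero. the sketch below does this with a standard root finder, using a hypothetical two-equation example (a textbook growth model with illustrative parameter values) only to make the calling convention concrete.

```python
# sketch of computing the deterministic steady state of a model written as
# f(y_t, y_{t+1}, x_t, x_{t+1}, z_t) = 0: evaluate f at constant arguments
# with zero shocks and solve the resulting system.  the two-equation f below
# and its parameter values are hypothetical, used only to make the example runnable.
import numpy as np
from scipy.optimize import root

alpha, beta, delta = 0.33, 0.99, 0.025          # illustrative parameters

def f(y_now, y_next, x_now, x_next, z):
    c_now, c_next = y_now[0], y_next[0]          # control (non-state) variable
    k_now, k_next = x_now[0], x_next[0]          # endogenous state variable
    euler = c_next / c_now - beta * (alpha * k_next ** (alpha - 1.0) + 1.0 - delta)
    resource = k_next - ((1.0 - delta) * k_now + k_now ** alpha - c_now) - z[0]
    return np.array([euler, resource])

def steady_state_residual(v):
    c, k = np.exp(v)                             # solve in logs so c and k stay positive
    y, x = np.array([c]), np.array([k])
    return f(y, y, x, x, np.zeros(1))

sol = root(steady_state_residual, np.log([2.0, 20.0]))
c_star, k_star = np.exp(sol.x)
print("steady state (c*, k*):", round(c_star, 4), round(k_star, 4), "converged:", sol.success)
```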
in the same way as in the ep method of ,the vector can be treated as initial values of temporary shocks at the time .we define the steady state as vectors such that below , we also impose the conventional blanchard - kan condition at the steady state .the problem then is to find a bounded solution to for a given initial condition for all .by denote vectors of deviation from the steady state . linearizing ( [ eq:3 ] ) around the steady state , we get where , , are partial derivatives of the mapping with respect to , , , , and , respectively , at ; and is defined by the mapping is referred to as the nonlinear part of . by assumption on , the mapping is continuously differentiable and vanishes , together with its first derivatives , at . for the sake of simplicitywe assume that ( [ eq:8 ] ) can be transformed in such a way that does not depend on and this transformation can be done for many deterministic general equilibrium models ( see section [ example ] for the neoclassical growth model ) .indeed , equations ( [ eq:4 ] ) and ( [ eq:9 ] ) can be written in the vector form as : where {ccc}i & 0 & 0\\ 0 & f_{3 } & f_{1}\end{array } \right)\ ] ] and {ccc}\lambda & 0 & 0\\ f_{5 } & f_{4 } & f_{2}\end{array } \right).\ ] ] we assume that the matrix is invertible .is a singular matrix , then in the sequel we must use a generalized eigenvalue decomposition as in . ] then multiplying both sides of the last equation by gives where {ccc}\lambda&0&0\\ ( f_{3},f_{1})^{-1}f_{5}&(f_{3},f_{1})^{-1}f_{4}&(f_{3},f_{1})^{-1}f_{2}\end{array } \right)\ ] ] and {ccc}0\\(f_{3},f_{1})^{-1}n(w_{t})\end{array } \right).\ ] ] if the mapping does depend on and , then we can employ the implicit function theorem to express and as mappings of and indeed , taking into account that the derivatives of with respect to and are zero at the origin and the invertibility of , we can easily see that the conditions of the implicit function theorem hold . in the case of the singular matrix , we do not have problems with zero - eigenvalues , because they correspond to identities that do not contain the terms with and .therefore , in what follows we assume that we have the representation of the original model in the form .next , the matrix is transformed into a block - diagonal one which can be obtained , for example , by using the block - diagonal schur factorization where ;\ ] ] where and are quasi upper - triangular matrices with eigenvalues larger and smaller than one ( in modulus ) , respectively ; and is an invertible matrix .we now introduce new variables multiplying ( [ eq:10 ] ) by yields where , and . by construction , it follows that where and stand for the jacobian matrix of the mappings and , respectively , at the point .the system in the form ( [ eq:12a])([eq:12b ] ) is convenient for obtaining the theoretical results of section [ theory ] .the next subsection introduces some notation that will be necessary further on .theorem [ thm1 ] proves the existence of a sequence of approximate policy functions in subsection [ existence ] , whereas theorem [ thm2 ] tells us about the accuracy of these approximations in subsection [ accuracy ] . 
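the block-diagonal schur factorization used above can be carried out numerically in two steps: order the real schur form so that the eigenvalues inside the unit circle come first, then remove the remaining off-diagonal block by solving a sylvester equation. the matrix d in the sketch is an arbitrary illustrative stand-in for the model's transition matrix.

```python
# block-diagonal schur factorization: ordered real schur form followed by a
# sylvester solve that zeroes the coupling block.  D is an illustrative 3x3
# matrix with two stable and one unstable eigenvalue.
import numpy as np
from scipy.linalg import schur, solve_sylvester

D = np.array([[0.9, 0.3, 0.0],
              [0.0, 1.4, 0.2],
              [0.1, 0.0, 0.5]])

T, Z, k = schur(D, output='real', sort='iuc')    # k = number of eigenvalues inside the unit circle
T11, T12, T22 = T[:k, :k], T[:k, k:], T[k:, k:]

# choose X with T11 X - X T22 = -T12, so that S = Z [[I, X], [0, I]] block-diagonalises D
X = solve_sylvester(T11, -T22, -T12)
M = np.eye(D.shape[0])
M[:k, k:] = X
S = Z @ M

W = np.linalg.inv(S) @ D @ S                     # block diag(T11, T22) up to round-off
print(np.round(W, 10))
```

the columns of s associated with the stable block then span the stable invariant subspace, which is exactly the split into stable and unstable blocks required by the method.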
by and the closed balls of radii and centered at the origin of and , respectively .let be the direct sum of these balls .by denote the euclidean norm in .the induced norm for a real matrix is defined by the matrix in ( [ eq:11 ] ) can be chosen in such a way that where and are the largest eigenvalues of the matrices and ( in modulus ) , respectively , and is arbitrarily small .this follows from the same arguments as in ( * ? ? ?* 9 ) , where it is done for the jordan matrix decomposition .let , and are continuously differentiable maps .define the following norms : where , and are the jacobian matrices of , and at and , respectively .[ def ] a mapping is called lipschitz continuous if there exists a real constant such that , for all and in , any such is referred to as a lipschitz constant for the mapping .[ thm1 ] let be a domain of definition for the mappings and in ( [ eq:12a ] ) and ( [ eq:12b ] ) such that the following conditions holds : [ con1 ] [ con2 ] where [ con3 ] if , then .then , there exists a sequence of the mappings , , satisfying the recurrent equations : with the initial condition .moreover , the following inequalities for the norm of the mappings and their derivatives hold : [ remark ] the neighborhood that satisfies_ conditions _ [ con1][con3 ] always exists locally , because the mappings and vanish , together with their first derivatives , at . nonetheless ,_ conditions _ [ con1][con3 ] are not local by themselves .[ remark2 ] from _ condition _ [ con1 ] and it follows that the right hand side of satisfies the inequality the proof is by induction on .more precisely , using the contraction mapping theorem , we derive by induction on the existence of satisfying ( [ eq:21 ] ) . to satisfy the conditions of the contraction mapping theorem, we need the estimates ( [ eq:22])([eq:23 ] ) for the mappings on each stage of the induction .suppose that .let be the parameterized mapping of to such that we claim that maps the closed ball into itself and has the lipschitz constant less than one , and thus satisfies the conditions of the contraction mapping theorem .then there exists a fixed point of such that notice that the dependence of on determines the mapping of .if in addition this mapping satisfies the inequalities ( [ eq:22 ] ) and ( [ eq:23 ] ) , then the induction hypothesis will be proved for .indeed , taking norm of both sides ( [ eq:24 ] ) and using the norm property and _ condition _ [ con1 ] , we have this means that maps the closed ball into itself .our task now is to show that is a contraction .the jacobian matrix of is where is the jacobian matrix of the mapping with respect to at the point . taking the norm of both sides of ( [ eq:27 ] ) and using ( [ eq:20 ] ) , the norm property and _ condition _ [ con2 ], we obtain the norm is an upper bound for the lipschitz constant of in the domain . since the mapping has the lipschitz constant less than one and maps the closed ball into itself , we see that by the contracting mapping theorem , has a unique fixed point in for each .this implies that the mapping defined by ( [ eq:25 ] ) exists . from ( [ eq:26 ] )it follows that satisfies inequality ( [ eq:22 ] ) .it remains to check that the norm of the derivative of satisfies inequality ( [ eq:23 ] ) . differentiating ( [ eq:25 ] ) with respect to , we obtain taking norms and applying the triangle inequality and using the norm property gives for all . 
rearranging terms in ( [ eq:29 ] ) and taking into account the definitions of and , _ condition _ [ con2 ] and ( [ eq:20 ] ) , we get from _ condition _ [ con2 ] it follows easily that .this implies that hence satisfies the inequality ( [ eq:24 ] ) .therefore , the inductive assumption is proved for .next , suppose inductively that there exist mappings , that satisfy ( [ eq:21])([eq:23 ] ) .let be the parameterized mapping of to such that for each .as before , we shall show that satisfies the contraction mapping theorem conditions .indeed , taking norms in ( [ eq:30 ] ) , using the norm property and applying the triangle inequality yields by the inductive assumption , inequality ( [ eq:22 ] ) holds ; therefore where the last inequality follows from remark [ remark2 ] .this means that the jacobian matrix of the mapping at the point is taking norms , using the norm property and applying the triangle inequality , _ condition _ [ con3 ] and ( [ eq:20 ] ) , we obtain inserting the inductive assumption for , we have for all . since the mapping has the lipschitz constant less than one and maps into itself , it has a unique fixed point for each .this implies that there exists a mapping of to such that from ( [ eq:32 ] ) it follows that the norm of satisfies inequity ( [ eq:22 ] ) . to conclude the inductive assumption for , it remains to check inequality ( [ eq:23 ] ) for the norm of the derivative of the mapping .this is the harder part of the proof . indeed , taking the derivative of at the point , we have .\label{eq:36 } \end{split}\ ] ] taking norms and using the triangle inequality and the norm property , we obtain for all . using ( [ eq:20 ] ) and _ condition _ [ con3 ] yields by the inductive assumption ,the norm satisfies the estimate ( [ eq:23 ] ) , hence .then from ( [ eq:23 ] ) it follows that consider now the following difference equation : where suppose then the difference equation ( [ eq:40 ] ) has two fixed points : and such that where is a stable fixed point , is an unstable fixed point . if , then , is a monotonically increasing sequence that converges to .the lemma can be proved by direct calculation .inequality ( [ ineq:41 ] ) easily follows from _ condition _ [ con2 ] of the theorem. comparing ( [ eq:39 ] ) and ( [ eq:40 ] ) for the initial point and the initial mapping , we have for , i.e. is majorized by . from ( [ eq:43 ] )it follows that therefore , the mapping satisfies inequality ( [ eq:23 ] ) .this concludes the induction argument .the condition for the graph of a mapping to be an invariant manifold is that the image under transformation ( [ eq:12a])([eq:12b ] ) of a general point of the graph of must again be in the graph of .this holds if and only if taking into account the invertibility of , we have thus , the true policy function satisfies .the next theorem gives the estimate of the error created by the approximate policy functions , , obtained in theorem [ thm1 ] .[ thm2 ] under the conditions of theorem [ thm1 ] , the following inequality holds : for all and , where and is the solution of the difference equation at the time and with the initial value .is the -coordinate of the solution lying on the stable manifold . 
]the proof is by induction on for .suppose that .let be the solution to ( [ eq:45 ] ) at the time for an initial point .subtracting ( [ eq:18 ] ) from ( [ eq:25 ] ) at the point , taking norms , using the norm property and applying the triangle inequality , we get it follows easily that combining ( [ eq:45 ] ) , ( [ eq:46 ] ) and ( [ eq:47 ] ) , and taking into account the inequality , we obtain therefore , the inductive assumption is proved for .assume now that inequality ( [ eq:44 ] ) holds for ; we will prove it for .in particular , for we have subtracting ( [ eq:18 ] ) from ( [ eq:21 ] ) both written for the argument and denoting for brevity , we get . \label{eq:50 } \end{split}\ ] ] adding and subtracting the term in brackets , we have .\label{eq:51 } \end{split}\ ] ] using ( [ eq:20 ] ) , ( [ eq:47 ] ) and the triangle inequality gives .\label{eq:53 } \end{split}\ ] ] rearranging terms , we obtain with the notation from ( [ eq:23 ] ) we have . from itfollows that inserting the inductive assumption ( [ eq:49 ] ) for the upper bound of yields from the proof of theorem [ thm1 ] it follows that , where is given by ( [ eq:41 ] ). therefore by .consider now the following function of : inserting from ( [ eq:41 ] ) gives ^{-1}.\ ] ] it can easily be checked that the function attains its maximum in the interval ] and premultiplying both side of ( [ a11 ] ) by , we can rewrite ( [ a5 ] ) as where and , where and are the components of the matrices and , respectively .the mapping has the following representation : \end{split}\ ] ] then , the approximate policy function has the implicit form : .\label{a13 } \end{split}\ ] ] the first iteration , , of the contracting mapping is obtained by substituting zeros for in the right hand side of ( [ a13 ] ) .\label{14 } \end{split}\ ] ] the function can be found by further iteration of the right hand side of . in the same way , the functions and satisfy the following equations : having mappings , , , or , to return to the original variables we must perform the transformation : where or . finally , the policy function has the following parametric representation : to give an impression of the accuracy of the approximations for the asm method on a global domain we construct the functions and from section [ theory ] and compare them with the taylor series expansion of the 1st , 2nd , 5th and 16th - orders , which correspond to the solutions of perturbation methods by in the case of zero shock volatilities .this model has a closed - form solution for the policy function given by we calculate approximations in the level ( rather than in the logarithm ) of the state variable , otherwise the problem becomes trivially linear .the parameter values take on standard values , namely and .then for our calibration the steady state of capital is .it is not hard to see that the taylor series expansion of the true solution ( [ 4.4 ] ) converges in the interval ] .the taylor series approximation perform well in the interval $ ] , i.e. within the domain of the convergence of the taylor series expansion ; however , outside the interval they explode .the functions is essentially indistinguishable with the true solution for all , thus providing the perfect global approximation . 
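the comparison just described can be reproduced with a short script . the sketch below is illustrative only : the asm approximations are not reconstructed here , and illustrative parameter values ( alpha = 0.3 , beta = 0.95 ) are assumed rather than the paper's exact calibration . under the standard brock - mirman setup ( log utility , cobb - douglas production with capital share alpha , full depreciation , discount factor beta ) the closed - form policy function is k' = alpha * beta * k^alpha , and the script compares it with taylor expansions of order 1 , 2 , 5 and 16 around the deterministic steady state , both inside and outside the domain of convergence .

```python
# illustrative sketch (not the authors' code): closed-form policy function of the
# brock-mirman growth model versus taylor approximations of increasing order
# around the deterministic steady state. parameter values are assumptions.
import numpy as np

alpha, beta = 0.3, 0.95                              # assumed capital share and discount factor
k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))       # steady state: k* = (alpha*beta)^(1/(1-alpha))

def policy_exact(k):
    """closed-form policy function k' = alpha * beta * k**alpha."""
    return alpha * beta * k ** alpha

def policy_taylor(k, order):
    """taylor expansion of the exact policy function around k_ss up to the given order."""
    approx = np.zeros_like(k, dtype=float)
    falling = 1.0                                    # alpha*(alpha-1)*...*(alpha-n+1) / n!
    for n in range(order + 1):
        approx += alpha * beta * falling * k_ss ** (alpha - n) * (k - k_ss) ** n
        falling *= (alpha - n) / (n + 1)
    return approx

grid_inside = np.linspace(0.5 * k_ss, 1.5 * k_ss, 400)   # well inside the convergence region (0, 2*k_ss)
grid_full = np.linspace(0.05 * k_ss, 3.0 * k_ss, 400)    # includes points outside the convergence region
for order in (1, 2, 5, 16):
    err_in = np.max(np.abs(policy_taylor(grid_inside, order) - policy_exact(grid_inside)))
    err_out = np.max(np.abs(policy_taylor(grid_full, order) - policy_exact(grid_full)))
    print(f"order {order:2d}: max error inside convergence region {err_in:.2e}, on the full grid {err_out:.2e}")
```

with these values the high - order expansions remain very accurate inside the convergence region but explode outside it , mirroring the behaviour reported above .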
even within the domain of convergence of the taylor series the accuracy of is as good as for the 16th - order of the taylor series expansion . the function also provides a very close fit over the whole interval . this study has used concepts and techniques originating from dynamical systems theory to construct approximate solutions to general equilibrium models on nonlocal domains and has proved the convergence of these approximations to the true solution . as a result , a new proof of the stable manifold theorem is obtained . the proposed method allows estimating the a priori and a posteriori approximation errors since it involves the contraction mapping theorem . as a by - product , the approach can be treated as a rigorous proof of convergence of the ep algorithm . the method is illustrated by applying it to the neoclassical growth model of . the results show that just the first few approximations of the proposed method are very accurate globally , while at the same time they are at least as accurate locally as the solutions obtained by the higher - order perturbation method . collard , f. , and juillard , m. ( 2001 ) . accuracy of stochastic perturbation methods : the case of asset pricing models , _ journal of economic dynamics and control _ * 25 * 979 - 999 . fair , r. , and taylor , j. b. ( 1983 ) . solution and maximum likelihood estimation of dynamic nonlinear rational expectations models , _ econometrica _ * 51 * 1169 - 1185 . pötzsche , c. ( 2010 ) . _ geometric theory of discrete nonautonomous dynamical systems _ , springer - verlag , berlin - heidelberg - new york - tokyo . schmitt - grohé , s. , and uribe , m. ( 2004 ) . solving dynamic general equilibrium models using a second - order approximation to the policy function , _ journal of economic dynamics and control _ * 28 * 755 - 775 . smale , s. ( 1967 ) . differentiable dynamical systems , _ bulletin of the american mathematical society _ * 73 * 747 - 817 . stuart , a. m. ( 1990 ) . numerical analysis of dynamical systems , _ acta numerica _ * 3 * 467 - 572 .
|
this study presents a method for constructing a sequence of approximate solutions of increasing accuracy to general equilibrium models on nonlocal domains . the method is based on a technique originating from dynamical systems theory . the approximate solutions are constructed by employing the contraction mapping theorem and the fact that solutions to general equilibrium models converge to a steady state . the approach allows deriving the a priori and a posteriori approximation errors of the solutions . under certain nonlocal conditions we prove the convergence of the approximate solutions to the true solution and hence the stable manifold theorem . we also show that the proposed approach can be treated as a rigorous proof of convergence of the extended path algorithm to the true solution in a class of nonlinear rational expectations models .
|
characterizing statistical properties of cyber attacks not only can deepen our understanding of cyber threats but also can lead to implications for effective cyber defense .honeypot is an important tool for collecting cyber attack data , which can be seen as a birthmark " of the cyber threat landscape as observed from a certain ip address space .studying this kind of data allows us to extract useful information about , and even predict , cyber attacks . despite the popularity of honeypots ,there is no systematic framework for rigorously analyzing the statistical properties of honeypot - captured cyber attack data .this may be attributed to that a systematic framework would require both a nice abstraction of cyber attacks and fairly advanced statistical techniques . in this paper, we make three contributions .first , we propose , to our knowledge , the first statistical framework for systematically analyzing and exploiting honeypot - captured cyber attack data .the framework is centered on the concept we call _ stochastic cyber attack process _ , which is a new kind of mathematical objects that can naturally model cyber attacks .this concept can be instantiated at multiple resolutions , such as : network - level ( i.e. , considering all attacks against a network as a whole ) , victim - level ( i.e. , considering all attacks against a computer or ip address as a whole ) , port - level ( i.e. , the defender cares most about the attacks against certain ports or services ) .this concept catalyzes the following fundamental questions : ( i ) what statistical properties do stochastic cyber attack processes exhibit ( e.g. , are they poisson ) ?( ii ) what are the implications of these properties and , in particular , can we exploit them to predict the incoming attacks ( prediction capability is the core utility of the framework ) ?( iii ) what caused these properties ?thus , the present paper formulates a way of thinking for rigorously analyzing honeypot data .second , we demonstrate use of the framework by applying it to analyze a dataset , which is collected by a _low - interaction _ honeypot of 166 ip addresses for five periods of time ( 220 days cumulative ) .findings of the case study include : ( i ) stochastic cyber attack processes are not poisson , but instead can exhibit long - range dependence ( lrd ) a property that is not known to be exhibited by honeypot data until now .this finding has profound implications for modeling cyber attacks .( ii ) lrd can be exploited to predict the incoming attacks at least in terms of attack rate ( i.e. 
, number of attacks per time unit ) .this is especially true for network - level stochastic cyber attack processes .this shows the power of gray - box " prediction , where the prediction models accommodate the lrd property ( or other statistical properties that are identified ) .( iii ) although we can not precisely pin down the cause of the lrd exhibited by honeypot data , we manage to rule out two possible causes .we find that the cause of lrd exhibited by cyber attacks might be different from the cause of lrd exhibited by benign traffic ( see section [ sec : step-5 ] ) .third , the framework can be equally applied to analyze both _low - interaction _ and _ high - interaction _ honeypot data , while the latter contains richer information about attacks and allows even finer - resolution analysis .thus , we plan to make our statistical framework software code publicly available so that other researchers or even practitioners , who have ( for example ) high - interaction honeypot data that often can not be shared with third parties , can analyze their data without learning the advanced statistic skills .the paper is organized as follows .section [ sec : preliminaries ] briefly reviews some statistical preliminaries including prediction accuracy measures , while some detailed statistical techniques are deferred to the appendix .section [ sec : concept - and - framework ] describes the framework . section [ sec : case - study ] discusses the case study and its limitations .section [ sec : total - discussion ] discusses the limitation of the case study ( which is imposed by the specific dataset ) and the usefulness of the framework in a broader context .section [ sec : related - work ] discusses related prior work .section [ sec : conclusion ] concludes the paper with future research directions .a stationary time sequence , which instantiates a stochastic cyber attack process , is said to possess lrd if its autocorrelation function for , where is called lag " , is a slowly varying function meaning that for all .intuitively , lrd says that a stochastic process exhibits persistent correlations , namely that the rate of autocorrelation decays slowly ( i.e. , slower than an exponential decay ) .quantitatively speaking , the degree of lrd is expressed by hurst parameter ( h ) , which is related to the parameter in eq . as .this means that for lrd , we have and the degree of lrd increases as . in the appendix , we briefly review six popular hurst - estimation methods that are used in this paper .since is necessary but not sufficient for lrd , we need to eliminate the so - called spurious lrd " as we focus on the lrd property in this paper .spurious lrd can be caused by non - stationarity , or more specifically caused by ( i ) short - range dependent time series with change points in the mean or ( ii ) slowly varying trends with random noise .we eliminate spurious lrd processes by testing the null hypothesis ( denoted by ) that a given time series is a stationary lrd process against the alternative hypothesis ( denoted by ) that it is affected by change points or a smoothly varying trend .one test is for : where is a stationary short - memory process , , is a bernoulli random variable , and is a white ( i.e. , gaussian ) noise process .the other alternative is : where is as in the previous test , ] , where the average is taken over the six hurst estimation methods .moreover , none of the 153 attacker - level processes exhibit spurious lrd . 
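as an illustration of how the hurst parameter of a per - unit - time attack count series can be estimated , the sketch below implements the aggregated - variance method , one common estimator that exploits the scaling var ( x^(m) ) ~ m^(2h - 2) of the m - block means . it is a generic illustration rather than a reproduction of the six estimators reviewed in the appendix , and the poisson input series is only a stand - in for real honeypot counts .

```python
# illustrative sketch: aggregated-variance estimator of the hurst parameter h.
# for an lrd series the variance of the m-block means scales as m**(2h - 2),
# so h is recovered from the slope of a log-log regression.
import numpy as np

def hurst_aggregated_variance(x, min_block=4, n_sizes=20):
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(min_block), np.log10(n // 10), n_sizes).astype(int))
    variances = []
    for m in sizes:
        n_blocks = n // m
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(block_means.var())
    slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0                         # slope = 2h - 2

# stand-in data: i.i.d. poisson counts have no lrd, so the estimate should be close to 0.5
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=20000)
print("estimated h for i.i.d. poisson counts:", round(hurst_aggregated_variance(counts), 3))
```

applied to real attack counts , an estimate well above 0.5 across several such estimators ( together with the stationarity tests above ) would be consistent with the lrd reported in this paper .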
using period v as another example , we observe that all 166 attacker - level attack processes have average hurst parameter ] if .the pot method states that if are heavy - tailed data , then ] .zhenxin zhan is a phd candidate in the department of computer science , university of texas at san antonio .he received m.s .degree in computer science from the huazhong university of science and technology , china , in 2008 .his primary research interests are in cyber attack analysis and detection .maochao xu received his ph.d . in statistics from portland state university in 2010 .he is an assistant professor of mathematics at the illinois state university .his research interests include applied statistics , extreme value theory , cyber security , and risk analysis in actuary and insurance .he currently serves as an associate editor for communications in statistics .shouhuai xu is an associate professor in the department of computer science , university of texas at san antonio .his research interests include cryptography and cybersecurity modeling & analysis .he earned his phd in computer science from fudan university , china .more information about his research can be found at www.cs.utsa.edu/~shxu .
|
rigorously characterizing the statistical properties of cyber attacks is an important problem . in this paper , we propose the _ first _ statistical framework for rigorously analyzing honeypot - captured cyber attack data . the framework is built on the novel concept of _ stochastic cyber attack process _ , a new kind of mathematical object for describing cyber attacks . to demonstrate the use of the framework , we apply it to analyze a low - interaction honeypot dataset , while noting that the framework can be equally applied to analyze high - interaction honeypot data that contains richer information about the attacks . the case study finds , for the first time , that long - range dependence ( lrd ) is exhibited by honeypot - captured cyber attacks . the case study confirms that by exploiting the statistical properties ( lrd in this case ) , it is feasible to predict cyber attacks ( at least in terms of attack rate ) with good accuracy . this kind of prediction capability would provide sufficient early - warning time for defenders to adjust their defense configurations or resource allocations . the idea of `` gray - box '' ( rather than `` black - box '' ) prediction is central to the utility of the statistical framework , and represents a significant step towards ultimately understanding ( the degree of ) the _ predictability _ of cyber attacks . keywords : cyber security , cyber attacks , stochastic cyber attack process , statistical properties , long - range dependence ( lrd ) , cyber attack prediction
|
when drawing up genetic trees of languages , it is sometimes useful to quantify the degree of relationship between them .mathematical approaches along these lines have been pursued for some time now is an excellent review of some important techniques ., in fact , attempts to address the issue central to this paper that of obtaining distance measures between related chinese dialects . however , he does this at a lexical level by using karl pearson s tetrachoric correlation coefficient on 905 words from a lexical dictionary .this paper takes a novel approach to this problem by pioneering the use of phonological data to find dissimilarity measures , as opposed to lexical ( which has been used most frequently up till now ) , semantic or syntactic data .an argument can also be made that phonetic or phonological dissimilarity measures , being the least abstract of all , could give the most realistic results .unfortunately , studies in this direction have been relatively rare .two such works which should be mentioned are and , both of which are , however , constrained by the use of lexicostatistical methodology . in fairness to existing methods , it must be noted that many other existing methods for obtaining dissimilarity measures are in fact applicable to non - lexical data for deriving non - lexical measures . in practice , though , they have been constrained by a preoccupation with the lexicon as well as by the unavailability of phonological data .hopefully , the phonological data developed in this project should provide fresh input to those methods and revive their application to the problem area in future research .the data we use to illustrate our ideas are two phonological histories taken from the field of chinese linguistics .one is an account of the modern beijing ( mb ) dialect from an earlier stage of chinese , referred to as middle chinese , and published as ; the other is an account of the modern cantonese ( mc ) dialect also from middle chinese , published as chen and newman ( 1984a , 1984b and 1985 ) .these should be consulted for further explanation of the diachronic rules and their relative chronology as well as for an explanation of the rule labels used in this paper . for brevity, we will refer to the former as chen76 and the latter as cn84 in subsequent sections .we would now like to draw attention to five features of these accounts which make them ideal for the purpose at hand : 1 .the accounts are relatively explicit in their expositions .each account assumes middle chinese reconstructions which are phonetically explicit , states each rule in a formal style , and defines the ordering relationships which hold between the rules .this degree of comprehensiveness and explicitness in writing the history of a language is relatively rare .it is even rarer to have accounts of two related dialects described in a similarly explicit way . 
obviously , when it comes to translating historical accounts into phonological derivations , the more explicit the original account , the more readily one can arrive at the derivations .the two accounts assume identical reconstructions for the middle chinese forms , which of course is crucial in any meaningful comparison of the two dialects .not surprisingly , given the existence of sinology as an established field and one with a history going back well over a hundred years , there are many conflicting proposals about middle chinese and its pronunciation .decisions about the forms of middle chinese go hand in hand , necessarily , with corresponding decisions about the historical rules which lead from those forms to modern - day reflexes .one can not easily compare competing historical accounts if they assume different reconstructed forms as their starting points .see chen76 for a full description and justification of the middle chinese reconstructions used in these accounts .the two accounts are couched in terms of one phonological framework .this , too , is a highly desirable feature when it comes to making comparisons between the sets of rules involved in each account .the framework could be described as a somehwat `` relaxed '' version of spe .for example , the accounts make use of orthodox spe features alongside others where it was thought appropriate ( e.g. [ + /-labial ] , [ + /-acute ] ) .phonotactic conditions are utilized as a way of triggering certain phonological changes , alongside more conventional rule statements .the accounts purport to describe the phonological histories of a single database of chinese characters and their readings in modern dialects .this is a substantial database containing about 2,700 chinese characters and it is the readings of these characters in two of the dialects modern beijing and modern cantonese which are the outputs of the rule derivations in the two accounts .the accounts themselves are published in an easily available journal , _ the journal of chinese linguistics _ , which allows readers to scrutinize the original discussion and rule statements .the features alluded to in points 15 make these two accounts uniquely suited to testing out formal hypotheses relating to historical phonology .the historical account of modern beijing / modern cantonese is construed as a set of derivations .the input to a derivation is a reconstructed middle chinese form ; the input is subjected to a battery of ( ordered ) phonological rules ; and the output of the derivation is the reflex in the modern dialect .the mechanistic model we have used to represent diachronic phonological derivations is that of probabilistic finite state automata ( pfsa ) .these are state determined machines which have stochastic transition functions .the derivation of each word in mb or mc from middle chinese consists of a sequence of diachronic rules .these rule sequences for each of the approximately 2700 words are used to construct our pfsa .node 0 of the pfsa corresponds to the reconstructed form of the word in middle chinese .arcs leading out of states in the pfsa represent particular rules that were applied to a form at that state , transforming it into a new intermediate form .a transition on a delimiter symbol , which always returns to state 0 , signifies the end of a derivation process whereby the final form in the daughter language has been arrived at .the weightings on the arcs represent the number of times that particular arc was traversed in processing the entire corpus of 
words .the complete pfsa then represents the phonological complexity of the derivation process from middle chinese into one of the modern dialects .if this is the case , then the length of the minimal description of the pfsa would be indicative of the distance between the parent and daughter languages .there are two levels at which the diachronic complexity can be measured .the first is of the canonical pfsa , which is a trie encoding of the rules .this is the length of the diachronic phonological hypothesis accounting for the given dataset .the second is of a minimised version of the canonical machine .our minimisation is performed initially using the sk - strings method of and then reducing the resultant automaton further with a beam search heuristic .the sk - strings method constructs a non - deterministic finite state automaton from its canonical version by successively merging states that are indistinguishable for the top s% of their most probable output strings limited to a length of symbols . both _s _ and _ k _ are variable parameters that can be set when starting program execution . in this paper ,the reduced automata are the best ones that could be inferred using any value of string size ( ) from 1 to 10 and any value of the agreement percentage ( ) from 1 to 100 .the beam search method reduces the pfsa by searching recursively through the best descendants of the current pfsa where a descendant is defined to be the result of merging any two nodes in the parent pfsa .the variable parameter is called the beam size and determines the exhaustiveness of the search . in this paper , was set to 200 , which was the maximum the sun sparcserver 1000 with 256 mb of main memory could tolerate .the final resultant pfsa , minimised thus is , strictly speaking , a generalisation of the proposed phonology .its size is not really indicative of the complexity of the original hypothesis , but it serves to bring to light important patterns which repeat themselves in the data .the minimisation , in effect , forms additional diachronic rules and highlights regular patterns to a linguist .the size of this structure is also given in our results to show the effect of further generalisation to the linguistic hypothesis .a final point needs to be made regarding the motivation for the additional sophistication embodied in this method as compared to , say , a more simplistic phonological approach like a distance measure based on a simple summation of the number of proposed rules .our method not only gives a measure dependent on the number of rules , but also on the inter - relationship between them , or the regularity present in the whole phonology .a lower value indicates the presence of greater regularity in the derivation process . as a case in point, we may look at two closely related dialects , which have the same number of rules in their phonology from a common parent. 
it may be the case that one has diverged more by losing more of its original structure .as in the method of internal reconstruction , if we assume that the complexity of a language increases with time due to the presence of residual forms , the pfsa derived for the more distant language will have a greater complexity than the other .the derivations that were used in constructing the pfsa were traced out individually for each of the 2714 forms and entered into a spreadsheet for further processing .the relative chronologies ( rc ) of the diachronic rules given in chen76 and cn84 propose rule orderings based on bleeding and feeding relationships between rules .we have tried to be as consistent as possible to the rc proposed in chen76 and cn84 .for the most part , we view violations to the rc as exceptions to their hypothesis .consistency with the rc proposed in chen76 and cn84 has been maintained as far as possible .for the most part , violations to them are viewed as serious exceptions . thus if rule a is ordered before rule b in the rc , but is required to apply after rule b in a specific instance under consideration , it is made an exceptional application of rule a , denoted by `` [ a ] '' .such exceptional rules are considered distinct from their normal forms .the sequence of rules deriving beijing _ tou _ from middle chinese _ to _ ( `` all '' ) , for example , is given as `` t1-split : raise - u : diphthong - u : chamel : '' .however , `` diphthong - u '' is ordered before `` raise - u '' in the rc . the earlier rule in the rcis thus made an exceptional application and the rule sequence is given instead as `` t1-split : raise - u:[diphthong - u]:chamel : '' .there are also some exceptional phonological changes not accounted for by cn84 or chen76 . in these cases, we form a new rule representing the change that took place , denote it in square brackets to show its exceptional status .related exceptions are grouped together as a single exceptional rule .for example ,tone-4 in middle chinese only changes to tone-1a or tone-2 in beijing when the form has a voiceless initial .however , for the middle chinese form _ niat _( `` pinch with fingers '' ) in tone-4 , the corresponding beijing form is _ nie _ in tone-1a .since the n - initial is voiced , the t4-tripart rule is considered to apply exceptionally .the complete rule sequence is thus denoted by `` raise - i : apocope : chamel:[t4 ] : '' where the `` [ t4 ] '' exceptional rule covers cases when tone-4 in smc unexpectedly changed into tone-1a or tone-2 in beijing in the absence of a voiceless initial .it also needs to be mentioned that there are a few cases where an environment for the application of a rule might exist , but the rule itself may not apply although it is required to by the linguistic hypothesis .this would constitute an exception again .the details of how to handle this situation more accurately are left as a topic for future work , but we try to account for it here by applying a special rule [ ! a ] where the ` ! ' is meant to indicate that the rule a did nt apply when it ought to have . as an example, we may consider the derivation of modern cantonese _hap_(tone 4a ) from middle chinese _k^h^ap_(tone 4 ) ( `` exactly '' ) . 
the sequence of rules deriving the mc form is `` t4-split : spirant : x - weak : '' .however , since the environment is appropriate ( voiceless initial ) for the application of a further rule , ac - split , after t4-split had applied , the non - application of this additional rule is specified as an exception . thus ,`` t4-split : spirant : x - weak:[!ac - split ] : '' is the actual rule sequence used . in general , the following conventions in representing and treating exceptions have been followed as far as possible : exceptional rules are always denoted in square brackets .they are considered excluded from the rc and thus are consistently ordered at the end of the rest of the derivation process wherever possible .a final detail concerns the status of allophonic changes in the phonology .the derivation process is actually two - stage , comprising a diachronic phase during which phonological changes take place and a synchronic phase during which allophonic changes are automatically applied .changes caused by cantonese or beijing phonotactic constraints ( pcs ) are treated as allophonic rules and fall into the synchronic category , whereas pcs applying to earlier forms are treated in line with the regular diachronic rules which chen76 calls p - rules .a minor problem presents itself when it comes to making a clear - cut separation between the historical rules proper and the synchronic allophonic rules . in chen76 and cn84 ,they are not really considered part of the historical derivation process .yet it was found that the environment for the application of a diachronic rule is sometimes produced by an allophonic rule .such feeding relationships between allophonic and diachronic rules make the classification of those allophonic rules difficult .the only rule considered allophonic in beijing is the * chamel pc , this being a rule which determines the exact qualities of mb vowels . for cantonese ,cn84 has included two allophonic rules within its rc under bleeding and feeding relationships with p - rules .these are the break - c and y - fuse rules , both of which concern vocalic detail . in these cases ,every instance of their application within the diachronic phonology has been treated as an exception , effectively elevating these exceptions to the status of diachronic rules .in other cases , as with other allophonic rules , they are always ordered after all the diachronic rules . since the problem regarding the status of allophonic rules in general is properly in the domain of historical linguists , it is beyond the scope of this work .it was thus decided to provide two complexity measures one including allophonic detail and one excluding all allophonic detail not required for the derivation process .the minimum message length ( mml ) principle of is used to compute the complexity of the pfsa . for brevity, we will henceforth call the minimum message length of pfsa as the mml of pfsa or where the context serves to disambiguate , simply mml . in the context of data transmission ,the mml of a set of symbols is the minimum number of bits needed to transmit a static model together with the data symbols given this model _ a priori_. 
in the context of pfsa ,the mml is a sum of : * the length of encoding a description of the proposed machine * the length of encoding the dataset assuming it was emitted by the proposed machine the following formula is used for the purpose of computing the mml : where is the number of states in the pfsa , is the number of times the state is visited , is the cardinality of the alphabet including the delimiter symbol , the frequency of the arc from the state , is the number of different arcs from the state and is the number of different arcs on non - delimiter symbols from the state .the logs are to the base 2 and the mml is in bits . the mml formula given above assumes a non - uniform prior on the distribution of outgoing arcs from a given state .this contrasts with the mdl criterion due to which recommends the usage of uniform priors .the specific prior used in the specification of is , i.e. the probability that a state has outgoing arcs is .thus is directly specified in the formula using just bits and the rest of the structure specification assumes this .it is also assumed that targets of transitions on delimiter symbols return to the start state ( state 0 for example ) and thus do nt have to be specified .the formula is a modification for non - deterministic automata of the formula in where it is stated with two typographical errors ( the factorials in the numerators are absent ) .it is itself a correction ( through personal communication ) of the formula in which follows on from work in numerical taxonomy that applied the mml principle to derive information measures for classification .the results of our analysis are given in tables [ tbl : canon - mml ] ( for canonical pfsa ) and [ tbl : reduced - mml ] ( for reduced pfsa ) .row 1 represents pfsa which have only diachronic detail in them and row 2 represents pfsa which do not distinguish between diachronic and allophonic detail .column 1 represents the mml of the pfsa derived for modern cantonese and and column 2 represents the mml of pfsa for modern beijing . as mentioned in section [ sec : mod ] , smaller values of the mml reflect a greater regularity in the structure ..mmls for the canonical pfsa for middle chinese to modern cantonese and modern beijing respectively [ cols="<,<,<",options="header " , ] the canonical pfsa are too large and complex to be printed on a4 paper using viewable type . however , it is possible to trim off some of the low frequency arcs from the reduced pfsa to alleviate the problem of presenting them graphically .thus the reduced pfsa for modern beijing and modern cantonese are presented in figures [ fig : mand - opfsa ] and [ fig : cant - opfsa ] at the end of this paper , but arcs with a frequency less than 10 have been pruned from them . since several arcs have been pruned , the pfsa may not make complete sense as some nodes may have outgoing transitions without incoming ones and vice - versa .there is further a small amount of overprinting .they are solely for the purposes of visualisation of the end - results and not meant to serve any other useful purpose .the arc frequencies are indicated in superscript font above the symbol , except when there is more than one symbol on an arc , in which case the frequencies are denoted by the superscript marker `` '' .exclamation marks ( `` ! 
'' ) indicate arcs on delimiter symbols to state 0 from the state they superscript .their superscripts represent the frequency .superficially , the pfsa may seem to resemble the graphical representation of the relative chronologies in chen76 and cn84 , but in fact they are more significant .they represent the actual sequences of rules used in deriving the forms rather than just the ordering relation among them .the frequencies on the arcs also give an idea of how many times a particular rule was applied to a word at a certain stage of its derivation process .certain rules that rarely apply may not show up in the diagram , but that is because arcs representing them have been pruned .the mml computation process , however , accounted for those as well .the complete data corpus , an explanation of the various exceptions to rules and the programs for constructing and reducing pfsa are available from the authors .the results obtained from the mmls of canonical machines show that there is a greater complexity in the diachronic phonology of modern beijing than there is in modern cantonese .these complexity measures may be construed as measures of distances between the languages and their ancestor . nevertheless we exercise caution in interpreting the results as such .the measures were obtained using just one of many reconstructions of middle chinese and one of many proposed diachronic phonologies .it is , of course , hypothetically possible that a simplistic reconstruction and an overly generalised phonology could give smaller complexity measures by resulting in less complex pfsa .one might argue that this wrongly indicates that the method of obtaining distances as described here points to the simplistic reconstruction as the better one .this problem arises partly because of the fact that the methodology outlined here assumes all linguistic hypotheses to be equally likely _ a - priori_. we note , however , that simplicity and descriptive economy are not the only grounds for preferring one linguistic hypothesis to another .many other factors are usually taken into consideration to ensure whether a reconstruction is linguistically viable .plausibility and elegance , knowledge of what kinds of linguistic changes are likely and what are unlikely , and in the case of chinese , insights of the `` chinese philological tradition '' are all used when deciding the viability of a linguistic reconstruction .thus , a final conclusion about the linguistic problem of subgrouping is still properly within the domain of historical linguists .this paper just provides a valuable tool to help quantify one of the important parameters that is used in their decision procedure .we make a further observation about the results that the complexity measures for the phonologies of modern beijing and modern cantonese are not immensely different from each other .interestingly also , while the mml of the canonical pfsa for modern beijing is greater than that for modern cantonese , the mml of the reduced pfsa for modern beijing is less than that for modern cantonese .while the differences might be within the margin of error in constructing the derivations and the pfsa , it is possible to speculate that the generalisation process has been able to discern more structure in the diachronic phonology of modern beijing than in modern cantonese . 
from a computational point of view, one could say that the scope for further generalisation of the diachronic rules is greater for modern cantonese than for modern beijing or that there is greater structure in the evolution of modern beijing from middle chinese than in the evolution of cantonese .one could perhaps claim that this is due to the extra liberty taken historically by current modern cantonese speakers to introduce changes into their language as compared to their mandarin speaking neighbours .but it would be nave to conclude so here .the study of the actual socio - cultural factors which would have resulted in this situation is beyond the scope of this paper .it is also no surprise that the mmls obtained for the two languages are not very different from each other although the difference is large enough to be statistically significant .bits ( however small is ) would point to an odds ratio of 1: that the larger pfsa is more complex than the smaller one .the explanation is not directly applicable in this case as we are comparing two different data sets and so further theoretical developments are necessary . ] indeed , this is to be expected as they are both contemporary and have descended from a common ancestor. we can expect more interesting results when deriving complexity measures for the phonologies of languages that are more widely separated in time and space .it is here that the method described in this paper can provide an effective tool for subgrouping .in this paper , we have provided an objective framework which will enable us to obtain distance measures between related languages . the method has been illustrated and the first step towards actually applying it for historical chinese linguistics has also been taken .it has been pointed out to us , though , that the methodology described in this paper could in fact be put to better use than in just deriving distance measures .the suggestion was that it should be possible , in principle , to use the method to choose between competing reconstructions of protolanguages as this tends to be a relatively more contentious area than subgrouping .it is indeed possible to use the method to do this we could retain the basic procedure , but shift the focus from studying two descendants of a common parent to studying two proposed parents of a common set of descendants . a protolanguage is usually postulated in conjunction with a set of diachronic rules that derive forms in the descendant languages .we could thus use the methodology described in this paper to derive a large number of forms in the descendant languages from each of the two competing protolanguages . since descriptive economy is one of the deciding factors in selecting historical linguistic hypotheses , the size of each body of derivations , suitably encoded in the form of automata , in conjunction with other linguistic considerationswill then give the plausibility of that reconstruction .further study of this line of approach is , however , left as a topic for future research .embleton , s. m. 1991 .mathematical methods of genetic classification . in s.l. lamb and e. d. mitchell , editors , _ sprung from some common source_. stanford university press , stanford , california , pages 365388 . georgeff , m. p. and c. s. wallace . 1984 . a general selection criterion for inductive inference . in tim oshea , editor , _ecai-84 : advances in artificial intelligence_. elsevier , north holland , dordrecht , pages 473481 .harms , r. t. 
1990 . synchronic rules and diachronic `` laws '' : the saussurean dichotomy reaffirmed . in e. c. polomé , editor , _ trends in linguistics : studies and monographs : 48_. mouton de gruyter , berlin , pages 313 - 322 . raman , anand v. and jon d. patrick . 1997 . the sk - strings method for inferring pfsa . in _ proceedings of the workshop on automata induction , grammatical inference and language acquisition at the 14th international conference on machine learning icml97 _ , page ( in press ) , nashville , tennessee .
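as a closing illustration of the pipeline described in this paper , the sketch below builds a canonical ( trie - shaped ) pfsa from a handful of rule sequences and scores it with a simple two - part code length ( structure bits plus data bits ) . the scoring function is a generic mdl / mml - style stand - in rather than the exact formula used above , and the rule sequences are invented for illustration , not taken from the chen76 or cn84 derivations .

```python
# illustrative sketch: build a canonical (trie-shaped) pfsa from rule sequences and
# score it with a simple two-part code length. the sequences and the cost formula are
# illustrative stand-ins, not the chen76/cn84 derivations or the paper's exact mml.
import math
from collections import defaultdict

sequences = [                                        # hypothetical derivations ending in the delimiter '$'
    ["t1-split", "raise-u", "chamel", "$"],
    ["t1-split", "raise-u", "diphthong-u", "chamel", "$"],
    ["t4-split", "spirant", "x-weak", "$"],
    ["t4-split", "spirant", "$"],
]

def build_canonical_pfsa(seqs):
    """trie encoding: state 0 is the start state; arcs on the delimiter return to state 0."""
    arcs = defaultdict(lambda: defaultdict(int))     # arcs[state][symbol] -> traversal frequency
    targets = {}                                     # (state, symbol) -> successor state
    n_states = 1
    for seq in seqs:
        state = 0
        for sym in seq:
            arcs[state][sym] += 1
            if sym == "$":
                state = 0
            elif (state, sym) in targets:
                state = targets[(state, sym)]
            else:
                targets[(state, sym)] = n_states
                state = n_states
                n_states += 1
    return arcs, n_states

def two_part_code_length(arcs, n_states, alphabet_size):
    """structure bits (symbol and target of each arc) plus data bits (which arc is taken at each visit)."""
    bits = 0.0
    for state, out in arcs.items():
        total = sum(out.values())
        for sym, freq in out.items():
            bits += math.log2(alphabet_size) + math.log2(n_states)   # describe the arc itself
            bits += -freq * math.log2(freq / total)                  # encode the traversals of this arc
    return bits

arcs, n_states = build_canonical_pfsa(sequences)
alphabet = {sym for seq in sequences for sym in seq}
print(f"states: {n_states}, code length: {two_part_code_length(arcs, n_states, len(alphabet)):.1f} bits")
```

the relative code lengths of two such machines , built from the beijing and the cantonese rule sequences respectively , would then play the role of the complexity measures reported in the tables above .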
|
this paper addresses the problem of deriving distance measures between parent and daughter languages with specific relevance to historical chinese phonology . the diachronic relationship between the languages is modelled as a probabilistic finite state automaton . the minimum message length principle is then employed to find the complexity of this structure . the idea is that this measure is representative of the amount of dissimilarity between the two languages .
|
classical food web models represent an idealization of real ecosystems that focuses on feeding relationships as the most important type of interaction and that considers populations as well mixed and homogeneous in space . typically , such models include nonlinear differential equations that capture the growth and loss terms of population dynamics , and a simple stochastic algorithm for generating network structures with realistic features , such as the niche model or the cascade model .they provide a static , mean field description , integrating the feeding relationships across the whole spatial extent of the system and ignoring temporal changes in the composition of the network due to species turnover . in order to go beyond mean - field models ,various approaches have been taken to include spatial structure or species turnover in food web models . if space has the structure of discrete habitats , one obtains `` networks of networks '' .the outer network represents the spatial landscapes consisting of several habitats , the connections between them representing possible routes for dispersal .a chain topology of habitats results for instance for a river with barrages , and a ring of habitats can occur along island shores .more complex spatial networks might represent archipelagos , or a system of waterbodies connected by streams and canals .the inner networks describe localized food webs on these habitats , the connections between species representing feeding relationships .the need to study such spatially extended food webs has been highlighted recently by several authors .most studies of spatial ecosystems concentrate on simple topologies of the inner network , such as food chains or small food web motifs of two , three or four species in space .so far , there exist few investigations of larger food webs in space , both empirical , and theoretical .moreover , all of the mentioned studies focus on spatial aspects under the assumption that the species composition is static . on the other hand ,studies addressing species turnover typically neglect spatial aspects . during the last years, several models were introduced that include evolutionary dynamics ( for references see next paragraph ) . on a time scalemuch slower than population dynamics , new species , which are modifications of existing species , are added to the system .they can be interpreted either as invaders from another , not explicitly considered spatial region , or as arising from a speciation process .population dynamics then determines which species are viable .in contrast to static models such as the niche model , the food web structure is not put in by hand , but emerges from the interplay between population dynamics and species addition .evolutionary food web models can therefore give insights into the conditions under which complex network structures can emerge and persist in face of ongoing species turnover .they are thus fundamentally different from species assembly models , which have been studied for a longer time and which are based on a fixed species pool from which species are added to a smaller habitat . 
in 2005 ,loeuille and loreau introduced the probably simplest successful evolutionary food web model .in contrast to other well - known evolutionary food web models , like for example the matching model or the webworld model , which describe a species by a vector of many abstract traits , a species in this model is specified only by its body mass .the feeding relationships are determined by differences in body mass .a version with gradual evolution was studied by brnnstrm et al in 2011 .ingram et al extended the model to include an evolving feeding range , and allhoff and drossel also considered a version with an evolving feeding center .these extensions make the model very similar to the evolving niche model , where the niche value can be equated with the logarithm of the body mass , and where also these three parameters are evolved .in contrast to the simpler model by loeuille and loreau , these models need additional ingredients that prevent evolution from running to extremes , such as adaptive foraging or restrictions on the possible trait values .recently , several authors emphasized that combining the spatial and the evolutionary perspective on ecosystems is essential for better understanding coexistence and diversity .it is well known that including a spatial dimension in evolutionary models enables the coexistence of species or strategies that would otherwise exclude each other .this is due to the formation of dynamical waves in which the competitors cyclically replace each other , or to the formation of local clusters that can not easily be invaded from outside .however , these studies are usually limited to two or three species . a recent study of a larger system was published in 2008 by loeuille and leibold , who investigated a metacommunity food web model with two plant and two consumer species on a patchy environment , where one of the plant species has evolving defense strategies .the authors demonstrated the emergence of morphs that could only exist in a metacommunity due to the presence of dispersal highlighting the fact that the combination of space and evolutionary processes yields important new insights . in this paper , we study the combined effect of space and evolution on food webs consisting of many species on up to four trophic levels .we use the model of loeuille and loreau , placing it on several habitats that might represent lakes , islands or a fragmented landscape , and that are coupled by migration .the results are `` evolutionary networks of networks '' . by varying migration rules ( undirected , directed , diffusive , adaptive , dependent on body mass ) , the time of migration onset ( at the beginning or after local food webs have evolved ) , and the number and properties of habitats ( 2 or 8 habitats , equivalent or differing with respect to simulation parameters ) , we investigate many different scenarios . with diffusive migration ,our results agree qualitatively with diversity - dispersal relationships from empirical studies and from other theoretical metacommunity studies .low migration rates lead to an increased diversity in the local habitats , and high migration rates lead to homogenization of habitats and hence to a decreased regional diversity . for a chain of eight habitats coupled by diffusive migration, we find that migration leads to equal biomasses in the habitats , even when the species composition of neighboring patches is very different . 
with adaptive migrationwe obtain networks that differ strongly in their species composition but that do not show increased local diversity .the model by loeuille and loreau includes population dynamics on the one hand and the introduction of new species via modification of existing species on the other .because such `` mutation '' events are very rare , population dynamics typically reaches an attractor before the introduction of a new species .thus , ecological and evolutionary time scales can be viewed as separate .population dynamics is based on the body mass of a species as its only key trait .species are sorted such that body mass increases with index number .production efficiency and mortality rate scale with body masses according to the allometric relations and .the population dynamics of species with biomass is given by with describing the rate with which predator consumes prey , and with describing the competition strength .the parameters , , , and are the integrated feeding rate , the preferred body mass difference between predator and prey , the width of the feeding niche , and the competition range .energy input into the system is provided by an external resource of `` body mass '' and total biomass , which is subject to the dynamical equation the terms represent constant input of inorganic nutrient , nutrient outflow , consumption by the basal species , and recycling of a proportion of the biomass loss due to mortality , competition and predation .starting from a single ancestor species of body mass , the food web is gradually built by including evolutionary dynamics in addition to the population dynamics .new species are introduced with a `` mutation '' rate of per unit mass and unit time .the new , `` mutant '' species has a body mass that deviates by at most 20 percent from the body mass of the `` parent '' species and is drawn randomly from the interval ] . hence , the additional migration term for species on habitat is we choose closed boundary conditions , i. e. we set and in equation [ eq : chain ] .+ the increased number of habitats leads to a significantly increased program runtime . to keep it within a reasonable limit , we decreased the value of the feeding range to ( only for this variant ) .this leads to better adapted , but fewer predators and hence to a decreased system size of approximately species per isolated habitat . *adaptive migration : + up to now , migration is diffusive and hence based on random movement .however , especially for higher developed species that can evaluate their current situation and possibly follow their prey or avoid competitors , this might be too simple . in this variant ,we go back to 2 equivalent habitats , but instead of diffusive migration , we analyze two versions of adaptive migration , where the migration rates of the species are dependent on their current growth rates . * * type i : a species emigrates from habitat if its population size in that habitat is currently decreasing , * * type ii : migration is directed into the habitat with better local conditions , + we varied the value of the proportionality factor over three orders of magnitude from to .( top line ) or ( bottom line ) starts at after the initial build - up of the networks .the resulting networks are shown in the top line of fig .[ fig : summary ] . 
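a minimal numerical sketch of the kind of two - habitat dynamics described in this section is given below . it is not the authors' implementation : the feeding kernel is assumed to be gaussian around the preferred body - mass difference , competition is assumed constant within the competition range , the allometric exponents and all parameter values are illustrative , nutrient recycling is omitted , and the evolutionary ( mutation ) step is left out , so only population dynamics with diffusive migration between two equivalent habitats is integrated .

```python
# illustrative sketch (not the authors' implementation): population dynamics of a few
# species with fixed body masses on two habitats coupled by diffusive migration.
# feeding kernel, competition kernel and all parameter values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

masses = np.array([2.0, 4.0, 6.0, 8.0])       # body masses of four illustrative species (resource at body mass 0)
g0, d, s = 1.0, 2.0, 1.0                      # integrated feeding rate, preferred body-mass difference, niche width
a0, comp_range = 0.3, 0.5                     # competition strength and competition range
I, e = 10.0, 0.1                              # nutrient input and outflow
rho = 1e-3                                    # diffusive migration rate
f = 0.3 * masses ** -0.25                     # production efficiency (assumed allometric scaling)
mort = 0.1 * masses ** -0.25                  # mortality rate (assumed allometric scaling)

def gamma(mi, mj):                            # rate at which a consumer of mass mi feeds on mass mj
    return g0 / (s * np.sqrt(2.0 * np.pi)) * np.exp(-((mi - mj - d) ** 2) / (2.0 * s ** 2))

n = len(masses)
G = np.array([[gamma(masses[i], masses[j]) for j in range(n)] for i in range(n)])
G0 = np.array([gamma(m, 0.0) for m in masses])                                  # feeding on the external resource
A = np.where(np.abs(masses[:, None] - masses[None, :]) < comp_range, a0, 0.0)   # competition kernel

def rhs(t, y):
    B = y.reshape(2, n + 1)                                          # habitat x (resource, species 1..n)
    dB = np.zeros_like(B)
    for h in (0, 1):
        R, N = B[h, 0], B[h, 1:]
        gain = f * (G @ N + G0 * R)                                  # growth from feeding on species and resource
        loss = mort + G.T @ N + A @ N                                # mortality, predation and competition losses
        dB[h, 1:] = N * (gain - loss) + rho * (B[1 - h, 1:] - N)     # plus diffusive exchange between habitats
        dB[h, 0] = I - e * R - R * np.sum(G0 * N) + rho * (B[1 - h, 0] - R)   # nutrient recycling omitted
    return dB.ravel()

y0 = np.concatenate([np.full(n + 1, 1.0), np.full(n + 1, 0.5)])      # different initial biomasses per habitat
sol = solve_ivp(rhs, (0.0, 500.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
print("final biomasses, habitat 1:", np.round(sol.y[: n + 1, -1], 3))
print("final biomasses, habitat 2:", np.round(sol.y[n + 1 :, -1], 3))
```

extending the sketch to a chain of habitats , or to the adaptive migration rules of type i and type ii , only requires replacing the migration term accordingly .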
] .the color of the frames around the networks indicates the two possible outcomes .light gray frames : networks with small additional populations ( outcome 1 ) .dark gray frames : similar / identical networks ( outcome 2 ) .network visualizations given in this paper are based on _ graph - tool _ ( http://graph-tool.skewed.de ) .[ fig : summary ] ] starting from a single ancestor species , the evolutionary model of loeuille and loreau goes first through a period of strong diversification , and then the network structure stabilizes and assumes a regular pattern .[ fig : ung_bm ] shows the body masses of all species occurring during two exemplary simulations in two habitats that are initially isolated .species with a body mass of consume species with a body mass of approximately and form the trophic level .these trophic levels are blurred for large values of the niche width ( not shown ) and clearly separated for small values . to avoid competition, species keep a minimum body mass difference of , allowing for more species on a trophic level when is smaller . if two species are so similar in body mass that they compete with each other ( e.g. parent and mutant species ) , only one of them survives .although the realizations in the two habitats are based on the same set of parameters , they show slightly different structures due to different sets of random numbers . at time , undirected diffusive migration between the two habitats according to equation ( [ eq : migration ] ) sets in .dependent on the migration rate , we identified two major outcomes , here and in the following marked by a light or dark gray frame : 1 . for small migration rates ( e.g. , see top line in fig .[ fig : ung_bm ] ) , migrants have small additional populations in the foreign habitat , leading to an approximately doubled number of species per habitat .the resulting network structures are thus combinations of the isolated networks . 2 .in case of a high migration rate ( e.g. , see bottom line in fig .[ fig : ung_bm ] ) , native species become displaced by invaders .the resulting networks are very similar or even identical in the two habitats . in both cases ,the outcome is reached soon after the onset of migration . since the immigrants arrive in a habitat where the network is already completely developed , every niche is already occupied and all immigrants have to compete with native species . if migration rate is small , the immigrants gain in biomass due to migration and feeding interactions becomes soon canceled by competition losses , and the immigrant populations stay small .as soon as migration is switched off , these small populations vanish again ( not shown ) . with a higher migration rate , some immigrants can establish themselves against their competitors and displace native species .again , all species ( invaders and natives ) tend to keep a body mass difference of to minimize their competition loss .since all species from one habitat and especially from one trophic level coevolved together , they are in this respect well matched to each other . as a consequence ,often complete levels are replaced . 
in fig .[ fig : summary ] , the resulting network structures of several simulation runs are shown .the first line corresponds to the simulations shown in fig .[ fig : ung_bm ] .the colors of the species represent their habitat of origin .white species are natives to habitat 1 and black species are natives to habitat 2 .if a black species migrates into habitat 1 and has a mutant there , this mutant is colored dark gray .light gray species have analogously originated in habitat 2 but are descendants of a white species from habitat 1 .we also analyzed scenarios where migration is only allowed in one direction or where migration starts at the beginning of the simulation .directed migration ( fig .[ fig : summary ] , line 2 ) leads to similar results for the immigration habitat as undirected migration .the network structure in the emigration habitat depends on the migration rate .the migration loss can be formally regarded as an increased mortality term for all species in habitat 2 . in case of very low migration rates ,this term is negligible and leaves the network structure of habitat 2 unchanged . in case of high migration rates , a significant amount of biomass leaves the emigration habitat per unit time , leading to the extinction of one or more species from the upper levels . if migration starts at the beginning of the simulation ( fig .[ fig : summary ] , line 3 ) , both resulting networks are identical and only either black and dark gray or white and light gray species occur .the first successful mutant that replaces its ancestor species in its home habitat is also able to migrate to the other habitat and displace also the ancestor s population there .every subsequent mutant is a descendant from this first mutant and finds identical conditions in both habitats , leading to identical networks .line 4 of fig .[ fig : summary ] shows results of directed migration during the whole simulation time .we observe a combination of the explained effects .all species are descendants of the first successful invader from habitat 2 and therefore either black or dark gray . for a low migration rate, we observe again small additional populations in habitat 1 ( outcome 1 ) . for a high migration rate ,the networks are mostly identical ( outcome 2 ) except for the top level . in order to gain a general overview of the influence of the migration rate , we varied the value of over several orders of magnitude and performed 48 simulation runs with .larger or smaller migration rates influence the time needed until the system reaches a new fixed point after a mutation event , but the resulting network structures do not provide any new insights besides the two explained outcomes .intermediate migration rates lead to a superposition of the two outcomes , where one trophic level contains additional small populations corresponding to outcome 1 and another trophic level is replaced by species from the other habitat corresponding to outcome 2 .we observed essentially the same effect for body - mass dependent migration rates , with the migration rate being proportional to the body mass ( another 120 realizations , data not shown ) .since now species on higher levels had larger migration rates , they were more often replaced , while lower levels showed more often additional populations .however , since in this model all body masses are of the same order of magnitude , body - mass effects are only minor . 
[ figure caption : each data point represents the average and standard deviation of 10 simulation runs with different random numbers . isolated habitats show the same results as realizations with undirected migration starting at the beginning of the simulation ( line 3 ) . ]
[ figure caption : different scenarios and for values of the migration rate . each data point represents the average and standard deviation of 10 realizations with different random numbers . isolated habitats show the same distribution as realizations with undirected migration starting at the beginning of the simulation ( line 3 ) . ]
[ figure caption : different scenarios and for values of the migration rate . each column represents an average over 10 realizations with different random numbers . in all realizations , the resource had by far the biggest population ( ) . isolated habitats show the same distribution as realizations with undirected migration starting at the beginning of the simulation ( line 3 ) . ]
to understand the transition from small to larger migration rates in more detail , we performed more than 400 simulation runs with . in fig . [ fig : sb ] we show that the transition between the described two outcomes is smooth and covers approximately two decades of migration strength . with undirected migration starting after the initial build - up ( line 1 ) , each species has populations in both habitats , so that the number of species per habitat is identical . in case of a high migration rate , not only the species number , but the whole networks are identical ( outcome 2 ) , whereas in case of a rather small migration rate , the network size is approximately doubled due to small additional populations ( outcome 1 ) . however , even if the species composition strongly depends on the migration rate , the total biomasses of the resources and the total biomass of the species do not ( see top line of fig . [ fig : sb ] ) . they are even nearly identical for both habitats and show very small variations across the realizations that differ only in the set of random numbers . the situation is different with directed migration ( fig . [ fig : sb ] , line 2 ) . the number of species in habitat 1 shows a similar smooth transition from many additional populations to the displacement of native species , whereas habitat 2 accommodates a rather constant number of species . only in the case of high migration rates do species from upper trophic levels become extinct due to the migration losses , as explained above . this leads to a non - monotonic dependence of the biomasses on the migration rate , according to the following top - down trophic cascade : for , all species in the fourth trophic level of habitat 2 are extinct due to migration losses , so that species in the third level experience no predation pressure . hence , even despite their own migration losses , they can have big populations and exert a high pressure on the second level . due to the subsequent reduction of population sizes in the second level , species in the first level also experience a reduced predation pressure , have big populations and exert a high pressure on the external resource , which is observed as a reduced resource biomass . for even higher migration rates , also species from the third level in habitat 1 become extinct due to migration losses , the total biomass of the species decreases and the resource recovers . with a migration start at the beginning of the simulation and undirected migration ( fig . [ fig : sb ] , line 3 ) , we observe identical networks , as explained above .
the resultsdo not depend on the migration rate , and are identical with results from simulations of isolated habitats ( not shown ) . if directed migration is active during the build - up of the networks ( fig .[ fig : sb ] , line 4 ) , we observe in principle the same effects as when migration sets in after the initial build - up . however , some immigrants occasionally find an empty niche as long as the build - up is not yet completed .they do not have to compete with natives and can establish themselves , reducing the number of additional small populations . note that this figure does not show the fact that all species are black or dark gray ( see line 4 of fig .[ fig : summary ] ) . the number of species per habitat can also be interpreted as local diversity . in case of undirected migration , local and regional diversityare identical , since every species has populations on both habitats . in case of directed migration ,the local diversities differ , and the local diversity of habitat 1 is again the regional diversity .hence , we observe that low migration rates can lead to an increased local diversity , whereas high migration rates lead to a decreased regional diversity .the distribution of the species per trophic level reveals more details about the transition from small additional populations to the displacement of native species , as shown in fig . [fig : stl ] . here , light gray represents small migration rates and dark gray represents high migration rates .higher trophic levels show the transition at smaller values of the migration rate than lower trophic levels : in case of , nearly no additional populations were observed on the third level , but many on the second and even more on the first level .we ascribe this to the fact that in this model species on higher trophic levels have smaller populations than species on lower trophic levels , and therefore exert a lower competition pressure on the invaders .however , as also observed in fig .[ fig : sb ] , the error bars are biggest for intermediate migration rates , indicating that dependent on the random numbers single simulations might deviate from this trend .this is due to the above mentioned fact that whole trophic levels ( not only single species ) show either outcome 1 or outcome 2 , which leads to an increased number of possible network structures .the population sizes of the invaders of outcome 1 depend on the migration strength , see fig .[ fig : biomass ] . in those scenarios that show the discussed transition between the outcomes and for low migration rates ( light gray columns , ) , we observe a bimodal frequency distribution of population sizes .in addition to the population sizes that also occur in outcome 2 for higher migrations rates ( black columns , ) , also peaks at smaller population sizes occur , which correspond to the additional populations of outcome 1 . for higher migration rates ,these peaks shrink and shift to larger populations sizes , in agreement with the smooth transition shown in the previous figures .( left ) or in the competition range ( right ) .migration is absent ( top ) or starts at with ( middle ) or ( bottom ) . for more explanationssee caption of fig .[ fig : summary ] .[ fig : comp ] ] fig .[ fig : comp ] shows example networks resulting from simulations with different competition parameters . 
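before turning to the effect of the competition parameters , the local and regional diversities referred to above can be read off a biomass table with a simple helper ; the presence threshold used to separate established populations from the small migration - sustained ones is a choice made here , not a quantity fixed by the model :

```python
import numpy as np

def diversities(biomass, threshold=1e-3):
    """biomass: array of shape (n_habitats, n_species) with per-habitat
    population sizes.  A species counts as present in a habitat when its
    population exceeds `threshold`."""
    present = biomass > threshold
    local = present.sum(axis=1)               # species per habitat
    regional = np.any(present, axis=0).sum()  # species present anywhere
    return local, regional

# toy example: 2 habitats, 6 species; species 3-5 only have small
# migration-sustained populations in habitat 0 and vice versa
B = np.array([[2.0, 1.5, 1.2, 0.004, 0.003, 0.002],
              [0.004, 0.002, 0.003, 1.8, 1.4, 1.1]])
print(diversities(B, threshold=1e-3))  # counts only established populations
print(diversities(B, threshold=1e-6))  # also counts the small additional ones
```

with the lower threshold the local diversity is roughly doubled while the regional diversity is unchanged , which is the bookkeeping behind the statements above .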
due tothe different environments on the two habitats , different network structures emerge in the two patches if they are uncoupled ( see upper line in fig .[ fig : comp ] ) .a decreased competition strength leads to bigger , but fewer populations ( left ) and an increased competition range leads to more competitive exclusion and therefore to smaller networks ( right ) .when coupled by weak migration , the resulting networks look like a superposition of the isolated networks , see middle line in fig .[ fig : comp ] .each species exists in both habitats resulting in an increased number of populations .however , the population of one species is large in one habitat and small in the other , like the additional populations in outcome 1 . counting only the big populations, one recognizes the network structures of the isolated habitats .a stronger migration link ( bottom line ) leads to identical networks consistent with outcome 2 .similar network structures but with mixed colors can be obtained when migration starts after the networks have developed ( not shown ) . .[ fig : chain ] ] .black ( white ) vertices : species that originated in habitat 4 ( 5 ) .dark ( light ) gray vertices : species that originated further on the left ( right ) . ] as an example of an extended spatial landscapes , we discuss a chain of habitats . in the left panel of fig .[ fig : chain ] , the biomass distributions of eight isolated habitats are shown . due to the randomly chosen mutant body masses ,each network consists of a unique species composition .some compositions seem to be more favorable than others in the sense that the total amount of biomass ( of species and resources ) is larger .after all eight networks have fully emerged , migration is switched on . in the case of weak migration ( middle panel )the situation is again similar to outcome 1 , where the immigrating species can not establish themselves .their populations stay much smaller than the natives and survive only due to the continuous migration into the habitat .these additional populations are too small to be visible in fig .[ fig : chain ] , but are obvious in the example networks of habitat 4 and 5 shown in fig .[ fig : chainnetworks ] .the color code in this figure is different to the previous figures .black species with big populations in their native habitat 4 have small populations in their neighboring habitat 5 and vice versa . also shownare small populations from the habitats further on the left in dark gray or further on the right in light gray .these additional populations have a major effect on the system concerning the biomass distributions .even if the invaders can not establish themselves , they provide a continuous energy flow between the habitats . as a consequence ,all biomass distributions equalize .this result does not depend on the recycling loop in equation ( [ eq : recycling ] ) , but occurs also when the recycling loop is switched off . stronger migration ( right panel of fig .[ fig : chain ] ) leads again to outcome 2 .immigrating species can establish themselves and displace natives .transiently very large networks occur while all species migrate in both directions and are present in many habitats at once .then , by and by , the most favorable species composition ( i.e. 
, the one with the largest total biomass ) displaces others and the resulting networks are identical .however , this process takes much longer as with only two habitats .we also discussed the same scenario of 8 habitats with a migration start at time ( not shown ) .then , all networks coevolve .if the mutation rate is still so small that a successfully mutant can spread over all habitats before a new mutant emerges , identical networks emerge , in consistency with the corresponding scenario of undirected diffusive migration between two habitats ( line 3 in fig .[ fig : summary ] ) . just as for the case of 2 patches, we made sure that these results are generic by performing several simulation runs and by using more values of the migration rates . .the migration rate of a species is proportional to the difference of its growth rates in the two habitats ( type 2 ) , here with a factor . for more explanationssee caption of fig .[ fig : summary ] .[ fig : adaptive ] ] finally , we discuss two types of adaptive behavior . witha migration start after the initial build - up of the networks , both types show in principle the same behavior . since the system is most of the time near a fixed point with zero growth rate for all species , migration can only occur rarely , when the system is disturbed by a successful mutant .if the mutant replaces another species , this species has a negative growth rate and hence can migrate to the other habitat , where it can possibly replace an already existent species and establish itself .this makes the two habitats different from each other and can lead to completely different species compositions as shown in the example networks in fig .[ fig : adaptive ] .however , the general structure of the networks remains unchanged and is the same as that of isolated networks .the probability that a replaced species can successfully invade the other habitat increases with the proportionality factor of the migration rate , in consistency with the previously discussed cases . with few species invade the other habitat successfully and with even smaller values of or the networks are basically isolated with very rare successful invasions .the replacement of one species by a similar mutant or invader is only a small perturbation of the whole system .every prey or predator of the dying species experiences small fluctuations in its population size during the replacement .consequently , other species can also migrate into the other habitat .however , additional mini - populations like in outcome 1 can not occur , since these disturbances and hence the migration rates are very small and occur transiently .thus , these species can not establish themselves on the other habitat and do not occur in figure [ fig : adaptive ] . with a migration start at the beginning of the simulation ,two mostly identical networks emerge ( not shown ) .they differ at most by one single species , which just emerged in one habitat with a monotonously increasing population size and therefore did not have the opportunity to migrate .this is consistent with the corresponding scenario of diffusive migration between two habitats ( line 3 in fig .[ fig : summary ] ) .we have studied an evolutionary food web model on several habitats .locally , species emerge , interact and go extinct according to the evolutionary food web model of loeuille and loreau . 
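before the summary continues , one plausible reading of the ` type 2 ' adaptive dispersal rule used above ( a migration rate proportional to the difference of a species ' growth rates in the two habitats ) is sketched below ; the exact functional form and the prefactor are assumptions made here :

```python
import numpy as np

def adaptive_flux(g1, g2, B1, B2, nu=1e-8):
    """One possible reading of the 'type 2' rule: the per-capita emigration
    rate out of a habitat grows with how much better the species would do in
    the other habitat.  g1, g2 are per-capita growth rates, B1, B2 population
    sizes, nu the proportionality factor.  Returns the net flux into habitat 1."""
    to_1 = nu * np.maximum(g1 - g2, 0.0) * B2   # emigration out of habitat 2
    to_2 = nu * np.maximum(g2 - g1, 0.0) * B1   # emigration out of habitat 1
    return to_1 - to_2

print(adaptive_flux(g1=0.0, g2=0.0, B1=1.0, B2=1.0))     # 0.0 at a fixed point
print(adaptive_flux(g1=0.05, g2=-0.02, B1=1.0, B2=1.0))  # flux into habitat 1
```

near a fixed point all growth rates vanish , so this flux is only transiently non - zero after a successful mutation , in line with the behaviour described above .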
additionally , they migrate between habitats according to diffusive or adaptive dispersal .the computer simulations start with one species in each habitat .migration may occur from the beginning , so that the food webs in the different habitats coevolve , or it may occur later after the local food webs have become established . usually , our computer simulations show one of two frequent outcomes : either the local food webs of the different habitats become mostly identical and have a similar structure and size as in an isolated system ( outcome 2 ) , or the local food webs differ with respect to the main species in each trophic layer but include small populations from the neighboring habitats that are sustained by ongoing immigration but can not displace the native occupant of a niche ( outcome 1 ) . which of the two outcomes occurs depends mainly on the migration rate and the time of migration onset .for intermediate migration rates , the two outcomes can combine , with part of the trophic levels being identical in the two patches , and with other levels being different and showing small populations from the other patch .furthermore , we find that even for outcome 1 the total biomasses in the habitats become very similar to each other . to our knowledge , this result has not yet been observed in other studies .when the habitats are not equivalent because migration is directed or because the model parameters are different , the food webs can differ with respect to species number and population sizes .when migration occurs only under certain circumstances ( for instance when the growth rate of a population is negative ) , the species composition of neighboring habitats can become very different .generally , the food webs in the habitats show the regular structure that is characteristic of the model by loeuille and loreau .they are clearly structured into distinct levels , and each level consists of niches separated by a body mass difference that is equal to the competition range . after the initial stage of strong diversification ,species turnover becomes very slow , new mutants are only rarely successful and very similar to their parent species , and the food web structure is very stable .the model can therefore not reproduce the sometimes catastrophic effect of species invasions into natural ecosystems , where alien species may find such good local conditions that their populations grow explosively leading to the extinction of prey or competitors species and a cascade of secondary extinctions . in the model by loeuille and loreau , species differ only with respect to body mass .if an invader successfully replaces a similar species , it has automatically the same predators and the same prey and hence the same function in the food web .thus , the displacement of a species leaves the overall network structure unchanged .once the initial build - up of the network is complete , all viable niches are occupied and stay occupied so that secondary extinctions can not occur . 
the complexity of real ecosystems of course exceeds by far the complexity of this model . the interactions between species do not depend only on body mass but also on many other species traits and on environmental factors . the latter show considerable variations in space and time , causing species to change continuously , as for example implemented in another model presented by loeuille and leibold in 2008 . after all , not only a changing environment , but also the local feedback between species and their environment makes the food webs and the migration behavior highly diverse . the simple model studied in this paper highlights those effects of migration on evolving ecosystems that occur already when only a few traits are taken into account and when habitats are equivalent . the two main outcomes described above are widely observed in empirical and theoretical studies . sax et al . investigated invasions and extinctions of land birds and vascular plants on oceanic islands . they found that for land birds the number of naturalizations of nonnative species is roughly equal to the number of extinctions , whereas for vascular plants species richness has increased by about a factor of two . the authors give several possible explanations for this behavior ; one of them posits that nonnative species have become established because they are competitively superior to natives . if applied to birds , this would correspond to outcome 2 . the increase in the number of plant species is similar to outcome 1 . in our model , such an increase could be explained by ongoing immigration sustaining additional populations that would go extinct otherwise . however , this appears to be an unlikely explanation for oceanic islands , where the increased diversity is rather attributed to an increased variety in local habitats , including those created by man . other theoretical studies of metacommunities also show the two types of outcomes . mouquet and loreau studied the effect of migration on local and regional diversity in a non - evolving metacommunity . their model was later extended by urban to contain adaptive phenotypic variation in the reproductive rates of 20 competing species inhabiting 20 heterogeneous patches . without dispersal , all communities are unique and isolated , leading to a low local diversity and a high regional diversity .
with a low or intermediate level of dispersal ,regional diversity remains unchanged whereas local diversity increases due to immigration from neighboring communities .this corresponds to outcome 1 , where the total number of species does not change after the onset of migration , but where the local number of species is approximately doubled ( in a 2-patch system ) .mouquet and loreau predicted that higher levels of dispersal lead to homogenization of the metacommunity and hence to decreasing local and regional diversities , in consistency with our outcome 2 .also many other studies suggest that local and regional diversity react differently to changes in the spatial landscape and dispersal .haegemann and loreau extended the investigations by analyzing different dispersal rates for resources and consumers leading also to local consumers with regional resources or regional consumers with local resources .the latter corresponds to our case of body - mass dependent migration rates , where species with small body masses in the lower trophic levels experience too small migration rates to successfully invade the other habitat , whereas the species compositions in the upper trophic levels are homogeneous .however , it should be mentioned that in the model by loeuille and loreau body mass differences are generally small and metabolic scaling of the migration rates has only weak effects .evolutionary species turnover is not necessary for all our results . in situations wheremigration is switched on only after the local food webs have become established , the two types of outcomes are also observed when the process of introducing new mutant species is stopped .however , in such a case no `` gray '' species would occur ( see fig .2 ) , which are descendants of immigrants from the other habitat . our computer simulations with adaptive migration ,however , yield an outcome that could not be obtained in absence of evolution .since migration rates are dependent on population growth rates , migration occurs only temporally , when the system is disturbed by the emergence of a new mutant .this leads to different species compositions in the two habitats .just as the results of loeuille and leibold mentioned in the introduction , these findings show that the interplay between space and evolutionary processes gives rise to new phenomena. however , we have to admit that the typical outcomes observed in our model do not appear to be very realistic .they can probably be attributed to the unusual stability of the model by loeuille and loreau . 
in general ,adaptive behavior is known to have a considerable stabilizing effect on food web dynamics , but it can not become visible when dynamics is already very stable in the absence of adaptive behavior .we expect that a noticeable stabilizing effect will become visible when adaptive migration is combined with more complex and less stable evolutionary food web models .certainly , the study presented in this paper is only a modest beginning of the investigation of evolutionary food web models in space .this work was supported by the dfg under contract number dr300/12 - 1 .+ we thank nicolas loeuille and daniel ritterskamp for very useful discussions .melanie hagen , w daniel kissling , claus rasmussen , marcus am de aguiar , lee l brown , daniel w carstensen , alves - dos santos , yoko l dupont , francois k edwards , paulo r guimares , et al .biodiversity , species interactions and ecological networks in a fragmented world ., 46:89210 , 2012 .mathew a leibold , m holyoak , n mouquet , p amarasekare , jm chase , mf hoopes , rd holt , jb shurin , r law , d tilman , et al .the metacommunity concept : a framework for multi - scale community ecology ., 7(7):601613 , 2004 .delphine legrand , olivier guillaume , michel baguette , julien cote , audrey trochet , olivier calvez , susanne zajitschek , felix zajitschek , jane lecomte , quentin bnard , et al .the metatron : an experimental system to study dispersal and metaecosystems for terrestrial organisms . , 9(8):828833 , 2012 .mark c urban , mathew a leibold , priyanga amarasekare , luc de meester , richard gomulkiewicz , michael e hochberg , christopher a klausmeier , nicolas loeuille , claire de mazancourt , jon norberg , et al .the evolutionary ecology of metacommunities ., 23(6):311317 , 2008 .
|
we introduce an evolutionary metacommunity of multitrophic food webs on several habitats coupled by migration . in contrast to previous studies that focus either on evolutionary or on spatial aspects , we include both and investigate the interplay between them . locally , the species emerge , interact and go extinct according to the rules of the well - known evolutionary food web model proposed by loeuille and loreau in 2005 . additionally , species are able to migrate between the habitats . with random migration , we are able to reproduce common trends in diversity - dispersal relationships : regional diversity decreases with increasing migration rates , whereas local diversity can increase in case of a low level of dispersal . moreover , we find that the total biomasses in the different patches become similar even when species composition remains different . with adaptive migration , we observe species compositions that differ considerably between patches and contain species that are descendant from ancestors on both patches . this result indicates that the combination of spatial aspects and evolutionary processes affects the structure of food webs in different ways than each of them alone . * keywords : + evolutionary assembly , extinction , diversity , metacommunity , metapopulation *
|
this paper is concerned with many body systems and their cooperative behaviour ; in particular when that behaviour is complex and hard to anticipate from the microscopics , even qualitatively and even when the systems are made up of simple individual units with simple inter - unit interactions .` range - free ' ( or ` infinite - ranged ' ) refers to situations where the interactions are not dependent on the physical separations of individual units , and hence neither on the dimensionality nor on the structure of the embedding space .such systems are also often referred to as ` mean - field ' , since one can often show ( and usually believes ) that their behaviour in the thermodynamic limit ( units ) is identical to that of an appropriate mean - field approximation to a short - range system . `frustration ' refers to incompatability between different microscopic ordering tendencies .self - consistent mean - field theories do have the ability to describe spontaneous symmetry breaking and phase transitions and they have played an important role in statistical physics .however as pure systems , without quenched hamiltonian disorder or out - of - equilibrium self - induced disorder , they do not exhibit the interesting non - simple dimension - dependent but details - independent ( universal ) critical behaviour whose study drove much of the interest of statistical mechanics in the seventies and eighties .for this reason ` mean - field ' used to be interpreted as fairly trivial . on the other hand , with quenched disorder and frustration in their interactions range - free many - body systemscan , and regularly do , exhibit behaviour that is complex and rich .this paper represents a brief introduction to and partial overview of such systems .the general class of systems we consider can be summarized as characterised by schematic ` control functions ' of the form where ( i ) in thermodynamics ( statics ) the are the variables and the are quenched ( frozen ) parameters , or vice - versa , ( ii ) in dynamics the are the ` fast ' variables and the are ` slow ' variables , or vice - versa , where ` fast ' and ` slow ' refer to the characteristic microscopic time - scales , ( iii ) in both cases , the are intensive control parameters , influencing the system deterministically , quenched - randomly or stochastically , and ( iv ) we shall be particularly interested in typical behaviour in situations in which any quenched disorder is drawn independently from identical intensive distributions , enabling ( at least in principle ) useful thermodynamic - limit measures of the macroscopic behaviour .the interest arises when the effects of different interactions are ` frustrated ' , in competition with one another . in such cases with detailed balance , at low enough noisethe macrostate structure / space is typically fractured ( or clustered ) , in a manner often envisaged in terms of a ` rugged landscape ' paradigm in which the dynamics is imagined as motion in a very high dimensional landscape of exponentially many hills and valleys , often hierarchically structured , with concomitant confinements , slow dynamics and history dependence . 
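the generic range - free frustrated setting just described can be made concrete with the following sketch : random symmetric couplings between all pairs of units and single - spin - flip metropolis dynamics at temperature t ( essentially the canonical model introduced next ) . the gaussian couplings and their 1/sqrt(n) scaling are standard conventions chosen here for illustration :

```python
import numpy as np

def random_couplings(N, mean=0.0, std=1.0, seed=0):
    """Symmetric range-free couplings J_ij drawn independently; the 1/N and
    1/sqrt(N) scalings of mean and width keep the energy per unit intensive."""
    rng = np.random.default_rng(seed)
    J = rng.normal(mean / N, std / np.sqrt(N), size=(N, N))
    J = np.triu(J, 1)
    return J + J.T

def metropolis_sweep(S, J, T, rng):
    """One sweep of single-spin-flip Metropolis dynamics on
    H = -sum_{i<j} J_ij S_i S_j with Ising spins S_i = +/-1."""
    N = len(S)
    for i in rng.permutation(N):
        dE = 2.0 * S[i] * (J[i] @ S)        # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            S[i] = -S[i]
    return S

N, T = 200, 0.4
rng = np.random.default_rng(1)
J = random_couplings(N, seed=1)
S = rng.choice([-1, 1], size=N)
for _ in range(2000):
    S = metropolis_sweep(S, J, T, rng)
print("energy per spin after 2000 sweeps:", round(float(-0.5 * S @ J @ S / N), 4))
```

at low temperature such a run typically gets stuck in one of many low - lying configurations , which is the ` rugged landscape ' picture invoked above .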
in dynamical systems without detailed balance , strictly there is no such simple lyapunov ` landscape ' but the ` motion ' is analogously complexly hindered , with many effective macroscopic time - scales .first studied ( in physics ) in the context of magnetic alloys , such systems are now recognised in many different contexts ; in inanimate physical systems , computer science , and information science ; in animate biology , economics and social science . in these different systems ` controllers ' of the ` control functions ' vary ; including the laws of physics , devisors of computer algorithms , human behaviour , governmentally - devised laws etc .a simply - formulated but richly - behaved canonical model is that of sherrington and kirkpatrick ( sk ) , originally introduced as a potentially soluble model corresponding to a novel mean - field theory introduced by edwards and anderson ( ea ) to capture the essential physics of some unusual magnetic alloys , known as spin glasses .the sk model is characterized by a hamiltonian where the label spins , taken for simplicity as ising , and the interactions are chosen randomly and independently from a distribution .dynamically the system can be considered to follow any standard single - spin - flip dynamics corresponding to a temperature . were normal equilibration to occur itwould be characterized by boltzmann - gibbs statistics , . however ,if the distribution has sufficient variance compared with its mean and the temperature is sufficiently low , normal equilibration does not occur and complex macro - behaviour results beneath a transition temperature .the interesting regime , known as the ` spin glass phase ' , occurs at intensive if the variance of scales with as , the mean as .as parisi showed , in a series of papers ( _ e.g. _ ) which involved amazing insight and highly original conceptualization and methodology , this glassy state is characterized by a hierarchy of ` metastable ' macrostates , differences between restricted and gibbsian thermodynamic averages , as well as non - self - averaging ; see also __ .these features can be characterized by the macrostate overlap distribution functions where denotes a thermodynamic average of over the macrostate . for a conventional system , with a single macrostate, has a single delta function , while for a system with entropically extensively many macrostates has more structure . when the state structure is continuously hierarchical , as it is for sk , there is a continuum of weight in the disorder - averaged overlap distribution function p_{\{j_{ij}\}}(q) ] is the action choice of the strategy acting on the information and and is the average ` choice ' over the stategies actually employed , ; \label{eq : majority_weight}\ ] ] _ i.e. _ by increasing the point - score bias for strategies leading to minority behaviour . 
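a minimal minority game simulation is sketched below . it uses information drawn at random at every step ( a simplification also discussed just below ) and measures the volatility per agent ; the rules assumed here ( an odd number of agents , two fixed random strategies each , the best - scoring strategy being played , strategies rewarded when they would have placed the agent in the minority ) are the usual ones for this model , and all parameter values are choices made for illustration :

```python
import numpy as np

def minority_game(N=301, P=32, S=2, T=5000, seed=0):
    """Minimal minority game with random external information at each step.
    Returns the volatility per agent, <A^2>/N, measured after equilibration."""
    rng = np.random.default_rng(seed)
    # fixed (quenched) strategy tables: strategies[i, s, mu] in {-1, +1}
    strategies = rng.choice([-1, 1], size=(N, S, P))
    scores = np.zeros((N, S))
    A_hist = []
    for t in range(T):
        mu = rng.integers(P)                     # random information
        best = scores.argmax(axis=1)             # each agent's best strategy
        a = strategies[np.arange(N), best, mu]   # actions actually taken
        A = a.sum()
        # reward strategies that would have put the agent in the minority
        scores -= strategies[:, :, mu] * np.sign(A)
        if t > T // 2:
            A_hist.append(A)
    return np.mean(np.array(A_hist) ** 2) / N

for P in (8, 32, 128, 512):
    print(f"P = {P:4d}   alpha = P/N = {P/301:6.3f}   "
          f"volatility sigma^2/N = {minority_game(P=P):6.3f}")
```

scanning the ratio of information dimension to number of agents in this way reproduces qualitatively the cusp - like behaviour of the volatility described below .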
in the original formulation the information used was the boolean string indicating the minority choice in the previous time - steps of play and the were boolean operators . however , essentially similar behaviour is obtained for a system in which is randomly generated at each time , equally probably from the whole space of binaries . the most obviously relevant macroscopic measure in the mg is the volatility , the variance of the choices . computer simulations demonstrated that it has scaling behaviour , the volatility per agent versus the information dimension per agent approaching independence of as the latter is increased , and also has a cusp - like minimum at a critical with behaviour ergodic for but non - ergodic for . fig . 1 shows this behaviour for ( circles ) , ( squares ) and ( diamonds ) , together with a comparison between the results of simulation of the deterministic many - agent dynamics ( open symbols ) and the numerical evaluation of the analytically - derived stochastic single - agent ensemble dynamics . a slightly different variant of the model was used , in which the strategies are taken as -dimensional binary strings with each component chosen randomly and independently at the outset and thereafter fixed ( quenched ) , and the stochastic ` information ' consists in randomly choosing at each time - step and then using the corresponding strategy elements . this is reminiscent of the behaviour of the susceptibility of the sk spin glass , shown in fig . 2 , as predicted by parisi theory . the upper curve shows the full gibbs average , obtained from the full and interpreted as the field - cooled ( fc ) susceptibility . the lower curve shows the result of restricting to one thermodynamic state , as obtained from and interpreted as the zero - field - cooled susceptibility . the similarity is apparent if one compares the volatility with the inverse susceptibility and the information dimension with the temperature . hence one is tempted to analyze the mg using methodology developed for spin glasses . updating the point - score only after steps where leads to an averaging over the random information to produce an effective interaction between the agents and yields the so - called ` batch ' game ( with temporally - rescaled update dynamics ) where is an effective ` hamiltonian ' and and are effective ` exchange ' and ` field ' terms given by where . since the are random so are the exchange and field terms . hence is a disordered and frustrated control function . the expression for the is very reminiscent of the hebbian - inspired synapses of the hopfield neural network model , where the are the stored memories , but crucially with the opposite sign ensuring that here the are now repellors rather than attractors . there are two main methodologies employed to study statics : the replica procedure and the cavity method ( see _ e.g. _ ) . the most common method for the cooperative dynamics is the generating functional method . in the replica method one studies the disorder - averaged free energy using the identity , identifying the power as describing replicas , ; with eventually taken to 0 . macroscopic order parameters are introduced through multiplication by unity of the form where the label replicas and is the effective hamiltonian after disorder averaging . the microscopic variables are integrated out and the dominant extremum with respect to the is taken in the limit .
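the identity invoked at the start of this replica computation appears to have been lost in extraction ; presumably it is the standard replica trick , together with the usual overlap order parameter , which in conventional notation read

```latex
\overline{\ln Z} \;=\; \lim_{n \to 0}\frac{\overline{Z^{\,n}} - 1}{n},
\qquad\qquad
q_{\alpha\beta} \;=\; \frac{1}{N}\sum_{i=1}^{N} S_i^{\alpha} S_i^{\beta},
\quad \alpha \neq \beta .
```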
in the most natural ansatz , replica symmetry among the replicas is assumed , but this proved to be too naive . the correct solution for the sk model requires parisi 's much more subtle ansatz of replica symmetry breaking . this ansatz introduces a hierarchy of spontaneous replica symmetry breaking ( rsb ) with a sequence of that in the limit of yields a continuous order function , later shown to be related to the average overlap distribution through . the dynamical functional method for the sk model is discussed in . here we describe instead its use for the minority game . a generating functional can be defined by where , denotes the transformation operation of eqn . ( [ batchupdate ] ) and denotes the probability distribution of the initial score differences . averaging over the specific choices of quenched strategies , introducing macroscopic two - time correlation order functions via [ unity ] and similar expressions for response functions and derivative - variable correlators , and integrating out the microscopic variables , the averaged generating functional may then be transformed exactly into a form where is -independent , the bold - face notation denotes matrices in time and the tilded variables are complementary ones introduced to exponentiate the delta functions in eqn . ( [ unity ] ) and its partners . being extremally dominated , in the large- limit this yields the effective single agent stochastic dynamics where is coloured noise determined self - consistently over the corresponding ensemble by . fig . 1 demonstrates the veracity of this result in a comparison of the results of computer simulation of the original deterministic many - body problem eqn . ( 11 ) and the numerical evaluation of the self - consistently noisy single - agent ensemble of eqn . ( [ effectiveagent ] ) . the analogous equations for the -spin spherical spin glass formed the basis for recognition of the dynamical transitions mentioned earlier and the existence of aging solutions and modifications to conventional fluctuation - dissipation relations . having commented earlier that standard non - frustrated non - disordered infinite - ranged systems do not have interesting critical behaviour , it is relevant to note that again frustrated disordered systems are different , having interesting critical behaviour at low temperature and applied magnetic field , even though mean - field . parisi replica symmetry breaking involves an infinite sequence of hierarchies . -rsb has step - breaks in the order function . is related to the averaged overlap distribution by .
as the -step approximation of against approaches a fixed - point function of form close to with a ` correlation function ' in -space given by .the degree of rsb can be viewed as an effective one - dimensional lattice of size , with the analogue of the infinite - length lattice ( or thermodynamic ) limit .similarly , finite- approximation yields an analogue of finite - size effects , including finite - size scaling .note however that this new type of finite - size scaling is for a mean - field problem in the thermodynamic limit and is in a space of degree of approximation .there are also finite -size scalings when the system is perturbed away from the critical point ( at ) and for finite applied field near .correspondingly there are further ` correlation lengths ' in temperature - deviation and in field - deviation , which of course also determine the extent of rsb needed to get a good approximation as temperature or field become non - zero .in this short paper it has only been possible to present a brief and non - detailed vignette of the complexity that can and does exist in disordered and frustrated many - body systems , even within a dimension - free mean - field situation .the puzzles , intrigues and challenges have developed and been a source of intense study for over 30 years .finite - range systems have also been a great source of interest , again with significant progress but still subject to some controversy .the case of systems with variables having different fundamental timescales , such as fast neurons and slow synapses or evolutionary models with different timescales for phenotypes and genotypes , have not been discussed . nor has the problem of dynamical sticking in effectively self - determined disordered states of some systems without quenched disorder in their control functions but started far from equilibrium . also , in this brief review , only some of the simplest models have been described .it is however clear that many extensions and more realistic / complete scenarios exist that are still effectively range - free , yet complex , interesting and challenging .the author would like to thank his numerous collaborators , students , colleagues and friends , too many to name all individually , for their parts in helping his understanding and appreciation of the subject of this paper .he also acknowledges , with gratitude , the financial support of the epsrc ( and its predecessors ) , the ec and the esf .99 j cardy , _ scaling and renormalization in statistical physics _( cambridge university press , cambridge , 1996 ) d. sherrington and s. kirkpatrick , _ phys .rev . lett._*35 * 1972 ( 1976 ) s.f .edwards and p. w. anderson , _ j. phys .f _ * 5 * , 965 ( 1975 ) j. a. mydosh , _ spin glasses ; an experimental introduction _ , taylor and francis , london ( 1993 ) d. sherrington , in _ spin glasses _ , eds .e. bolthausen and a. bovier , ( springer , berlin , 2007 ) g. parisi , _ j. phys .* 13 * , 1101 ( 1980 ) g. parisi , _ phys .lett . _ * 50 * , 1946 ( 1983 ) m. mzard , g. parisi , n. sourlas , g. toulouse and m.a .virasoro , _j.physique _ * 45 * , 843 ( 1984 ) m. mzard , g. parisi and m.a .virasoro , _ spin glasstheory and beyond _( world - scientific , singapore , 1987 ) l. f. cugliandolo and j. kurchan , _ j.phys.a _ * 27 * , 5749 ( 1993 ) a.p .young a p ( ed . )_ spin glasses and random fields _( world scientific , singapore , 1997 ) g. parisi , in _ stealing the gold : a celebration of the pioneering physics of sam edwards _ , eds .p. m. goldbart , n. goldenfeld and d. 
sherrington ( oxford university press , oxford , 2004 ) a. p. young , _ physlett . _ * 51 * , 1206 ( 1983 ) m.talagrand , _ the sherrington - kirkpatrick model : a challenge for mathematicians _ , _ probab .* 110 * , 109 ( 1998 ) m. talagrand _ spin glasses : a challenge for mathematicians _( springer , berlin , 2003 ) f. guerra , _ cond - mat/_057581 ( 2005 ) e. bolthausen and a. bovier eds . , _ spin glasses _ ( springer , berlin , 2007 ) m. talagrand , _ ann .math._*163 * , 221 ( 2006 ) l. f. cugliandolo and j. kurchan , _ j. phys .a_*41 * , 324018 ( 2008 )a variant was posed to describe one of the clay millenium prize problems ; see http://www.claymath.org/millennium/p_vs_np . that the problem of finding the ground state of a spin glass in three and more dimensions is np - complete has been known since at least the early 1980s .s. kirkpatrick , c. d. gelatt and m. p. vecchi , _ science _ * 220 * , 672 ( 1983 ) d. j. gross , i. kanter and h. sompolinsky , _phys.rev.lett._*55 * , 304 ( 185 ) t. kirkpatrick and p. g. wolynes , _ phys.rev._*b * 36 , 8552 ( 1987 ) m. mzard and r. zecchina , _phys.rev . e_*66* , 056126 ( 2002 ) d. j. gross and m. mzard , _ nuc .b_*240 * , 431 ( 1984 ) a. crisanti and h - j sommers , _ z. phys .b_*87 * , 341 ( 1992 ) p. gillin , h. nishimori and d. sherrington , _ j. phys .a_*34 * , 2949 ( 2001 ) m. mzard , g. parisi and r. zecchina , _ science _ * 297 * , 812 ( 2002 ) f. kzakala , a. montanari .f. ricci - tersenghi , g. semerjian and l. zdeborova , _ proc .sci . _ * 104 * , 10318 ( 2007 ) a. crisanti , h. horner and h - j .sommers , _b_,*92 * , 257 ( 1993 ) e. gardner , _ nuc .b _ * 257 * , 747 ( 1985 ) s. kirkpatrick amd b. selman , _ science _ * 264 * , 1297 ( 1994 ) d. elderfield and d. sherrington , _ j. phys .c_*16 * , l497 ( 1983 ) d. j. gross , i kanter and h. sompolinsky , _ phys .lett . _ * 55 * , 304 ( 1985 ) d. challet , m. marsili and y - c zhang , _ minority games _ ( oxford university press , oxford 2005 ) a.c.c .coolen , _ the mathematical theory of minority games _ ( oxford university press , oxford 2005 ) d. challet and y - c zhang , _ physica a_*246 * , 407 ( 1997 ) a. cavagna , _ phys .rev e_*59 * , r3783 ( 1998 ) t.galla and d. sherrington _ physica a _ 324 , 25 ( 2003 ) d. sherrington , in _ heidelberg symposium on glassy dynamics _, 2 , ( springer - verlag , berlin 1987 ) j. j. hopfield , _proc.nat.acad.usa_*79 * , 2554 ( 1982 ) c. de dominicis , _ .j. physique c_*1 * , 247 ( 1976 ) h.janssen,_z .b_*23 * , 377 ( 1976 ) a. p. young , _ j.phys.a_41,324016 ( 2008 ) g. parisi , _a._*41 * , 324002 ( 2008 ) r. oppermann and d. sherrington , _ phys .lett . _ * 95 * , 197203 ( 2005 ) r. oppermann , m. j. schmidt and d. sherrington , _ phys .rev . lett._*98*,127201( 2007 ) r.oppermann and m. j. schmidt , _ arxiv : _ o801.1756 ( 2008 ) r. oppermann and m. j. schmidt , _ arxiv:_0803.3918 ( 2008 )s. pankov , _ phys .rev . lett._*96 * , 197204 ( 2006 ) h - j .sommers and w. dupont , _ j. phys .* , 5785 ( 1984 ) a. crisanti and t. rizzo , _ phys .rev . e_*65 * , 046137 ( 2002 )
|
a brief introduction and overview is given of the complexity that is possible and the challenges its study poses in many - body systems in which spatial dimension is irrelevant and naively one might have expected trivial behaviour .
|
the use of the hidden markov model ( hmm ) is ubiquitous in a range of sequence analysis applications across a range of scientific and engineering domains , including signal processing , genomics and finance .fundamentally , the hmm is a mixture model whose mixing distribution is a finite state markov chain .whilst the markov assumptions rarely correspond to the true physical generative process , it often adequately captures first - order properties that make it a useful approximating model for sequence data in many instances whilst remaining tractable even for very large datasets . as a consequence, hmm - based algorithms can give highly competitive performance in many applications .central to the tractability of hmms is the availability of recursive algorithms that allow fundamental quantities to be computed efficiently .these include the viterbi algorithm which computes the most probable hidden state sequence and the forward - backward algorithm which computes the marginal probability of a given state at a point in the sequence .computation for the hmm has been well - summarized in the comprehensive and widely read tutorial by with a bayesian treatment given more recently by .it is a testament to the completeness of these recursive methods that there have been few generic additions to the hmm toolbox since these were first described in the 1960s .however , as hmm approaches continue to be applied in increasingly diverse scientific domains and ever larger data sets , there is interest in expanding the generic toolbox available for hmm inference to encompass unmet needs .the motivation for our work is to develop mechanisms to allow the _ exploration _ of the posterior sequence space .typically , standard hmm inference limits itself to reporting a few standard quantities . for an -state markov chain of length there exists of possible sequences but often only the most probable sequence or the marginal posterior probabilitiesare used to summarize the whole posterior distribution .yet , it is clear that , when the state space is large and/or the sequences long , many other sequences maybe of interest . modifications of the viterbi algorithm can allow arbitrary number of the most probable sequences to be enumerated whilst bayesian techniques allows us to sample sequences from the posterior distribution .however , since a small change to the most likely sequences typically give new sequences with similar probability , these approaches do not lead to reports of _ qualitatively diverse _ sequences .by which we mean , alternative sequence predictions that might lead to different decisions or scientific conclusions . in this articlewe describe a set of novel recursive methods for hmm computation that incorporates segmental constraints that we call _-segment inference algorithms_. these are so - called because the algorithms are constrained to consider only sequences involving no more than specified transition events .we show that -segment procedures provide an intuitive approach for posterior exploration of the sequence space allowing diverse sequence predictions containing and segments or specific transitions of interest .these methods can be applied prospectively during model fitting or retrospectively to an existing model . 
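for reference , the two classical recursions just mentioned can be written compactly as follows ; this is a generic log - space textbook sketch ( with array conventions chosen here ) , not the -segment variants introduced in this article :

```python
import numpy as np
from scipy.special import logsumexp

def viterbi(log_pi, log_A, log_B):
    """Most probable hidden path.  log_pi[m]: initial log-probabilities,
    log_A[m', m]: transition log-probabilities, log_B[n, m]: log-likelihood of
    the observation at step n under hidden state m."""
    N, M = log_B.shape
    delta = log_pi + log_B[0]
    psi = np.zeros((N, M), dtype=int)
    for n in range(1, N):
        scores = delta[:, None] + log_A          # (previous state, next state)
        psi[n] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[n]
    path = np.zeros(N, dtype=int)
    path[-1] = delta.argmax()
    for n in range(N - 1, 0, -1):
        path[n - 1] = psi[n, path[n]]
    return path

def forward_backward(log_pi, log_A, log_B):
    """Posterior marginals p(x_n = m | y_1..y_N), computed in log space."""
    N, M = log_B.shape
    alpha = np.zeros((N, M))
    beta = np.zeros((N, M))
    alpha[0] = log_pi + log_B[0]
    for n in range(1, N):
        alpha[n] = log_B[n] + logsumexp(alpha[n - 1][:, None] + log_A, axis=0)
    for n in range(N - 2, -1, -1):
        beta[n] = logsumexp(log_A + (log_B[n + 1] + beta[n + 1])[None, :], axis=1)
    log_post = alpha + beta
    return np.exp(log_post - logsumexp(log_post, axis=1, keepdims=True))
```

both routines run in time linear in the sequence length and quadratic in the number of hidden states , which is the baseline that the -segment recursions extend .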
in the latter case , the utility of the methods described here comes at no cost ( other than computational time ) to the hmm user and we provide illustrative examples to highlight novel insights that may be gained through -segment approaches . the hmm encodes two types of random sequences : the hidden state sequence or path and the observed data sequence . individual hidden states take discrete values , such that , while observed variables can be of arbitrary type . the hidden state sequence follows a markov chain so that here , the first hidden state is drawn from some initial probability vector so that denotes the probability of being in state , whereas any subsequent hidden state ( with ) is drawn according to a transition matrix whose $( m ' , m )$ entry is $p ( x_n = m \mid x_{n-1} = m ' )$ and were also placed in random locations within the sequence , thus replacing the original text and with the only constraint that they did not overlap with each other . each such set of artificially inserted segments was also randomly selected from the theses in economics by first picking a thesis and then selecting non - overlapping segments within that thesis text sequence . the whole procedure created a new dataset of documents so that a subset of them contained segments from the relevant topic and the remaining ones did not . for the detection task we worked similarly to the classification task discussed earlier . in particular , we again randomly perturb the test documents and insert a number of segments from the subject economics in each of them . the insertion of segments was done exactly as described above with the only difference being that now we insert segments in all documents and their number can be much larger since .
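the construction of the perturbed documents can be sketched as follows ; the rejection - sampling placement and the toy token lists are choices made here , and the exact procedure used in the experiments may differ in details :

```python
import random

def insert_segments(target, source, n_segments, seg_len, seed=0):
    """Replace n_segments non-overlapping stretches of `target` (a list of
    tokens) with equally long stretches taken from `source`.  Placement uses
    simple rejection sampling of start positions."""
    rng = random.Random(seed)
    target = list(target)
    occupied = []
    while len(occupied) < n_segments:
        start = rng.randrange(0, len(target) - seg_len)
        if any(start < e and start + seg_len > s for s, e in occupied):
            continue                      # overlaps an earlier segment, retry
        src_start = rng.randrange(0, len(source) - seg_len)
        target[start:start + seg_len] = source[src_start:src_start + seg_len]
        occupied.append((start, start + seg_len))
    return target, sorted(occupied)

doc = ["word"] * 500     # toy stand-in for a thesis text sequence
econ = ["econ"] * 500    # toy stand-in for the 'relevant topic' source
perturbed, positions = insert_segments(doc, econ, n_segments=3, seg_len=40)
print(positions)
```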
|
hidden markov models ( hmms ) are one of the most widely used statistical methods for analyzing sequence data . however , the reporting of output from hmms has largely been restricted to the presentation of the most - probable ( map ) hidden state sequence , found via the viterbi algorithm , or the sequence of most probable marginals using the forward - backward ( f - b ) algorithm . in this article , we expand the amount of information that can be obtained from the posterior distribution of an hmm by introducing linear - time dynamic programming algorithms , which we collectively call -segment algorithms , that allow us to i ) find map sequences , ii ) compute posterior probabilities and iii ) simulate sample paths conditional on a user - specified number of segments , i.e. contiguous runs in a hidden state , possibly of a particular type . we illustrate the utility of these methods using simulated and real examples and highlight the prospective and retrospective use of these methods for fitting hmms or exploring existing model fits .
|
it has been pointed out by several authors for a long time , see e.g. , that the discrete character of the renormalization group transformation may in principle give rise to a periodic modulation of the critical amplitude .this oscillation has been then observed in hierarchical models , see for example that deal in particular with ising and potts models on diamond lattices .the modulation of the critical amplitude in these models turns out to be very small and the nature of this phenomenon has not been fully elucidated .here we consider this phenomenon for a particular hierarchical model , the wetting or pinning hierarchical model , because it is arguably the easiest set - up and it makes a direct contact with a vast mathematical literature : iteration of polynomial maps .in fact the partition function of a hierarchical model with pinning potential and volume size if just , where and with , and .we set and : we assume the _ super - criticality condition _ , so that is an unstable fixed point of and the sequence increases to infinity .actually it is straightforward to see that the increase is super - exponentially fast and just a little more work leads to the existence of the free energy density of the model we refer to ( * ? ? ?* appendix ) for the existence of this limit as well as for various properties of , such as the fact that is a convex non - decreasing function . as a matter of fact , it is immediate to see that for , while for .the origin is therefore necessarily a critical point and it is actually associated to a _ localization transition _ ( see and references therein for details on the statistical mechanics context , about which we are particularly concise here ) .one is then particularly interested in the free energy critical behavior , that is how behaves near , which , in this case , of course reduces to considering .what is argued in the aforementioned physical literature is that the expected critical behavior must be of the form where the amplitude is -periodic , which is of course compatible with being constant .we point out that the hierarchical model in has , , and , with ( to be precise in the _ dual _ model with replaced by and is considered , see for the equivalence of these two models ) .the restriction to is just for the sake of simplicity , but the choice of reflects a symmetry of the model that has no particular impact on the issue we tackle in this note : deal with _ disordered _ hierarchical pinning and in the presence of disorder the fact that leads to a new phenomenon ( but this is not not the case in absence of disorder ) . in and with a substantially more detailed analysis in the authors tackle the issue of establishing whether is trivial or not and of understanding the origin and size of the fluctuations . in an argument is presented , in the ising and potts models framework , that is expected to capture the size of the critical amplitude oscillations , but it yields an amplitude that is smaller than the true one , obtained numerically , by several orders of magnitude .the authors argue heuristically about the reason of such a mismatch between their arguments and numerics and they point out the role of the geometry of the julia set of the map associated to this model . in this directionthey actually provide a relatively precise estimate of the oscillation amplitude by exploiting empirical estimates on the julia set , even if they admit that the precision of their computation is surprising and _ presumably merely to be considered as a lucky circumstance_ . 
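the kind of numerical experiment behind such amplitude estimates can be sketched as follows ; the degree - two map used below is only a stand - in satisfying the super - criticality condition , not the map of the model , and the iteration depth is a convenience choice :

```python
import numpy as np

def log_iterate(coeffs, log_z, n_steps):
    """Iterate z -> P(z) in log space, where P(z) = sum_j coeffs[j] * z**j.
    Once z is huge, the top-degree term dominates and
    log P(z) ~ log coeffs[-1] + deg * log z."""
    B = len(coeffs) - 1
    for _ in range(n_steps):
        if log_z < 200.0:
            log_z = np.log(np.polyval(coeffs[::-1], np.exp(log_z)))
        else:
            log_z = np.log(coeffs[-1]) + B * log_z
    return log_z

def free_energy(coeffs, eps, n_steps=60):
    """Approximate f(eps) = lim_n B^{-n} log z_n for z_{n+1} = P(z_n) with
    z_0 = exp(eps); eps > 0 plays the role of the pinning parameter."""
    B = len(coeffs) - 1
    return log_iterate(coeffs, eps, n_steps) / B ** n_steps

# stand-in map of degree 2: P(z) = (z + z**2) / 2, so P(1) = 1, P'(1) = 1.5 > 1
coeffs = np.array([0.0, 0.5, 0.5])
for eps in np.logspace(-8, -2, 7):
    print(f"eps = {eps:8.1e}   f(eps) = {free_energy(coeffs, eps):.6e}")
```

plotting log f against log eps on such a grid gives an almost straight line , and the tiny periodic modulation around it is the oscillating critical amplitude at issue here .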
clarifying the relation between julia set andoscillations is one of the main aims of this note .an important observation at this point is that there is a probabilistic representation of the partition function of the hierarchical pinning model .consider in fact a random branching process , or galton - watson process , starting from one individual at time zero and with offspring distribution determined by the s weights .therefore if one focuses on the number of individuals that are present at time , then , conditionally on , is the sum of independent and identically distributed random variables for which the probability of being equal to is .one directly verifies that \ , .\ ] ] it is a classical result that converges almost surely to a non - degenerate limit random variable ( with a mass at zero if ) .moreover for every \ , = \ , { { \ensuremath{\mathbf e } } } \left[\exp(s w ) \right]\ , = : \ , \psi(s)\ , , \ ] ] and extends to the whole complex plane as an entire function . in his seminal paper ,t. e. harris pointed out ( among several other facts ) that where the _ harris function _ is continuous and -periodic .harris was unable to show that is not constant , even if he was able to compute numerically the value of for , and up to six decimal digits , a remarkable achievement considering the date at which the paper was published .later it became clear that does oscillate and that the amplitude of the oscillation is extremely small with respect to its _ average _ value ( see in particular , but also ) .a full understanding of this near - constant behavior is however still elusive .actually analogous phenomena were recorded also in other mathematical fields like combinatorial enumeration , spectral properties of transition operators on fractals and more ( see e.g. ) .ultimately , this is not surprising because all these problems boil down to studying iterations of a map : in the galton - watson case for example tells us that the generating function of the law of is precisely .certainly it has not escaped the reader that the qualitative properties of coincide with the ( expected ) qualitative properties of the critical amplitude .however the quantitative connection between and , beyond the common period , is a priori not clear . in statistical mechanics terms , the harris function emerges from the limit of the partition function in a particular vanishing limit of the pinning parameter : and then . comes also out of a limit of vanishing pinning parameter , but in this case is taken at fixed pinning parameter , only the leading laplace asymptotic term is kept , and then is sent to zero .nevertheless , and coincide : [ th : qualit ] given as above , the asymptotic relation holds with a -periodic analytic function .moreover from now on , they will be denoted by and , if we denote the fourier coefficients of , there exists a positive constant such that for sufficiently large .the notation we use does not highlight the dependence of on , but we stress that proposition [ th : qualit ] says that given one obtains a function .proposition [ th : qualit ] can be proven in a rather direct way by exploiting a number of relations that one can find in the large literature devoted to the subject , but we have been unable to find the statement in this literature .the proof is in section [ sec : proofs ] but let us anticipate one of the main tools that is going to be of help for the sequel of the introduction : for every with .this follows by observing that , if and for we have . 
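the probabilistic representation just described is easy to probe numerically ; the offspring law below is an arbitrary illustrative choice , and psi is estimated by plain monte carlo rather than by any of the analytic characterizations used later :

```python
import numpy as np

def galton_watson_W(p, n_gen, n_samples, seed=0):
    """Monte Carlo samples of W_n = Z_n / m**n for a Galton-Watson process
    with offspring distribution p = (p_0, ..., p_B); m is the mean offspring
    number (assumed > 1).  Note the atom at 0 whenever extinction is possible."""
    rng = np.random.default_rng(seed)
    m = np.dot(np.arange(len(p)), p)
    Z = np.ones(n_samples, dtype=np.int64)
    for _ in range(n_gen):
        Z = np.array([rng.multinomial(z, p) @ np.arange(len(p)) if z > 0 else 0
                      for z in Z], dtype=np.int64)
    return Z / m ** n_gen

p = np.array([0.2, 0.3, 0.5])      # example offspring law: m = 1.3, B = 2
W = galton_watson_W(p, n_gen=25, n_samples=2000)
print("mean of W_n (martingale, should be close to 1):", np.round(W.mean(), 3))
for s in (0.5, 1.0):
    print(f"Monte Carlo estimate of psi({s}) = E[exp({s} W)]:",
          np.round(np.exp(s * W).mean(), 4))
```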
to go beyond proposition [ th : qualit ] we restrict to the case . this restriction is made because we want to exploit directly the results in , that develop only the case . it is certainly possible to generalize these works , but this would not add much to the purpose of this note at the expense of rather lengthy arguments . if we set we directly verify that solves the _ böttcher equation _ . this can be seen directly from , that is , conjugates and the monomial map . is increasing on and it is real analytic on ( see the proof of proposition [ th : qualit ] ) . we can therefore set and is analytic too . the central point of our approach is the following theorem , in which we use the notation . in what follows , the natural logarithm is taken with the negative semi - axis as branch cut : , for and ] . the function , from the positive semi - axis to itself , is invertible and its inverse is , where is -periodic and , with . moreover , and therefore , is analytic in the strip , which implies that , for every and . to keep the statement simple , the explicit series expansion for is postponed to section [ sec : quantit ] , starting from . in section [ sec : quantit ] one can also find an explicit construction for . what we want to emphasize with theorem [ th : bottcher ] is the quantitative and explicit relation between oscillations and the geometry of the julia set . a more general but very implicit relation between the julia set and oscillations can be established by using the notion of green's function of the monic polynomial map , where _ monic _ means that . the green's function is well defined and in . it is actually harmonic ( ) except on the julia set and it is of course identically zero on the filled julia set . moreover can be alternatively defined as the unique continuous function that vanishes on , which is harmonic in and such that . note the remarkable fact that depends on only through . if now we observe that is conjugated to a monic via the linear map , , we readily see that is and for . therefore the free energy is determined as the unique solution of a dirichlet problem once ( and the blowing - up factor ) are given . as a consequence , the oscillatory function is directly connected to the julia set via the solution of this dirichlet problem . theorem [ th : bottcher ] asserts that is not a constant : estimating the size of the oscillations is a challenging task . however , one does have _ explicit _ characterizations of and that can be exploited to get precise numerical and also _ computer - assisted _ estimates on the fourier coefficients of , with explicit error bounds . we address this issue in section [ sec : quantit ] . we have the following representation of the harris function given in : for it is worth recalling that follows by using the _ poincaré relation _ , which is an immediate consequence of , and of which is equivalent to the böttcher functional relation ( cf . [ sec : julia_etc ] ) . in fact these two relations imply that if is the right - hand side of then we have but directly implies that and so where in taking the limit we have also used . therefore is proven . going back to the main argument , is increasing and , for small , so for small . therefore , since is continuous ( and periodic ) , we obtain from which is with . we are therefore left with the regularity properties of , which we now call .
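before turning to those regularity properties , here is a hedged numerical illustration of the green's function characterization just described : g(z) = lim_n d^{-n} log^+ |p^n(z)| for a monic polynomial p of degree d , with the filled julia set appearing as the zero set of g . the quadratic p(z) = z^2 - 0.75 below is an arbitrary illustrative choice , not the polynomial attached to the model .

```python
# A hedged illustration of the Green's function g of a monic quadratic
# p(z) = z^2 + c: g(z) = lim_n 2^{-n} log^+ |p^n(z)|, harmonic off the Julia
# set and identically zero on the filled Julia set.  The value c = -0.75 is an
# arbitrary illustrative choice, not the polynomial associated with the model.
import numpy as np

def green(points, c=-0.75, n_max=200, escape=1e150):
    """Approximate the Green's function of z -> z^2 + c at the given points."""
    z = np.array(points, dtype=complex)
    g = np.zeros(z.shape)
    active = np.ones(z.shape, dtype=bool)
    for n in range(1, n_max + 1):
        z[active] = z[active] ** 2 + c
        big = active & (np.abs(z) > escape)
        # once |z| is huge, 2^{-n} log|z| has essentially converged to g
        g[big] = np.log(np.abs(z[big])) / 2.0 ** n
        active &= ~big
        if not active.any():
            break
    return g            # points that never escape keep g = 0 (filled Julia set)

xs = np.linspace(-2, 2, 9)
print(np.round(green(xs), 4))
```

for this choice of c the real slice of the filled julia set is the interval with endpoints at the repelling fixed point , so only the outermost sample points return a nonzero value .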
for thiswe first claim that that extends to an analytic function in a cone and , for a .the claim follows from by because one can find such that both and for .therefore no singularity comes from the series in if , if is sufficiently small , because by elementary estimates we see that the argument of remains bounded by a constant smaller than for .let us now look at the formula for in with for : by periodicity it suffices to consider small , which guarantees that the ( entire ) function takes values in the cone and therefore is analytic in the strip , which directly implies that the fourier coefficients of are smaller than , for every and sufficiently large .this completes the proof of proposition [ th : qualit ] .the argument we have just presented effectively uses the analyticity of for in a truncated cone , that is for and small . in order to get a better ( in fact , optimal ) estimate on the decay of the fourier coefficients a precise knowledge of the julia set is needed , in the sense that it is necessary for the truncated cone to be in the complement of the filled julia set .this of course guarantees that , but this a priori is not sufficient because one has to ensure also that the absolute value of the argument of , , does not go beyond for every in the truncated cone ( recall ) .we therefore restrict to and attack the problem from a somewhat different angle .for the map we consider is a super - attracting fixed point ( see ) .it is practical and customary to map the fixed point at infinity to a fixed point at zero by conjugation .let us make this explicit for , so that and .the fixed points of are ( unstable ) and ( stable ) .the affine transformation sends into the standard logistic map : the fixed points are mapped to ( unstable ) and ( stable ) .we then map to , so in the end we use the transformation ( ) to get to and the new iteration andthe unstable fixed point is now , while is stable ( in fact , super - attracting ) . for later use ,it is of help to note that and that we are going to use the fact that there exists a unique function , from to the interior of , which is analytic and invertible , satisfying and and actually , the existence and uniqueness of such a map in a small disk around the origin is a general result ( bttcher theorem ) . is usually called bttcher function for and it is uniquely determined once we require it to be analytic near zero , with and ( this is a classical result : see and references therein ) .the fact that can be extended to the whole unit disk depends on the details of the map and , in our case , on the fact that .the argument that follows is based on ( * ? ? ?* theorem 1 ) which gives an expression for : there exists entire , with and , and a non - trivial ( non - constant ! ) periodic function of period ( analytic on the strip ) such that and we recall that .the properties of directly imply that is analytic in the punctured unit disc and that it is a conformal ( and invertible ) map .actually , extends by continuity to .moreover is invertible in the range of its argument in .let us point out that implies .for this let us us go back to the , with which we have defined and then , see [ sec : julia_etc ] . as we have already remarked in the proof of proposition [ th : qualit ] , for sufficiently large and this directly implies the analyticity of in a neighborhood of infinity , as well as the existence of it inverse in the same neighborhood , since for ( we remark that coincides with the function in ) . 
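as a quick numerical aside before the argument resumes , the böttcher coordinate invoked above can be checked directly : for a map f(w) = w^2 (1 + O(w)) with a super - attracting fixed point at 0 and real 0 < w < 1 one may take phi(w) = lim_n f^n(w)^{1/2^n} ( positive roots ) and verify the functional equation phi(f(w)) = phi(w)^2 . the toy map f(w) = w^2/(1+w) is a hypothetical stand - in , not the conjugated map of the text .

```python
# Hedged numerical check of a Boettcher coordinate at a super-attracting
# fixed point.  The toy map f(w) = w^2 / (1 + w) = w^2 (1 + O(w)) is an
# illustrative assumption.
import math

def bottcher(w, n=30):
    """phi(w) = lim f^n(w)^(1/2^n), computed in logarithmic variables."""
    logx = math.log(w)
    for _ in range(n):
        x = math.exp(logx) if logx > -700.0 else 0.0   # avoid underflow
        logx = 2.0 * logx - math.log1p(x)              # log f(x) = 2 log x - log(1 + x)
    return math.exp(logx / 2.0 ** n)

def f(w):
    return w * w / (1.0 + w)

# the Boettcher functional equation should hold to near machine precision
for w in [0.1, 0.3, 0.6, 0.9]:
    print(f"w={w}: phi(f(w)) = {bottcher(f(w)):.12f}   phi(w)^2 = {bottcher(w) ** 2:.12f}")
```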
going ( backward ) through the conjugation that we have performed we see that for large and that solves the bttcher equation ( in the sense that it has the same property as in ) .it must therefore coincide with and this extends the domain of analyticity , in fact bi - analyticity , of .again , by going carefully through the backward conjugation we see that , so is proven .the fact that extends as a continuous function to follows from the analogous property for .point ( 1 ) of theorem [ th : bottcher ] follows from the general theory ( * ? ? ?* theorem 9.5 ) .in fact the bttcher function maps ( in a bi - holomorphic fashion ) to and .let us move to point ( 2 ) and let us start by remarking that therefore , in view of we need to control for and then small .but tells us that for every and therefore it is practical to set ( in particular for ) we claim the following : [ th : ell ] the map is a bijection from to itself and its inverse can be written as , with -periodic .moreover is analytic in the strip .finally for we have _ proof ._ we use the properties of ( or ) and .therefore we see that if we set we have and , given in the statement .thus we have to solve at least for small .we see therefore that it is a matter of inverting .first , observe that is invertible , because is invertible .moreover is ( real ) analytic , hence is too .then set , so that is real analytic and from we directly see that , since is -periodic , is -periodic ( that , is -periodic ) and , since is not constant , is not a constant either. therefore the statement is proven , except for the domain of analyticity . to this end note that the argument above was given by restricting to the real axis , but we do know that is analytic on the symmetric strip of half - width and therefore is analytic in the positive half - plane : all we need to know is the fact that it is invertible in this domain .but since is defined and invertible in the punctured unit disk , is defined and invertible in and , thanks to and , we see that also is invertible , at least if we restrict to the truncated cone of the points such that , for any choice of , and sufficiently small . once again , since is periodic , analyticity on and , for arbitrary and large , implies analyticity on the whole strip .let us complete the proof of ( 2 ) and for this let us go back to and use and to see that a direct consequence of lemma [ th : ell ] and of is that for sufficiently small and therefore , since and , for such values of , we have we now recall once again and apply lemma [ th : ell ] to obtain and we compute , from which we identify in terms of and .[ rem : bpcase]of course one can upgrade the proof we just completed to include proposition [ th : qualit ] , that is to deal with the asymptotic behavior of . this is straightforward , albeit a bit lengthy : we sketch the main steps and make some comments .first of all we have and we notice the analogy with and we write where it is now a matter of applying lemma [ th : ell ] : the net result is the following representation for the generating function of the harris random variable : where we recall that . 
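the inversion step in lemma [ th : ell ] , namely that the inverse of t -> t + ( small periodic ) is again the identity plus a periodic function , can be illustrated with a hedged stand - in perturbation ( much larger than the actual oscillation of the text ) :

```python
# A hedged illustration of the inversion step: if l(t) = t + q(t) with q small
# and 1-periodic, then l^{-1}(u) = u + Q(u) with Q again 1-periodic.  The
# perturbation q(t) = eps*sin(2*pi*t) is a stand-in, not the function of the text.
import math

EPS = 1e-3

def ell(t):
    return t + EPS * math.sin(2 * math.pi * t)

def ell_inv(u, iters=50):
    """Solve ell(t) = u by the contraction iteration t <- u - q(t)."""
    t = u
    for _ in range(iters):
        t = u - EPS * math.sin(2 * math.pi * t)
    return t

# Q(u) = ell_inv(u) - u should take the same value at u, u+1, u+2, ...
for u in [0.3, 1.3, 2.3, 10.3]:
    print(f"u = {u:5.1f}   Q(u) = {ell_inv(u) - u:+.10f}")
```

the fixed - point iteration converges because the perturbation has derivative smaller than one in absolute value ; the printed values coincide , exhibiting the claimed periodicity of the inverse .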
since for behaves like a constant times , one readily recovers ( let us remark that in fact is equivalent to ) .a detailed numerical approach to can be found in for , for , but without explicit error bounds .the ( two ) methods they employ however do allow an explicit control of the error , with a non - trivial amount of work .notably , by using what they call _ bttcher method _ , which is based on , one uses for , for which it is straightforward to control the error when one truncates the series , and one can easily set up an iterative procedure to get the taylor coefficients of with a control on the remainder by fixed point arguments .note that it is sufficient to control the error for ] , for some , from which we can extract the fourier coefficients of by controlled numerical integration . while in principlethis whole procedure is straightforward , in practice it is quite non - trivial given the fast decay of the coefficients and the fact that even the first coefficients are extremely small .[ rem : add ] once is precisely estimated over an interval of length the period it is of course known with the same precision over .the plot of the julia set in [ fig:1 ] is obtained by following the same principle : we will not perform explicit estimates for on and we content ourselves with remarking the somewhat surprising precision of such a procedure , see fig .[ fig:1 ] and its caption .the graph in fig .[ fig:1 ] has been obtained by keeping 250 terms in both the series for and , and by performing three times the _ backward iteration procedure _ that we explain just below . from wesee that the power series for has positive coefficients .one can obtain the first coefficients by setting , by computing the polynomial and by setting to zero the coefficients of the terms of degree smaller than .this determines . then we define via and use to obtain where is a polynomial of degree and , . with the notation } \vert q(x)\vert ] . by inverting the two polynomials that bound from below and above , and by taking the taylor expansion to order of these two expressions one directly recovers the power series for truncated at , with an explicit control on the rest. a performing way to improve this approximation of is the following : from we obtain where is concave , with slope at the origin and from this we get that if , with and a remainder like above , we get note that the new remainder is better both because it improves by a factor the estimate in the interval in which we have the estimate for ( with no a priori condition on the argument of ) and because it yields an explicit estimate on an interval that is times larger .we sum up this argument : if , with , for ] ( actually , even on a much larger set , since becomes much larger than for large ) .one starts by guessing the ( laurent ) series coefficients for , by using , and it is not difficult to see that where is analytic and is a polynomial of degree that contains only odd powers of .again by using we extract an equation for : where is a polynomial of degree , containing only even powers , with ; contains also only even powers and it is a polynomial of degree .then one can show that if one can exhibit and such that then for ] , then for ] , if , and one has an explicit estimate for in a larger interval . to sum up : if , with and for ] . by applying the arguments of the previous subsection for we have with for . 
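a hedged sketch of the coefficient - matching step of the ' böttcher method ' described above : the taylor coefficients of the solution phi(w) = w + c_2 w^2 + ... of phi(f(w)) = phi(w)^2 are determined order by order , here for the toy polynomial f(w) = w^2 + a w^3 with a = 0.5 ( an illustrative assumption ; no claim is made that this is the normalization used in the works cited ) .

```python
# Hedged sketch: solve phi(f(w)) = phi(w)^2 order by order for the Taylor
# coefficients of phi, for a toy polynomial f(w) = w^2 + a w^3.
import numpy as np

N = 20                                       # truncation order
a = 0.5
f = np.zeros(N + 1); f[2], f[3] = 1.0, a     # coefficients of f, index = power

def mul(p, q):
    """Product of two truncated power series (index = power)."""
    return np.convolve(p, q)[: N + 1]

def compose(p, q):
    """p(q(w)) truncated at order N, assuming q has no constant term."""
    out = np.zeros(N + 1)
    qk = np.zeros(N + 1); qk[0] = 1.0
    for pk in p:
        out += pk * qk
        qk = mul(qk, q)
    return out

phi = np.zeros(N + 1); phi[1] = 1.0
for k in range(2, N):
    lhs = compose(phi, f)                    # phi(f(w)); does not involve c_k
    rhs = mul(phi, phi)                      # phi(w)^2 with c_k = 0 so far
    phi[k] = (lhs[k + 1] - rhs[k + 1]) / 2.0 # match the coefficient of w^(k+1)

print(np.round(phi[:8], 6))
# sanity check: residual of the functional equation up to the truncation order
print(np.max(np.abs(compose(phi, f) - mul(phi, phi))))
```

the same truncated series can then be fed into a backward - iteration step of the kind described above to enlarge the domain on which phi is controlled , before extracting fourier coefficients by numerical integration .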
from these estimates we extract with for . moreover where for ] , so ] with an error of at most , where is the number of backward iterations employed , _ cf . _ . similarly , the error on in the range of values under consideration is uniformly bounded by . choosing makes the error on smaller than and setting also leads to an error on that is uniformly bounded by . this allows us to estimate the _ average _ value of and the first fourier coefficient with a precision of at least : and . at this point , if one wants to recover , one has to perform the inversion step in theorem [ th : bottcher](2 ) . it is not difficult to realize , by considering , that , given the small size of the oscillations , a good approximation of ( defined in theorem [ th : bottcher](2 ) ) is . of course a control of the error requires an attentive ( but elementary ) analysis . by carrying out this procedure explicitly in the case under consideration and choosing for example , and , we obtain and the first fourier coefficient is . g. g. thanks bernard derrida and mathieu merle for enlightening discussions . the research of o. c. was supported in part by nsf dms grant 1108794 . g. g. acknowledges the support of anr ( grant shepi ) , univ. paris diderot ( project scheps ) and the petronio fellowship fund at ias ( princeton ) .
oscillatory critical amplitudes have been repeatedly observed in hierarchical models and , in the cases that have been taken into consideration , these oscillations are so small as to be hardly detectable . hierarchical models are tightly related to iteration of maps and , in fact , very similar phenomena have been repeatedly reported in many fields of mathematics , like combinatorial enumerations and discrete branching processes . it is precisely in the context of branching processes with bounded offspring that t. harris , in 1948 , first set forth the possibility that the logarithm of the moment generating function of the rescaled population size , in the super - critical regime , does not grow near infinity as a pure power , but carries an oscillatory prefactor . these oscillations have been observed numerically only much later and , while their origin is clearly tied to the discrete character of the iteration , the size of their amplitude is not so well understood . the purpose of this note is to reconsider the issue for hierarchical models , and to do so in what is arguably the most elementary setting : the pinning model , which actually just boils down to iteration of polynomial maps ( and , notably , quadratic maps ) . in this note we show that the oscillatory critical amplitude for pinning models and the oscillating prefactor connected to the harris random variable coincide . moreover , we make explicit the link between these oscillatory functions and the geometry of the julia set of the map , thus making rigorous and quantitative some ideas set forth in .

2010 _ mathematics subject classification : 82b27 , 60j80 , 37f10 _

_ keywords : hierarchical models , iteration of polynomial maps , oscillatory critical amplitudes , harris random variable , geometry of julia set _
in the seventies , there was a flurry of activity in black hole physics which brought out an unexpected interplay between general relativity , quantum field theory and statistical mechanics .that analysis was carried out only in the semi - classical approximation , i.e. , either in the framework of lorentzian quantum field theories in curved space - times or by keeping just the leading order , zero - loop terms in euclidean quantum gravity .nonetheless , since it brought together the three pillars of fundamental physics , it is widely believed that these results capture an essential aspect of the more fundamental description of nature . for over twenty years, a concrete challenge to all candidate quantum theories of gravity has been to derive these results from first principles , without invoking semi - classical approximations .specifically , the early work is based on a somewhat ad - hoc mixture of classical and semi - classical ideas reminiscent of the bohr model of the atom and generally ignored the quantum nature of the gravitational field itself .for example , statistical mechanical parameters were associated with macroscopic black holes as follows .the laws of black hole mechanics were first derived in the framework of _ classical _ general relativity , without any reference to the planck s constant .it was then noted that they have a remarkable similarity with the laws of thermodynamics if one identifies a multiple of the surface gravity of the black hole with temperature and a corresponding multiple of the area of its horizon with entropy . however , simple dimensional considerations and thought experiments showed that the multiples must involve , making quantum considerations indispensable for a fundamental understanding of the relation between black hole mechanics and thermodynamics . subsequently , hawking s investigation of ( test ) quantum fields propagating on a black hole geometry showed that black holes emit thermal radiation at temperature .it therefore seemed natural to assume that black holes themselves are hot and their temperature is the same as .the similarity between the two sets of laws then naturally suggested that one associate an entropy with a black hole of area . while this procedure seems very reasonable, it does not provide a ` fundamental derivation ' of the thermodynamic parameters and .the challenge is to derive these formulas from first principles , i.e. , by regarding large black holes as statistical mechanical systems in a suitable quantum gravity framework .recall the situation in familiar statistical mechanical systems such as a gas , a magnet or a black body . to calculate their thermodynamic parameters such as entropy, one has to first identify the elementary building blocks that constitute the system . 
for a gas ,these are molecules ; for a magnet , elementary spins ; for the radiation field in a black body , photons .what are the analogous building blocks for black holes ?they can not be gravitons because the underlying space - times were assumed to be stationary .therefore , the elementary constituents must be non - perturbative in the terminology of local field theory .thus , to account for entropy from first principles within a candidate quantum gravity theory , one would have to : i ) isolate these constituents ; ii ) show that , for large black holes , the number of quantum states of these constituents goes as the exponential of the area of the event horizon ; and , iii ) account for the hawking radiation in terms of processes involving these constituents and matter quanta .these are difficult tasks , particularly because the very first step isolating the relevant constituents requires new conceptual as well as mathematical inputs .furthermore , in the semi - classical theory , thermodynamic properties have been associated not only with black holes but also with cosmological horizons .therefore , ideally , the framework has to be sufficiently general to encompass these diverse situations .it is only recently , more than twenty years after the initial flurry of activity , that detailed proposals have emerged .the more well - known of these comes from string theory where the relevant elementary constituents are associated with d - branes which lie outside the original perturbative sector of the theory .the purpose of this contribution is to summarize the ideas and results from another approach which emphasizes the quantum nature of geometry , using non - perturbative techniques from the very beginning . here , the elementary constituents are the quantum excitations of geometry itself and the hawking process now corresponds to the conversion of the quanta of geometry to quanta of matter . although the two approaches seem to be strikingly different from one another , as i will indicate , in a certain sense they are complementary .in the last section , i focussed on quantum issues. however , the status of _ classical _ black hole mechanics , which provided much of the inspiration in quantum considerations , has itself remained unsatisfactory in some ways .therefore , in a systematic approach , one has to revisit the classical theory before embarking on quantization .the zeroth and first laws of black hole mechanics refer to equilibrium situations and small departures therefrom .therefore , in this context , it is natural to focus on isolated black holes .however , in standard treatments , these are generally represented by _stationary _ solutions of field equations , i.e , solutions which admit a time - translation killing vector field _everywhere _ , not just in a small neighborhood of the black hole .while this simple idealization is a natural starting point , it seems to be overly restrictive .physically , it should be sufficient to impose boundary conditions at the horizon which ensure _ only the black hole itself is isolated_. 
that is , it should suffice to demand only that the intrinsic geometry of the horizon be time independent , whereas the geometry outside may be dynamical and admit gravitational and other radiation .indeed , we adopt a similar viewpoint in ordinary thermodynamics ; in the standard description of equilibrium configurations of systems such as a classical gas , one usually assumes that only the system under consideration is in equilibrium and stationary , not the whole world . for black holes , in realistic situationsone is typically interested in the final stages of collapse where the black hole is formed and has ` settled down ' or in situations in which an already formed black hole is isolated for the duration of the experiment ( see figure 1 ) . in such situations , there is likely to be gravitational radiation and non - stationary matter far away from the black hole , whence the space - time as a whole is not expected to be stationary .surely , black hole mechanics should incorporate in such situations . of the horizon at late timesis isolated .the space - time of interest is the triangular region bounded by , and a partial cauchy slice .( b)space - time diagram of a black hole which is initially in equilibrium , absorbs a small amount of radiation , and again settles down to equilibrium .portions and of the horizon are isolated.,title="fig:",height=226 ] + ( a ) of the horizon at late times is isolated .the space - time of interest is the triangular region bounded by , and a partial cauchy slice .( b)space - time diagram of a black hole which is initially in equilibrium , absorbs a small amount of radiation , and again settles down to equilibrium .portions and of the horizon are isolated.,title="fig:",width=226 ] + ( b ) a second limitation of the standard framework lies in its dependence on _ event _ horizons which can only be constructed retroactively , after knowing the _ complete _ evolution of space - time .consider for example , figure 2 in which a spherical star of mass undergoes a gravitational collapse .the singularity is hidden inside the null surface at which is foliated by a family of marginally trapped surfaces and would be a part of the event horizon if nothing further happens .suppose instead , after a very long time , a thin spherical shell of mass collapses. then would not be a part of the event horizon which would actually lie slightly outside and coincide with the surface in distant future .on physical grounds , it seems unreasonable to exclude a priori from thermodynamical considerations .surely one should be able to establish the standard laws of laws of mechanics not only for the event horizon but also for .another example is provided by cosmological horizons in de sitter space - time . in this case, there are no singularities or black - hole event horizons . on the other hand ,semi - classical considerations enable one to assign entropy and temperature to these horizons as well .this suggests the notion of event horizons is too restrictive for thermodynamical analogies .we will see that this is indeed the case ; as far as equilibrium properties are concerned , the notion of event horizons can be replaced by a more general , quasi - local notion of ` isolated horizons ' for which the familiar laws continue to hold . the surface in figure 2 as well as the cosmological horizons in de sitter space - times are examples of isolated horizons. 
undergoes collapse .later , a spherical shell of mass falls into the resulting black hole .while and are both isolated horizons , only is part of the event horizon.,height=151 ] at first sight , it may appear that only a small extension of the standard framework , based on stationary event horizons , is needed to overcome the limitations discussed above .however , this is not the case .for example , in the stationary context , one identifies the black - hole mass with the adm mass defined at spatial infinity . in the presence of radiation, this simple strategy is no longer viable since radiation fields well outside the horizon also contribute to the adm mass .hence , to formulate the first law , a new definition of the black hole mass is needed .similarly , in the absence of a global killing field , the notion of surface gravity has to be extended in a non - trivial fashion . indeed ,even if space - time happens to be static in a neighborhood of the horizon already a stronger condition than contemplated above the notion of surface gravity is ambiguous because the standard expression fails to be invariant under constant rescalings of the killing field . when a _global _ killing field exists , the ambiguity is removed by requiring the killing field be unit at _infinity_. thus , contrary to intuitive expectation ,the standard notion of the surface gravity of a stationary black hole refers not just to the structure at the horizon , but also to infinity .this ` normalization problem ' in the definition of the surface gravity seems especially difficult in the case of cosmological horizons in ( lorentzian ) space - times whose cauchy surfaces are compact .apart from these conceptual problems , a host of technical issues must also be resolved . in einstein - maxwell theory ,the space of stationary black hole solutions is three dimensional whereas the space of solutions admitting isolated horizons is _infinite_-dimensional since these solutions admit radiation near infinity . as a result, new techniques have to be used and these involve some functional analytic subtleties .this set of issues has a direct bearing on quantization as well . for , in a systematic approach, one would first extract an appropriate sector of the theory in which space - time geometries satisfy suitable conditions at interior boundaries representing horizons , then introduce a well - defined action principle tailored to these boundary conditions , and , finally , use the resulting lagrangian or hamiltonian frameworks as points of departure for constructing the quantum theory .if one insists on using _ event _ horizons , these steps are difficult to carry out because the resulting boundary conditions do not translate in to ( quasi-)local restrictions on fields . indeed , for event horizon boundaries, there is _ no _ action principle available in the literature .the restriction to _ globally _ stationary space - times causes additional difficulties . for , by no hair theorems ,the space of stationary solutions admitting event horizons is finite dimensional and quantization of this ` mini - superspace ' would ignore all field theoretic effects _ by fiat_. 
indeed , most treatments of black hole mechanics are based on differential geometric identities and field equations , and are not at all concerned with such issues related to quantization .thus , the first challenge is to find a new framework which achieves , in a single stroke , three goals : i ) it overcomes the two limitations of black hole mechanics by finding a better substitute for stationary event horizons ; ii ) generalizes laws of black hole mechanics to the new , more physical paradigm ; and , iii ) leads to a well - defined action principle and hamiltonian framework which can serve as spring - boards for quantization .the second challenge is then to : i ) carry out quantization non - perturbatively ; ii ) obtain a quantum description of the horizon geometry ; and , iii ) account for the the horizon entropy statistical mechanically by counting the underlying micro - states .as discussed in the next section , these goals have been met for non - rotating isolated horizons .in this section , i will sketch the main ideas and results on the classical and quantum physics of isolated horizons and provide a guide to the literature where details can be found .the detailed boundary conditions defining non - rotating isolated horizons were introduced in .basically , an isolated horizon is a null 3-surface , topologically , foliated by a family of marginally trapped 2-spheres .denote the normal direction field to by ] .there are additional conditions on the newman - penrose spin coefficients associated with ] is assumed to be expansion - free . physically ,as explained above , this restriction captures the idea that the horizon is ` isolated ' , i.e. , we are dealing with an equilibrium situation .the restriction also gives rise to some mathematical simplifications which , in turn , make it possible to introduce a well - defined action principle and hamiltonian framework . as we will see below, these structures play an essential role in the proof of the generalized first law and in passage to quantization .let me begin by placing the present work on mechanics of isolated horizons in the context of other treatments in the literature .the first treatments of the zeroth and first laws were given by bardeen , carter and hawking for black holes surrounded by rings of perfect fluid and this treatment was subsequently generalized to include other matter sources . in all these works ,one restricted oneself to globally stationary space - times admitting event horizons and considered transitions from one such space - time to a nearby one .another approach , based on noether charges , was introduced by wald and collaborators .here , one again considers stationary event horizons but allows the variations to be arbitrary .furthermore , this method is applicable not only for general relativity but for stationary black holes in a large class of theories . in both approaches ,the surface gravity and the mass of the hole were defined using the global killing field and referred to structure at infinity .the zeroth and first laws were generalized to arbitrary , non - rotating isolated horizons in the einstein - maxwell theory in and dilatonic couplings were incorporated in . in this work , the surface gravity andthe mass of the isolated horizon refer only to structures _local _ to . ,electric and magnetic charges and , dilatonic charge , cosmological constant and the dilatonic coupling parameter . of these, and are defined _ at infinity_. 
in the generalized context of isolated horizons , on the other hand , one must use parameters that are intrinsic to .apriori , it is not obvious that this can be done .it turns out that we can trade with the area of the horizon and with the value of the dilaton field on .boundary conditions ensure that is a constant . ] as mentioned in section [ s3.1 ] , the space of solutions admitting isolated horizons is infinite dimensional and static solutions constitute only a finite dimensional sub - space of .let us restrict ourselves to the non - rotating case for comparison .then , in treatments based on the bardeen - carter - hawking approach , one restricts oneself only to and variations tangential to . in the wald approach, one again restricts oneself to points of but the variations need not be tangential to . in the present approach , on the other hand , the laws hold at _ any _ point of and _ any _ tangent vector at that point .however , so far , our results pertain only to _ non - rotating _horizons in a restricted class of theories .the key ideas in the present work can be summarized as follows .it is clear from the setup that surface gravity should be related to the acceleration of ] .now , the shear , the twist , and the expansion of the direction field ] . having a preferred at our disposal , using the standard normalization we can then select an from the equivalence class ] and ] , the form of the ricci tensor component dictated by our boundary conditions on the matter stress - energy , and our ` normalization condition ' on imply that is also constant along the integral curves of .hence is constant on any isolated horizon . to summarize , even though our boundary conditions allow for the presence of radiation arbitrarily close to , they successfully extract enough structure intrinsic to the horizons of static black holes to ensure the validity of the zeroth law .our derivation brings out the fact that the zeroth law is really local to the horizon : degrees of freedom of the isolated horizon ` decouple ' from excitations present elsewhere in space - time . to establish the first law ,one must first introduce the notion of mass of the isolated horizon .the idea is to define using the hamiltonian framework . for this, one needs a well - defined action principle .fortunately , even though the boundary conditions were designed only to capture the notion of an isolated horizon in a quasi - local fashion , they turn out to be well - suited for the variational principle .however , just as one must add a suitable boundary term at infinity to the einstein - hilbert action to make it differentiable in the asymptotically flat context , we must now add another boundary term at .somewhat surprisingly , the new boundary term turns out to be the well - known chern - simons action ( for the self - dual connection ) .this specific form is not important to classical considerations .however , it plays a key role in the quantization procedure .the boundary term at is different from that at infinity .therefore one can not simultaneously absorb both terms in the bulk integral using stokes theorem . finally , to obtain a well - defined variational principle for the maxwell part of the action, one needs a partial gauge fixing at .one can follow a procedure similar to the one given above for fixing the rescaling freedom in and .it turns out that , not only does this strategy make the maxwell action differentiable , but it also uniquely fixes the scalar potential at the horizon . 
having the action at one s disposal, one can pass to the hamiltonian framework .now , it turns out that the symplectic structure has , in addition to the standard bulk term , a surface term at .the surface term is inherited from the chern - simons term in the action and is therefore precisely the chern - simons symplectic structure with a specific coefficient ( i.e. , in the language of the chern - simons theory , a specific value of the ` level ' ) .the presence of a surface term in the symplectic structure is somewhat unusual ; for example , the boundary term at infinity in the action does _ not _ induce a boundary term in the symplectic structure .the hamiltonian consists of a bulk integral and two surface integrals , one at infinity and one at .the presence of two surface integrals is not surprising ; for example one encounters it even in the absence of an internal boundary , if the space - times under consideration have two asymptotic regions . as usual , the bulk term is a linear combination of constraints and the boundary term at infinity is the adm energy . using several examples as motivation , we interpret the surface integral at the horizon as the horizon mass .this interpretation is supported by the following result : if the isolated horizon extends to future time - like infinity , under suitable assumptions one can show that is equal to the future limit , along , of the bondi mass .finally , note that is _ not _ a fundamental , independent attribute of the isolated horizon ; it is a function of the area and charges , , which are regarded as the fundamental parameters .thus , we can now assign to any isolated horizon , an area , a surface gravity , an electric potential and a mass .the electric charge can be defined using the electro - magnetic and dilatonic fields field _ at _ .all quantities are defined in terms of the local structure at .therefore , one can now ask : if one moves from _ any _ space - time in to _ any _ nearby space - time through a variation , how do these quantities vary ?an explicit calculation shows : ( for simplicity , i have restricted myself here to the einstein - maxwell case without dilaton . ) thus , the first law of black hole mechanics naturally generalizes to isolated horizons .( as usual , the magnetic charge can be incorporated via the standard duality rotation . )this result provides additional support for our strategy of defining , and . in static space - times, the mass of the isolated horizon coincides with the adm mass defined at infinity . in general, is the difference between and the ` radiative energy ' of space - time . however , as in the static case , continues to include the energy in the ` coulombic ' fields i.e ., the ` hair' associated with the charges of the horizon , even though it is defined locally at .this is a subtle property but absolutely essential if the first law is to hold in the form given above . to my knowledge , none of the quasi - local definitions of mass shares this property with .finally , isolated horizons provide an appropriate framework for discussing the ` physical process version ' of the first law for processes in which the charge of the black hole changes . the standard strategy of using the adm mass in place of appears to run in to difficulties and , as far as i am aware , this issue was never discussed in the literature in the usual context of context of static event horizons . in this sub - section ,i will make a detour to introduce the basic ideas we need from quantum geometry . 
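for reference , a hedged reconstruction of the displayed variational identity referred to above ( " an explicit calculation shows : " ) : in the einstein - maxwell case without dilaton , the first law for a non - rotating isolated horizon is commonly written in the form below ; conventions for the prefactor vary , so this should be read as the standard form rather than a verbatim quote of the display .

```latex
% Standard Einstein--Maxwell form (no dilaton); a_\Delta = horizon area,
% \kappa = surface gravity, \Phi = electric potential at the horizon,
% Q = electric charge, M_\Delta = horizon mass.
\delta M_\Delta \;=\; \frac{\kappa}{8\pi G}\,\delta a_\Delta \;+\; \Phi\,\delta Q
```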
for simplicity, i will ignore the presence of boundaries and focus just on the structure in the bulk .there is a common expectation that the continuum picture of space - time , used in macroscopic physics , would break down at the planck scale .this expectation has been shown to be correct within a non - perturbative , background independent approach to quantum gravity ( see and references therein ) .the approach is background independent in the sense that , at the fundamental level , there is neither a classical metric nor any other field to perturb around .one only has a bare manifold and _ all _ fields , whether they represent geometry or matter , are quantum mechanical from the beginning .because of the subject matter now under consideration , i will focus on geometry .quantum mechanics of geometry has been developed systematically over the last three years and further exploration continues .the emerging theory is expected to play the same role in quantum gravity that differential geometry plays in classical gravity .that is , quantum geometry is not tied to a specific gravitational theory .rather , it provides a kinematic framework or a language to formulate dynamics in a large class of theories , including general relativity and supergravity . in this framework ,the fundamental excitations of gravity / geometry are one - dimensional , rather like ` polymers ' and the continuum picture arises only as an approximation involving coarse - graining on semi - classical states .the one dimensional excitations can be thought of as flux lines of area .roughly , each line assigns to a surface element it crosses one planck unit of area .more precisely , the area assigned to a surface is obtained by algebraic operations ( involving group - representation theory ) at points where the flux lines intersect the surface . as is usual in quantum mechanics , quantum states of geometryare represented by elements of a hilbert space .i will denote it by .the basic object for spatial riemannian geometry continues to be the triad , but now represented by an operator(-valued distribution ) on .all other geometric quantities such as areas of surfaces and volumes of regions are constructed from the triad and represented by self - adjoint operators on .the eigenvalues of all geometric operators are discrete ; geometry is thus quantized in the same sense that the energy and angular momentum of the hydrogen atom are quantized .there is however , one subtlety : there is a one - parameter ambiguity in this non - perturbative quantization .the parameter is positive , labeled and called the immirzi parameter .this ambiguity is similar to the ambiguity in the quantization of yang - mills theories .for all values of , one obtains the same classical theory , expressed in different canonical variables .however , quantization leads to a one - parameter family of _ inequivalent _ representations of the basic operator algebra .in particular , in the sector labeled by the spectra of the triad and hence , all geometric operators depend on through an overall multiplicative factor . 
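a hedged sketch of the multiplicative gamma - dependence just mentioned : the commonly quoted main series of area eigenvalues is A = 8 pi gamma l_P^2 sum_i sqrt(j_i (j_i + 1)) over punctures carrying half - integer spins j_i ; the precise normalization is an assumption of this illustration , but the overall scaling with gamma is the point .

```python
# Hedged sketch of the area spectrum and its Immirzi-parameter dependence.
# The main series A = 8*pi*gamma*l_P^2 * sum_i sqrt(j_i (j_i + 1)) over
# punctures with half-integer spins j_i is the commonly quoted form; the
# exact normalization is an assumption of this illustration.
import math

L_P2 = 1.0                       # work in Planck units, l_P^2 = 1

def area(spins, gamma):
    return 8 * math.pi * gamma * L_P2 * sum(math.sqrt(j * (j + 1)) for j in spins)

# the "area quantum" (smallest nonzero eigenvalue) comes from a single j = 1/2
for gamma in [0.1, math.log(2) / (math.pi * math.sqrt(3)), 1.0]:
    print(f"gamma = {gamma:.4f}   area quantum = {area([0.5], gamma):.4f} l_P^2")
```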
therefore , while the qualitative features of quantum geometry are the same in _ all _ sectors , the precise eigenvalues of geometric operators vary from one sector to another .the -dependence itself is simple effectively , newton s constant is replaced by in the -sector .nonetheless , to obtain unique predictions , it must be eliminated and this requires an additional input .note however that since the ambiguity involves a single parameter , as with the ambiguity in qcd , one judiciously chosen experiment would suffice to eliminate it .thus , for example , if we could measure the quantum of area , i.e. , smallest non - zero value that area of any surface can have , we would know which value of is realized in nature .any further experiment would then be a test of the theory .of course , it is not obvious how to devise a feasible experiment to measure the area quantum directly .however , we will see that it is possible to use black hole thermodynamics to introduce suitable thought experiments .one of them can determine the value of and the other can then serve as consistency checks . ideas introduced in the last three sub - sections were combined and further developed to systematically analyze the quantum geometry of isolated horizons and calculate their statistical mechanical entropy in .( for earlier work , see . ) in this discussion , one is interested in space - times with an isolated horizon with _ fixed _values and of the intrinsic horizon parameters , the area , the electric charge , and the value of the dilaton field .the presence of an isolated horizon manifests itself in the classical theory through boundary conditions .as usual , we can use some of the boundary conditions to eliminate certain gauge degrees of freedom at .the remaining degrees of freedom are coded in an abelian connection defined intrinsically on . is constructed from the self - dual spin connection in the bulk .it is interesting to note that there are _ no _ surface degrees of freedom associated with matter : given the intrinsic parameters of the horizon , boundary conditions imply that matter fields defined intrinsically on can be completely expressed in terms of geometrical ( i.e. , gravitational ) fields at .one can also see this feature in the symplectic structure . while the gravitational symplectic structure acquires a surface term at , matter symplectic structures do not .we will see that this feature provides a simple explanation of the fact that , among the set of intrinsic parameters natural to isolated horizons , entropy depends only on area . of particular interest to the present hamiltonian approachis the pull - back of to the 2-sphere ( orthogonal to and ) at which the space - like 3-surfaces used in the phase space construction intersect .( see figure 1(a ) . )this pull - back which i will also denote by for simplicity is precisely the spin - connection of the 2-sphere . not surprisingly , the chern - simons symplectic structure for the non - abelian self - dual connection that i referred to in section [ s3.2 ] can be re - expressed in terms of .the result is unexpectedly simple : the surface term in the total symplectic structure is now just the chern - simons symplectic structure for the _ abelian _ connection ! 
the only remaining boundary condition relates the curvature of to the triad vectors .this condition is taken over as an operator equation .thus , in the quantum theory , neither the intrinsic geometry nor the curvature of the horizon are frozen ; neither is a classical field .each is allowed to undergo quantum fluctuations but because of the operator equation relating them , they have to fluctuate in tandem . to obtain the quantum description in presence of isolated horizons , therefore ,one begins with a fiducial hilbert space where is the hilbert space associated with the bulk polymer geometry and is the chern - simons hilbert space for the connection . by continuity .in quantum theory , by contrast , the measure is concentrated on generalized fields which can be arbitrarily discontinuous , whence surface states are no longer determined by bulk states .a compatibility relation does exist but it is introduced by the quantum boundary condition .it ensures that the total state is invariant under the permissible internal rotations of triads . ]the quantum boundary condition says that only those states in are allowed for which there is a precise intertwining between the bulk and the surface parts .however , because the required intertwining is ` rigid ' , apriori it is not clear that the quantum boundary conditions would admit _ any _ solutions at all . for solutions to exist, there has to be a very delicate matching between certain quantities on calculated from the bulk quantum geometry and certain quantities on calculated from the chern - simons theory .the precise numerical coefficients in the surface calculation depend on the numerical factor in front of the surface term in the symplectic structure ( i.e. , on the chern - simons level ) which is itself determined in the classical theory by the coefficient in front of the einstein - hilbert action and our classical boundary conditions .thus , the existence of a coherent quantum theory of isolated horizons requires that the three corner stones classical general relativity , quantum mechanics of geometry and chern - simons theory be united harmoniously .not only should the three conceptual frameworks fit together seamlessly but certain _ numerical coefficients _ , calculated independently within each framework , have to match delicately .fortunately , these delicate constraints are met and the quantum boundary conditions admit a sufficient number of solutions . 
because we have fixed the intrinsic horizon parameters ,is is natural to construct a micro - canonical ensemble from eigenstates of the corresponding operators with eigenvalues in the range where is very small compared to the fixed value of the intrinsic parameters .since there are no surface degrees of freedom associated with matter fields , let us focus on area , the only gravitational parameter available to us .then , we only have to consider those states in whose polymer excitations intersect in such a way that they endow it with an area in the range where is of the order of ( with , the planck length ) .denote by the set of punctures that any one of these polymer states makes on , each puncture being labeled by the eigenvalue of the area operator at that puncture .given such a bulk state , the quantum boundary condition tells us that only those chern - simons surface states are allowed for which the curvature is concentrated at punctures and the range of allowed value of the curvature at each puncture is dictated by the area eigenvalue at that puncture .thus , for each , the quantum boundary condition picks out a sub - space of the surface hilbert space .thus , the quantum geometry of the isolated horizon is effectively described by states in as runs over all possible punctures and area - labels at each puncture , compatible with the requirement that the total area assigned to lie in the given range .-th polymer excitation of the bulk geometry carries a -integer label . upon puncturing the horizon 2-sphere , it induces planck units of area . at each puncture , in the intrinsic geometry of , there is a deficit angle of , where is a -integer in the interval ] and the ` level ' of the chern - simons theory .( b)magnified view of a puncture .the holonomy of the connection around a loop surrounding any puncture determines the deficit angle at .each deficit angle is quantized and they add up to .,title="fig:",width=151 ] + ( b ) one can visualize this quantum geometry as follows . given any one state in , the connections are flat everywhere except at the punctures and the holonomy around each puncture is fixed . using the classical interpretation of as the metric compatible spin connection on conclude that , in quantum theory , the intrinsic geometry of the horizon is flat except at the punctures . at each puncture, there is a deficit angle , whose value is determined by the holonomy of around that puncture . since each puncture corresponds to a polymer excitation in the bulk , polymer lines can be thought of as ` pulling ' on the horizon , thereby producing deficit angles in an otherwise flat geometry ( see figure [ fig3 ] ) .each deficit angle is quantized and the angles add up to as in a discretized model of a 2-sphere geometry .thus , the quantum geometry of an isolated horizon is quite different from its smooth classical geometry .in addition , of course , each polymer line endows the horizon with a small amount of area and these area elements add up to provide the horizon with total area in the range .thus , one can intuitively picture the quantum horizon as the surface of a large , water - filled balloon which is suspended with a very large number of wires , each exerting a small tug on the surface at the point of contact and giving rise to a ` conical singularity ' in the geometry .finally , one can calculate the entropy of the quantum micro - canonical ensemble .we are not interested in the _ full _ hilbert space since the ` bulk - part ' includes , e.g. 
, states of gravitational radiation and matter fields far away from .rather , we wish to consider only the states of the isolated horizon itself .therefore , we are led to trace over the ` bulk states ' to construct a density matrix describing a maximum - entropy mixture of surface states for which the intrinsic parameters lie in the given range .the statistical mechanical entropy is then given by .as usual , the trace can be obtained simply by counting states , i.e. , by computing the dimension of .we have : thus , the number of micro - states does go exponentially as area .this is a non - trivial result .for example if , as in the early treatments , one ignores boundary conditions and the chern - simons term in the symplectic structure and does a simple minded counting , one finds that the exponent in is proportional to . however , our numerical coefficient in front of the exponent depends on the immirzi parameter .the appearance of can be traced back directly to the fact that , in the -sector of the theory , the area eigenvalues are proportional to .thus , because of the quantization ambiguity , the -dependence of is inevitable .we can now adopt the following ` phenomenological ' viewpoint . in the infinite dimensional space ,one can fix one space - time admitting isolated horizon , say the schwarzschild space - time with mass , ( or , the de sitter space - time with the cosmological constant ) . for agreement with semi - classical considerations , in these cases, entropy should be given by which can happen only in the sector of the theory .the theory is now completely determined and we can go ahead and calculate the entropy of any other isolated horizon in _ this _ theory .clearly , we obtain : for _ all _ isolated horizons .furthermore , in this -sector , the statistical mechanical temperature of any isolated horizon is given by hawking s semi - classical value .thus , we can do one thought experiment observe the temperature of a large black black hole from far away to eliminate the immirzi ambiguity and fix the theory . this theory then predicts the correct entropy and temperature for all isolated horizons in with .the technical reason behind this universality is trivial .however , the conceptual argument is not because it is quite non - trivial that depends only on the area and not on values of other charges .furthermore , the space is infinite dimensional and it is not apriori obvious that one should be able to give a statistical mechanical account of entropy of _ all _ isolated horizons in one go .indeed , values of fields such as and can be vary from one isolated horizon to another even when they have same intrinsic parameters .this freedom could well have introduced obstructions , making quantization and entropy calculation impossible . that this does not happenis related to but independent of the fact that this feature did not prevent us from extending the laws of mechanics from static event horizons to general isolated horizons .i will conclude this sub - section with two remarks .\i ) in this approach , we began with the sector of general relativity admitting isolated horizons and then quantized that sector . therefore , ours is an ` effective ' description . in a fundamental description , one would begin with the full quantum theory and isolate in it the sector corresponding to quantum horizons . 
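to make the exponential area dependence and the role of the immirzi parameter concrete , here is a minimal counting sketch ; it keeps only spin - 1/2 punctures , each contributing two surface states and an area 4 pi gamma sqrt(3) in planck units , and it ignores the chern - simons level constraint , so it is an assumption - laden toy rather than the full counting described above .

```python
# A minimal counting toy (an assumption-laden sketch, not the full counting):
# keep only spin-1/2 punctures, each carrying 2 surface states and an area
# 4*pi*gamma*sqrt(3) in Planck units.  The number of microstates for n
# punctures is then 2^n, so the entropy grows linearly in area with a
# gamma-dependent slope; requiring S = A/4 singles out one value of gamma.
import math

def entropy(A, gamma):
    a_quantum = 4 * math.pi * gamma * math.sqrt(3)   # area per j = 1/2 puncture
    n = int(A // a_quantum)                          # punctures fitting into A
    return n * math.log(2)                           # ln(2^n)

A = 1.0e4                                            # a large horizon, in l_P^2
for gamma in [0.05, 0.1, 0.2]:
    print(f"gamma = {gamma:.2f}   S/A = {entropy(A, gamma) / A:.4f}")

gamma_star = math.log(2) / (math.pi * math.sqrt(3))  # value giving S/A -> 1/4 here
print(f"gamma giving S = A/4 (in this toy): {gamma_star:.4f},"
      f"  S/A = {entropy(A, gamma_star) / A:.4f}")
```

with these simplifications S/A tends to log 2 / (4 pi gamma sqrt 3) , so demanding S = A/4 fixes a single value of gamma , in line with the phenomenological viewpoint described above .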
since the notion of horizon is deeply tied to classical geometry , at the present stage of our understanding , this goal appears to be out of reach in all approaches to quantum gravity .however , for thermodynamic considerations of large horizons , the effective description should be sufficient .\ii ) the notion of entropy used here has two important features .first , in this framework , the notion is not an abstract property of the space - time as a whole but depends on the division of space - time in to an exterior and an interior .operationally , it is tied to the class of observers who live in the exterior region for whom the isolated horizon is a _ physical _ boundary that separates the part of the space - time they can access from the part they can not .( this is in sharp contrast to early work which focussed on the interior . )this point is especially transparent in the case of cosmological horizons in de sitter space - time since that space - time does not admit an invariantly defined division .the second feature is that , although there is ` observer dependence ' in this sense , the entropy does _ not _ refer to the degrees of freedom in the interior .indeed , nowhere in our calculation did we analyze the states associated with the interior .rather , our entropy refers to the micro - states of the boundary itself which are compatible with the macroscopic constraints on the area and charges of the horizon ; it counts the physical micro - states which can interact with the outside world , not disconnected from it .perhaps the most pleasing aspect of this analysis is the existence of a single framework to encompass diverse ideas at the interface of general relativity , quantum theory and statistical mechanics . in the classical domain ,this framework generalizes laws of black hole mechanics to physically more realistic situations . at the quantum level, it provides a detailed description of the quantum geometry of horizons and leads to a statistical mechanical calculation of entropy . in both domains , the notion of isolated horizons provides an unifying arena enabling us to handle different types of situations e.g . , black holes and cosmological horizons in a single stroke . in the classical theory ,the same line of reasoning allows one to establish the zeroth and first laws for _ all _ isolated horizons .similarly , in the quantum theory , a single procedure leads one to quantum geometry and entropy of _ all _ isolated horizons .by contrast , in other approaches , fully quantum mechanical treatments seem to be available only for stationary black holes .indeed , to my knowledge , even in the static case , a complete statistical mechanical calculation of the entropy of cosmological horizons has not been available .finally , our extension of the standard _ killing _ horizon framework sheds new light on a number of issues , particularly the notion of mass of associated to an horizon and the physical process version of the first law . however , the framework presented here is far from being complete and provides promising avenues for future work .first , while some of the motivation behind our approach is similar to the considerations that led to the interesting series of papers by brown and york , not much is known about the relation between the two frameworks. 
it would be interesting to explore this relation , and more generally , to relate the isolated horizon framework to the semi - classical ideas based on euclidean gravity .second , while the understanding of the micro - states of an isolated horizon is fairly deep by now , work on a quantum gravity derivation of the hawking radiation is still in a preliminary stage . using general arguments based on einstein s a and b coefficients and the known micro - states of an isolated horizon, one can argue that the envelope of the line spectrum emitted by a black hole should be thermal . however , further work is necessary to make sure that the details are correct . for the laws of mechanics and the entropy calculation ,the obvious open problem is the extension to incorporate non - zero angular momentum .recently , jerzy lewandowski has performed an exhaustive analysis of the geometrical structure of general isolated horizons and streamlined the necessary background material .further recent work in collaboration with him and with chris beetle and steve fairhurst has led to a generalization of boundary conditions to incorporate rotation ( as well as distortion in absence of rotation ) and a proof of the zeroth law in the general context .construction of the corresponding hamiltonian framework is now under way .the extension of the entropy calculation , on the other hand , may turn out to be trickier for it may well require a new technical insight . on a long range ,the outstanding challenge is to obtain a deeper understanding of the immirzi ambiguity and the associated issue of renormalization of newton s constant . for any value of ,one obtains the ` correct ' classical limit .however , as far as black hole thermodynamics is concerned , it is only for that one seems to obtain agreement with quantum field theory in curved space - times .is this value of robust ?can one make further semi - classical checks ?a pre - requisite for this investigation is a better handle on the issue of semi - classical states .a major effort will soon be devoted to this issue .let me conclude with a comparison between the entropy calculation in this approach and those performed in string theory .first , there are some obvious differences . in the present approach ,one begins with the sector of the classical theory containing space - times with isolated horizons and then proceeds with quantization .consequently , one can keep track of the physical , curved geometry .in particular , as required by physical considerations , the micro - states which account for entropy can interact with the physical exterior of the black hole . 
in string theory , by contrast , actual calculations are generally performed in flat space and non - renormalization arguments and/or duality conjectures are then invoked to argue that the results so obtained refer to macroscopic black holes . therefore , relation to the curved space geometry and physical meaning of the degrees of freedom which account for entropy is rather obscure . more generally , lack of direct contact with physical space - time can also lead to practical difficulties while dealing with other macroscopic situations . for example , in string theory , it may be difficult to account for the entropy normally associated with de sitter horizons . on the other hand , in the study of genuinely quantum , planck size black holes , this ` distance ' from the curved space - time geometry may turn out to be a blessing , as classical curved geometry will not be an appropriate tool to discuss physics in these situations . in particular , a description which is far removed from space - time pictures may be better suited in the discussion of the last stages of hawking evaporation and the associated issue of ` information loss ' . the calculations based on string theory have been carried out in a number of space - time dimensions while the approach presented here is directly applicable only to four dimensions . an extension of the underlying non - perturbative framework to higher dimensions was recently proposed by freidel , krasnov and puzzio but a systematic development of quantum geometry has not yet been undertaken . also , our quantization procedure has an inherent -ambiguity which trickles down to the entropy calculation . by contrast , calculations in string theory are free of this problem . on the other hand , most detailed calculations in string theory have been carried out only for ( a sub - class of ) extremal or near - extremal black holes . while these black holes are especially simple to deal with mathematically , unfortunately , they are not of direct relevance to astrophysics , i.e. , to the physical world we live in . more recently , using the maldacena conjecture , stringy calculations have been extended to non - extremal black holes with , where is the schwarzschild radius . however , the numerical coefficient in front of the entropy turns out to be incorrect and it is not yet clear whether inclusion of non - abelian interactions , which are ignored in the current calculations , would restore the numerical coefficient to its correct value . furthermore , it appears that a qualitatively new strategy may be needed to go beyond the approximation . finally , as in other results based on the maldacena conjecture , the underlying boundary conditions at infinity are quite unphysical since the radius of the compactified dimensions is required to equal the cosmological radius even near infinity . hence the relevance of these mathematically striking results to our physical world remains unclear . in the current approach , by contrast , ordinary , astrophysical black holes in the physical , four space - time dimensions are included from the beginning . in spite of these differences , there are some striking similarities . our polymer excitations resemble strings . our horizon looks like a ` gravitational 2-brane ' . our polymer excitations ending on the horizon , depicted in figure [ fig3 ] , closely resemble strings with end points on a membrane .
as in string theory ,our ` 2-brane ' carries a natural gauge field .furthermore , the horizon degrees of freedom arise from this gauge field .these similarities seem astonishing .however , a closer look brings out a number of differences as well .in particular , being horizon , our ` 2-brane ' has a direct interpretation in terms of the curved _ space - time geometry _ and our connection is the _ gravitational _ spin - connection on the horizon .nonetheless , it may well be that , when quantum gravity is understood at a deeper level , it will reveal that the striking similarities are not accidental , i.e. , that the two descriptions are in fact closely related .* acknowledgments : * the material presented in this report is based on joint work with john baez , chris beetle , alex corichi , steve fairhurst and especially kirill krasnov .i am grateful to them for collaboration and countless discussions .special thanks are due to chris beetle for his help with figures .i would like to thank jerzy lewandowski for sharing his numerous insights on the isolated horizon boundary conditions .i have also profited from comments made by brandon carter , piotr chrusciel , helmut friedrich , sean hayward , gary horowitz , ted jacobson , don marolf , jorge pullin , istavan racz , oscar reula , carlo rovelli , bernd schmidt , daniel sudarsky , thomas thiemann and robert wald .this work was supported in part by the nsf grants phy94 - 07194 , phy95 - 14240 , int97 - 22514 and by the eberly research funds of penn state .a. ashtekar , k. krasnov .quantum geometry and black holes , in _ black holes , gravitational radiation and the universe _b. bhawal and b.r .iyer , kluwer , dordrecht , 149 - 170 ( 1998 ) ; also available as gr - qc/9804039 .j. baez , diffeomorphism invariant generalized measures on the space of connections modulo gauge transformations , in _ the proceedings of the conference on quantum topology _d. yetter , world scientific , singapore , 1994 .
|
the arena normally used in black hole thermodynamics was recently generalized to incorporate a broad class of physically interesting situations . the key idea is to replace the notion of stationary event horizons by that of ` isolated horizons ' . unlike event horizons , isolated horizons can be located in a space - time _ quasi - locally _ . furthermore , they need not be killing horizons . in particular , a space - time representing a black hole which is itself in equilibrium , but whose exterior contains radiation , admits an isolated horizon . in spite of this generality , the zeroth and first laws of black hole mechanics extend to isolated horizons . furthermore , by carrying out a systematic , non - perturbative quantization , one can explore the quantum geometry of isolated horizons and account for their entropy from statistical mechanical considerations . after a general introduction to black hole thermodynamics as a whole , these recent developments are briefly summarized .
|
path - ensemble averages play a central role in nonequilibrium statistical mechanics , akin to the role of configurational ensemble averages in equilibrium statistical mechanics .expectations of various functionals over processes where a system is driven out of equilibrium by a time - dependent external potential have been shown to be related to equilibrium properties , including free energy differences and thermodynamic expectations . the latter relationship , between equilibrium and nonequilibrium expectations , has been applied to several specific cases , such as : the potential of mean force ( pmf ) along the pulling coordinate ( or other observed coordinates ) in single - molecule pulling experiments ; rna folding free energies as a function of a control parameter ; the root mean square deviation from a reference structure ; the potential energy distribution and average ; and the thermodynamic length . compared to equilibrium sampling , nonequilibrium processes may be advantageous for traversing energetic barriers and accessing larger regions of phase space per unit time .this is useful , for example , in reducing the effects of experimental apparatus drift or increasing the sampling of barrier - crossing events .thus , there has been interest in calculating equilibrium properties from nonequilibrium trajectories collected in simulations or laboratory experiments . indeed , single - molecule pulling data has been used to experimentally verify relationships between equilibrium and nonequilibrium quantities . while many estimators for free energy differences and equilibrium ensemble averages can be constructed from nonequilibrium relationships , they will differ in the efficiency with which they utilize finite data sets , leading to varying amounts of statistical bias and uncertainty .characterization of this bias and uncertainty is helpful for comparing the quality of different estimators and assessing the accuracy of a particular estimate . the statistical uncertainty of an estimator is usually quantified by its variance in the asymptotic , or large sample , limit , where estimates from independent repetitions of the experiment often approach a normal distribution about the true value due to the central limit theorem .it is an important goal to find an optimal estimator which minimizes this asymptotic variance . although numerical estimates of the asymptotic variance may be provided by bootstrapping ( e.g. ref . ) , closed - form expressions can provide computational advantages in the computation of confidence intervals , allow comparison of asymptotic efficiency , and facilitate the design of adaptive sampling strategies to target data collection in a manner that most rapidly reduces statistical error . in the asymptotic limit , the statistical error in functions of the estimated parameters can be estimated by propagating this variance estimate via a first - order taylor series expansion .while this procedure is relatively straightforward for simple estimators , it can be difficult for estimators that involve arbitrary functions ( e.g. 
nonlinear or implicit equations ) of nonequilibrium path - ensemble averages .fortunately , the extended bridge sampling ( ebs ) estimators , a class of equations for estimating the ratios of normalizing constants , are known to have both minimal - variance forms and associated asymptotic variance expressions .recently , shirts and chodera applied the ebs formalism to generalize the bennett acceptance ratio , producing an optimal estimator combining data from multiple equilibrium states to compute free energy differences , thermodynamic expectations , and their associated uncertainties . here , we apply the ebs formalism to estimators utilizing nonequilibrium trajectories .we first construct a general minimal - variance path - average estimator that can use samples collected from multiple nonequilibrium path - ensembles .we then show that some existing path - average estimators using uni- and bidirectional data are special cases of this general estimator , proving their optimality .this also allows us to develop asymptotic variance expressions for estimators based on jarzynski s equality and the hummer - szabo expressions for the pmf . we then demonstrate them on simulation data from a simple one - dimensional system and comment on their applicability .suppose that we sample paths ( trajectories ) from each of path - ensembles indexed by .the path - ensemble average of an arbitrary functional ] is a probability density over trajectories , = c_i^{-1 } q_i[x ] \:\ : ; \:\ : c_i = \int dx \ , q_i[x ] , \label{eq : path_density}\end{aligned}\ ] ] with unnormalized density > 0 ] is an arbitrary functional of , and all normalization constants are nonzero . summing over the index in eq . [ equation : importance - sampling - identity ] and using the sample mean , ] , all of which are asymptotically consistent , but whose statistical efficiencies will vary . with the choice , = \frac{n_j \hat{c}_j^{-1 } } { \sum\limits_{k=1}^k n_k \ , \hat{c}_k^{-1 } \ , q_k[x]},\ ] ] eq . [eq : ext_bridge ] simplifies to the optimal ebs estimator , }{q_i[x_{jn } ] } \right]^{-1}. \label{eq : opt_bridge}\ ] ] this choice for ] ( although it is sometimes possible to do so in computer simulations via transition path sampling ) . if no paths are drawn from the path - ensemble corresponding to ] . for each defined path - ensemble , the weight matrix is augmented by one column with elements , \ , q_i[x_n ] } { \sum\limits_{k=1}^k \ , n_k \ , \hat{c}_k^{-1 } \ , q_k[x_n ] } .\label{eq : f_weight_elements}\ ] ] the estimator for the path - ensemble average , , can be expressed in terms of weight matrix elements , , \ ] ] and its uncertainty estimated by above formalism is fully general , and may be applied to _ any _ situation where the ratio / q_j[x] ] is the appropriate dimensionless work . in hamiltonian dynamics , for example , this work is = \beta \int_0^t dt ' \ , ( \partial h/\partial t') ]. we will refer to data sets which only include realizations from the forward path - ensemble as ` unidirectional ' , and those with paths from both path - ensembles as ` bidirectional ' .notably , sampling paths from these conjugate ensembles and calculating the associated work ] , requires an estimate of . a method for obtaining this estimatewill be described next .jarzynski s equality , relates nonequilibrium work and free energy differences . 
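before the ebs machinery is brought to bear on this equality , it may help to fix ideas with a minimal numerical sketch of the plain unidirectional estimator it suggests , namely minus the log of the sample mean of the exponentiated negative work ( reduced units , beta = 1 ) . the function below is an illustrative reconstruction , not code from the paper ; the log - sum - exp shift is there only for numerical stability , and the gaussian toy data are chosen so that the idealized answer is known .
....
import numpy as np

def jarzynski_delta_f(work):
    """unidirectional free-energy estimate from dimensionless forward works:
    delta_f = -ln( (1/N) * sum_n exp(-w_n) ), evaluated with a log-sum-exp
    shift so that the largest exponential is exp(0)."""
    w = np.asarray(work, dtype=float)
    w_min = w.min()
    return w_min - np.log(np.mean(np.exp(-(w - w_min))))

# toy usage: gaussian works with mean 5 and standard deviation 2, for which
# the idealized (infinite-sample) answer is mean - variance/2 = 3
rng = np.random.default_rng(0)
w_f = rng.normal(5.0, 2.0, size=2000)
print(jarzynski_delta_f(w_f))   # close to 3, biased upward for small samples
....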
to facilitate the use of ebs in jarzynski s equality , we define a path - ensemble by choosing = e^{-w_t[x]} ] in eq .[ equation : bidirectional - path - estimator ] gives the estimator } } { n_f + n_r \ , e^{-\hat{\omega}[x_{fn } ] } } + \sum_{n=1}^{n_r } \frac { e^{-w_t[x_{rn } ] } } { n_f + n_r \ , e^{-\hat{\omega}[x_{rn } ] } } \label{equation : bidirectional - ft}\end{aligned}\ ] ] in this equation , choosing or leads to an implicit function mathematically equivalent to the bennett acceptance ratio method , as previously explained . the asymptotic variance of is calculated by augmenting the matrices and and using in eq .[ eq : cov ] , such that , on jarzynski s equality , hummer and szabo developed expressions for the pmf , the free energy as a function of a _ reaction coordinate _ rather than a thermodynamic state , that may be used to interpret single - molecule pulling experiments . in these experiments , a molecule is mechanically stretched by a force - transducing apparatus , such as an laser optical trap or atomic force microscope tip ( c.f .the hamiltonian governing the time evolution in these experiments , , is assumed to contain both a term corresponding to the unperturbed system , , and a time - dependent ( typically harmonic ) external bias potential imposed by the apparatus , , which acts along a pulling coordinate , .as the coordinate is observed at fixed intervals over the course of the experiment , we will henceforth use as an integer time index .we calculate the work with a discrete sum as ] , where is a column matrix of weights from eqs .[ eq : weight_elements ] and [ eq : f_weight_elements ] corresponding to path - ensemble .the elements of are , } { n_f \hat{c}_f^{-1 } q_f[x_n ] + n_r \hat{c}_r^{-1 } q_r[\tilde{x}_n ] } = \frac { 1 } { n_f + n_r e^{-\hat{\omega}[x_n ] } } = n_f^{-1 } \epsilon(l_n ) \\m_{nr } & = & \frac { \hat{c}_r^{-1 } q_r[\tilde{x}_{fn } ] } { n_f \hat{c}_f^{-1 } q_f[x_n ] + n_r \hat{c}_r^{-1 } q_r[\tilde{x}_n ] } = \frac { 1 } { n_f e^{\hat{\omega}[x_n ] } + n_r } = n_r^{-1 } \epsilon(-l_n ) \\m_{n \mathcal f_f } & = & \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f } \right ) \frac { 1 } { n_f + n_r e^{-\hat{\omega}[x_n ] } } = \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f } \right ) n_f^{-1 } \epsilon(l_n),\end{aligned}\ ] ] where is defined as the fermi function , , and we define - \delta \hat{f}_t + \ln \left ( \frac{n_f}{n_r } \right)$ ] .this allows us to write as , } { \bar{\mathcal f}_f } \right ) \epsilon(l_n)^2 \\\frac{1}{n_f n_r }\epsilon(l_n ) \epsilon(-l_n ) & \frac{1}{n_r^2 } \epsilon(-l_n)^2 & \frac{1}{n_f n_r } \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f } \right )\epsilon(l_n ) \epsilon(-l_n ) \\ \frac{1}{n_f^2 } \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f } \right ) \epsilon(l_n)^2 & \frac{1}{n_f n_r } \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f }\right ) \epsilon(l_n ) \epsilon(-l_n ) & \frac{1}{n_f^2 } \left ( \frac{\mathcal f[x ] } { \bar{\mathcal f}_f } \right)^2 \epsilon(l_n)^2 \end{array } \right ] \nonumber \\ & \equiv & \left [ \begin{array}{ccc } a_{ff } & a_{fr } & a_{f \mathcal f_f } \\ a_{fr } & a_{rr } & a_{r \mathcal f_f } \\a_{\mathcal f_f \mathcal f } & a_{r \mathcal f_f } & a_{\mathcal f_f \mathcal f_f } \\ \end{array } \right].\end{aligned}\ ] ] using the determinant , we write the inverse covariance matrix estimator as , .\end{aligned}\ ] ]
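for the free energy itself , the bidirectional estimator above reduces to an implicit equation of the bennett acceptance ratio type . the sketch below solves one standard form of that condition ( reduced units ; w_f are works for the forward protocol , w_r for the reverse one ) and is meant only to illustrate the structure of the estimator : the precise form used in the paper , the bracketing heuristic and the toy data are assumptions of this illustration .
....
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_f, w_r):
    """solve a standard bennett acceptance-ratio condition for delta_f:
    sum_i fermi(M + w_f_i - df) = sum_j fermi(-M + w_r_j + df),
    with M = ln(n_f / n_r) and fermi(x) = 1 / (1 + exp(x))."""
    w_f = np.asarray(w_f, dtype=float)
    w_r = np.asarray(w_r, dtype=float)
    m = np.log(len(w_f) / len(w_r))

    def fermi(x):
        return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))

    def imbalance(df):                      # monotonically increasing in df
        return fermi(m + w_f - df).sum() - fermi(-m + w_r + df).sum()

    guess = 0.5 * (w_f.mean() - w_r.mean())          # crude starting point
    span = 10.0 * (w_f.std() + w_r.std() + 1.0)      # crude bracket width
    return brentq(imbalance, guess - span, guess + span)

# toy usage: gaussian work distributions consistent with the crooks relation
# for delta_f = 3 and dissipation 2 (forward mean 5, reverse mean -1, sd 2)
rng = np.random.default_rng(1)
print(bar_delta_f(rng.normal(5.0, 2.0, 2000), rng.normal(-1.0, 2.0, 2000)))
....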
|
existing optimal estimators of nonequilibrium path - ensemble averages are shown to fall within the framework of extended bridge sampling . using this framework , we derive a general minimal - variance estimator that can combine nonequilibrium trajectory data sampled from multiple path - ensembles to estimate arbitrary functions of nonequilibrium expectations . the framework is also applied to obtaining asymptotic variance estimates , which are a useful measure of statistical uncertainty . in particular , we develop asymptotic variance estimates pertaining to jarzynski s equality for free energies and the hummer - szabo expressions for the potential of mean force , calculated from uni- or bidirectional path samples . lastly , they are demonstrated on a model single - molecule pulling experiment . in these simulations , the asymptotic variance expression is found to accurately characterize the confidence intervals around estimators when the bias is small . hence , it does not work well for unidirectional estimates with large bias , but for this model it largely reflects the true error in a bidirectional estimator derived by minh and adib .
|
it is no secret that the cost of configuring and maintaining natural language interfaces to databases is one of the main obstacles to their wider adoption .while recent work has focused on learning approaches , there are less costly alternatives based on only lightly naming database elements ( e.g. relations , attributes , values ) and reducing question interpretation to graph match .what is particularly compelling about precise is the claim that for a large and well defined class of _ semantically tractable _ questions , one can guarantee correct translation to sql . furthermore precise leverages off - the - shelf open domain syntactic parsers to help guide query interpretation , thus requiring no tedious grammar configuration .unfortunately after precise was introduced there has not been much if any follow up .this paper aims to evaluate these claims by implementing the model and conducting experiments equivalent those done by the designers of precise .consider the example database schema depicted at the top of figure [ fig : db ] .although this schema is small , it contains a many - to - many - relationship ( movies to theaters ) and a many - to - one relation from movie to studio .the schema is also cyclic ( via foreign key - based joins ) based on the somewhat contrived foreign key ` premier ` from ` studio ` to ` theater ` to indicate that a studio shows their premiers in a specific theater .databases are represented as a disjoint set of _ relations _ , _ attributes _ and _ values _ which together are the _ database elements _ . the function and the relation of an attribute and the attribute of a value respectively .the boolean function is true for attributes that are primary keys of their corresponding relations .we consider to be the set of _ words _ in a natural language and the set of _ phrases _ to be all finite non - empty word sequences .we speak of being -th word of the phrase ] , and is the length of ., ] , [ how]\ } \subsetneq \mathcal{p} ] , [ a ] , [ in ] , [ is ] , [ be ] , [ of ] , [ do ] , [ with ] , [ have], ] .assume a special function which stems words according to morphology of the natural language .the lexicon is a set of phases paired with database elements .see the bottom part of figure [ fig : db ] for an example lexicon .finally assume the function which associates with every attribute and relation a set of compatible wh - words ( e.g. ,[what]\} ] .an off the shelf syntactic parser determines an _ attachment relation _ between words .formally , . a _ covering assignment _ observes the following properties : * ( words belong to phrases ) + if then * ( phrases are complete ) + if and ) , then ) = \mathsf{stem}(p_j[m])) ] [l]{\usebox\cbox } \rule[0.5\ht\cbox-0.5pt/2]{\wd\cbox}{0.5pt}} ] property 1 states that there is a distinguished attribute or relation that is the focus of the question .property 2 states that values must be paired with either an attribute ( e.g. `` ... title unforgiven ... '' ) , or via ellipsis paired with a relation ( e.g. `` ... the movie unforgiven '' ) , or , if the value is a key itself , we have a highly elliptical case where the value may stand on its own ( e.g. `` unforgiven '' ) .property 3 says that non - focus attributes must pair with a value ( e.g. in `` ... movies of year 2000 ... 
'' 2000 serves this role ) .property 4 was included in the precise papers , but we found it unnecessary .( semantically tractable question ) for a given question , lexicon and attachment relation , is semantically tractable if there exists a covering assignment over for which there is a valid mapping : and assigns a word in to which is compatible with .( unambiguous semantically tractable question ) for a given question , lexicon and attachment relation , is unambiguous semantically tractable if is semantically tractable and figure [ fig : valid ] shows three valid mappings given the schema and lexicon in figure [ fig : db ] .an additional example is `` what films did don siegal direct with lead clint eastwood ? ''this is a _ unambiguous semantically tractable question _ so long as ` don siegal ' attaches to ` direct ' and not ` lead ' , and ` clint eastwood ' attaches to ` lead ' and not ` direct ' .the precise papers say little about generating sql from sets of database elements .that said , it seems fairly straight forward .the focus element becomes the attribute ( or * in case focus is a relation ) in the sql ` select ` clause .all the involved or implied relation elements are included in the ` from ` clause .the value elements determine the simple equality conditions in the ` where ` clause . adding the join conditions is not formalized in precise , but we assume it means adding the minimal set of equality joins necessary to span all relation elements . for cyclic schemas this can lead to ambiguity . for example , while there is a unique valid mapping for the question `` what movies at the westwood '' , join paths via ` studio ` or ` shows ` are possible in the schema of figure [ fig : db ] .our java - based open - source implementation , corresponds to the formal definition of section 2 .like precise , assignments are computed via a brute force search and candidate valid mappings are solved for via reduction to graph max - flow .candidate solutions are filtered based on attachment relations obtained from the stanford parser .we generate all possible sql queries for all valid mappings .like the earlier work , we evaluated our system on geoquery .since very little information has been disclosed regarding how precise purportedly handled _ superlatives _ ( `` what is the most populous city in america ? '' ) , _ aggregation _ ( `` what is the average population of cities in ohio ? '' ) , and _ negation _ ( `` which states do not border kansas ? '' ) , we simply excluded these types of questions from our evaluation .this reduced our tests to 442 ( of 880 ) geoquery questions . in theory ,precise could be deployed immediately on any relational database .however , we found the automatic approach to be very erratic , generating many irrelevant synonyms .part of speech - tagging ( pos ) , which can help to narrow down the senses of a word , is difficult to determine automatically from database element names .even with the correct pos identified a word might have irrelevant senses which muddle the lexicon .for example , wordnet has 26 noun senses of the word `` point '' in the geoquery attribute ` highlow.lowest_point ` , one of which has a synonym being ` state ' .hence we decided to manually add mappings to the lexicon .another reason to do this was to map relevant phrases which would not have been generated automatically otherwise .for example , to correctly answer the question `` what major rivers are in texas ? 
'' the phrase ` [ major river ] ` had to be associated with the relation ` river ` . out of these 448 questions ,162 were answered correctly by our replication of precise .this does not accord to previously published recall results ( see figure [ fig : non - repeat ] ) . on the positive side , there were no questions for which precise returned a single wrong query .figure [ fig : problems ] breaks down the reasons why the 286 remaining questions for were rejected by our system : 94 questions contained no wh - word , 17 sentences contained non - stop words which the lexicon did not recognize as part of any phrase , 45 questions had at least one , but no could be found that mapped one - to - one and onto a set of elements , 41 questions had a that was one - to - one and onto , but no valid mapping could be found , and 89 questions produced multiple distinct solutions .a natural question is , `` did we faithfully replicate precise ? '' the description of precise was spread over two conference articles and a couple of unpublished manuscripts .a forthcoming journal article was referenced , but unfortunately it does not seem to have been published .several aspects of precise were ambiguous , contradictory or incomplete and forced us to make interpretations , which , if wrong , could have an impact on recall .still we made every effort to boost evaluation results .for example , in section 2.4 we removed condition 4 from valid mappings and added the condition in 2 . in section 2.2we added the additional stop words and wh - words to boost recall .finally we omitted certain foreign keys from the lexicon to limit needless ambiguity .we stand by the formalization presented in section 2 as a reasonable interpretation of precise , although we are open to correction .while the recall results did not replicate , at face value precision results do appear to hold up ; if one reads the questions under reasonable interpretations , all the semantically tractable questions map to what intuitively seems to be the correct sql .still one must limit this claim .consider that there is only one valid mapping for the question `` what are the titles of films directed by george lucas ? '' , however a user may be disappointed if they expect the database to also contain his student films .similar misconceptions could be present for attributes and values .this aside , our way to judge correctness is based on common sense , assuming that the user fully understands the context of the database .that said , _ the semantically tractable class does not seem to be fundamental_. we have generalized the class and nothing seems to blocks the extension of the class to questions requiring aggregation , superlatives , negation , self - joins , etc .also , the current semantically tractable class excludes questions that seem simple ( e.g. `` which films are showing in los angeles ? '' is not semantically tractable ) .future work is needed to more cleanly define and limit ` semantically tractable ' .an issue that complicates precise is the role of ambiguity .if the user asks `` what are the titles of the clint eastwood films ? '' , there are several possibilities : 1 .the films he directed ; 2 .the films he acted in ; 3 .the films he both acted and directed in ; 4 . the films he either acted or directed in . only 1 and 2are expressible in precise .still if there was a paraphrasing capability , the user could select their intended interpretation .this leads to an immediate strategy to improve practical recall. 
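to make the reduction described above concrete , the toy sketch below ( a ) matches question tokens one - to - one onto lexicon elements with a plain augmenting - path bipartite matching , standing in for the max - flow construction , and ( b ) assembles the simple select / from / where query form described earlier . the schema , lexicon entries and helper names are invented for illustration and are not taken from the precise implementation ; join conditions and attachment filtering are omitted .
....
def max_matching(candidates):
    """one-to-one matching of tokens to database elements (kuhn's
    augmenting-path algorithm). candidates maps token -> iterable of
    lexicon elements it may denote; returns token -> matched element."""
    owner = {}                                  # element -> token currently holding it

    def try_assign(token, seen):
        for elem in candidates.get(token, ()):
            if elem in seen:
                continue
            seen.add(elem)
            if elem not in owner or try_assign(owner[elem], seen):
                owner[elem] = token
                return True
        return False

    for token in candidates:
        try_assign(token, set())
    return {tok: el for el, tok in owner.items()}


def build_sql(focus, relations, value_conditions):
    """focus -> select clause, relations -> from clause, (attribute, value)
    pairs -> equality conditions in the where clause; joins omitted."""
    where = " and ".join(f"{attr} = '{val}'" for attr, val in value_conditions)
    sql = f"select {focus} from {', '.join(sorted(relations))}"
    return sql + (f" where {where}" if where else "")


# toy example in the spirit of "what are the titles of movies of year 2000?"
candidates = {
    "titles": {"attribute:movie.title"},
    "movies": {"relation:movie"},
    "year":   {"attribute:movie.year"},
    "2000":   {"value:movie.year=2000"},
}
print(max_matching(candidates))
print(build_sql("movie.title", {"movie"}, [("movie.year", "2000")]))
....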
another immediate idea is to extend precise to handle ellipsis of wh - words .a more serious issue is the hidden assumptions precise makes about the form of the schema .natural language interfaces do better when the schema maintains a clear relation with a conceptual model ( e.g. entity - relationship model ) .this is the case for example we developed , but it is not completely the case for geoquery which contains tables such as ` highlow ` which have no real entity correspondence .not surprisingly many of the rejected questions in our evaluation involved this conceptually suspect table .what is needed is a more specific delineation of exactly what schemas precise is applicable over .we shall look investigate this theoretically as well as empirically , investigating for example how well precise and it generalizations cover qald and other corpora .our replication of precise made no errors in terms of returning a single , incorrect query , giving it the highest possible precision value .however , out of the 448 questions given , precise was only able to produce sql queries for 162 , giving it a recall value of 0.361 . moreover our implementation of precise requires manual lexicon configuration . still , even given this ` negative ' result, we feel that precise is a very appealing approach , but one that needs more careful scrutiny , testing and generalization .this is something we shall continue to investigate .
|
this report describes an initial replication study of the precise system and develops a clearer , more formal description of the approach . based on our evaluation , we conclude that the precise results do not fully replicate . however the formalization developed here suggests a road map to further enhance and extend the approach pioneered by precise . * after a long , productive discussion with ana - maria popescu ( one of the authors of precise ) we got more clarity on the precise approach and how the lexicon was authored for the geo evaluation . based on this we built a more direct implementation over a repaired formalism . although our new evaluation is not yet complete , it is clear that the system is performing much better now . we will continue developing our ideas and implementation and generate a future report / publication that more accurately evaluates precise like approaches . *
|
the inclusion of hydrodynamics and atomic and radiation processes into cosmological structure formation simulations is essential for many applications of interest , including galaxy formation , the structure of the integalactic medium , formation and evolution of x - ray clusters , and cosmic reionization .a key technical challenge in such simulations is obtaining high mass and spatial resolution in collapsing structures within large cosmological volumes .algorithms employing lagrangian particles to represent dark matter and gas have a natural advantage in this regard as they automatically place resolution elements where they are needed .such methods are now well developed and in widespread use .these include the gridless method of hernquist & katz ( 1989 ) and katz , weinberg & hernquist ( 1996 ) , as well as the grid - assisted method ( evrard 1988 ; couchman , thomas & pierce 1995 ) .parallel versions of these methods have recently been developed ( dav , dubinski & hernquist 1997 ; pearce & couchman 1997 ) which , when run on massively parallel computers , can integrate particles in hydrodynamic simulations , and as many as particles in pure dark matter simulations ( couchman , these proceedings ). we and other members of the grand challenge cosmology consortium ( ) have explored eulerian hydrodynamics methods as an alternative to sph in cosmological simulations ( cen 1992 ; anninos , norman & clarke 1994 ; kang 1994 ; bryan 1995 ; gnedin 1995 ; pen 1995 ) . provided gridscan be constructed which achieve the necessary dynamic range ( a non - trivial issue , as we shall see ) , eulerian methods have a number of distinct advantages over sph .these are : ( 1 ) _ speed _ : the use of logically regular data structures avoid time - consuming nearest neighbor searches resulting in higher update rate ; ( 2 ) _ noise _ : fluid is represented as a continuum , not discrete particles , eliminating poisson noise ; ( 3 ) _ density sampling _ : because of point ( 2 ) , low density cells are computed as accurately as high density cells at the same cost ; density gradients spanning many orders of magnitude can be accurately simulated with 3 - 4 cells per decade per dimension . ( 4 ) _ integral form _ : integral conservation laws are straightforward to implement for mass , momentum , energy and magnetic flux which are numerically conservative to machine roundoff . in addition to the above - mentioned advantages which are generic to any eulerian method , we add the following for higher order godunov methods such as ppm or tvd : ( 5 ) _ shock capturing _ :shocks are captured in 1 - 2 cells with correct entropy generation and non - oscillatory shocks ; ( 6 ) _ upwind _ : wave characteristics are properly upwinded for higher fidelity and stability ; ( 7 ) _ low dissipation _ : the use of higher order - accurate interpolation results in a very low numerical viscosity important for angular momentum conservation in protogalactic disks .in addition , radiative transfer and mhd is most easily done on a grid .implicit algorithms generate large sparse matrix equations for which iterative and direct linear systems solvers are available . in reference to sph , we mention several disadvantages of traditional eulerian methods : ( 1 ) _ resolution _ : limited to the grid spacing ; ( 2 ) _ invariance _ : solutions are not strictly translational and rotational invariant due to the dependence of truncation errors on the relative velocity between fluid and grid and grid orientation . 
in the references cited above , various gridding schemeshave been explored to reduce truncation error in regions of high density gradients , as invariably arise in structure formation simulations . here , we describe a powerful method we have developed based on the adaptive mesh refinement ( amr ) algorithm of berger and colella ( 1989 ) .the paper is organized as follows . in sec .2 we briefly review the elements of berger s amr . in sec .3 we describe the modifications we have made to extend amr to cosmological hydro+n - body simulations . in sec . 4we test the method against several standard test problems in numerical cosmology . in sec .5 we illustrate the power of amr in an application to the formation of the first baryonic objects in a cdm - dominated cosmology .adaptive mesh refinement ( amr ) was developed by berger and oliger ( 1984 ) to achieve high spatial and temporal resolution in regions of solutions of hyperbolic partial differential equations where fixed grid methods fail .algorithmic refinements and an application to shock hydrodynamics calculations were described in berger & collela ( 1989 ) .the hydrodynamic portion of our method is based closely on this latter paper , and we refer the reader to it for details ( see also paper by klein et al . , these proceedings ) . 3.5 in unlike some mesh refinement schemes which move the mesh points around rubber mesh " ( e.g. , dorfi & drury 1987 ) or subdivide individual cells , resulting in an octree data structure ( e.g. , adjerid , s. & flaherty 1998 ) , berger s amr ( also becoming known as _ structured _ amr to differentiate it from other flavors ) utilizes an adaptive hierarchy of grid patches at various levels of resolution .each rectangular grid patch ( hereafter , simply _ grid _ ) covers some region of space in its _ parent grid _ needing higher resolution , and may itself become the parent grid to an even higher resolution _child grid_. a general implementation of amr places no restriction on the number of grids at a given level of refinement , or the number of levels of refinement .the hierarchy of grids can be thought of as a tree data structure , where each leaf is a grid .each grid is evolved as a separate initial boundary value ( ibv ) problem .initial conditions are obtained by interpolation from the parent grid when the grid is created .boundary conditions are obtained either by interpolation from the parent grid or by copies from abutting _ sibling grids _ at the same level . to simplify interpolation ,grids are constrained to be aligned with their parent grid and not overlap their sibling grids .grids at level are refined by an integer factor relative to the grid at level .1 provides an illustration of these concepts in a 2d , 4-level , grid hierarchy .the algorithm which creates this grid hierarchy is _ local _ and _ recursive_. it can be written in pseudocode as follows . here ` level ` is the level of refinement , and procedures are capitalized . ....integrate ( level ) begin if " time for regridding " then refine(level ) collectboundaryvalues(level ) evolve ( level ) if " level is n't finest existing " then begin for r = 1 to r do integrate(level + 1 ) update(level , level + 1 ) end end .... consider a typical calculation which is initialized on a single , coarsely resolved grid called the _ root grid _ , level .the solution is integrated forward in time with procedures ` collectboundaryvalues ` and ` evolve ` . 
at each timestep, every cell is checked to determine if it requires refinement a process known as _selection_. selection is based either on an estimate of the local truncation error , or some simple threshold criterion , or some boolean combination thereof . if at least one cell is flagged , then procedure ` refine ` is called .it does two things .first , flagged cells are _ clustered _ , and minimal rectangular boundaries of these clusters are determined using the algorithm of berger & rigoustous ( 1991 ) .second , one or more refined grids are allocated having these boundaries , and their intial data is computed via interpolation from the root grid . depending on circumstance , `refine ` may also deallocate unneeded subgrids .the root grid is then advanced one coarse timestep .the subgrids are integrated forward in time with smaller timesteps until they catch up " with the parent grid .if the hyperbolic system is linear ( e.g. , advection ) , then , and the refined grids take r timesteps for each coarse grid timestep .however , for nonlinear systems like gas dynamics , fine grid timesteps are determined by the courant stability condition , and in general r+1 timesteps are needed to match times .this detail is not reflected in the pseudocode above .after the fine grids have been advanced to time , procedure ` update ` is called .it does two things .first , it injects the results of the fine grid calculation into the overlying coarse grid cells through summation or interpolation .second , the values of coarse grid conserved quantities in cells adjacent to fine grid boundaries are modified to reflect the difference between the coarse and fine grid fluxes a procedure known as _ flux correction_. the algorithm described above is recursive , and applies to any level in the grid hierarchy . 
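a compact python rendering of the recursive cycle in the pseudocode above may make the control flow easier to follow . the grid class and its methods are stand - ins for the real data structures and physics solvers , and the nonlinear - courant subtlety ( r+1 fine steps per coarse step ) is deliberately ignored , as in the pseudocode itself .
....
class Grid:
    """minimal stand-in for an amr grid patch: refinement level, child
    grids, and dummy hooks for the operations named in the text."""

    def __init__(self, level, refine_factor=2):
        self.level = level
        self.r = refine_factor
        self.children = []

    # the hooks below would contain the real regridding / physics logic
    def needs_regrid(self, step):       return step % 2 == 0
    def refine(self):                   pass  # flag, cluster, (de)allocate subgrids
    def collect_boundary_values(self):  pass  # interpolate from parent, copy from siblings
    def evolve(self, dt):               pass  # advance this patch by one timestep dt
    def update_from(self, child):       pass  # inject fine results, apply flux correction


def integrate(grid, dt, step=0):
    """advance one patch by dt, recursively subcycling each child grid
    with r smaller steps (the linear-advection form of the w-cycle)."""
    if grid.needs_regrid(step):
        grid.refine()
    grid.collect_boundary_values()
    grid.evolve(dt)
    for child in grid.children:
        for sub in range(grid.r):
            integrate(child, dt / grid.r, step=grid.r * step + sub)
        grid.update_from(child)


root = Grid(level=0)
root.children.append(Grid(level=1))
integrate(root, dt=1.0)
....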
in a multilevel grid hierarchy ,the temporal integration scheme is analogous to the w - cycle " in classic elliptic multigrid .grids are advanced from coarse to fine with their individually determined timesteps .this order allows us to achieve second order accuracy in time on the entire hierarchy .this is accomplished by interpolating the boundary values not only in space but in time as well using time level n and n+1 values on the parent grid .cosmological hydrodynamic simulations require a robust fluid solver capable of handling the extreme conditions of structure formation , as well as a method for evolving collisionless particles ( dark matter , stars ) subject to their self - consistent gravitational field , the latter requiring a solution of the poisson equation .we briefly describe these methods here .a more detailed paper is in preparation ( bryan & norman 1998 ) .our fluid solver is based on the piecewise parabolic method ( ppm ) of collela & woodward ( 1984 ) , suitably modified for cosmological flows .the algorithm is completely described in bryan ( 1995 ) , so we merely state the essential points .ppm is a higher order - accurate version of godunov s method , featuring third order - accurate piecewise parabolic monotonic interpolation and a nonlinear riemann solver for shock capturing .multidimensional schemes are built up by directional splitting , where the order of the 1d sweeps is permuted a l strang ( 1968 ) , resulting is a scheme which is formally second order - accurate in space and time .for cosmology , the conservation laws for the fluid mass , momentum , and energy density are written in comoving coordinates for a frw spacetime with metric scale factor a(t ) .both the gas internal energy equation and total ( internal + kinetic ) energy equation are solved everywhere on the grid at all times .this _ dual energy formulation _ is adopted in order to have a scheme that both produces the correct entropy jump at strong shocks _ and _ yields accurate pressures and temperatures in the hypersonic parts of the flow .both the conservation laws and the riemann solver must be modified to include gravity . in order to maintain second order accuracy in time ,the gravitational potential is needed at the half time level .we use a predictor - corrector approach , wherein the particles are advanced to using , and then is computed by solving the poisson equation , as described below .we have implemented both lagrange + remap ( lr ) and direct eulerian ( de ) variants of our method , following the example of collela & woodward ( 1984 ) , with comparable results . for amr applications, we use the de version to simplify the flux correction step .adding collisionless particles to amr presents two challenges , one physical ( how do they interact with the fluid in the mesh ) , and one algorithmic ( how to add a new data structure . ) a third challenge how to compute the gravitational interaction between the two components in a consistent fashion is described in the following section .our method utilizes a single set of particles with comoving positions , proper peculiar velocities and masses ( other characteristics may be added as needed ) .there is a unique , one - to - one association between particle and a grid at level if that particle s position lies within the grid s boundary but outside of any finer ( child ) grid . 
in other words, a particle belongs to the finest grid which contains it .we exploit this association by denoting that grid as the particle s _ home _ grid and store all such particles in a list along with the rest of the data connected to that grid .note that a particle s home grid may change as it moves , requiring redistribution of particles .this disadvantage is offset by a number of factors , including ( 1 ) decreased search time for particle - grid interactions ; ( 2 ) improved data encapsulation ; and ( 3 ) better parallelization characteristics ( see below ) .this association is also very natural from a physical standpoint : because the particles are indirectly connected to the solution on their home grid , they tend to share the same timestep requirement .the particles obey newton s equations , which in the comoving frame are : where the subscript in the last term in eq .2 means the gravitational acceleration is evaluated at position .the gravitational potential is computed from a solution of the poisson equation , which takes the form in comoving coordinates : where is the local comoving mass density of gas and particles , and is its global average value .these equations are finite - differenced and solved with the same timestep as the grid , to reduce bookkeeping .( 1 ) can be solved relatively simply since only quantities local to the particle are involved .we use a predictor - corrector scheme to properly time center the rhs .( 2 ) requires knowing the gravitational acceleration at postion .this is accomplished using the particle - mesh ( pm ) method ( e.g. , hockney & eastwood , 1980 ) . in the first step ,particle masses are assigned to the grid using second order - accurate tsc ( triangular shaped cloud ) interpolation . in the second step ,the gridded particle density is summed with the gas density , and then eq . ( 3 ) is solved on the mesh as described below .finally , in the third step , gravitational accelerations are interpolated to the particle positions using tsc interpolation , and particle velocities are updated .the description above glossed over an important detail .namely , that the gravitational field can not be solved grid by grid , but must have knowledge of the mass distribution on the entire grid hierarchy . however , we wish to use the high spatial resolution afforded by the refined grids to compute accurate accelerations .this is accomplished using a method similar to that set out in couchman ( 1991 ) .the basic idea is to represent the acceleration on a particle as the sum of contributions from each level of the hierarchy less than or equal to the level of its home grid : where the partial accelerations are computed symbolically as follows : in the first step , the mass of _ all _ particles whose positions lie inside the boundaries of grid is assigned to the grid using tsc interpolation , and its fourier transform is computed .in the second step the poisson equation is solved in fourier space using a shaped force law designed to reproduce a potential when summed over all levels .accelerations on the grid are computed in fourier space , and then transformed back into real space in step 3 .this requires three ffts one for each component .finally , the acceleration on particle due to is computed using tsc interpolation . 
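the tsc ( triangular shaped cloud ) assignment and interpolation steps used above are easy to write down explicitly in one dimension . the periodic 1d sketch below uses the standard quadratic tsc weights of hockney & eastwood and is meant only to illustrate the stencil , not the actual 3d implementation ( grid points are taken at integer multiples of the spacing h ) .
....
import numpy as np

def tsc_weights(x, h):
    """nearest grid index and the three tsc weights for a particle at x
    (weights sum to one for offsets |d| <= 1/2 in cell units)."""
    j = int(np.floor(x / h + 0.5))           # nearest grid point
    d = x / h - j                            # offset in cell units
    return j, (0.5 * (0.5 - d) ** 2, 0.75 - d ** 2, 0.5 * (0.5 + d) ** 2)

def deposit(positions, masses, n_cells, h):
    """tsc mass assignment: spread each particle over three neighbouring cells."""
    rho = np.zeros(n_cells)
    for x, m in zip(positions, masses):
        j, w = tsc_weights(x, h)
        for k, wk in zip((j - 1, j, j + 1), w):
            rho[k % n_cells] += m * wk / h
    return rho

def interpolate(field, positions, h):
    """tsc interpolation of a gridded field (e.g. an acceleration component)
    back to the particle positions, with the same weights as deposit()."""
    n = len(field)
    out = []
    for x in positions:
        j, w = tsc_weights(x, h)
        out.append(sum(wk * field[k % n] for k, wk in zip((j - 1, j, j + 1), w)))
    return np.array(out)

# toy usage: 16 cells of unit size, two particles
rho = deposit([3.2, 7.9], [1.0, 2.0], n_cells=16, h=1.0)
print(rho.sum())    # total deposited mass per unit length -> 3.0
....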
the acceleration on the fluidis computed by treating each cell as a particle with its position given by the center of mass ( as determined by a tri - linear interpolation ) .the assignment of mass and interpolation of acceleration is done with the same tsc scheme as used for the particles . in our implementation ,the refinement factor can have any integer value , and can be different on different levels .however , we have found through experimentation that is optimal for cosmological simulations where the gravitational dynamics is dominated by dark matter .since dark matter is represented by a fixed number of particles , the use of higher refinement factors refines the gas grid to the point where each dark matter particle becomes an accretion center .the choice r=2 maintains commensurate mass resolution in both gas and dark matter . in order to reduce poisson noise ,we initialize a calculation with one particle per cell .we flag a cell for refinement when the baryonic mass has increased by a factor of four over its initial value . immediately after refinement , refined cells in 3d have one half their initial mass and typically contain zero or one particle .thus , in a collapsing stucture , the ratio of baryonic to dark matter mass will vary between one half and four times its initial value .our implementation features arbitrary refinement factors , number of grids , grid shapes ( ) , and grid hierarchy depth .we also have the option , not yet exercised , of calling different physics solvers on different levels .the amr driver is written in object - oriented c++ to simplify logic and memory management .the grid objects encapsulate the grid data ( field and particle ) as well as numerous grid methods ( e.g. , ` evolve ` ) .the floating - point intensive methods are implemented in f77 for the sake of computational efficiency .c wrappers interface the c++ and f77 code .our production version is a shared memory , loop parallel version , where all grids at a given level are executed in parallel . a distributed memory version , wherein the root grid is domain decomposed , is under development .4.5 in we have extensively tested our code against a variety of hydrodynamic and hybrid ( hydro + n - body ) test problems . these include ( 1 ) linear waves and shock waves entering and exiting static subgrids in 1d and 2d at various angles ; ( 2 ) a stationary shock wave in a static 3d subgrid shock in a box " ( anninos , norman & clarke 1994 ) ; ( 3 ) sod shock tube ; ( 4 ) 1d pressureless collapse ; ( 5 ) 1d zeldovich pancake ; ( 6 ) 2d pure baryonic cdm model ( bryan 1995 ) ; ( 7 ) 3d adiabatic x - ray cluster santa barbara cluster " ( frenk 1998 ) ; and ( 8) 3d self - similar infall solution ( bertschinger 1985 ) . in tests , ( 3 - 7 ) amr results were compared with unigrid results up to grid sizes of . in all cases the algorithm performs well , reproducing the reference solutions with a minimum of artifacts at coarse / fine grid interfaces .the use of upwind , second - order accurate fluxes in space and time is found to be essential in minimizing reflections .details are provided in bryan & norman ( 1998 ) . 
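the per - level fft solve in step 2 above can be illustrated with a periodic poisson solver . the toy below differs from the scheme described in the text in two acknowledged ways : it uses the continuum green s function rather than the shaped force law needed to make the level contributions sum correctly , and it absorbs the 4 pi g a^2 prefactor of eq . ( 3 ) into a single constant ; the k = 0 mode is set to zero , consistent with the mean density being subtracted .
....
import numpy as np

def poisson_periodic(delta_rho, box_size, prefactor=1.0):
    """solve  laplacian(phi) = prefactor * delta_rho  on a cubic periodic grid
    using the continuum green's function  phi_k = -prefactor * rho_k / k^2 ;
    delta_rho is assumed to have zero mean (rho - rho_bar)."""
    n = delta_rho.shape[0]
    rho_k = np.fft.rfftn(delta_rho)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kzz = np.meshgrid(k, k, kz, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kzz ** 2
    k2[0, 0, 0] = 1.0                  # avoid division by zero at k = 0
    phi_k = -prefactor * rho_k / k2
    phi_k[0, 0, 0] = 0.0               # zero mode: potential fixed up to a constant
    return np.fft.irfftn(phi_k, s=delta_rho.shape)

# toy usage: random zero-mean overdensity on a 32^3 grid
rng = np.random.default_rng(2)
delta = rng.normal(size=(32, 32, 32))
delta -= delta.mean()
phi = poisson_periodic(delta, box_size=1.0, prefactor=4.0 * np.pi)
print(phi.shape)
....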
as an example , fig .2 shows the code s performance on the 1d zeldovich pancake test problem .the parameters for this problem , which includes gas , dark matter , gravity , and cosmic expansion , are given in bryan 1995 .here we compare a 3-level amr calculation with a 256 zone uniform grid calculation .the root grid has 16 zones , and the refinement factor r=4 ( in 1d problems one has more latitude with r. ) thus , the amr calculation has the same 256 zone resolution on the finest grid .we can see that the amr algorithm places the refinements only in the central high density pancake .the solutions on the three levels of refinement match smoothly in the supersonic infalling envelope .the density maximum and the local temperature minimum at the midplane agree with the uniform grid results exactly .the shock waves at x=.45 and .55 are captured in two cells without post - shock oscillations . the temperature field the most difficult to get right exhibits a small bump upstream of the shock front .this is caused by our dual energy formulation , and marks the location where we switch from internal energy to total energy as a basis for computing pressures and temperatures .the bump has no dynamical effect as ram pressure dominates by many orders of magnitude ahead of the shock .we have applied our code to the formation of x - ray clusters ( bryan & norman 1997 ) , galaxy formation ( kepner & bryan 1998 ) , and the formation of the first baryonic structures in a cdm - dominated universe ( abel , bryan & norman 1998 ) . in fig .3 we show a result from the latter calculation to illustrate the capabilities of our method . our ultimate goal is to simulate the formation of the first stars in the universe ( population iii stars ) starting from cosmological initial conditions . to resolve the protostellar cloud cores which form these stars requires a spatial dynamic range of at least five orders of magnitude in 3d .the primary coolant in zero metallicity gas for is line cooling ( e.g. , tegmark 1997 ) . forms in the gas phase via nonequilibrium processes ( mcdowell 1961 ; saslaw & zipoy 1967 ) .thus , in addition to gas and dark matter , we also solve a 9-species nonequilibrium chemical network for and in every cell at every level of the hierarchy .this requires adding nine new field variables one for each species and solving the stiff reactive advection equations for the coupled network .the physical model is described in abel ( 1997 ) .the numerical algorithms for cosmological reactive flow are provided in anninos ( 1997 ) .we simulate a standard cdm model in a periodic , comoving volume 128 kpc on a side .the parameters are : km / s / mpc .the starting redshift is z=100 .the intial conditions are realized on a 3-level static nested grid hierarchy with a root grid resolution of cells .the subgrids are centered on the most massive peak which develops .this location is determined by running the simulation once at low resolution .the grid is made large enough so that it initially contains all the mass that eventually ends up in the condensed halo .the initial mass resolution in the gas ( dark matter ) is , respectively .as the calculation proceeds , the amr algorithm generates many more grids at higher levels of refinement up to a preset limit of 13 levels total .in addition to the overdensity refinement criterion described above , we also require that the numerical jeans criterion is satisfied everywhere ( truelove 1997 ; klein , these proceedings . 
)nonlinear structures form in the gas by .by sufficient has formed in the center of the virialized halo that a cooling flow ensues ( abel 1998 ) . because of the range of mass scales which go nonlinear nearly simultaneously at these epochs , the halo is quite lumpy as structure builds up hierarchically . by ( fig .3 ) , a highly concentrated structure has formed through the collision of two rapidly cooling blobs . a collapsing , primordial protostellar core forms as a result .the core collapses to higher densities , eventually reaching our resolution limit .the proper cell size on the grid is pc au .the overall dynamic range achieved is we stop the calculation when the jeans criterion is violated on the grid .this occurs in the densest cell , which has reached a baryonic overdensity of almost and a proper number density of more than . at these densities ,three body reactions become important , which we have not included in our model . at the end of the calculation ,we have formed a parsec - sized collapsing primordial core with about of material , roughly equal parts gas and dark matter .the further evolution of this cloud , including the important question of fragmentation , is being studied with a separate , smaller scale amr simulation in progress .* acknowledgements * we thank our collaborators tom abel and peter anninos for joint work cited here , as well as sir martin rees for drawing our attention to the problem of first structure formation .this work was carried out under the auspices of the grand challenge cosmology consortium ( ) , with partial funding provided by nsf grant asc-9318185 and nasa grant nag5 - 3923 .simulations were carried out using the sgi origin2000 at the national center for supercomputing applications , university of illinois , urbana - champaign .abel , t. , anninos , p. , zhang , y. & norman , m. l. 1997 , new astronomy , 2 , 181 abel , t. , bryan , g. l. & norman , m. l. 1998 ._ in preparation _ adjerid , s. & flaherty , j.e .1998 , siam j. of sci . and stat . comp ., vol . 9 , no . 5 , 792 anninos , p. , zhang .y. , abel , t. and norman , m. l. 1997 , new astronomy , 2 , 209 anninos , p. , norman , m. l. & clarke , d. a. 1994 ., 436 , 11 berger , m. j. & rigoustous , i 1991 .ieee transactions on systems , man , cybernetics , 21(5 ) .berger , m. j. & oliger , j. 1984 .53 , 484 berger , m. j. & collela , p. 1989 . j. comp82 , 64 bertschinger , e. 1985 . , 58 , 39 bryan , g. l. & norman , m. l. 1997 . in _ computational astrophysicsd. a. clarke & m. fall , asp conference # 123 bryan , g. l. & norman , m. l. 1998 ._ in preparation _bryan , g. l. , norman , m. l. , stone , j. m. , cen , r. & ostriker , j. p. 1995comm . , 89 , 149 cen , r. 1992 , , 78 , 341 colella , p. & woodward , p. r. 1984 . j. comp .phys . , 54 , 174 couchman , h. 1991 .368 , l23 couchman , h. , thomas , p. & pierce , f. 1995 . ,452 , 797 dav , r. , dubinski , j. & hernquist , l. 1997 , new astronomy , 2 , 227 dorfi , e. & drury , l. 1987 . j. comp . phys . , 69 , 175 evrard , a. e. 1988 ., 235 , 911 frenk , c. , white , s. 1998 ., _ submitted _ gnedin , n. y. 1995 , , 97 , 231 hernquist , l. & katz , n. 1989 . , 70 , 419 hockney , r. w. & eastwood , j. w. 1980 . _ computer simulation using particles _ , mcgraw - hill , new york kang , h. , cen , r. , ostriker , j. p. & ryu , d. 1994 , , 428 , 1 katz , n. , weinberg , d. & hernquist , l. 1996 , , 105 , 19 kepner , j. & bryan , g. l. 1998 . _ in preparation _ mcdowell , m. r. c. 1961 ._ observatory _ , 81 , 240 pearce , f. & couchman , h. 
1997 .new astronomy , 2 , 411 pen , u. 1995 , , 100 , 269 saslaw , w.c . ,zipoy , d. 1967 .nature , 216 , 976 strang , g. 1968 .siam j. num ., 5 , 506 tegmark , m. , silk , j. , rees , m.j . , blanchard , a. , abel , t. , & palla , f. 1997 . , 474 , 1 truelove , j. k. 1998 . , 495 , 821
|
we describe a grid-based numerical method for 3d hydrodynamic cosmological simulations which is adaptive in space and time and combines the best features of higher order accurate godunov schemes for eulerian hydrodynamics with adaptive particle mesh methods for collisionless particles. the basis for our method is the structured adaptive mesh refinement (amr) algorithm of berger & collela (1989), which we have extended to cosmological hydro + n-body simulations. the resulting _multiscale hybrid_ method is a powerful alternative to particle-based methods in current use. the choices we have made in constructing this algorithm are discussed, and its performance on the zeldovich pancake test problem is given. we present a sample application of our method to the problem of _first structure formation_. we have achieved a spatial dynamic range in a 3d multispecies gas + dark matter calculation, which is sufficient to resolve the formation of primordial protostellar cloud cores starting from linear matter fluctuations in an expanding frw universe.
|
following efron s seminal paper on the i.i.d .bootstrap , researchers have been able to apply resampling ideas in a variety of non - i.i.d .situations including the interesting case of dependent data .bhlmann , lahiri and politis give reviews of the state - of - the - art in resampling time series and dependent data . in the last two decades , in particular , resampling methods in the frequency domain have become increasingly popular ( see paparoditis for a recent survey ) .one of the first papers to that effect was franke and hrdle who proposed a bootstrap method based on resampling the periodogram in order to devise confidence intervals for the spectral density .the idea behind that approach is that a random vector of the periodogram ordinates at finitely many frequencies is approximately independent and exponentially distributed ( cf ., e.g. , brockwell and davis , theorem 10.3.1 ) . later this approach was also pursued for different set - ups , for example , for ratio statistics such as autocorrelations by dahlhaus and janas or in regression models by hidalgo .dahlhaus and janas suggested a modification of the periodogram bootstrap which leads to a correct approximation for a wider class of statistics such as the sample auto - covariance which in contrast to the sample autocorrelation is not a ratio statistic . kreiss and paparoditis propose the autoregressive - aided periodogram bootstrap where a parametric time domain bootstrap is combined with a nonparametric frequency domain bootstrap in order to widen the class of statistics for which the bootstrap is valid .we will refer to the above methods as periodogram bootstrapping as all of the statistics of interest there were functionals of the periodogram . since these bootstrap methods resample the periodogram , they generally do not produce bootstrap pseudo - series in the time domain .a recent exception is a `` hybrid '' bootstrap of jentsch and kreiss , that is , an extension of the aforementioned method of kreiss and paparoditis .we now wish to focus on two well - known proposals on frequency - domain bootstrap methods that also yield replicates in the time domain , notably : * the early preprint by hurvich and zeger who proposed a parametric bootstrap very similar to our tft wild bootstrap of section [ section_descr_boot ] , as well as a nonparametric frequency - domain bootstrap based on prewhitening via an estimate of the ma( ) transfer function .although never published , this paper has had substantial influence on time series literature as it helped inspire many of the above periodogram bootstrap methods .note hurvich and zeger provide some simulations but give no theoretical justification for their proposed procedures ; indeed , the first theoretical justification for these ideas is given in the paper at hand as special cases of the tft - bootstrap .* the `` _ _ surrogate data _ _ '' approach of theiler et al . has received significant attention in the physics literature .the idea of the surrogate data method is to bootstrap the phase of the fourier coefficients but keep their magnitude unchanged .while most of the literature focuses on heuristics and applications , some mathematical proofs have been recently provided ( see braun and kulperger , chan , mammen and nandi , and the recent survey by maiwald et al .the surrogate data method was developed for the specific purpose of testing the null hypothesis of time series linearity and is not applicable in more general settings . 
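As a point of reference for the comparison that follows, the surrogate data scheme just described amounts to randomizing only the phases of the Fourier coefficients while keeping their magnitudes (and hence the periodogram) fixed. The following minimal sketch, assuming numpy and using a function name of our own choosing, is an illustration of that scheme rather than code from any of the cited papers:

```python
import numpy as np

def surrogate_sample(x, rng=None):
    """Phase-randomization surrogate (Theiler et al. style): keep the magnitude
    of every Fourier coefficient and randomize its phase."""
    rng = np.random.default_rng(rng)
    t = len(x)
    coeffs = np.fft.rfft(x)                       # one-sided FFT of the data
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(coeffs))
    surrogate = np.abs(coeffs) * np.exp(1j * phases)
    surrogate[0] = coeffs[0]                      # keep the sample mean unchanged
    if t % 2 == 0:
        surrogate[-1] = coeffs[-1]                # Nyquist coefficient must stay real
    return np.fft.irfft(surrogate, n=t)           # back to the time domain
```

Because the magnitudes `np.abs(coeffs)` are left untouched, every surrogate series reproduces the periodogram of the data exactly, which is precisely the limitation discussed next.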
to see why , note that every surrogate sample has exactly the same periodogram ( and mean ) as the original sequence .hence , the method fails to approximate the distribution of any statistic that is a function of first- and second - order moments , thus excluding all cases where periodogram resampling has proven to be useful ; see our proposition [ prop1 ] in section [ section_diff_methods ] . in the paper at hand, we propose to resample the fourier coefficients which can effectively be computed using a fast fourier transform ( fft)in a variety of ways similar to modern periodogram bootstrap methods , and then obtain time series resamples using an inverse fft .since we start out with an observation sequence in the time domain , then jump to the frequency domain for resampling just to get back to the time domain again , we call this type of resampling a _ time frequency toggle _ ( tft ) bootstrap .the tft - bootstrap is an extension of existing periodogram bootstrap methods as it yields almost identical procedures when applied to statistics based on periodograms , but it is also applicable in situations where the statistics of interest are not expressible by periodograms ; for more details we refer to section [ section_appl ] .the tft - bootstrap is related to the surrogate data approach but is more general since it also resamples the magnitudes of fourier coefficients and not just their phases . as a result ,the tft is able to correctly capture the distribution of statistics that are based on the periodogram .the tft , however , shares with the surrogate data approach the inability to approximate the distribution of the sample mean ; luckily , there are plenty of methods in the bootstrap literature to accomplish that , for example , the block bootstrap and its variations , the ar - sieve bootstrap , etc .( for details , see lahiri , bhlmann , politis ) .in this paper we provide some general theory for the tft - bootstrap which not only gives a long - due theoretical justification for one of the proposals by hurvich and zeger but also allows for several modern extensions of these early ideas .in particular , we prove that the tft sample has asymptotically the correct second - order moment structure ( lemma [ lem_cov ] ) and provide a functional central limit theorem ( fclt , theorem [ th_main ] and corollary [ cor_main ] ) for the tft - sample .this is a much stronger result than the asymptotic normality with correct covariance structure of a finite subset as proved , for example , by braun and kulperger for the surrogate data method . 
as in the surrogate data method ,the tft sample paths are shown to be ( asymptotically ) gaussian ; so in a sense the tft approximates possibly nonlinear time series with a gaussian process having the correct second - order moment structure .this seems to be inevitable in all methods using discrete fourier transforms due to the fact that fourier coefficients are asymptotically normal under very general assumptions .however , in contrast to the surrogate data method , the tft is able to capture the distribution of many useful statistics ( cf .section [ section_appl ] ) .for example , our fclt implies the validity of inference for statistics such as cusum - type statistics in change - point analysis ( cf .section [ section_cpa ] ) or least - squares statistics in unit - root testing ( cf .section [ section_unitroot ] ) .the tft - bootstrap is also valid for periodogram - based ( ratio ) statistics such as sample autocorrelations or yule walker estimators ; this validity is inherited by the corresponding results of the periodogram bootstrapping employed for the tft ( cf .section [ sec_stat_perio ] ) .furthermore , in many practical situations one does not directly observe a stationary sequence but needs to estimate it first . in corollary [ cor_boot_est ]we prove the validity of the tft - bootstrap when applied to such estimated sequences .for example , in change - point analysis ( section [ section_cpa ] ) as well as unit - root testing ( section [ section_unitroot ] ) one can use estimators to obtain an approximation of the underlying stationary sequence under the null hypothesis as well as under the alternative . as in both examplesthe null hypothesis is that of a stationary sequence ; this feature enables us to construct bootstrap tests that capture the null distribution of the statistic in question even when presented with data that obey the alternative hypothesis . as a consequence, these bootstrap tests asymptotically capture the correct critical value even under the alternative hypothesis which not only leads to the correct size of the tests but also to a good power behavior .the remainder of the paper is organized as follows . in the next sectionwe give a detailed description on how the tft - bootstrap works .in particular we describe several specific possibilities of how to get pseudo - fourier coefficients . in section [ section_fclt ] we state the main theorem , a functional limit theorem for the tft - bootstrap .the fclt holds true under certain high - level assumptions on the bootstrapped fourier coefficients ; these are further explored in sections [ section_valboot ] and [ section_prop_freq ] .in particular it is shown that the tft - bootstrap replicates the correct second - order moment structure for a large class of observed processes including nonlinear processes ( cf .section [ section_prop_freq ] ) .finally , we prove the validity of the tft - bootstrap for certain applications such as unit - root testing or change - point tests in section [ section_appl ] and explore the small sample performance in the simulation study of section [ section_sim ] .our conclusions are summarized in section [ section_conclusions ] .proofs are sketched in section [ section_fclt_proof ] , while the complete technical proofs can be found in electronic supplementary material .assume we have observed , where [ ass_w ] is a stationary process with absolutely summable auto - covariance function . 
in this casethe spectral density of the process exists , is continuous and bounded .it is defined by ( see , e.g. , brockwell and davis , corollary 4.3.2 ) .since we will prove a functional central limit theorem for the bootstrap sequence , the procedure only makes sense if the original process fulfills the same limit theorem .[ ass_clt ] fulfills the following functional central limit theorem : where is the spectral density of and is a standard wiener process .we may need the following assumption on the spectral density : [ ass_process_dens ] let the spectral density be bounded from below for all .we denote by the centered counterpart of the observations where .consider the fft coefficients of the observed stretch , that is , thus where for .note that the fourier coefficients , depend on , but to keep the notation simple we suppress this dependence .the principal idea behind all bootstrap methods in the frequency domain is to make use of the fact that the fourier coefficients are asymptotically independent and normally distributed , where denotes the largest integer smaller or equal to , and \\[-8pt ] { \operatorname{var}}x(j)&=&\pi f(\lambda_j)+o(1),\qquad { \operatorname{var}}y(j)= \pi f(\lambda_j)+o(1)\nonumber\end{aligned}\ ] ] for as where is the spectral density ( see , e.g. , chapter 4 of brillinger for a precise formulation of this vague statement ) .lahiri gives necessary as well as sufficient conditions for the asymptotic independence and normality of tapered as well as nontapered fourier coefficients for a much larger class of time series not limited to linear processes in the strict sense .shao and wu prove this statement uniformly over all finite subsets for a large class of linear as well as nonlinear processes with nonvanishing spectral density .the uniformity of their result is very helpful since it implies convergence of the corresponding empirical distribution function ( see the proof of lemma [ lem_edf ] below ) . the better known result on the periodogram ordinates states that are asymptotic independent exponentially distributed with expectation ( see , e.g. , chapter 10 of brockwell and davis ) .the latter is what most bootstrap versions are based on .by contrast , our tft - bootstrap will focus on the former property ( [ eq_fft_distr ] ) , that is , the fact that are asymptotically i.i.d . us now recall some structural properties of the fourier coefficients , which are important in order to understand the procedure below .first note that this symmetry implies that the fourier coefficients for carry all the necessary information required in order to recapture ( by inverse fft ) the original series up to an additive constant. the symmetry relation ( [ eq_conj_com ] ) shows in particular that all the information carried in the coefficients for , , is already contained in the coefficients for , .the information that is missing is the information about the mean of the time series which is carried by the remaining coefficients belonging to and ( the latter only when is even ) . 
to elaborate, carries the information about the mean of the observations ; moreover , for even , we further have some additional information about the `` alternating '' mean note that the value of the fft coefficients for is the same for a sequence for all .hence those fourier coefficients are invariant under additive constants and thus contain no information about the mean .similarly all the information about the bootstrap mean is carried only in the bootstrap version of [ as well as .the problem of bootstrapping the mean is therefore separated from getting a time series with the appropriate covariance structure and will not be considered here .in fact , we show that any asymptotically correct bootstrap of the mean if added to our bootstrap time series [ cf .( [ eq_form_bs2 ] ) ] yields the same asymptotic behavior in terms of its partial sums as the original uncentered time series .our procedure works as follows : calculate the fourier coefficients using the fast fourier transform ( fft ) algorithm . let ; if is even , additionally let .obtain a bootstrap sequence using , for example , one of bootstrap procedures described below .set the remaining bootstrap fourier coefficients according to ( [ eq_conj_com ] ) , that is , and .use the inverse fft algorithm to transform the bootstrap fourier coefficients , , back into the time domain .we thus obtain a bootstrap sequence which is real - valued and centered , and can be used for inference on a large class of statistics that are based on partial sums of the centered process ; see section [ section_cpa ] for examples .note that the exact form of is the following : \\[-8pt ] & = & \frac{2}{\sqrt{t}}\sum_{j=1}^n\bigl ( x^*(j)\cos(2\pi t j / t)-y^*(j)\sin(2\pitj / t ) \bigr).\nonumber\end{aligned}\ ] ] [ rem_centering ] in order to obtain a bootstrap sequence of the noncentered observation process we can add a bootstrap mean to the process ; here , is obtained by a separate bootstrap process independently from , which is asymptotically normal with the correct variance , that is , it fulfills ( [ eq_boot_mean ] ) .precisely , the bootstrap sequence gives a bootstrap approximation of .here , contains the information about the covariance structure of the time series , and contains the information of the sample mean as a random variable of the time series . how to obtain the latter will not be considered in this paper . in corollary [ cor_boot_est ]we give some conditions under which the above procedure remains asymptotically valid if instead of the process we use an estimated process ; this is important in some applications .now we are ready to state some popular bootstrap algorithms in the frequency domain .we have adapted them in order to bootstrap the fourier coefficients rather than the periodograms .our procedure can easily be extended to different approaches ._ step _ 1 : first estimate the spectral density by satisfying }}|\widehat{f}(\lambda)-f(\lambda ) |{\stackrel{p}{\longrightarrow}}0.\ ] ] this will be denoted assumption [ ass_a1 ] in section [ section_prop_freq ] .robinson proves such a result for certain kernel estimates of the spectral density based on periodograms for a large class of processes including but not limited to linear processes . 
for linear processes he also proves the consistency of the spectral density estimate as given abovewhen an automatic bandwidth selection procedure is used .shao and wu also prove this result for certain kernel estimates of the spectral density for processes satisfying some geometric - moment contraction condition , which includes a large class of nonlinear processes .both results are summarized in lemma [ lem_spectralestimate ] ._ step _ 2 : next estimate the residuals of the real , as well as imaginary , part of the fourier coefficients and put them together into a vector ; precisely let . then standardize them ,that is , let heuristically these residuals are approximately i.i.d ., so that i.i.d .resampling methods are reasonable ._ step _ 3 : let , , denote an i.i.d . sample drawn randomly and with replacement from . as usual , the resampling step is performed conditionally on the data ._ step _ 4 : define the bootstrapped fourier coefficients by an analogous approach albeit focusing on the periodogram ordinates instead of the fft coefficients was proposed by franke and hrdle in order to yield a bootstrap distribution of kernel spectral density estimators .the wild bootstrap also makes use of an estimated spectral density further exploiting the knowledge about the asymptotic normal distribution of the fourier coefficients .precisely , the wb replaces above by independent standard normal distributed random variables in order to obtain the bootstrap fourier coefficients as in ( [ eq_bc_b1 ] ) .this bootstrap was already suggested by hurvich and zeger , who considered it in a simulation study , but did not obtain any theoretical results . an analogous approach albeit focusing on the periodogram was discussed by franke and hrdle who proposed multiplying the periodogram with i.i.d .exponential random variables .the advantage of the local bootstrap is that it does not need an initial estimation of the spectral density .the idea is that in a neighborhood of each frequency the distribution of the different coefficients is almost identical ( if the spectral density is smooth ). it might therefore be better able to preserve some information beyond the spectral density that is contained in the fourier coefficients .an analogous procedure for periodogram ordinates was first proposed by paparoditis and politis . for the sake of simplicitywe will only consider bootstrap schemes that are related to kernels .recall that , and , for , and , for even , ( denotes the smallest integer larger or equal than ) .furthermore let be the fourier coefficients of the centered sequence .for and the coefficients are periodically extended with period ._ step _ 1 : select a symmetric , nonnegative kernel with in section [ section_prop_freq ] we assume some additional regularity conditions on the kernel in order to get the desired results .moreover select a bandwidth fulfilling but ._ step _ 2 : define i.i.d .random variables on with independent of them define i.i.d .bernoulli r.v . with parameter ._ step _ 3 : consider now the following bootstrap sample : and finally the bootstrap fourier coefficients are defined as the centered versions of , respectively , , namely by this is slightly different from paparoditis and politis , since they require that and share the same which is reasonable if one is interested in bootstrapping the periodogram but not necessary for bootstrapping the fourier coefficients . 
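To fix ideas, the following sketch implements the residual-based variant of steps 1-4 above together with the back-transformation to the time domain. It assumes numpy; the Bartlett-Priestley-type smoothing weights, the boundary treatment, the default bandwidth and all function names are our own illustrative choices, not the authors' implementation. The wild-bootstrap variant is obtained by replacing the resampling line as indicated in the comment; the local bootstrap would instead draw each coefficient from a kernel-weighted neighbourhood of frequencies.

```python
import numpy as np

def smoothed_periodogram(v, bandwidth):
    """Kernel estimate of the spectral density on the Fourier grid lambda_j = 2*pi*j/T,
    j = 1..N, from a centered series v (kernel and bandwidth rule are illustrative)."""
    t = len(v)
    n = (t - 1) // 2
    z = np.fft.fft(v) / np.sqrt(t)               # z(j) = T^{-1/2} sum_t v(t) exp(-i t lambda_j)
    periodogram = np.abs(z[1:n + 1]) ** 2 / (2 * np.pi)
    half = max(1, int(bandwidth * t))            # number of neighbouring frequencies per side
    f_hat = np.empty(n)
    for j in range(n):
        idx = np.arange(j - half, j + half + 1)
        idx = np.abs(idx)                        # reflect at the zero frequency ...
        idx[idx >= n] = 2 * (n - 1) - idx[idx >= n]   # ... and at the Nyquist end
        u = np.arange(-half, half + 1) / half
        w = np.maximum(1.0 - u ** 2, 0.0)        # Bartlett-Priestley-type weights
        f_hat[j] = np.sum(w * periodogram[idx]) / np.sum(w)
    return z, f_hat

def tft_residual_bootstrap(v, bandwidth=0.01, rng=None):
    """Residual-based TFT bootstrap of a centered series v (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    t = len(v)
    n = (t - 1) // 2
    z, f_hat = smoothed_periodogram(v, bandwidth)
    scale = np.sqrt(np.pi * f_hat)               # asymptotic sd of Re/Im Fourier coefficients
    resid = np.concatenate([z[1:n + 1].real / scale, z[1:n + 1].imag / scale])
    resid = (resid - resid.mean()) / resid.std() # standardized residuals (step 2)
    draw = rng.choice(resid, size=2 * n, replace=True)   # i.i.d. resampling (step 3, RB)
    # wild-bootstrap variant: draw = rng.standard_normal(2 * n)
    z_star = np.zeros(t, dtype=complex)
    z_star[1:n + 1] = scale * (draw[:n] + 1j * draw[n:]) # step 4
    z_star[t - n:] = np.conj(z_star[1:n + 1][::-1])      # conjugate symmetry
    # z_star[0] (and the Nyquist coefficient for even T) stay 0: the mean is
    # bootstrapped separately, so the output is a centered bootstrap series.
    return np.sqrt(t) * np.fft.ifft(z_star).real
```

Note that the zero (and, for even sample size, the Nyquist) coefficient is set to zero, so the output is a centered bootstrap series; a separately bootstrapped mean can then be added as described in the remark on centering above.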
the aforementioned three bootstrap methods , residual bootstrap ( rb ) , wild bootstrap ( wb ) and local bootstrap ( lb ) , are all first - order consistent under standard conditions .a rigorous theoretical comparison would entail higher - order considerations which are not available in the literature and are beyond the scope of this work .intuitively , one would expect the rb and lb procedures to perform similarly in applications since these two bootstap methods share a common underlying idea , that is , that nearby periodogram / fft ordinates are i.i.d .by contrast , the wb involves the generation of extraneous gaussian random variables thus forcing the time - domain bootstrap sample paths to be gaussian .for this reason alone , it is expected that if a higher - order property holds true in our setting , it will likely be shared by rb and lb but not wb . our finite - sample simulations in section [ section_sim ] may hopefully shed some additional light on the comparison between rb and lb .first , note that the tft wild bootstrap is identical to the parametric frequency - domain bootstrap proposal of hurvich and zeger .by contrast , the nonparametric bootstrap proposal of hurvich and zeger was based on prewhitening via an estimate of the ma( ) transfer function . estimating the transfer function presents an undesirable complication since prewhitening can be done in an easier fashion using any consistent estimator of the spectral density ; the residual - based tft exploits this idea based on the work of franke and hrdle .the local bootstrap tft is a more modern extension of the same underlying principle , that is , exploiting the approximate independence ( but not i.i.d .- ness ) of periodogram ordinates .we now attempt to shed some light on the relation between the tft and the surrogate data method of theiler et al . . recall that the surrogate data approach amounts to using as bootstrap fourier coefficients at point where is the periodogram at point , and are i.i.d .uniform on ] independent from each other .comparing equation ( [ eq.s1234 ] ) to equation ( [ eq.t1234 ] ) we see that the surrogate data approach is closely related to the nonsmoothed wild bootstrap ; the main difference is that the wild bootstrap does not only bootstrap the phase but also the magnitude of the fourier coefficients .nevertheless , the nonsmoothed wild bootstrap does not suffer from the severe deficiency outlined in proposition [ prop1 ] since it does manage to capture the variability of the periodogram to some extent . to elaborate , note that it is possible to prove a functional limit theorem [ like our theorem [ th_main](a ) in the next section ] for the nonsmoothed wild bootstrap but only under the provision that a _ smaller resample size _ is employed ; that is , only a fraction of the bootstrap sample is used to construct the partial sum process ( ) .this undersampling condition is necessary here since without it the asymptotic covariance structure would not be correct .hence , even the nonsmoothed wild bootstrap , although crude , seems ( a ) preferable to the surrogate data method and ( b ) inferior with respect to the tft - bootstrap ; this relative performance comparison is clearly born out in simulations that are not reported here due to lack of space .in this section we state the main result , namely a functional limit theorem for the partial sum processes of the bootstrap sample . 
the theorem is formulated in a general way under some meta - assumptions on the resampling scheme in the frequency domain that ensure the functional limit theorem back in the time domain . in section [ section_valboot ]we verify those conditions for the bootstrap schemes given in the previous section .we would like to point out that the meta - assumptions we give are the analogues of what is usually proved for the corresponding resampling schemes of the periodograms , which are known to hold for a large class of processes .the usage of meta - assumptions allows the reader to extend results to different bootstrap schemes in the frequency domain . by , , and denote as usual the bootstrap expectation , variance , covariance and probability .we essentially investigate three sets of assumptions . the first one is already implied by the above mentioned bootstrap schemes . [ ass_boot_1 ] for the bootstrap scheme in the frequency domain , the coefficients and are independent sequences as well as mutually independent ( conditionally on the data ) with [ rem_moments ] instead of assuming that the bootstrap samples are already centered it is sufficient that the bootstrap means in the frequency domain converge uniformly to with a certain rate , that is , where is the parameter figuring in lemma [ lem_cov ] ( resp . , theorem [ th_main ] below ) .[ ass_boot_2b ] uniform convergence of the second moments of the bootstrap sequence in the frequency domain , that is , [ ass_boot_4 ] uniform boundedness of the fourth moments of the bootstrap sequence in the frequency domain let us now recall the definition of the mallows distance on the space of all real borel probability measures with finite variance .it is defined as where the infimum is taken over all real - valued variables with marginal distributions and , respectively .mallows has proved the equivalence of convergence in this metric with distributional convergence in addition to convergence of the second moments .the results remain true if we have convergence in a uniform way as in assumption [ ass_boot_3 ] below .this shows that assumption [ ass_boot_3 ] implies assumption [ ass_boot_2b ] .[ ass_boot_3 ] let the bootstrap scheme in the frequency domain converge uniformly in the mallows distance to the same limit as the fourier coefficients do we will start with some results concerning the asymptotic covariance structure of the partial sum process ; all asymptotic results are taken as .[ lem_cov ] let assumption [ ass_boot_1 ] be fulfilled .then , for any and , let assumptions [ ass_w ] , [ ass_boot_1 ] and [ ass_boot_2b ] be fulfilled . then , for , ,&\quad .}\end{aligned}\ ] ] moreover , under assumptions [ ass_w ] , [ ass_boot_1 ] and [ ass_boot_2b ] , for all fixed .as already pointed , out using frequency domain methods separates the problem of an appropriate bootstrap mean from the problem of obtaining a bootstrap sample with the appropriate covariance structure . as a result ,the bootstrap sample is centered and thus the bootstrap version of the centered time series .the above lemma shows that the bootstrap process as well as its partial sum process has the correct auto - covariance structure . the following theorem gives a functional central limit theorem in the bootstrap world , showing that the bootstrap partial sum process also has the correct second - order moment structure .in fact , the partial sum process of a centered time series converges to a brownian bridge , while the subsampled partial sum processes converges to a wiener process . 
as the following theorem shows this behavioris exactly mimicked by our tft - bootstrap sample .[ th_main ] let assumptions [ ass_w ] , [ ass_boot_1][ass_boot_4 ] be fulfilled . if , then it holds ( in probability ) }{\longrightarrow}\{w(u)\dvtx0{\leq}u{\leq}1\},\ ] ] where is a wiener process .if additionally assumption [ ass_boot_3 ] is fulfilled , we obtain ( in probability ) }{\longrightarrow}\{b(u)\dvtx0{\leq}u{\leq}1\},\ ] ] where is a brownian bridge .[ rem_lind ] the stronger assumption [ ass_boot_3 ] is needed only to get asymptotic normality of the partial sum process , that is , part ( b ) above . in the proof of theorem [ th_main ] for use the lindeberg condition to obtain asymptotic normality .however , for the latter is not fulfilled because the variance of single summands ( e.g. , ) is not negligible anymore .but for the same reason the feller condition is also not fulfilled which means that we can not conclude that the sequence is not asymptotically normal .in fact , failure of asymptotic normality is hard to imagine in view of corollary [ cor_braun ]. therefore we recommend to always use in applications even in situations where assumption [ ass_boot_3 ] is hard to verify .[ rem_surrogate_1]the bootstrap variance is usually related to the periodogram which is not consistent without smoothing .therefore assumption [ ass_boot_2b ] ensures that the bootstrap scheme includes some smoothing . for the bootstrap , however , this is not entirely necessary , and we can also bootstrap without smoothing first .the simplest example is the nonsmoothed wild bootstrap as described in section [ section_diff_methods ] .one can then still prove the result of theorem [ th_main ] , but only for .in this situation this condition is necessary , since without it the asymptotic covariance structure is not correct ( i.e. , the assertion of lemma [ lem_cov ] is only true for ) , would be a good rule of thumb .while this is a very simple approach ( without any additional parameters ) it does not give as good results as the procedure we propose .heuristically , this still works because the back - transformation does the smoothing , but to obtain a consistent bootstrap procedure we either need some smoothing in the frequency domain as in assumption [ ass_boot_2b ] or do some under - sampling back in the time domain , that is , .in fact , one of the main differences between our wild tft - bootstrap and the surrogate data approach is the fact that the latter does not involve any smoothing in the frequency domain . the other difference being that the surrogate data approach only resamples the phase but not the magnitude of the fourier coefficients . for more detailswe refer to section [ section_diff_methods ] .some applications are based on partial sums rather than centered partial sums .this can be obtained as described in remark [ rem_centering ] .the following corollary then is an immediate consequence of theorem [ th_main](b ) .[ cor_main ] let the assumptions of theorem [ th_main ] be fulfilled . let be a bootstrap version of the mean [ taken independently from ] such that for all , where denotes the standard normal distribution function , so that the asymptotic distribution is normal with mean 0 and variance . 
then it holds ( in probability ) }{\longrightarrow}\{w(u)\dvtx0{\leq}u{\leq}1\},\ ] ] where .along the same lines of the proof we also obtain the analogue of the finite - sample result of braun and kulperger for the surrogate data method .this shows that any finite sample has the same covariance structure as the corresponding finite sample of the original sequence ; however , it also shows that not only do the partial sums of the bootstrap sample become more and more gaussian , but each individual bootstrap observation does as well .[ cor_braun ] if assumptions [ ass_w ] , [ ass_boot_1][ass_boot_4 ] are fulfilled , then for any subset of fixed positive integers it holds ( in probability ) where with .in this section we prove the validity of the bootstrap schemes if the fourier coefficients satisfy certain properties .these or related properties have been investigated by many researchers in the last decades and hold true for a large variety of processes . some of these results are given in section [ section_prop_freq ] .recall assumption [ ass_a1 ] , which is important for the residual - based bootstrap ( rb ) as well as the wild bootstrap ( wb ) . [ ass_a1 ] let estimate the spectral density in a uniform way , that is , fulfilling ( [ eq_est_spec_dens ] ) , }}|\widehat{f}(\lambda)-f(\lambda ) |{\stackrel{p}{\longrightarrow}}0 ] .[ ass_process_robinson ] assume satisfies uniformly in as for some . a detailed discussion of this assumption can be found in robinson ; for linear processes with existing fourth moments assumption [ ass_process_robinson ] is always fulfilled with .[ ass_process_shaowu1 ] assume that , where is a measurable function and is an i.i.d .assume further that where . in case of linear processes this conditionis equivalent to the absolute summability of the coefficients .the next assumption is stronger : [ ass_process_shaowu ] assume that , where is a measurable function and is an i.i.d .further assume the following geometric - moment contraction condition holds .let be an i.i.d .copy of , let be a coupled version of .assume there exist and , such that for all this condition is fulfilled for linear processes with finite variance that are short - range dependent .furthermore it is fulfilled for a large class of nonlinear processes . for a detailed discussion of this conditionwe refer to shao and wu , section 5 .[ ass_process_wu ] assume that is a stationary causal process , where is a measurable function and is an i.i.d . sequence .let be a coupled version of where independent of . furthermore assume the following lemma gives some conditions under which assumption [ ass_a1 ] holds , which is necessary for the residual - based and wild bootstrap ( rb and wb ) to be valid .moreover it yields the validity of assumption [ ass_a4](ii ) , which is needed for the local bootstrap lb .lemma [ lem_llnperio2 ] also gives some assumptions under which the kernel spectral density estimate uniformly approximates the spectral density .[ lem_spectralestimate ] assume that assumptions [ ass_w ] and [ ass_kernelneu1 ] are fulfilled , and let be as in ( [ eq_lem_spectralestimate ] ) .let assumptions [ ass_kern_robinson ] and [ ass_process_robinson ] be fulfilled ; additionally the bandwidth needs to fulfill .then }|\widehat{f}_t(\lambda)-f(\lambda ) |{\stackrel{p}{\longrightarrow}}0.\ ] ] let assumptions [ ass_kern_shaowu ] , [ ass_clt ] , [ ass_process_dens ] and [ ass_process_shaowu ] be fulfilled . 
furthermore let for some and , for some , then }}|\widehat{f}_t(\lambda)-f(\lambda ) |{\stackrel{p}{\longrightarrow}}0.\ ] ] for linear processes robinson gives an automatic bandwidth selection procedure for the above estimator ; see also politis .the following lemma establishes the validity of assumptions [ ass_a2 ] .[ lem_llnperio]let assumption [ ass_w ] be fulfilled. then assumption [ ass_a2 ] holds . if additionally where if and else , then assumption [ ass_a2 ] holds .if additionally then assumption [ ass_a2 ] holds , and , more precisely , [ rem_llnperio ] the conditions of lemma [ lem_llnperio ] are fulfilled for a large class of processes .theorem 10.3.2 in brockwell and davis shows ( [ eq_cov_1 ] ) for linear processes , where is i.i.d . with , and is uniformly .an analogous proof also yields ( [ eq_cov_2 ] ) under the existence of 8th moments , that is , if .lemma a.4 in shao and wu shows that ( [ eq_cov_1 ] ) [ resp . , ( [ eq_cov_2 ] ) ] is fulfilled if the 4th - order cumulants ( resp . , 8th - order cumulants )are summable , 4th ( resp . ,8th ) moments exist and ( cf . also theorem 4.3.1 in brillinger ) .more precisely they show that the convergence rate is in ( [ eq_cov_1 ] ) . by remark 4.2 in shao and wu this cumulant conditionis fulfilled for processes fulfilling assumption [ ass_process_shaowu ] for ( resp . , ) .furthermore , assumption [ ass_a2 ] is fulfilled if assumptions [ ass_process_dens ] and [ ass_process_wu ] are fulfilled ( cf .wu ) .chiu uses cumulant conditions to prove strong laws of large numbers and the corresponding central limit theorems for for .the next lemma shows that weighted and unweighted empirical distribution functions of fourier coefficients converge to a normal distribution , hence showing that assumptions [ ass_a3 ] and [ ass_a5 ] are valid .the proof is based on theorem 2.1 in shao and wu , which is somewhat stronger than the usual statement on asymptotic normality of finitely many fourier coefficients as it gives the assertion uniformly over all finite sets of fixed cardinal numbers ; this is crucial for the proof of lemma [ lem_edf ] . [ lem_edf ] let assumptions [ ass_w ] , [ ass_process_dens ] and [ ass_process_shaowu1 ] be fulfilled . furthermore consider weights such that and as , then where denotes the distribution function of the standard normal distribution .if we have weights with and , then the assertion remains true in the sense that for any it holds that the next lemma shows the validity of assumptions [ ass_a4 ] and again [ ass_a1 ] under a different set of assumptions .for this we need to introduce yet another assumption on the kernel .[ ass_kernelneu3 ] let as in ( [ eq_def_kh ] ) fulfill the following uniform lipschitz condition ( ) : in case of a uniform lipschitz continuous kernel with compact support , the assertion is fulfilled for small enough . 
for infinite support kernels we still get assumption [ ass_kernelneu3 ] as in remark [ rem_new_kernel ] under certain stronger regularity conditions .[ lem_llnperio2 ] let the process fulfill assumptions [ ass_w ] and [ ass_process_dens ] .furthermore the bandwidth fulfills and the kernel fulfills assumptions [ ass_kernelneu1 ] and [ ass_kernelneu3 ] in addition to .assumption [ ass_a4 ] holds , if assumption [ ass_a2 ] together with ( [ eq_cov_1 ] ) , where the convergence for is uniformly of rate , implies assumption [ ass_a4 ] as well as assumption [ ass_a1 ] for the spectral density estimator given in ( [ eq_lem_spectralestimate ] ) .assumption [ ass_a2 ] together with ( [ eq_cov_2 ] ) , where the convergence for is uniformly of rate , implies assumption [ ass_a4 ] .[ rem_llnperio2 ] by the boundedness of the spectral density ( cf .assumption [ ass_w ] ) ( [ eq_cov_coef ] ) follows , for example , from assumption [ ass_a2](ii ) .if for some the rate of convergence in ( [ eq_cov_coef ] ) is for a proof we refer to the supplementary material , proof of lemma [ lem_llnperio ] .some conditions leading to ( [ eq_cov_1 ] ) , respectively , ( [ eq_cov_2 ] ) , with the required convergence rates can be found in remark [ rem_llnperio ] .in this section we show that while our procedure still works for the same class of periodogram - based statistics as the classical frequency bootstrap methods , we are also able to apply it to statistics that are completely based on the time domain representation of the observations , such as the cusum statistic for the detection of a change point in the location model or the least - squares test statistic in unit - root testing . the classical applications of bootstrap methods in the frequency domain are kernel spectral density estimators ( cf .franke and hrdle , paparoditis and politis ) as well as ratio statistics and whittle estimators ( cf . dahlhaus and janas , paparoditis and politis ) .this includes , for example , yule walker estimators for autoregressive processes .a simple calculation yields where is the periodogram of the tft - bootstrap time series at , and , are defined as in section [ section_descr_boot ] . comparing that with the original bootstrap procedures for the periodograms, we realize that for the wild bootstrap we obtain exactly the same bootstrap periodogram , whereas for the residual - based as well as local bootstrap we obtain a closely related bootstrap periodogram but not exactly the same one .the reason is that we did not simultaneously draw the real and imaginary part of the bootstrap fourier coefficient but further exploited the information that real and imaginary part are asymptotically independent .yet , the proofs showing the validity of the bootstrap procedure for the above mentioned applications go through , noting that the bootstrap s real part and imaginary part are ( conditionally on the data ) independent .the above discussion shows that the procedures discussed in this paper inherit the advantages as well as disadvantages of the classical frequency bootstrap procedures . 
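Since the TFT sample lives in the time domain, the bootstrap periodogram referred to above is recovered simply by transforming the bootstrap series back to the frequency domain, and periodogram-based ratio statistics such as the lag-one sample autocorrelation can then be bootstrapped directly. A brief sketch, reusing `tft_residual_bootstrap` from the earlier listing (again an illustration under our own naming and normalization conventions, not the authors' code):

```python
import numpy as np

def bootstrap_periodogram(v_star):
    """Periodogram of a TFT bootstrap series on the Fourier grid j = 1..N,
    i.e. (x*(j)^2 + y*(j)^2) / (2*pi) with this normalization."""
    t = len(v_star)
    n = (t - 1) // 2
    z_star = np.fft.fft(v_star) / np.sqrt(t)
    return np.abs(z_star[1:n + 1]) ** 2 / (2 * np.pi)

def lag1_acf_from_periodogram(periodogram, t):
    """Ratio statistic: lag-one autocorrelation as a ratio of spectral means."""
    lam = 2 * np.pi * np.arange(1, len(periodogram) + 1) / t
    return np.sum(np.cos(lam) * periodogram) / np.sum(periodogram)

# bootstrap distribution of the lag-one autocorrelation (v = centered data):
# reps = [tft_residual_bootstrap(v) for _ in range(999)]
# rho_star = [lag1_acf_from_periodogram(bootstrap_periodogram(r), len(v)) for r in reps]
```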
in change - point analysis oneis interested in detecting structural changes in time - series such as , for example , a mean change in the following amoc ( at - most - one - change ) location model : where is a stationary process with ; , and are unknown .the question is whether a mean change occurred at some unknown time , the so called change - point .this shows that we are interested in testing typically , test statistics in this context are based on centered partial sums such as the well - known cusum statistic , for simplicity we only discuss the classical cusum statistic above .however , extensions to other test statistics in change - point analysis , such as are straightforward using standard techniques of change - point analysis ( cf . ,e.g. , kirch , proof of corollary 6.1 ) .this is not true for extreme - value type test statistics for which stronger results are needed . for a detailed discussion of typical test statisticswe refer to csrg and horvth .if fulfills assumption [ ass_clt ] we obtain the following limit under ( cf . also horvth and antoch , hukov and prkov ) : where is a brownian bridge and , where is the spectral density of .kirch has already used permutation methods in the frequency domain to obtain approximations of critical values for change - point tests .her idea was to use random permutations of the fourier coefficients taking some symmetry properties into account before back - transforming them to the time domain using the fft .however , the covariance structure of a time series is encoded in the variances of the fourier coefficients ; hence , this structure is destroyed by a simple permutation .we will now apply our tft - bootstrap to obtain critical values for the above change - point tests .we do not directly observe the process since we do not know whether the null hypothesis or the alternative holds true ; thus , we estimate by where ( e.g. ) , and .the bootstrap statistic is then given by where is the tft - bootstrap sequence defined in section [ section_descr_boot ] .the following theorem shows that the ( conditional ) limit of the bootstrap statistic is the same as that of the original statistic under even if the alternative holds true .hence , the distribution of the bootstrap statistic is a good approximation of the null distribution of the statistic , and the bootstrap critical values are asymptotically equivalent to the asymptotic critical values under both the null hypothesis as well as alternatives .this shows that the asymptotic test and the bootstrap test are asymptotically equivalent . in the next sectiona simulation study shows that frequently we get better results in small samples when using the tft - bootstrap .[ th_cpa ] suppose that the process fulfills the hjek renyi inequality ( cf . , e.g. , lemma in kirch for linear processes ) , and let under , .furthermore let the assumptions in theorem [ th_boot ] hold and for as in corollary [ cor_boot_est ] .then it holds under as well as for all ) .this shows that the corresponding bootstrap test ( where one calculates the critical value from the bootstrap distribution ) is asymptotically equivalent to the asymptotic test above .the condition is fulfilled for a large class of processes with varying convergence rates ; for certain linear processes we get the best possible rate ( cf . , e.g. , antoch , hukov and prkov ) , but often in the dependent situation the rates are not as good ( cf . , e.g. 
, kokoszka and leipus ) .it is still possible to get the above result under somewhat stronger assumptions on , that is , on the bandwidth , if only weaker versions of the hjek renyi inequality are fulfilled as , for example , given in appendix b.1 in kirch for fairly general processes .[ rem_stud ] for practical purposes it is advisable to use some type of studentizing here .we propose to use the adapted flat - top estimator with automatic bandwidth choice described in politis for the asymptotic test as well as for the statistic of the original sample .let , and the bandwidth , where is the smallest positive integer such that , for . then , the estimator is given by the rightmost part in the parenthesis is chosen to ensure positivity and scale invariance of the estimator .for a discussion of a related estimator in change - point analysis we refer to hukov and kirch .in the bootstrap domain we propose to use an estimator that is closely related to the bootstrap procedure , namely an estimator based on the bootstrap periodograms using the same kernel and bandwidth as for the bootstrap procedure ( cf . also ( 10.4.7 ) in brockwell and davis ) where is the bootstrap periodogram and it can easily be seen using assumptions [ ass_boot_1][ass_boot_4 ] that if which holds under very weak regularity conditions on the kernel .this shows that the studentized bootstrap procedure is asymptotically consistent .this estimator is naturally related to the bootstrap procedure and has proved to work best in simulations .this is similar ( although maybe for different reasons ) to the block bootstrap for which gtze and knsch showed that , in order to obtain second - order correctness of the procedure , one needs to studentize the bootstrap statistic with the true conditional variance of the bootstrap statistic ( which is closely related to the bartlett estimator ) , while for the original statistic one needs to use a different estimator such as the above mentioned flat - top estimator .however , a detailed theoretical investigation of which type of studentizing is best suitable for the tft - bootstrap is beyond the scope of this paper .unit root testing is a well studied but difficult problem .key early references include phillips and dickey and fuller ( see also the books by fuller and by hamilton ) . nevertheless , the subject is still very much under investigation ( see , e.g. , cavaliere and taylor , chang and park , park , paparoditis and politis and the references therein ) .the objective here is to test whether a given set of observations belongs to a stationary or a -time series ( integrated of order one ) , which means that the time series is not stationary , but its first - order difference is stationary . 
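The studentization proposed in the remark above requires the flat-top estimator of the long-run variance 2*pi*f(0) with an automatic bandwidth choice in the spirit of Politis. Because the constants of that rule are garbled in the display, the sketch below fixes them as explicit assumptions (threshold c*sqrt(log10(T)/T) with c = 2, K_T = 5, a cap on the maximal lag, and a crude positivity floor); it should be read as an illustration of the idea, not as the exact estimator used in the paper.

```python
import numpy as np

def sample_autocov(v, max_lag):
    """Biased sample autocovariances gamma_hat(0..max_lag)."""
    t = len(v)
    vc = v - v.mean()
    return np.array([np.dot(vc[:t - k], vc[k:]) / t for k in range(max_lag + 1)])

def flat_top_lrv(v, c=2.0, k_t=5):
    """Flat-top estimate of tau^2 = 2*pi*f(0); constants c, k_t, the lag cap and
    the positivity floor are our assumptions, not the paper's exact choices."""
    t = len(v)
    gamma = sample_autocov(v, min(t - 1, 3 * int(np.sqrt(t))))
    rho = gamma / gamma[0]
    thresh = c * np.sqrt(np.log10(t) / t)
    m = 1                                        # smallest m with k_t small correlations beyond it
    while m < len(rho) - k_t and np.any(np.abs(rho[m + 1:m + 1 + k_t]) >= thresh):
        m += 1
    h = 2 * m                                    # bandwidth of the flat-top window
    k = np.arange(1, min(h, len(gamma) - 1) + 1)
    w = np.where(k <= h / 2, 1.0, np.maximum(2.0 * (1.0 - k / h), 0.0))
    tau2 = gamma[0] + 2.0 * np.sum(w * gamma[1:len(k) + 1])
    return max(tau2, gamma[0] / np.sqrt(t))      # crude positivity floor (assumption)
```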
for simplicitywe assume that , and we do not consider a deterministic trend component in this paper .the hypothesis test of interest can then be stated as now we note that for the null hypothesis is equivalent to ( for a detailed discussion we refer to paparoditis and politis , example 2.1 ) .denote , which is a stationary sequence under as well as .while the bootstrap test below is valid for the general situation ( ) above , it is intuitively easier to understand if one considers the following restricted situation , where for some stationary with mean 0 and tests versus ; this is the setup we use in the simulations below .an intuitive test statistic ( cf .phillips ) is given by rejecting the null hypothesis if for some appropriate critical value , where is a consistent estimator for under both the null hypothesis as well as the alternative .other choices for and are also possible ( for a detailed discussion , see section 2 of paparoditis and politis ) .if fulfills assumption [ ass_clt ] with mean 0 and additionally then it holds under that where and is the spectral density of the stationary sequence , , and is a wiener process ( see also phillips , theorem 3.1 ) .this shows that the limit distribution of depends on the unknown parameters as well as if the errors are dependent .the famous dickey fuller test is closely related to the above test statistic just using a slightly different normalization , but it suffers from the same problem .phillips and phillips and perron suggest some modifications of the two tests mentioned above which do have a pivotal limit for time series errors as well .later on , perron and ng , stock and ng and perron propose to use the trinity of so - called unit root statistics , which are also closely related to the above two tests but have pivotal limits for time series errors as well . those unit root testsare given by as well as the product of the above two statistics .as before denotes an estimator of .in the simulations we use the estimator as given in ( [ est_flat_top ] ) .all of the above mentioned statistics are continuous functions of the partial sum process under the null hypothesis [ as , so that the null asymptotics are immediate consequences of the functional central limit theorem as given in assumption [ ass_clt ] . for the test statistic and the dickey fuller test it is additionally needed that .for example , the statistic has the same asymptotic limit as the statistic with independent errors in the following we concentrate on the statistic but the results for the other mentioned statistics follow analogously .we would like to apply the tft - bootstrap to obtain critical values ; that means we need a bootstrap sequence which is ( conditionally ) . in order to obtain thiswe estimate , which is stationary under both as well as , by then we can use the tft - bootstrap based on , that is , create a tft - bootstrap sample and obtain a bootstrap sequence ( i.e. 
, a sequence fulfilling ) by letting the bootstrap analogue of the statistic is then given by where we use again the estimator as in ( [ boot_est_tau ] ) for the bootstrap sequence .the following theorem shows that the conditional limit of the bootstrap statistic is the same as that appearing in the rhs of ( [ eq_asym_mut ] ) no matter whether the original sequence follows the null or alternative hypothesis .this shows that the bootstrap critical values and thus also the bootstrap test is equivalent to the asymptotic critical values ( and thus the asymptotic test ) , under both the null hypothesis as well as alternatives .[ th_unitroot ] suppose that the process has mean 0 and fulfills the assumptions in theorem [ th_boot ] and let ( [ eq_boot_mean ] ) be fulfilled .furthermore assume that under as well as it holds that for as in corollary [ cor_boot_est ] .then it holds under as well as for all that this shows that the corresponding bootstrap test ( where one calculates the critical value from the bootstrap distribution ) is asymptotically equivalent to the test based on ( [ eq_asym_mut ] ) .condition ( [ eq_unit_est ] ) is fulfilled for a large class of processes and if , theorem 3.1 in phillips , for example , shows under rather general assumptions that under under ( [ eq_unit_est ] ) also holds under fairly general assumptions ( cf . ,e.g. , romano and thombs , theorem 3.1 ) if ; more precisely the previous sections , the asymptotic applicability of the tft - bootstrap was investigated . in this sectionwe conduct a small simulation study in order to show its applicability in finite samples . with parameter and corresponding tft - bootstrap sample : residual - based bootstrap , bartlett priestley kernel , . ] to get a first impression of what a tft - bootstrap sample looks like , we refer to figure [ fig_1 ] , which shows the original time series as well as one bootstrap sample . at first glance the covariance structureis well preserved .we use the statistical applications given in section [ section_appl ] to show that the tft - bootstrap is indeed applicable .the usefulness of the procedure for statistics based on periodograms have already been shown by several authors ( cf .franke and hrdle , dahlhaus and janas and paparoditis and politis ) and will not be considered again .however , the applicability for statistics that are completely based on time domain properties , such as the cusum statistic in change - point analysis or the above unit - root test statistics , is of special interest .more precisely we will compare the size and power of the tests with different parameters as well as give a comparison between the tft , an asymptotic test , and alternative block resampling techniques .for change - point tests , the comparison is with the block permutation test of kirch ; in the unit - root situation we compare the tft - bootstrap to the block bootstrap of paparoditis and politis . 
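Before turning to the simulation evidence, it may help to see how the pieces of the change-point application combine in practice: estimate the change point, remove the fitted segment means to obtain an (approximately) stationary sequence, resample it with the TFT-bootstrap, and compare studentized cusum statistics. The sketch below reuses `tft_residual_bootstrap` and `flat_top_lrv` from the earlier listings; the argmax-of-cusum change-point estimator and the use of the flat-top estimator for the bootstrap replicates (instead of the bootstrap-periodogram-based variance proposed above) are simplifying assumptions on our part.

```python
import numpy as np

def cusum_statistic(x, tau2):
    """max_{1<=k<T} |sum_{t<=k}(x_t - x_bar)| / sqrt(T * tau2)."""
    s = np.cumsum(x - x.mean())[:-1]
    return np.max(np.abs(s)) / np.sqrt(len(x) * tau2)

def tft_changepoint_pvalue(x, n_boot=999, bandwidth=0.01, rng=None):
    """Bootstrap p-value for the AMOC mean-change test (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    # estimated change point: argmax of the unstudentized cusum (assumed estimator)
    k_hat = int(np.argmax(np.abs(np.cumsum(x - x.mean())[:-1]))) + 1
    # estimated stationary sequence: remove the fitted means on both segments
    v_hat = np.concatenate([x[:k_hat] - x[:k_hat].mean(),
                            x[k_hat:] - x[k_hat:].mean()])
    stat = cusum_statistic(x, flat_top_lrv(v_hat))       # studentized original statistic
    stat_star = np.empty(n_boot)
    for b in range(n_boot):
        v_star = tft_residual_bootstrap(v_hat, bandwidth=bandwidth, rng=rng)
        stat_star[b] = cusum_statistic(v_star, flat_top_lrv(v_star))
    return (1.0 + np.sum(stat_star >= stat)) / (n_boot + 1.0)
```

The unit-root application is handled analogously: difference the data to estimate the stationary increments, resample them with the TFT, cumulative-sum the bootstrap series to obtain an integrated sequence, and recompute the unit-root statistic from it.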
for the tft we use the local bootstrap ( lb ) as well as residual - based bootstrap ( rb ) with a uniform kernel ( uk ) as well as bartlett priestley kernel ( bpk ) with different bandwidths .we visualize these qualities by the following plot : the line corresponding to the null hypothesis shows the actual achieved level on the -axis for a nominal one as given by the -axis .this can easily be done by plotting the empirical distribution function ( edf ) of the -values of the statistic under .the line corresponding to the alternative shows the size - corrected power , that is , the power of the test belonging to a true level test where is given by the -axis .this can easily be done by plotting the edf of the -values under the null hypothesis against the edf of the -values under the alternative.=1 in the simulations we calculate all bootstrap critical values based on 1,000 bootstrap samples , and the asps are calculated on the basis of 1,000 repetitions . concerning the parameters for the tft - bootstrap we have used a uniform [ as well as bartlett priestley kernel [ with various bandwidths .all errors are centered exponential hence non - gaussian .furthermore time series are used with coefficient .furthermore we consider garch processes as an example of nonlinear error sequences .we compare the power using an alternative that is detectable but has not power one already in order to pick up power differences . for the process with parameter ,we choose ; for we choose as changes are more difficult to detect for these time series .a comparison involving the uniform kernel ( uk ) , the bartlett priestley kernel ( bpk ) as well as bandwidth and can be found in figure .it becomes clear that small bandwidths are best in terms of keeping the correct size , where the bpk works even better than the uk .however , this goes along with a loss in power which is especially severe for the bpk .furthermore , the power loss for the bpk kernel is worse if combined with the local bootstrap . generally speaking, the tft works better for negatively correlated errors which is probably due to the fact that the correlation between fourier coefficients is smaller in that case .process with parameter with centered exponential errors , respectively , , , , , bpk : bartlett priestley kernel , uk : uniform kernel , lb : local bootstrap , rb : residual - based bootstrap . ] in a second step , we compare the residual - based bootstrap ( rb ) with both kernels and bandwidth with the block permutation method of kirch as well as the asymptotic test .the results are given in figure [ fig_2](e)[fig_2](f ) .the tft works best in terms of obtaining correct size , where the bpk beats the uk as already pointed out above .the power loss of the bpk is also present in comparison with the asymptotic as well as the block permutation methods ; the power of the uniform kernel is also smaller than for the other method but not as severe as for the bpk .the reason probably is the sensitivity of the tft with respect to the estimation of the underlying stationary sequence as in corollary [ cor_main ] . in this example a mis - estimation of the change - point or the mean difference can result in an estimated sequence that largely deviates from a stationary sequence , while in the unit - root example below , this is not as important and in fact the power loss does not occur there . the simulation results for a garch time series with parameters [ i.e. 
, , , are given in figure [ fig_2](g ) and [ fig_2](h ) , and it becomes clear that the conclusions are similar .the alternative in these plots is given by .in the unit root situation we need a bootstrap sample of as in corollary [ cor_main ] , where we additionally use the fact that has mean . in this casewe additionally need a bootstrap version of the mean . for simplicitywe use a wild bootstrap , where is standard normal distributed and is as in ( [ est_flat_top ] ) , where we replace by .the alternative in all plots is given by .process with parameter with centered exponential errors , respectively , , , , , bpk : bartlett priestley kernel , uk : uniform kernel , lb : local bootstrap , rb : residual - based bootstrap . ] in figure [ fig_3](a)[fig_3](d ) a comparison of different kernels and bandwidths for an error sequence is given .it can be seen that again a small bandwidth yields best results , and in particular the bpk works better than the uk . unlike in the change - point example we do not have the effect of a power loss .furthermore unlike in change - point analysis the bootstrap works better for positively correlated errors. a comparison with the asymptotic test as well as the block bootstrap by paparoditis and politis can be found in figure [ fig_3](e)[fig_3](f ) . in the case of a positive correlationall methods perform approximately equivalently , at least if we use the better working bartlett priestley kernel ; however for a negative correlation the tft holds the level better than the other methods .some results for a garch error sequence with parameters are shown in figure [ fig_3](g ) and [ fig_3](h ) .in this situation a somewhat larger bandwidth of works slightly better and the tft test leads to an improvement of the power of the tests .it is noteworthy that the appropriate bandwidth in all cases is smaller than what one might have expected to be a good choice .a possible explanation for this is that some undersmoothing is appropriate since the back - transformation will account for some additional smoothing .the subject of the paper is the tft - bootstrap which is a general frequency domain bootstrap method that also generates bootstrap data in the time domain .connections of the tft - bootstrap with other methods including the surrogate data method of theiler et al . , and the original proposals of hurvich and zeger were thoroughly explored .it was shown that the tft - bootstrap samples have asymptotically the same second - order moment structure as the original time series .however , the bootstrap pseudo - series are ( asymptotically ) gaussian showing that the tft - bootstrap approximates the original time series by a gaussian process with the same covariance structure even when the original data sequence is nonlinear ; see section [ section_fclt ] .nevertheless , our simulations suggest that for small samples the tft - bootstrap gives a better approximation to the critical values ( as compared with the asymptotic ones ) especially when studentizing is possible . whether appropriate studentization results in higher - order correctness is a subject for future theoretical investigation. 
choosing somewhat different types of bootstrapping in the frequency domain could also lead to higher - order correctness without bootstrapping as in , for example , dahlhaus and janas for the periodogram bootstrap .in fact the simulations suggest that a surprisingly small bandwidth ( in comparison to spectral density estimation procedures ) works best .when applied in a careful manner no smoothing at all still results in theory in a correct second - moment behavior in the time domain ( cf .section [ section_diff_methods ] ) , suggesting that due to the smoothing obtained by the back - transformation a coarser approximation in the frequency domain is necessary to avoid oversmoothing in the time domain .in this section we only give a short outline of some of the proofs .all technical details can be obtained as electronic supplementary material .proof of lemma [ lem_cov ] by lemma a.4 in kirch it holds ( uniformly in ) that thus it holds uniformly in and that by assumptions [ ass_boot_1 ] and [ ass_boot_2b ] and by ( [ eq_form_bs ] ) it holds that where the last line follows for as well as by ( [ eq_sum_trig ] ) .assertion ( b ) follows by an application of lemma [ lem_proof_1 ] as well as standard representation of the spectral density as sums of auto - covariances ( cf . , e.g. , corollary 4.3.2 in brockwell and davis ) . for assertion ( a )we use the cramr wold device and prove a lyapunov - type condition .again arguments similar to ( [ eq_sum_trig ] ) are needed . to use this kind of argumentit is essential that because for the feller condition is not fulfilled , and thus the lindeberg condition can also not be fulfilled .therefore a different argument is needed to obtain asymptotic normality for .we make use of the cramr wold device and lemma 3 in mallows . as a resultsomewhat stronger assumptions are needed , but it is not clear whether they are really necessary ( cf . also remark [ rem_lind ] ) .proof of theorem [ th_main ] lemmas [ lem_tight ] and [ lem_finite ] ensure convergence of the finite - dimensional distribution as well as tightness , which imply by billingsley , theorem 13.5 , }{\longrightarrow } \cases { \{w(u)\dvtx0{\leq}u{\leq}1\ } , & \quad ,\vspace*{2pt}\cr \{b(u)\dvtx0{\leq}u{\leq}1\},&\quad .}\end{aligned}\ ] ] proof of theorem [ th_boot ] concerning assumption [ ass_boot_2b ] it holds by assumption [ ass_a1 ] that since concerning assumption [ ass_boot_3 ] let , then .then we put an index , respectively , , on our previous notation indicating whether we use or in the calculation of it , for example , , , respectively , , denote the fourier coefficients based on , respectively , .we obtain the assertion by verifying that assumptions [ ass_a1 ] , [ ass_a2 ] as well as assumption [ ass_a4 ] remain true .this in turn implies assumptions [ ass_boot_2b ] as well as assumption [ ass_boot_4 ] . concerning assumption [ ass_boot_3 ]we show that the mallows distance between the bootstrap r.v . 
based on and the bootstrap r.v .based on converges to 0 .the key to the proof is where this can be seen as follows : by theorem 4.4.1 in kirch , it holds that by ( [ cor_boot_est ] ) and an application of the cauchy schwarz inequality this implies \sum_{j=1}^{2n}f^2_t(j)&\preceq & n\sum_{t=1}^t\bigl(v(t)-{\widehat{v}}(t)\bigr)^2\nonumber\\[-9pt]\\[-9pt ] & & { } + \sum _ { t_1\neq t_2}\bigl|\bigl(v(t_1)-{\widehat{v}}(t_1)\bigr)\bigl(v(t_2)-{\widehat{v}}(t_2)\bigr)\bigr|\nonumber\\[-2pt ] & = & o_p ( t^2\alpha_t^{-1 } ) .\nonumber\end{aligned}\ ] ] equation ( [ eq_pcor_boot_3 ] ) follows by \\[-9pt ] y_v(j)-y_{{\widehat{v}}}(j)&= & t^{-1/2 } f_t(n+j).\nonumber\end{aligned}\ ] ] proof of lemma [ lem_llnperio ] some careful calculations yield \\[-9pt ] \sup_{1{\leq}l , k{\leq}n}| { \operatorname{cov}}(y(l),y(k))-\pi f(\lambda_k)\delta _ { l , k}|&\to&0.\nonumber\end{aligned}\ ] ] this implies by and an application of the markov inequality yields hence assertion ( a ) .similar arguments using proposition 10.3.1 in brockwell and davis yield assertions ( b ) and ( c ) .proof of theorem [ th_cpa ] it is sufficient to prove the assertion of corollary [ cor_boot_est ] under as well as , then the assertion follows from theorem [ th_main ] as well as the continuous mapping theorem . by the hjek renyi inequality it follows under that & = & \frac{\log t}{t}\biggl(\frac{1}{\sqrt{(\log t ) \widehat{\widetilde { k } } } } \sum_{j=1}^{\widehat{\widetilde{k}}}\bigl(v(t)-{\mathrm{e}}(v(t))\bigr ) \biggr)^2\\[-2pt ] & & { } + \frac{\log t}{t}\biggl(\frac{1}{\sqrt{(\log t ) ( t-\widehat { \widetilde{k } } ) } } \sum_{j=\widehat{\widetilde{k}}+1}^t\bigl(v(t)-{\mathrm{e}}(v(t))\bigr ) \biggr)^2\\[-2pt ] & = & o_p\biggl ( \frac{\log t}{t } \biggr),\end{aligned}\ ] ] which yields the assertion of corollary [ cor_boot_est ] . similarly , under alternatives & & \qquad= \frac{\min(\widehat{\widetilde{k}},\widetilde{k})}{t } ( \mu_1-\widehat{\mu}_1)^2 + |d+\mu_j-\widehat{\mu}_j|^2 \frac { |\widehat{\widetilde{k}}-{\widetilde{k}}|}{t}\\[-2pt ] & & \qquad\quad { } + \frac{t-\max(\widehat { \widetilde{k}},\widetilde{k})}{t } ( \mu_2-\widehat{\mu}_2)^2\\[-2pt ] & & \qquad = o_p\biggl(\max\biggl ( \frac{\log t}{t } , \beta_t\biggr ) \biggr),\end{aligned}\ ] ] where and if and and otherwise , which yields the assertion of corollary [ cor_boot_est ] .
|
a new time series bootstrap scheme , the time frequency toggle ( tft)-bootstrap , is proposed . its basic idea is to bootstrap the fourier coefficients of the observed time series , and then to back - transform them to obtain a bootstrap sample in the time domain . related previous proposals , such as the `` surrogate data '' approach , resampled only the phase of the fourier coefficients and thus had only limited validity . by contrast , we show that the appropriate resampling of phase _ and _ magnitude , in addition to some smoothing of fourier coefficients , yields a bootstrap scheme that mimics the correct second - order moment structure for a large class of time series processes . as a main result we obtain a functional limit theorem for the tft - bootstrap under a variety of popular ways of frequency domain bootstrapping . possible applications of the tft - bootstrap naturally arise in change - point analysis and unit - root testing where statistics are frequently based on functionals of partial sums . finally , a small simulation study explores the potential of the tft - bootstrap for small samples showing that for the discussed tests in change - point analysis as well as unit - root testing , it yields better results than the corresponding asymptotic tests if measured by size and power .
|
entanglement is the central theme in quantum information processing .it allows to design faster algorithms than classically , to communicate in a secure way , or to perform protocols that have no classical analogue .entangled states of few particles ( e.g. photons , ions ) can be routinely created in experiments , and their entanglement can be confirmed using state tomography , bell inequalities or entanglement witnesses .all of these tools are well - established methods for the detection of entanglement . but can one be sure that they give a confirmative answer even when realistic , i.e. erroneous detectors are used ? here , we will introduce and study a loophole - problem for the detection of entanglement via witness operators .loophole - problems have been widely discussed in the context of ruling out local hidden variable ( lhv ) models , by measuring a violation of certain inequalities , as suggested in the seminal work of j.s .bell in 1964 .many experiments have been carried out along that line , but all of them so far suffer from the locality loophole ( i.e. no causal separation of the detectors ) and/or the detection loophole ( i.e. low detector efficiency ) . as a consequence of a loophole , quantum correlations are also explainable by lhv theories . in this paperwe discuss a possible _ detection loophole _ for experiments that measure entanglement witnesses . here , the goal is not to prove the completeness of quantum mechanics ( as in bell experiments ) , but , assuming the correctness of quantum mechanics , to prove the existence of entanglement in a given state .one advantage of witness operators is that they require only few _ local _ measurements to detect entanglement ; global measurements are experimentally not easily accessible at present .a local projection measurement with realistic imperfect detectors ( in the computational basis , for qubits and isotropic noise ) can be described by the following positive operator valued measurement ( povm ) : and , where is the efficiency of the detector . however , in general the _ global _ properties of the detectors are not fully characterised , e.g. there may exist correlations between povm elements of different detectors .provided that only local detector properties are given , what are the conditions for being nevertheless able to prove the existence of entanglement without doubt ?an entanglement witness is a hermitian operator that fulfils for all separable -partite states , where the index numbers the subsystem , the probabilities are real and non - negative , , and for at least one entangled state . throughout this paper, we will use without loss of generality normalized witnesses , i.e. . our goal is to ensure that a negative measured expectation value is really due to the state being entangled , rather than to imperfect detectors .the following line of arguments also holds for specialized witnesses which are constructed such that , e.g. , they detect only genuine multi - partite entanglement or states prohibiting lhv models .this paper is organized as follows .after introducing the local decomposition of entanglement witnesses , we study the effect of lost events as well as additional events on the experimental expectation value of . 
here, we use the worst case approach to derive inequalities which need to be fulfilled to ensure entanglement of the given state .the parameters in these inequalities are the measured expectation value of the witness , the detection efficiencies , and the coefficients for the local decomposition of the witness .we show for the two - qubit case how to optimize the local operator decomposition of the witness , such that the detection efficiency which is needed to close the loophole is minimized .some recent experiments measuring witness operators are presented , to demonstrate that the detection loophole is a problem in current experiments .any witness for an -partite quantum state in dimensions can be decomposed in a local operator basis , i.e. an -fold tensor product of operators , where the coefficients are real .each operator is traceless or the identity and corresponds to the local setting for the party number . in this expansion , we include implicitly also the local identity operators which do not need to be measured . the number of terms in eq .( [ eq - wdec ] ) depends on the decomposition , i.e. on the choice of operators . a straightforward , but not necessarily optimal choice ( concerning the needed detector efficiency ) are the hermitian generators of su( ) and the respective identity operators .in the following , we will use a simpler notation , namely where stands for one term from the local expansion ( [ witness ] ) . here , we exclude the identity ( acting on the total space ) from the sum over , because it does not have to be measured and therefore has a special role .we will now investigate one local measurement setting described by , and drop the index for convenience .the _ measured _ expectation value is given by where is the eigenvalue of , the number of measured events for the outcome is denoted as , and is the total number of measured events for that setting . in the second part of eq .( [ eq - settingexpect ] ) we expressed this expectation value as a sum of the ideal number of events , denoted as ( i.e. for perfect detectors ) , the additional events ( e.g. dark counts ) and the lost events for the outcome .the total ideal number of events for such a setting is denoted as , and the total number of additional / lost events as .we have and .the experimental data usually gives no information about the number of errors for a specific measurement outcome .only the detection imperfections are known , namely the detection efficiency ( `` lost events efficiency '' ) : and the `` additional events efficiency '' : where holds . here , denotes the global detection efficiency for a given measurement setting . in the following , we assume that is the same for all settings .( other cases can be included by indexing with . )usually , one makes the fair - sampling assumption about the statistical distribution of the unknown errors , i.e. one assumes the same statistical distribution for detected and lost events ; the additional events are assumed to have a flat distribution . 
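before giving up the fair-sampling assumption and turning to the worst case, the following minimal numpy sketch illustrates the local operator decomposition used throughout: it expands a two-qubit witness in the pauli basis, prints the coefficient of the identity term and the sum of the absolute values of the remaining coefficients (the quantity entering the efficiency conditions derived below), and also computes the singular values of the sigma-sigma coefficient block, whose sum is what the optimal decomposition discussed later minimises. the example witness, the projector-based witness for the bell state |phi+> normalised to unit trace, is an assumption chosen for illustration.

```python
import numpy as np

# pauli basis (identity plus the three pauli matrices)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def pauli_coefficients(W):
    """expansion coefficients c[i, j] of a two-qubit operator W in the
    basis {sigma_i (x) sigma_j}, i.e. W = sum_ij c[i, j] sigma_i (x) sigma_j."""
    c = np.zeros((4, 4))
    for i, a in enumerate(paulis):
        for j, b in enumerate(paulis):
            c[i, j] = np.real(np.trace(W @ np.kron(a, b))) / 4.0
    return c

# assumed example: witness for the bell state |phi+>, normalised to unit trace
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
W = np.eye(4) / 2 - np.outer(phi_plus, phi_plus.conj())

c = pauli_coefficients(W)
print("identity coefficient c_00      :", c[0, 0])
print("sum of |c| over measured terms :", np.abs(c).sum() - abs(c[0, 0]))

# expectation value for the bell state itself (should be negative)
rho = np.outer(phi_plus, phi_plus.conj())
print("<W> on |phi+>                  :", np.real(np.trace(W @ rho)))

# singular values of the sigma-sigma block; their sum is the smallest possible
# sum of |coefficients| over biorthogonal local decompositions of that block
sv = np.linalg.svd(c[1:, 1:], compute_uv=False)
print("singular values of c[1:,1:]    :", sv, " sum:", sv.sum())
```

for this particular witness the pauli expansion is already biorthogonal, so the sum of singular values coincides with the direct sum of |coefficients|; for a witness whose sigma-sigma block is not diagonal, the singular-value-based decomposition can yield a strictly smaller sum.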
here, we will give up this assumption and will consider the worst case , where both lost and additional events contribute such that the expectation value of the witness is shifted towards negative values .we point out that in order to reach this worst case scenario , it is already sufficient that the global povm elements exhibit certain classical correlations ( while being compatible with the local measurement operators ) .the worst case is equivalent to finding the lowest possible , which is achieved by minimizing the contribution of the additional events , and maximizing the contribution of the lost events in eq .( [ eq - settingexpect ] ) .this minimization / maximization can be easily shown to have the form and , with where denotes the heaviside function , and is the minimal / maximal eigenvalue of . inserting this into eq .( [ eq - settingexpect ] ) , one finds the following worst case estimate for the measured expectation value : where . here , we have introduced the notation for the _ true _ expectation value ( without any errors ) . using and eq .( [ expectvalue ] ) , with isotropic detection efficiencies , we can express the measured witness expectation value as a function of the _ true _one , for the worst case . to close the loophole it is necessary to ensure that .this leads to a condition for the maximal that depends on the decomposition of and the efficiencies : where we have re - introduced the summation index .> from now on we want to focus on the case where the subsystems are two - dimensional , i.e. qubits . for qubitsthe measurement operators are chosen to be tensor products of pauli operators with eigenvalues .this simplifies to , and eq .( [ eq - wmineq ] ) reads for qubits in fig .[ fig - wmetpetm ] a contour plot of this function is shown : given a certain measured expectation value , the corresponding efficiencies ensure that the state is indeed entangled .this plot assumes , and can be easily redrawn for other decompositions .> from eq .( [ eq - wmetpetmqbts ] ) it is obvious that an _optimal _ decomposition of such a witness with respect to the needed efficiencies is achieved by minimizing . for the case of two - qubit witnessesa constructive optimization can be achieved , when arbitrary local stern - gerlach measurements ( described by pauli operators or rotations thereof ) and the identity are allowed .we start with a two - qubit witness in its pauli operator decomposition , i.e. where and .the normalization condition leads to .the three remaining terms in eq.([eq-2qwitnessterms ] ) can be optimized separately ( note the special role of the identity ) .let us first consider the term .this expression is optimized by doing a singular value decomposition of the coefficient matrix , i.e. , where is the diagonal matrix that contains the singular values .the matrices and are orthogonal and have entries and . the new orthogonal basis is simply constructed by using the orthonormal rows of and , i.e. and , such that we get the schmidt operator decomposition with and the same orthogonality relation for party .the optimality of this biorthogonal decomposition with respect to the detector efficiencies is shown as follows : consider the most general decomposition , where are arbitrary ( not necessarily orthogonal ) rotated pauli operators and without loss of generality ( we can include a minus sign in one of the operators ) . 
here , the number of terms is finite and an operator may appear more than once .we can express this decomposition in terms of our orthogonal basis , with for all .the right hand sides of eq .( [ eq - optdec ] ) and eq .( [ eq - gendec ] ) are equal .we multiply these two expressions by and take the trace on both sides .this leads to \left [ \tilde{\sigma}_{am}\otimes\tilde{\sigma}_{bm}\right ] \right ) } = \nonumber\\ \sum_{j , k , l , m } b_j\alpha_k^{(j)}\beta_l^{(j ) } { \mathrm{tr}_{}\ ! \left ( \left[\tilde{\sigma}_{ak}\otimes\tilde{\sigma}_{bl } \right]\left [ \tilde{\sigma}_{am}\otimes\tilde{\sigma}_{bm}\right ] \right)}.\end{aligned}\ ] ] orthogonality of the basis is used to get where we used the fact that the scalar product between two normalized vectors ( and ) is less or equal to one .this proves the optimality of the decomposition of given in eq .( [ eq - optdec ] ) . the third term in eq .( [ eq-2qwitnessterms ] ) and , analogously , the second one can be written as where is a rotated pauli operator , and . following similar arguments as before , it is easy to verify that is optimal .the situation for higher dimensions is different : it is unfortunately not straightforward to generalize the above optimization to higher - dimensional witnesses , because we extensively used the fact that a linear combination of pauli operators is again a scaled rotated pauli operator . also , for multi - partite witnesses the schmidt decomposition eq .( [ eq - optdec ] ) does not always exist , such that our optimization method is not applicable for these cases .in many experimental situations , e.g. when optical detectors are used , only the `` lost event efficiency '' is an important issue and the `` additional event efficiency '' is approximately .this situation further simplifies eq .( [ eq - wmineq ] ) , and the minimal detector efficiency that allows to close the loophole has a simple relation with the measured expectation value and the decomposition of the witness , namely this function is shown in fig .( [ fig - etamwm ] ) , where the hatched area corresponds to values for which the inequality is fulfilled , again assuming . for two qubits the detection loophole for witnesses can already be closed with a detection efficiency .this bound is sufficient e.g. for the optimal two - qubit witness of a bell state and a measured expectation value of .we conjecture that for the loophole can not be closed for any witness .we now want to discuss some prominent experimental examples in this context . in ion trap experiments the detection efficiencies are close to one ( ) andthe detection loophole is usually not an issue .exceptions are many - party - witnesses with expectation values close to zero , like the genuine 8-qubit multipartite entanglement witness experiment of hffner __ .single photon experiments on the other hand are more problematic . using eq .( [ eq - etamwm ] ) we give some explicit experimental examples for the needed detection efficiencies to close the witness loophole : m. barbieri _et al . _ implemented the optimal two - qubit entanglement witness to detect a bell state , where they achieved an expectation value of . in this casethe detection efficiency needs to be . in recent experiments also multipartite entanglement witnesseswere implemented . 
in this workthe three - qubit ghz entanglement witness is loophole - free with a detection efficiency of , and the four - partite case needs .single photon detector efficiencies for wavelengths of 700 - 800 nm are typically around 70% , such that the global detection efficiency for two qubits is circa 50 % , and even lower for more than two subsystems .the detection efficiencies for multipartite witness experiments with photons are thus considerably below the needed thresholds .this is a similar situation as for the detection loophole in bell inequalities .however , there is a good chance for loophole - free witness experiments with two qubits , when slightly more efficient detectors are available . in summary , we discussed the detection loophole problem for experiments measuring witness operators . assuming the worst case ( that may occur due to unknown global properties of the detectors ) we derived certain inequalities to close such loopholes .these inequalities are generally valid for any type of witness operator and depend on the measured expectation value of the witness , its local operator decomposition and the detector efficiencies .> from there , detector efficiency thresholds to close the loophole are easily calculated .the local decomposition of the witness can be optimized such that the needed detection efficiencies are minimized .we explicitly presented a constructive optimization for two - qubit witnesses . for multi - qubit witnesses the optimal decompositionis achieved by minimizing the sum of the absolute values of the expansion coefficients . in the case of higher - dimensional witnessesthe optimization is not straightforward any more , because it then also depends on the type of operator basis .let us mention that an analogous study can be performed , if the witness is decomposed into local projectors ; this will be published elsewhere . for qubit witnesses we further considered the common experimental situation , where additional counts can be neglected .current witness experiments with polarized photons do not close the detection loophole , because of the low single photon detector efficiencies . further research directions and open problems include the optimal local decomposition for higher - dimensional witnesses , and the case of erroneous detector orientations . [ bibliography ] j. s. bell , physics ( long island city , ny ) * 1 * , 195 ( 1964 ) a. aspect , p. grangier , and g. roger , phys .* 47 * , 460 ( 1981 ) ; a. aspect , p. grangier , and g. roger , phys .lett . * 49 * , 91 ( 1982 ) ; a. aspect , j. dalibard , and g. roger , phys .lett . * 49 * , 1804 ( 1982 ) ; g. weihs , t. jennewein , c. simon , h. weinfurter , and a. zeilinger , phys . rev. lett . * 81 * , 5039 ( 1998 ) ; m.a .et al . _ ,nature * 409 * , 791 ( 2001 ) .j. s. bell , _ speakable and unspeakable in quantum mechanics _ ( cambridge university press ) , epr - experiments , ( 1987 ) ; t. k. lo and a. shimony , phys .a * 23 * , 3003 ( 1981 ) ; a. garg and n. d. mermin , phys .d * 35 * , 3831 ( 1987 ) .larsson , phys . rev .a * 57 * , 3304 ( 1998 ) .o. ghne , p. hyllus , d. bru , a. ekert , m. lewenstein , c. macchiavello , and a. sanpera , phys .a * 66 * , 62305 ( 2002 ) .werner , phys .a , * 40 * , 4277 ( 1989 ) . m. horodecki _et al . _ ,phys . lett .a , * 223 * , 1 ( 1996 ) . m. lewenstein , b. kraus , j. i. cirac , and p. horodecki , phys .a * 62 * , 052310 ( 2000 ) .d. bru _ et al ., * 49 * , 1399 ( 2002 ) .p. hyllus , o. ghne , d. bru , and m. 
lewenstein , phys .a * 72 * , 12321 ( 2005 ) .h. kampermann _et al . _ ,to be publ .m. barbieri _lett . , * 91 * , 227901 ( 2003 ) .m. bourennane _et al . _ ,lett . , * 92 * , 87902 ( 2004 ) .cirac , p. zoller , phys .lett . , * 74 * , 4091 ( 1995 ) .h. hffner _et al . _ ,nature , * 438 * , 643 ( 2005 ) .p. g. kwiat , a.m. steinberg , r.y .chiao , p.h .eberhard , m.d .petroff , phys .a , * 48 * , r867 ( 1993 ) . o. ghne _et al . _ ,a , * 66 * , 062305 ( 2002 ) .
|
we consider a possible detector - efficiency loophole in experiments that detect entanglement via the local measurement of witness operators . here , only local properties of the detectors are known . we derive a general threshold for the detector efficiencies which guarantees that a negative expectation value of a witness is due to entanglement , rather than to erroneous detectors . this threshold depends on the local decomposition of the witness and its measured expectation value . for two - qubit witnesses we find the local operator decomposition that is optimal with respect to closing the loophole .
|
current uavs have limited autonomous capabilities that mainly comprise gps waypoint following , and a few control functions such as maintenance of stability in the face of environmental factors such as wind .more recently some autonomous capabilities such as the ability for a fixed wing ucav to land on the deck of a carrier have also been demonstrated .these capabilities represent just the tip of the spear in terms of what is possible and , given both the commercial and military applications and interest , what will undoubtedly be developed in the near future . in particular , flexibility in responses that can mimicthe unpredictability of human responses is one way in which autonomous systems of the future will differentiate themselves from rules - based control systems .human - style unpredictability in action selection opens the door to finding solutions that may not have been imagined at the time the system was programmed .additionally , this type of unpredictability in combat systems can create difficulties for adversary systems designed to act as a counter . the capability to compute sequences of actions that do not correspond to any pre - programmed input - in other words , the ability to evolve new responses - will be another area of future differentiation .there are many other such enhancements that will be enabled via autonomous systems powered by artificial intelligence . in the following sections we will outline some of the advanced capabilities that can be engineered , and design and engineering approaches for these capabilities .some degree of autonomy in flight control has existed for over a hundred years , with autopilot inventor , lawrence sperry s demonstration in 1913 of a control system that tied the heading and attitude indicators to a control system that hydraulically operated elevators and rudders . a fully autonomous atlantic crossing was achieved as early as 1947 in a usaf c-54 aircraft . however , much of the early work in automating control systems were mechanical implementations of rule - based systems drawing upon cybernetics and control theory .they demonstrated that with such techniques it was possible to automate a basic mission , including takeoff and landing . since the 1947 demonstration ,considerable effort has been invested in developing autonomous flight capabilities for commercial and military aircraft .modern flight control or autopilot systems that govern landings are segmented in five categories from cat - i to cat - iiic , with capabilities varying based on forward visibility and decision height .many of these systems use rule - based , or fuzzy - rule based control , incorporating sensor - fusion techniques such as kalman filters .they are capable of following a planned route and adjusting for environmental factors such as cross - winds , turbulence and so on .the increased popularity of commercial drones , and the heightened utilization of military drone aircraft has , in parallel , created a new class of autonomous capabilities . 
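before turning to low-cost and military drone autonomy in particular, the following minimal sketch illustrates the sensor-fusion step mentioned above: a one-dimensional constant-velocity kalman filter that fuses noisy gps position fixes into a smoothed position and velocity estimate. the process and measurement noise values, and the 2 m/s trajectory in the usage example, are illustrative assumptions rather than parameters of any particular autopilot.

```python
import numpy as np

def kalman_1d(z_gps, dt=1.0, q=0.01, r=4.0):
    """minimal 1-d constant-velocity kalman filter fusing noisy gps
    position fixes into a position/velocity estimate. process noise q and
    measurement noise r are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # we observe position only
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros(2)                                # [position, velocity]
    P = np.eye(2) * 10.0
    estimates = []
    for z in z_gps:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# usage: true motion at 2 m/s, gps fixes with roughly 2 m standard deviation
rng = np.random.default_rng(4)
true_pos = 2.0 * np.arange(50)
gps = true_pos + rng.normal(0, 2.0, size=50)
est = kalman_1d(gps)
print("rms error raw gps  :", np.sqrt(np.mean((gps - true_pos) ** 2)))
print("rms error filtered :", np.sqrt(np.mean((est[:, 0] - true_pos) ** 2)))
```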
from open source initiatives such as the ardupilot control software for low - cost drones , to higher levels of autonomy in military drones .software such as the ardupilot , for example , uses a combination of gps positioning , additional sensors to gauge velocity and position , combined with basic flight control rules to autonomously navigate to a sequence of waypoints .many of these map - input based waypoint following capabilities are also implemented in military surveillance and combat drones .another area of control innovation comes from swarm theory and related control algorithms . at the simplest level, these algorithms seek inspiration from the behavior of biological systems such as ant colonies or flocks of birds .they are collaboration algorithms that enable each individual system in the swarm to compute its future actions based on its own measurements , but also those of its neighbors .while basic swarm algorithms are effective in providing coverage over an area , and automatically repositioning all nodes when one is lost to maintain coverage , they do not provide much guidance on how to divide mission responsibilities and burdens , and to effectively delegate them to individual nodes .the concept of a `` swarm '' as found in biology will have to evolve into something entirely different - perhaps somewhat similar to a pack hunt - but even that analogy would only be marginal - in order for it to be an effective and useful system particularly in a military context .some of the reasons why we propose this conclusion regarding the inadequacy of existing swarm algorithms is that most biologically inspired algorithms , such as particle swarm optimization ( pso ) or artificial bee colony algorithm ( abc ) , are search or optimization techniques that do not account for the role of an individual particle ( or node ) in the swarm .for example , pso proposes the same meta - heuristic for computing positional updates for all points and does not incorporate a differential update mechanism based on the role of a particle . in a subsequent publication , we intend to propose a `` pack hunt optimization '' ( pho ) algorithm that we believe addresses the shortcomings of the existing swarm algorithms we have cited , and holds relevance to ucav control applications . the state of current control systems can be summed up as follows : * effective at basic navigation and path following * many existing techniques to fuse sensor data for accurate position identification * able to automatically take off and land if runways are properly instrumented * actions beyond flight control ( such as weapons engagement ) are presently manual * missions are pre - defined * swarm algorithms can provide additional value for relative positioning of multiple assets and distributed sensingthe purpose of this section is to outline a few areas of potential advancement that can be expected of autonomous systems of the future .this list is neither exhaustive nor complete with regards to the author s current conception of all such advanced capabilities .it is a subset of possible functions that is listed to illuminate the broad contours of what is possible in terms of applications of artificial intelligence to ucav autonomy .some features include : 1 .knowledge & assessment updates 1 .identification of potential threats outside pre - programmed mission briefs 2 . 
autonomous exploration and assessment of identified targets that autonomous control deems to be high priority 3 .enhancement and update to intelligence supplied as part of the mission brief and plan , based on actual observation 2 . autonomous navigation and swarm coordination 1 .ability to adjust to environmental conditions that cause system or any linked swarm systems to deviate from mission plan expectations 2 .ability to adjust to loss of a swarm asset , not just in terms of re - positioning , but including potential re - tasking ( i.e. assumption of a new role on the part of an individual asset ) 3 .autonomous evasion 1 .automated update to mission plan based on sensor detection of probable manned aerial intercept 2 .automated update to mission plan based on detection of unexpected sensor presence 3 .autonomous evasion in the event of a rwr ( radar warning receiver ) activation or maw ( missile approach warning ) system activation 4 .autonomous targeting 1 .autonomous addition to target lists based on computer vision or alternate sensor based identification of threats to mission ( including surface to air threats ) 2 .autonomous addition to target lists in the event that primary targets have already been neutralized 3 .autonomous deletion of a target from target lists in the event it has been already neutralized , is found to violate a `` hard '' policy constraint or is low priority and its neutralization harms the overall achievement or success of the missionin the preceding sections we explored the current state of autonomous systems and the rules - based approach that is often employed to develop these systems .further , we also considered a number of advanced capabilities that would be desirable in future autonomous control systems .a fundamental challenge in developing these future capabilities is that the range of scenarios an autonomous system would have to contend with in order to effectively execute the required maneuvers are enormous . tackling such a large range of possibilities with a rules - based system will be impractical not only because of the combinatorial explosion of possibilities that would require individual rules , but also because human designers of such a system may simply not be able to conceive every imaginable scenario the autonomous system could find itself in .another challenge is that rules - based systems are hard coded to measure certain criteria , or sensor values , and then act based on this pre - specified criteria .this hard coding means that each rule is tied to a specific set of sensors .if additional sensors are added to a system , or existing sensors are upgraded , a large number of rules would have to be re - written , creating an obvious cost and effort burden .what we have described above is far from an exhaustive list of limitations in current autonomous systems , but we believe they are sufficient to motivate the need for a new architecture for autonomy . a future system that moves beyond rules - based systems , incorporates learning capabilities so that actions can be learned rather than hard coded , and can adapt to new information from new or better sensors , will represent a substantial advance . 
in the sections that follow ,we define the contours of just such a system .the fundamental architecture we propose in this paper is based on multiple independent control systems connected to an action optimizer neural network .each of the multiple independent control systems can be neural networks or non - ann rule based control systems that output a suggested vector of actions or control activations .the action optimizer ann gates and weighs the inputs supplied by each independent control system .let an independent control system , and be an action optimizer neural network to which networks are connected .additionally , let the set contain a collection environmental inputs that are supplied to .then , we denote the specific configuration of all environmental inputs at time by the output of under these environmental inputs and based on the inputs of all independent control networks , as follows : the goal of our system is to optimize the selection of action sequences that this sequences maximizes the performance of the system being controlled .it is important to understand what we mean by , `` performance '' here .we define performance as a variable that is the output of a utility function such that this output is high when the weighted achievement of all mission parameters is large , and low when the weighted achievement of mission parameters is small . in other words, we are attempting to locally maximize at least locally : and : the question obviously arises , how do we build the function conventionally , control functions have been built in various ways , for example as fuzzy rule based systems .however , we propose to implement the control function as an artificial neural network ( ann ) . as the application at hand will benefit from some knowledge of past actions, we specifically propose to implement the network as a recurrent neural network ( rnn ) .the actual training and evolution of the rnn represented by is not the subject of this paper and will be documented in a subsequent publication . in summary , this can be done in a manner that combines real world and simulator environments .however , in a more detailed future exploration we intend to cover questions such as whether individual control networks , can be trained independently and how a training set that reflects key the wide range of scenarios the ucav might experience would be compiled . for the purpose of the present discussion , our basic approach is to use reinforcement learning ( rl ) techniques to train the rnn in a simulated environment until a basic level of competence has been achieved , and to then allow the evolved network to control a real craft. 
collected data from the actual flight is reconciled with the simulated environment and the process is repeated until an acceptable level of capability is demonstrated by this reconciliation would benefit from applications of transfer learning .one of the benefits of this approach is that the simulated environment can introduce environmental constraints that must respond to appropriately .for example , these can be navigation constraints such as avoiding certain pre - identified objects on a map .work has already been done to use search algorithms such as a * to find viable paths around objects to be avoided this type of constraint can be implemented by one of the independent control networks ( presented in the previous section ) .other examples of existing work that could be leveraged in the form of an independent control network include collaborative mapping algorithms for multiple autonomous vehicles .of course , other constraints and optimizations would be represented by other ensembled control networks , forcing to weight them and choose from them carefully , in a way that maximizes thus , the controller can be evolved to optimize operation in different types of environments , and under different constraints .it may then become possible to simply `` upload '' the optimal controller for a particular environment , or a particular mission type , into the same craft and achieve mission - specific optimal performance .sensor data in autonomous systems does not have to remain limited to environmental measurements or flight sensor readings .it can include a variety of image feeds from forward , rear or down - facing cameras .additionally , radar data and forward looking infra red ( flir ) sensor data is also a possibility . in order to utilize all this diverse data to make decisions andeven deviate in small but important ways from the original mission plans , all of this data has to be interpreted and semantically modeled . in other words , its meaning and relevance to the mission andits role in governing future action has to be established . for the purpose of understanding how such data can be interpreted and what its impact on decisions can be , we classify sensors and data sources into the following categories : 1 . internal sensors 1 .system health ( e.g. engine vibration , various temperature and internal system pressure ) 2 . system performance ( e.g. velocity , stress ) 2 .external sensors 1 . navigational aides ( e.g. level , wind speed , inertial navigation gyroscopic sensors )environmental mapping ( e.g. camera , radar , lidar , flir , rwr , maws ) in an example table below , we show the types of impact that information received from these sensors can potentially have on mission plans and vehicle navigation . [ cols="^,^,^,^,^",options="header " , ] in order to support the types of advanced autonomy outlined in section 4 of this paper , many of the actions highlighted in the table above will likely need to be combined based on sensor input to form a chain of actions that update the internal state and maps used by the autonomous asset .sensor data may be an input required by any controller by the controller , a sensor bus connects all sensors to all controllers . 
for many sensor types , instead of the sensor providing a raw output, we transform the output to reflect semantic constructs .for example , instead of a raw radar signal input , we may transform the signal into a data structure that reflects the position , speed , heading , type and classification of each detected object .this transformation of raw sensor data into semantic outputs that use a common data representation for each class of sensor enables replaceability of underlying components so that the same controllers can work effectively even when sensors are replaced or upgraded .the semantic output of individual sensor systems can be used by controllers , and is also stored in a cognitive corpus , which is a database that can store mission information , current status , maps , objectives , past performance data and not - to - violate parameters for action that are used to gate the final output of the controller the more complete diagram of the proposed autonomy architecture illustrates , the controller receives input from a set of controllers is also connected to the sensor bus and the cognitive corpus .a state map stored in the cognitive corpus reflects the full environmental picture available to the autonomous asset .for example , it includes an estimate of the asset s own position , the positions of allied assets , the positions of enemy assets , marked mission targets , paths indicating preferred trajectories at the time of mission planning , territory and locations over which to avoid flight and other pertinent data that can assist with route planning , objective fulfillment and obstacle avoidance .this state map forms another important input to the controller as it chooses the most optimal sequence of actions .the image below shows a visual representation of what the state map might track . here, it shows the location of multiple allied assets , for example systems that might be part of a swarm with the ucav that is maintaining this map .there is also a hostile entity identified with additional information regarding its speed and heading .locations on the ground indicate sites to be avoided .sensor information carried in the set ( or vector ) result in updates to the state of each object in this map .note that the state map is maintained by each autonomous asset and while the underlying information used to update it may be shared with , or received from other systems , each autonomous asset acts based on its own internal representation , or copy , of the state map . while the details of an implementation are beyond the scope of this paper , we propose that the information exchange between autonomous systems occur using a blockchain protocol .benefits of this approach include the fact that in the event communication is interrupted and updates are missed , information can be reconstructed with guarantees regarding accuracy and order .further , the use of a blockchain store ensures that a single or few malicious participants can not impact the veracity of the information contained therein . while the figure shows a graphic representation of the map , it is possible to represent such a map as a vector or matrix . 
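a minimal sketch of the data structures described above may help: a semantic "contact" record carrying the position, speed, heading, type and classification of a detected object, and a per-asset state map that can be rasterised into a fixed-size matrix so that it can be supplied to the controller as an input. the field names, channel encoding, grid size and coordinate conventions are illustrative assumptions, not a specification of the proposed system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class Contact:
    """semantic output of a sensor subsystem for one detected object."""
    position: Tuple[float, float]   # e.g. local east/north coordinates (km)
    speed: float                    # m/s
    heading: float                  # degrees
    kind: str                       # e.g. "aircraft", "sam site"
    classification: str             # e.g. "allied", "hostile", "unknown", "avoid"

@dataclass
class StateMap:
    """each autonomous asset keeps its own copy of the state map."""
    own_position: Tuple[float, float]
    contacts: List[Contact] = field(default_factory=list)

    def to_matrix(self, size: int = 64, extent_km: float = 50.0) -> np.ndarray:
        """rasterise the map into a 3-channel grid (allied / hostile / avoid)
        so it can be fed to the controller as a fixed-size input."""
        grid = np.zeros((3, size, size))
        channel = {"allied": 0, "hostile": 1, "avoid": 2}
        for c in self.contacts:
            ch = channel.get(c.classification)
            if ch is None:
                continue
            ix = int((c.position[0] + extent_km) / (2 * extent_km) * (size - 1))
            iy = int((c.position[1] + extent_km) / (2 * extent_km) * (size - 1))
            if 0 <= ix < size and 0 <= iy < size:
                grid[ch, iy, ix] = 1.0
        return grid

# usage: one allied wingman, one hostile track, one site to be avoided
smap = StateMap(own_position=(0.0, 0.0), contacts=[
    Contact((5.0, 2.0), 220.0, 90.0, "aircraft", "allied"),
    Contact((-18.0, 30.0), 300.0, 180.0, "aircraft", "hostile"),
    Contact((10.0, -25.0), 0.0, 0.0, "sam site", "avoid"),
])
print("non-zero cells per channel:", smap.to_matrix().sum(axis=(1, 2)))
```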
by so doing ,it can readily be supplied to the controller as an input .sophisticated autonomy requires control over a wider range of action than rule based systems can support .the subtle changes in flight patterns , identification of new threats , self - directed changes in mission profile and target selection all require autonomous assets to go beyond pre - ordained instructions .machine learning and ai techniques offer a viable way for autonomous systems to learn and evolve behaviors that go beyond their programming .semantic information passing from sensors , via a sensor bus , to a collection of decision making controllers makes provides for plug and play replacements of individual controllers .an artificial neural network such as an rnn can ensemble and combine inputs from multiple controllers to create a single , coherent control signal . in taking this approach , while some of the individual controllers may be rules - based , the rnn really evolves into the autonomous intelligence that can consider a variety of concerns and factors via control system inputs , and decide on the most optimum action .we propose delinking control networks from the ensembler rnn so that individual control rnns may be evolved and trained to execute differing mission profiles optimally , and these `` personalities '' may be easily uploaded into the autonomous asset with no hardware changes necessary .one of the challenges in taking this advanced approach may be the inability to guarantee what exactly a learning , evolving autonomous system might do .the action filter architecture proposed in this paper , which provides a hard `` not to exceed '' boundary to range of action , delivers an out - of - band method to audit and edit autonomous behavior , while still keeping it within parameters of acceptability .
|
this paper covers a number of approaches that leverage artificial intelligence algorithms and techniques to aid unmanned combat aerial vehicle ( ucav ) autonomy . an analysis of current approaches to autonomous control is provided , followed by an exploration of how these approaches can be extended and enriched with ai techniques including artificial neural networks ( ann ) , ensembling and reinforcement learning ( rl ) to evolve control strategies for ucavs .
|
continuous advancement in vlsi technologies has resulted in extremely small transistor sizes and highly complex microprocessors . however , on - chip interconnects responsible for on - chip communication have been improved only moderately .this leads to the `` paradox '' that local information processing is done very efficiently , but communicating information between on - chip units is a major challenge .this work focuses on an emergent issue expected to challenge circuit development in future technologies .information communication and processing is associated with energy dissipation into heat which raises the temperature of the transmitter / receiver or processing devices ; moreover , the intrinsic device noise level depends strongly and increasingly on the temperature .therefore , the total physical structure can be modeled as a communication channel whose noise level is data dependent .we describe this mathematically in the following subsection .we consider the communication system depicted in figure [ fig1 ] .the message to be transmitted over the channel is assumed to be uniformly distributed over the set for some positive integer .the encoder maps the message to the length- sequence , where is called the _ block - length_. thus , in the absence of feedback , the sequence is a function of the message , i.e. , for some mapping . here , stands for , and denotes the set of real numbers .if there is a feedback link , then , , is a function of the message and , additionally , of the past channel output symbols , i.e. , for some mapping .the receiver guesses the transmitted message based on the channel output symbols , i.e. , for some mapping .let denote the set of positive integers .the channel output at time corresponding to the channel inputs is given by where are independent and identically distributed ( iid ) , zero - mean , unit - variance gaussian random variables drawn independently of .[ cc][cc]transmitter [ cc][cc]channel [ cc][cc]receiver [ cc][cc]delay [ b][b] [ b][b] [ b][b] [ b][b] [ b][b] the coefficients are non - negative and satisfy for .this assumption is , however , not required for the results stated in this paper . ] note that this channel is not stationary as the variance of the additive noise depends on the time - index .we study the above channel under an average - power constraint on the inputs , i.e. , and we define the signal - to - noise ratio ( snr ) as let the _ rate _ ( in nats per channel use ) be defined as where denotes the natural logarithm function .a rate is said to be _ achievable _ if there exists a sequence of mappings ( without feedback ) or ( with feedback ) and such that the error probability tends to zero as goes to infinity .the _ capacity _ is the supremum of all achievable rates .we denote by the capacity under the input constraint when there is no feedback , and we add the subscript `` fb '' to indicate that there is a feedback link .clearly , as we can always ignore the feedback link .in this paper we study the _ capacities per unit cost _ which are defined as note that implies our main result is stated in the following theorem . [ thm : main ] consider the above channel model .then , irrespective of whether feedback is available or not , the corresponding capacity per unit cost is given by where is defined in .theorem [ thm : main ] is proved in section [ sec : proof ] . 
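before turning to the high-snr discussion and the proof, a short simulation sketch may help fix ideas. since the displayed channel equation did not carry over here, the code below implements one concrete reading of the verbal description: the noise variance at time k equals sigma^2 plus a weighted sum of the squared past inputs. the geometric weight profile, the snr and the constant-modulus input in the usage example are illustrative assumptions.

```python
import numpy as np

def simulate_channel(x, sigma2=1.0, alpha=None, rng=None):
    """one concrete reading of the channel model: the output at time k is
    the input plus gaussian noise whose variance is sigma2 plus a weighted
    sum of the squared past inputs (the data-dependent 'heating' term)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    if alpha is None:
        alpha = 0.5 ** np.arange(1, n + 1)       # illustrative geometric weights
    y = np.empty(n)
    for k in range(n):
        past = x[:k][::-1]                       # x_{k-1}, x_{k-2}, ...
        noise_var = sigma2 + np.dot(alpha[:k], past ** 2)
        y[k] = x[k] + np.sqrt(noise_var) * rng.standard_normal()
    return y

# usage: constant-power input; the effective per-symbol snr degrades over
# the block as the accumulated input power raises the noise level
rng = np.random.default_rng(1)
n, power = 200, 4.0
x = np.sqrt(power) * np.sign(rng.standard_normal(n))
y = simulate_channel(x, sigma2=1.0, rng=rng)
print("noise variance at k=0  :", 1.0)
print("limiting noise variance:", 1.0 + power * np.sum(0.5 ** np.arange(1, n + 1)))
```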
in section [ sec : highsnr ] we briefly discuss the above channel at high snr .specifically , we present a sufficient and a necessary condition on the coefficients for capacity to be bounded in the snr .in section [ sub : upperbound ] we derive an upper bound on the feedback capacity , and in section [ sub : lowerbound ] we derive a lower bound on the capacity in the absence of feedback .these bounds are then used in section [ sub : asymptotic ] to derive an upper bound on and a lower bound on , and it is shown that both bounds are equal to . together with this proves theorem [ thm : main ] . as in (8.12 ) , the upper bound on is based on fano s inequality and on an upper bound on , which for our channel can be expressed , using the chain rule for mutual information , as lcl + & = & _ k=1^n + & = & _ k=1^n + & = & _ k=1^n [ eq : upper1 ] where the second equality follows because is a function of and ; and the last equality follows from the behavior of differential entropy under translation and scaling ( * ? ? ?9.6.3 & 9.6.4 ) , and because is independent of .evaluating the differential entropy of a gaussian random variable , and using the trivial lower bound , we obtain the final upper bound lcl + & & _ k=1^n + & & _ k=1^n ( 1+_=1^k_k- /^2 ) + & & ( 1+_k=1^n_=1^k_k- /^2 ) + & = & ( 1+_k=1^n /^2_=0^n - k _ ) + & & ( 1+(1+)_k=1^n /^2 ) + & & ( 1+(1+))[eq : upper2 ] where we define . here, the second inequality follows because conditioning can not increase entropy and from the entropy maximizing property of gaussian random variables ( * ? ? ?9.6.5 ) ; the next inequality follows by jensen s inequality ; the following equality by rewriting the double sum ; the subsequent inequality follows because the coefficients are non - negative which implies that ; and the last inequality follows from the power constraint . as aforementioned ,the above channel is not stationary , and one therefore needs to exercise some care in relating the capacity in the absence of feedback to the quantity ( where the maximization is over all input distributions satisfying the power constraint ) .in fact , it is _ prima facie _ not clear whether there is a coding theorem associated with. we shall sidestep this problem by studying the capacity of a different channel whose time- channel output is , conditional on the sequence , given by where and are defined in section [ sub : channelmodel ] .this channel has the advantage that it is stationary & ergodic in the sense that when is a stationary & ergodic process then the pair is jointly stationary & ergodic .it follows that if the sequences and are independent of each other , and if the random variables , , are bounded , then any rate that can be achieved over this new channel is also achievable over the original channel . indeed , the original channel can be converted into by adding to the channel output , and , since the independence of and ensures that the sequence is independent of the message , it follows that any rate achievable over can be achieved over by using a receiver that generates and guesses then based on . , , guarantees that the quantity is finite for any realization of . 
]we consider that are block - wise iid in blocks of symbols .thus , denoting ( where denotes the transpose ) , are iid with taking on the value with probability and with probability , for some .note that to satisfy the average - power constraint we shall choose and so that let , and let denote the floor function .noting that the pair is jointly stationary & ergodic , it follows from that the rate is achievable over the new channel and , thus , yields a lower bound on the capacity of the original channel .we lower bound as lcl + & = & _= 0^n / l -1 i(._;_0^n / l -1|_0 ^ -1 ) + & & _= 0^n / l -1 i(._;_|_0 ^ -1 ) + & & _= 0^n / l -1[eq : lb1 ] where we use the chain rule and that reducing observations can not increase mutual information . by using that implies can be shown that the second term in the sum on the right - hand side ( rhs ) of vanishes as tends to infinity .this together with a cesro type theorem ( * ? ? ?4.2.3 ) yields lcl + & & i(._0;_0|_-^-1 ) + & & -_n _ = 0^n / l -1i(._-^-1;_|_0^ ) + & = & i(._0;_0|_-^-1)[eq : lbcesaro ] where the first inequality follows by the stationarity of which implies that does not depend on , and by noting that , for a fixed , .we proceed to analyze for a given sequence . making use of the canonical decomposition of mutual information ( e.g. , ( * ? ? ?* eq . ( 10 ) ) ) , we have lcl + & = & i(.x_1;_0|_-^-1=_-^-1 ) + & = & d ( .f__0|x_1=x,_-^-1 f__0|x_1=0,_-^-1 ) p_x_1(x ) + & & -d(.f__0|_-^-1 f__0|x_1=0,_-^-1 ) + & = & d ( .f__0|x_1=,_-^-1 f__0|x_1=0,_-^-1 ) + & & - d(.f__0|_-^-1 f__0|x_1=0,_-^-1 ) [ eq : lb2 ] where the first equality follows because , for our choice of input distribution , and , hence , conveys as much information about as . here, denotes relative entropy , and , , and denote the densities of conditional on the inputs , , and , respectively .thus , is the density of an -variate gaussian random vector of mean and of diagonal covariance matrix with diagonal entries lcl ^()__-^-1(1,1 ) & = & ^2+_i=-^-1_-ilx_il+1 ^ 2 + ^()__-^-1(k , k ) & = & ^2+_k-1 ^ 2+_i=-^-1_-il+k-1x_il+1 ^ 2 , + & & k=2, ,l ; is the density of an -variate , zero - mean gaussian random vector of diagonal covariance matrix with diagonal entries and is given by in order to evaluate the first term on the rhs of we note that the relative entropy of two real , -variate gaussian random vectors of the respective means and and of the respective covariance matrices and is given by lcl + & = & _ 2 - _ 1 + + & & + _ 2 ^ -1(_1-_2 ) [ eq : dgaussian ] with and denoting the determinant and the trace of the matrix , respectively , and where denotes the identity matrix . the second term on the rhs of is analyzed in the next subsection .let denote the second term on the rhs of averaged over , i.e. , r + = .then , using & and taking expectations over we obtain , again defining , lcl + & = & _ k=1^l + & & -_k=2^l + & & - + & & _ k=1^l + & & -_k=2^l ( 1+_k-1 ^2/^2 ) + & & - + & & _ k=1^l + & & -_k=2^l + & & - [ eq : lbbeforelimit ] where the first inequality follows by the lower bound which is a consequence of jensen s inequality applied to the convex function , , and by the upper bound l + ( 1+_k-1 ^2/^2 ) for every ; and the second inequality follows by and by upper bounding for every . 
the final lower bound follows now by and lcl + & & _ k=1^l + & & - _ k=2^l + & & - .[ eq : lbfinal ] we start with analyzing the upper bound .we have where the second inequality follows by upper bounding , , and we thus obtain in order to derive a lower bound on we first note that and proceed by analyzing the limiting ratio of the lower bound to the snr as the snr tends to zero . to this end , we first shall show that it was shown in that for any pair of densities and satisfying thus , for any given , together with implies that in order to show that this also holds when is averaged over , we derive in the following the uniform upper bound lcl + [ eq : uniformbound ] the claim follows then by upper bounding lcl + and by . in order to prove we use that any gaussian random vector can be expressed as the sum of two independent gaussian random vectors to write the channel output as where , conditional on , and are -variate , zero - mean gaussian random vectors , drawn independently of each other , and having the respective diagonal covariance matrices and whose diagonal entries are given by lcl _|_0(1,1 ) & = & ^2 + _ |_0(k , k ) & = & ^2 + _ k-1x_1 , k=2, ,l , and thus , is the portion of the noise due to , and is the portion of the noise due to .note that and are independent of each other because is , by construction , independent of .the upper bound follows now by lcl + & = & d(.f__0++|_-^-1 f__0++|x_1=0,_-^-1 ) + & & d(.f__0 + f__0+|x_1=0 ) + & = & d.(.f__0|_-^-1 f__0|x_1=0,_-^-1)|__-^-1=0[eq : asym1 ] where and denote the densities of conditional on the inputs and , respectively ; denotes the unconditional density of ; and denotes the density of conditional on . here , the inequality follows by the data processing inequality for relative entropy ( see ( * ? ? ?2.9 ) ) and by noting that is independent of .returning to the analysis of , we obtain from and lcl + & & _ 0 _ k=1^l - _ k=2^l + & = & _ k=1^l _ k-1 - _ k=2^l . by letting first to infinity while holding fixed , and by letting then go to infinity , we obtain the desired lower bound on the capacity per unit cost thus , , , and yield which proves theorem [ thm : main ] .the channel described in section [ sub : channelmodel ] was studied at high snr in where it was asked whether capacity is bounded or unbounded in the snr .it was shown that the answer to this question depends highly on the decay rate of the coefficients .we summarize the main result of in the next theorem . for a statement of this theorem in its full generality and for a proof thereof we refer to .[ thm : highsnr ] consider the channel model described in section [ sub : channelmodel ] .then , llcl i ) & _ > 0 & & _ > 0 c_fb ( ) < , [ eq : i ] + ii ) & _ = 0 & & _ > 0 c ( ) = , [ eq : ii ] where we define , for any , and . for example , when is a geometric sequence , i.e. , for , then the capacity is bounded .note that when neither the left - hand side ( lhs ) of nor the lhs of holds , i.e. , when and , then the capacity can be bounded or unbounded .fruitful discussions with ashish khisti are gratefully acknowledged .
|
motivated by on - chip communication , a channel model is proposed where the variance of the additive noise depends on the weighted sum of the past channel input powers . for this channel , an expression for the capacity per unit cost is derived , and it is shown that the expression holds also in the presence of feedback .
|
information - theoretic limits of fading channels have been thoroughly studied in the literature and to date many important results are known ( see and references therein ) . generally speaking ,if the transmission delay is not of concern , the classic shannon capacity for a deterministic additive white gaussian noise ( awgn ) channel can be extended to the ergodic capacity for a fading awgn channel , which is achievable by a random gaussian codebook with infinite - length codewords spanning over many fading blocks such that the randomness induced by fading can be averaged out . with the transmitter and receiver channel state information ( csi )perfectly known , the adaptive power allocation serves as an effective method to increase the ergodic capacity .this allocation has the well - known `` water - filling '' structure , where power is allocated over the channel state space .with such an allocation scheme , a user transmits at high power when the channel is good and at low or zero power when the channel is poor .when the csi is only known at the receiver , the capacity is achievable with special `` single - codebook , constant - power '' schemes .the validity of the ergodic capacity is based on the fundamental assumption that the delay limit is infinite .however , many wireless communication applications have certain delay constraints , which limit the practical codeword length to be finite .thus , the ergodic capacity is no longer a meaningful performance measure .such situations give rise to the notions of _ outage capacity _ , _ delay - limited capacity _ , and _ average capacity _ , each of which provides a more meaningful performance measure than the ergodic capacity .in particular , there usually exists a capacity - versus - outage tradeoff for transmissions over fading channels with finite delay constraints , where an outage event occurs when the `` instantaneous '' mutual information of the fading channel falls below the transmitted code rate , and a higher target rate results in a larger outage probability .the maximum transmit rate that can be reliably communicated under some prescribed transmit power budget and outage probability constraint is known as the outage capacity . in the extreme case of requiring zero outage probability, the outage capacity then becomes the zero - outage or delay - limited capacity . 
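as a minimal illustration of the water - filling allocation mentioned above , the following python sketch computes the power levels over a set of equiprobable fading states by bisection on the water level ; the equiprobable - state discretization and unit noise variance are assumptions of this sketch , not part of the original text .

```python
import numpy as np

def water_filling(gains, p_avg, tol=1e-9):
    """Water-filling power allocation over equiprobable fading states with perfect
    CSI at both ends: p_i = max(0, mu - 1/g_i), with the water level mu chosen by
    bisection so that the average power equals p_avg."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_avg + 1.0 / g.min()            # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).mean() > p_avg:
            hi = mu
        else:
            lo = mu
    p = np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)
    ergodic_capacity = float(np.mean(np.log(1.0 + g * p)))   # nats per channel use
    return p, ergodic_capacity

# example: 10^5 rayleigh fading states, average power 10 (10 dB over unit noise)
p, c = water_filling(np.random.default_rng(0).exponential(1.0, 100_000), 10.0)
```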
to study the delay - limited system , the authors in adopt a -block block - fading ( bf ) awgn channel model , where indicates the constraint on transmission delay or the maximum codeword length in blocks .such a channel model is briefly described as follows .suppose a codeword is required to transmit within symbols , with the integer being the number of blocks spanned by a codeword , which is also referred to as the interleaving depth ( we call it coding length to emphasize how many blocks over which a codeword spans ) ; it is also a measure of the overall transmission delay .the parameter is the number of channel uses in each block , which is called block length .a codeword of length is also referred to as a frame , where the fading gain within each block remains the same ( over symbols ) and changes independently from block to block .the number of channel uses in each block is assumed to be large enough for reliable communication , but still small compared to the channel coherence time .if the csi for each -block transmission is known non - causally at the transmitter , transmit power control can significantly improve the outage capacity of the -block bf channel .when the csi can be only revealed to the transmitter in a causal manner , a dynamic programming algorithm is developed to achieve the outage capacity of the -block bf channel in . in the above existing works , the delay limit is either infinite or finite but deterministic .however , there are indeed some practical scenarios where the delay constraint is both finite and random . for example , in a wireless sensor network operating in a hostile environment , sensors may die due to sudden physical attacks such as fire or power losses .another example may be a cognitive radio network with opportunistic spectrum sharing between the secondary and primary users , where an active secondary link can be corrupted unpredictably when the channel is reoccupied by a primary transmission .how fast and reliably can a piece of information be transmitted over such a channel ?this question motivates us to formally define the maximum achievable information rate over a channel with a random and finite delay constraint , named as a _ dying channel_. this type of dying channels has never been thoroughly studied in the traditional information theory , and important theorems are missing to address the fundamental capacity limits . in this paper , we start investigating such channels by focusing on a point - to - point dying link and model it by a -block bf channel subject to a fatal attack that may happen at a random moment within any of the transmission blocks , or may not happen at all over blocks .note that the delay limit in the case of a dying channel is a random variable due to the random attack , instead of being deterministically equal to as in a traditional delay - limited bf channel .since the successfully transmitted number of blocks is random and up to , a dying channel is delay - limited and hence non - ergodic in nature .thus its information - theoretic limit can be measured by the outage capacity .it is well known that coding over only one block of a fading channel may lead to a poor performance due to the lack of diversity . 
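for concreteness , a short monte carlo sketch of the outage probability of a standard k - block bf - awgn channel is given below ; uniform power , no csit , rayleigh fading and unit noise variance are assumptions of this illustration .

```python
import numpy as np

def bf_outage_mc(K, rate, power, n_trials=200_000, seed=0):
    """Monte Carlo outage probability of a K-block Rayleigh BF-AWGN channel with
    uniform power and no CSIT: an outage occurs whenever the per-frame mutual
    information (1/K) * sum_k log(1 + power * g_k) falls below the code rate
    (rate in nats/s/Hz, unit noise variance)."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(1.0, size=(n_trials, K))    # i.i.d. block power gains
    info = np.log(1.0 + power * g).mean(axis=1)
    return float(np.mean(info < rate))
```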
however , when we code over multiple blocks to achieve more diversity in a dying channel , we must bear the larger possibility that the random attack happens in the middle of the transmission and renders the rest of the codeword useless .therefore , it is neither wise to span a codeword over too many blocks nor just over one block . we need to consider the tradeoff between the potential diversity and the attack avoidance for the selection of the codeword length over such a dying channel . in other words ,given a distribution of the random attack , we need to seek an optimal that `` matches '' the number of surviving blocks in a probabilistic sense such that the achievable diversity is maximized and the outage probability is minimized . in a system with multiple parallel sub - channels ( e.g. , in a ofdm - based system ) ,each sub - channel may be under a potential random attack .in such a scenario , we are interested in the overall system outage probability and how the outage probability behaves as the number of sub - channels increases .this leads us to examine the asymptotic outage behavior for the case of a parallel dying channel .we will consider two models of random attacks over the sub - channels : 1 ) the case of independent random attacks , where the attacks across the sub - channels are independently and identically distributed ( i.i.d . ) ; and 2 ) the case of -dependent random attacks , where the attacks over adjacent sub - channels are correlated and the attacks on sub - channels that are -sub - channel away from each other are independent .in the following , we briefly summarize the main results in this paper : 1 .we introduce the notion of a dying channel and formally define its outage capacity .suppose we code over blocks , and the number of surviving blocks is random and up to .an outage occurs if the total mutual information over the surviving blocks normalized by is less than a predefined rate .correspondingly , the outage capacity is the largest rate that satisfies an outage probability requirement . 2 .we study the optimal coding length that `` matches '' the attack time in a probabilistic sense such that the outage probability is minimized when uniform power allocation is assumed .we then investigate the optimal power allocation over these blocks , where we obtain the general properties for the optimal power vector .we find that , for some cases , the optimization problem over can be cast into a convex problem .we further extend the single dying channel result to the parallel dying channel case where each sub - channel is an individual dying channel . in this case, we investigate the outage behavior with two different random attack models : the independent - attack case and the -dependent - attack case .specifically , we characterize the asymptotic behavior of the outage probabilities for the above two cases with a given target rate . by the central limit theorems for independent and -dependent sequences ,we show that the outage probability diminishes to zero for both cases as the number of sub - channels increases if the target rate per unit cost is below a threshold .the outage exponents for both cases are studied to reveal how fast the outage probability improves .the rest of this paper is organized as follows .section [ sec : system model ] presents the system model for a single dying channel , as well as the definition of the corresponding outage capacity . 
in section [ sec : uniform power ] , we study the optimal coding length by considering uniform power allocation and derive the lower and upper bounds of the outage probability .moreover , we obtain the closed - form expression of outage probability for the high signal - to - noise ratio ( snr ) rayleigh fading case . in section [ sec : joint opt ] , we optimize over the power vector to minimize the outage probability . in section [ sec :parallel ] , we extend the single dying channel model to the parallel dying channel case . in particular , we examine the corresponding asymptotic outage probability with two setups : the independent - attack case and the -dependent - attack case , in sections [ sec : indp case ] and [ sec : m - dep case ] , respectively . in section [ sec : outage exponent ] ,the outage exponents for both cases are examined to reveal how fast the outage probability improves .section [ sec : conclusion ] concludes the paper ._ notation _ : we define the notations used throughout this paper as follows .* indicates the set of real numbers , is the set of nonnegative real numbers , and is the set of -dimensional nonnegative real vectors . *the error function : . *the normalized cumulative normal distribution function : . * the -function : .* is the natural logarithm .* is the ceiling operator and is the flooring operator .we consider a point - to - point delay - limited fading channel subject to a random fatal attack , while the exact timing of the attack is unknown to neither the transmitter nor the receiver .only the distribution of the random attack time is known to both the transmitter and the receiver .we further assume that there is no channel state information at the transmitter ( csit ) while there is perfect channel state information at the receiver ( csir ) .the transmitter transmits a codeword over blocks within the delay constraint ; and when the fatal attack occurs , the communication link is cut off immediately with the current and rest of the blocks lost .we build our model of such a dying link based on the -block bf - awgn channel , which is described as follows .let , , and be vectors in representing the channel input , output , and noise sequences , respectively , where is the gaussian random vector with zero mean and covariance matrix .rearrange the components of , , and as matrices , denoted as , , and , respectively ( each row is associated with symbols from a particular block . ) .a codeword with length spans blocks and the input - output relation over the channel can be written as follows : where is a matrix with the diagonal elements being the fading amplitudes .let be the -th column of for .similarly , let and be the -th columns of and , respectively .these are related as : which implies that the input symbols on the same row of experience the same fading gain , i.e. , they are transmitted over the same block .since s are i.i.d random vectors , we can view this channel as independent parallel channels with each channel corresponding to a block .hence , uses of the original channel corresponds to uses of the parallel channels in ( [ eq : parallel ch model ] ) .the parallel channels over which a codeword is transmitted are determined by the channel state , which can also be viewed as a composite channel that consists of a family of channels indexed by a particular set of . 
for a block fading channel with delay constraint , it can be modeled as a composite channel as follows : let be the set of all length- sequences of channel gains , which occurs with probability under the joint distribution of .for each , we associate a channel , where consists of parallel gaussian channels .let be the fading power gain vector , i.e. , , and be the transmit power allocation vector .for a given set of and , the maximum average mutual information rate over channel is : where we assume a unit noise variance throughout this paper . in our model of the dying channel ,the delay constraint is random rather than deterministically equal to due to the fact that a random attack may happen within any block out of the blocks or may not happen at all within the blocks .if the fatal attack happens during the transmission , the current block and the blocks after the attack moment will be discarded .an outage occurs whenever the total mutual information of the surviving blocks normalized by is less than the transmitted code rate .therefore , the dying channel is non - ergodic and an appropriately defined outage capacity serves as the reasonable performance measure .let be the random attack time that is normalized by the block length . as we know from the results of parallel gaussian channels , with random coding schemes, we can decode the codeword even if the attack happens within the blocks as long as the average mutual information of surviving blocks is greater than the code rate of the transmission , i.e. , if we have then the codeword is decodable , where the random integer with being the flooring operator .hence the outage capacity of a dying channel can be formally defined as follows : the outage capacity of a -block bf - awgn dying channel with an average transmit power constraint and a required outage probability is expressed as note that the outage probability above is defined over the distributions of the s and , where we assume that the s and are independent of each other and the transmitter does not know the values of the s and _ a priori _ , but knows their distributions . as we see from ( [ eq : dying channel outage capacity ] ) , there are two sets of variables to be optimized : one is the number of coding blocks , and the other is the power allocation vector . from the perspective of optimal transmission schemes ,the outage capacity maximization problem is equivalent to the outage probability minimization problem . in the next section ,we first study the optimal coding length to `` match '' the attack time in a probabilistic sense such that the outage probability is minimized .as discussed before , we can optimize over the coding length and the power vector to achieve the maximum outage capacity ( or equivalently the minimum outage probability ) .if a uniform power allocation strategy is adopted , the only thing left for optimization is the coding length . on one hand, we can have a larger by increasing , meaning that we potentially have higher diversity to achieve a lower outage probability . on the other hand ,a larger incurs a higher percentage of blocks being lost after the attack such that the average achievable mutual information per block is lower , and hence results in a larger outage probability .since the random attack determines the number of surviving blocks and determines the average base , we are interested in finding a proper value of to `` match '' the random attack property in the sense that the outage probability is minimized . 
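the outage event defined above can be estimated directly by simulation . the sketch below assumes rayleigh fading , uniform power , unit noise variance and an exponentially distributed attack time ( the distribution used in the later numerical sections ) .

```python
import numpy as np

def dying_outage_mc(K, rate, power, lam, n_trials=200_000, seed=0):
    """Monte Carlo estimate of the dying-channel outage probability:
    outage iff (1/K) * sum_{k=1}^{L} log(1 + power * g_k) < rate, where
    L = min(K, floor(T)) is the number of surviving blocks and T ~ Exp(lam) is
    the attack time normalized by the block length."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(1.0, size=(n_trials, K))        # Rayleigh power gains
    T = rng.exponential(1.0 / lam, size=n_trials)       # attack times (in blocks)
    L = np.minimum(K, np.floor(T)).astype(int)          # surviving blocks
    survive = np.arange(K)[None, :] < L[:, None]
    info = np.where(survive, np.log(1.0 + power * g), 0.0).sum(axis=1) / K
    return float(np.mean(info < rate))
```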
with uniform power allocation , according to the law of total probability , the outage probability can be rewritten as a summation of the probabilities conditioned on different numbers of surviving blocks , i.e. : where , , and . given the distributions of and , in general , there are no tractable closed - form expressions for s .alternatively , we could first seek the bounds of the outage probability and then study more exact forms for some special cases where we show how to find the optimal .notice that the following relationship holds : since the fading gains s of different blocks are i.i.d , we have where is the cumulative distribution function ( cdf ) of the random variable .therefore , with the relationship in ( [ eq : outage lower bound ] ) , we have a lower bound for the outage probability in ( [ eq : expand outage uniform p ] ) as on the other hand , there exists a simple upper bound for the outage probability : hence yielding therefore , an upper bound for the outage probability in ( [ eq : expand outage uniform p ] ) is given as from the previous discussion , we know how to bound the outage probability in terms of with the general snr values. however , there usually exists a significant gap between the lower and upper bounds .fortunately , with appropriate approximations in the high snr regime is large , i.e. , . ] for rayleigh fading , we can obtain a tractable expression for the outage probability and hence further derive a closed - form solution for the optimal .for our -block fading channel model with high snr values , outage typically occurs when each sub - channel can not support an evenly - divided rate budget ( see exercise 5.18 in ) .thus , conditioned on the attack time , the outage probability can be written as : for rayleigh fading , we have when is large .thus , when snr is high , we can simplify ( [ eq : outage approx hihg snr ] ) as with the conditional outage probability given by ( [ eq : outage given t ] ) , the overall outage probability is let be the cdf of the attack time , which is assumed to be exponentially distributed with parameter .let , ( for ) with and .we can rewrite ( [ eq : outage k ] ) as +w_0\nonumber\\ & = & e^{kr}c\frac{\frac{\beta}{p}-(\frac{\beta}{p})^{k}}{1-\frac{\beta}{p}}+\frac{1-g(k)}{p^{k}e^{-kr}}+w_0.\end{aligned}\ ] ] for high snr , with , is small .hence , when , and ( [ eq : outage with exp att ] ) can be approximated to : where . in order to obtain the optimal by minimizing , we first treat ( [ eq : outage high snr ] ) as a continuous function of , although is an integer .let us first consider the convexity of ( [ eq : outage high snr ] ) over a real - valued . by taking the second - order derivative of ( [ eq : outage high snr ] ) over , we have the following : ^ 2}{(p e^{\lambda - r})^k}.\ ] ] since we have and in the high snr regime , it holds that .therefore , ( [ eq : sec order ] ) is non - negative in the high snr regime , which means that ( [ eq : outage high snr ] ) is convex over real - valued .given the convexity of ( [ eq : outage high snr ] ) , the optimal can be derived by setting its first - order derivative to zero and finding the root .consequently , the optimal solution is obtained as follows : \frac{1}{\lambda + \log p}.\ ] ] obviously , is unique given a set of , and . since a feasible for the original problem should be an integer , we need to choose the optimal integer solution from and , whichever gives a smaller value of ( [ eq : outage high snr ] ) . 
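the high - snr approximation above can be evaluated and swept over the integer coding length as in the following sketch ; the conditional outage term e^{k r}/p^l used here is an assumption that mirrors the approximation , up to the constants that were dropped .

```python
import numpy as np

def high_snr_outage(K, rate, power, lam):
    """High-SNR approximation of the dying-channel outage for Rayleigh fading and
    an Exp(lam) attack time: conditioned on l >= 1 surviving blocks the outage
    probability is taken as e^{K*rate} / power^l (assumed form, constants dropped),
    weighted by Pr[L = l]; with no surviving block the conditional outage is 1."""
    l_vals = np.arange(K + 1)
    w = np.append(np.diff(1.0 - np.exp(-lam * l_vals)), np.exp(-lam * K))  # Pr[L=l]
    cond = np.array([1.0] + [min(1.0, float(np.exp(K * rate)) / power ** l)
                             for l in range(1, K + 1)])
    return float(w @ cond)

# sweep the integer coding length and keep the minimizer
best_K = min(range(1, 21), key=lambda K: high_snr_outage(K, 0.5, 100.0, 1.0 / 3.0))
```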
,p=20db.,title="fig:",scaledwidth=60.0% ] + , p=30db.,title="fig:",scaledwidth=60.0% ] + in fig .[ fig : outage vs k 20db ] and fig .[ fig : outage vs k 30db ] , we plot the coding length versus the outage probability . we assume that the fading is rayleigh , the random attack time is exponentially distributed with parameter ( normalized by the transmission block length ) , the target rate is nats / s / hz , and the transmit power is set as db and db , respectively . as shown in fig .[ fig : outage vs k 20db ] , the dashed curve and the solid curve are the lower and upper bounds given by ( [ eq : lower bound ] ) and ( [ eq : upper bound ] ) , respectively .the circles are obtained by using ( [ eq : outage high snr ] ) . as can be seen , firstly ,the high - snr approximation in ( [ eq : outage high snr ] ) is quite accurate .the circles are located between the upper and the lower bounds except for .this is due to the fact that when , does not hold .secondly , we see that there exists a minimum outage probability over as shown in fig .[ fig : outage vs k 30db ] . atlast , comparing fig .[ fig : outage vs k 20db ] and fig .[ fig : outage vs k 30db ] , we see that the upper and lower bounds get closer as the snr increases with the values from ( [ eq : outage high snr ] ) are in between ; hence the approximation in ( [ eq : outage high snr ] ) becomes more accurate .when snr is low , we have .thus , when we span a codeword over blocks , the outage probability conditioned on is given as when using a repetition transmission ( over blocks ) , the outage probability is given as comparing ( [ eq : outage approx low snr ] ) and ( [ eq : outage repetition ] ) , we see that the outage performances of these two schemes are the same in the low - snr regime .this is due to fact that in low snr regime it is snr - limited rather than degree - of - freedom - limited such that coding over different blocks does not help with decreasing the outage probability .hence , repetition transmission is approximately optimal for a dying channel in the low snr regime .in the previous section , we investigated the optimal coding length that minimizes the outage probability by assuming uniform power allocation .we now consider optimizing over both the coding length and the power vector to minimize the outage probability .we note that optimizing over is in general a 1-d search over integers , which is not complex .since the main complexity of solving ( [ eq : dying channel outage capacity ] ) lies in the optimization over , we first focus on the outage probability minimization problem over for a given fixed , which is expressed as : after obtaining the optimal outage probabilities conditioned on a range of values , we choose the minimum one as the global optimal value .we start solving the above optimization problem by investigating the general properties of the optimal power allocation over a dying channel for a given .let be the event that it is obvious that the events s are decreasing events , which means . with the law of total probability, we can expand the outage probability in the objective of ( [ eq : outage min ] ) as follows , where s are defined in section [ sec : uniform power ] . with the above result, we then discuss the optimal power allocation for a dying channel under different conditions .[ thm : i.i.d fading no - increasing ] when fading gains over blocks are i.i.d . , the optimal power allocation profile is non - increasing . 
the proof is provided in appendix[app : thm1 ] .this is a general result regardless of the specific distributions of fading gains .that is , the optimal power vector lies in a convex cone , no matter what distribution the fading gain follows , as long as the i.i.d .assumption holds .now we consider the case where the fading gains over all the blocks are the same , while they are still random .this represents the case where fading gains are highly correlated in time .[ thm : id fading k is 1 ] when the fading gains s are the same , the optimal coding length is with .the proof is provided in appendix[app : thm2 ] .this assertion implies that the optimal transmission scheme for a highly correlated dying channel is to simply transmit independent blocks instead of jointly - coded blocks .when the fading gain falls into some special distributions , we can further convert the corresponding optimization problem into convex ones and derive the optimal power vector efficiently .given ( [ eq : outage approx hihg snr ] ) and conditioned on the attack time , the conditional outage probability can be written as : for rayleigh fading , we have when is large .thus , when snr is high , we can simplify ( [ eq : outage approx high snr i.i.d ] ) as the outage probability with rayleigh fading in high snr is approximated as below by substituting ( [ eq : outage given t i.i.d ] ) into ( [ eq : expand outage ] ) : denoting , we further simplify ( [ eq : outage_pi ] ) as since the optimal power vector lies in a convex cone as shown in theorem [ thm : i.i.d fading no - increasing ] , the problem can be formulated as a convex optimization problem ( refer to appendix[app : cvx high snr ] for the convexity proof ) : where is a convex cone . thus , the optimal power vector can be efficiently solved with standard convex optimization algorithms such as the interior point method . the simulation results are shown in fig .[ fig : cvx vs unifm ] , where we set the simulation parameters as : nats / s / hz , for the exponential random attack , and average power db . as we can see , the power vector derived by solving problem ( [ eq : cvx high snr ] ) achieves better performance in terms of the outage probability than the uniform power allocation case .+ when the fading gain has a log - normal distribution , we can also approximate the problem as a convex one by minimizing the upper bound of the objective function .since we have the outage probability is upper - bounded as follows : thus , the optimization problem of ( [ eq : outage min ] ) is translated into the following problem , where we essentially minimize the upper bound : let the s be independent and log - normal random variables , i.e. , . since the sum of standard normal random variables is a gaussian random variable with zero mean and variance , we have where is the error function . 
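a numerical sketch of the convex power - allocation problem of eq . ( [ eq : cvx high snr ] ) above is given below ; the surrogate objective and the use of a general - purpose slsqp solver ( rather than a dedicated interior point method ) are assumptions of this illustration .

```python
import numpy as np
from scipy.optimize import minimize

def optimal_power(K, rate, p_avg, lam):
    """Minimize a high-SNR surrogate of the dying-channel outage over the per-block
    powers p_1 >= ... >= p_K subject to sum(p) <= K * p_avg. The surrogate objective
    w_0 + e^{K*rate} * sum_l Pr[L=l] / (p_1*...*p_l) is an assumed form that mirrors
    the paper's approximation; it is convex on the positive orthant."""
    w = np.append(np.diff(1.0 - np.exp(-lam * np.arange(K + 1))), np.exp(-lam * K))

    def objective(p):
        cum = np.cumprod(p)                      # p_1, p_1 p_2, ..., p_1...p_K
        return w[0] + float(np.exp(K * rate)) * float(np.sum(w[1:] / cum))

    cons = [{"type": "ineq", "fun": lambda p: K * p_avg - np.sum(p)}]
    cons += [{"type": "ineq", "fun": lambda p, k=k: p[k] - p[k + 1]}
             for k in range(K - 1)]              # non-increasing power profile
    res = minimize(objective, x0=np.full(K, float(p_avg)),
                   bounds=[(1e-6, None)] * K, constraints=cons, method="SLSQP")
    return res.x, objective(res.x)
```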
substituting ( [ eq : sum gaussian cdf ] ) into ( [ eq : upper bound min ] ) yields the new objective function : in general , ( [ eq : outage explicit form ] ) is not a convex function .however , under some special circumstances as described in appendix[app : log - normal opt ] , the problem in ( [ eq : upper bound min ] ) with the objective replaced by ( [ eq : outage explicit form ] ) can be rewritten as a convex problem , which is given as following : therefore , efficient algorithms can be applied to solve the above problem .numerical results are provided as follows .assume that the outage probability target is set as , the attack time is an exponential random variable with parameter , and the fading gains are standard log - normal random variables .as we see from fig .[ fig : outage capacity over k ] , the optimal power allocation leads to a significantly larger outage capacity over the uniform power allocation case .moreover , as increases , the outage capacity with the optimal power allocation may even increase to a maximum value while the outage capacity with uniform power allocation monotonically decreases .this suggests that , with the potential of a random attack , we can still span the codeword over more than one block to exploit diversity and achieve higher outage capacity if the power allocation and the codeword length are smartly chosen . , , average power ., title="fig:",scaledwidth=50.0% ] +in the dying channel example of cognitive radio networks , secondary users have access to vacant frequency bands that are licensed to primary users .some primary users may suddenly show up and take over some frequency bands , which results in connection losses if these frequency bands are being used by certain secondary users .hence , each sub - channel ( a frequency band ) may have a different random delay constraint for information transmission due to the uncertainty of non - uniform primary user occupancy patterns .specifically , the above system can be modeled as follows .given a link with parallel sub - channels as shown in fig .[ fig : parallel dying channel ] , the codeword is spanned in time domain over blocks and also across all the sub - channels . in some sub - channels , random attacks terminate the transmission before it is completed such that less than blocks are delivered . for other sub - channels , blocks are assumed to be safely transmitted . what is the maximum rate for reliable communication over such a link ?for the single channel case , it turns out that there is no way to achieve arbitrarily small outage with a finite transmit power .however , in this section we show that an arbitrarily small outage probability is achievable by exploiting the inherent multi - channel diversity .+ in this section , we extend the results of the single dying channel to the parallel multi - channel case .the outage probability of the parallel multi - channel case is given as where is the total rate over sub - channels , is the fading gain of block at sub - channel , is the number of sub - channels , is the random number of surviving blocks at sub - channel , is the number of blocks over which a codeword is spanned in the time domain , and is the total average power such that is the average power for each sub - channel .since the asymptotic behavior is concerned , uniform power allocation is assumed over sub - channels . 
according to different attack models , in the next two sections we investigate the asymptotic behavior of the above outage probability in two cases : the independent random attack case and the -dependent random attack case .let the average power be finite . since if ,when is large , we rewrite ( [ eq : outage parallel ] ) as we assume that the fading gains s are i.i.d . , and let the random variable be for the case of independent random attack , we assume that s are i.i.d . , and hence s are i.i.d .. the outage probability given by ( [ eq : outage parallel simple ] ) can be recast as : since s are i.i.d . , according to the central limit theorem , as the number of sub - channels , we have according to theorem 7.4 in on the sum of a random number of random variables , we derive the following relations : \label{eq : var_y},\end{aligned}\ ] ] where is a nominal random variable denoting the fading gain , is a nominal integer random variable denoting the number of surviving blocks of each sub - channel , and and denote the expectation and variance , respectively .as such , the outage probability can be approximated as : as , converges to .the outage probability decreases to over if is less than , or converges to if is larger than .that is , even though all sub - channels are subject to fatal attacks , the outage probability can still be made arbitrarily small when is large enough if the rate per unit cost is set in a conservative fashion , where is a key threshold .this is remarkably different from the single dying channel case in which the outage probability is always finite since there are only a finite and random number of blocks to span a codeword .in the previous section , we discussed the case where s are independent .however , in a practical system , such as cognitive radio networks , the primary users usually occupy a bunch of adjacent sub - channels instead of picking up sub - channels independently .thus , the s across adjacent sub - channels are possibly correlated ; and consequently the achievable rates across adjacent sub - channels are also correlated . on the other hand , if two sub - channels are far away from each other , it is reasonable to treat them as independent .thus , we assume that s are strictly stationary and -dependent with the same mean and variance .we first cite the central limit theorem for stationary and -dependent summands from ( theorem 9.1 therein ) .[ thm : m dep clt ] suppose is a strictly stationary -dependent sequence with and . then as , we have where with the covariance of and . the detailed proof can be found in .as assumed , the random sequence is stationary and -dependent , and s have the same mean and variance .then the covariance is given as : where is the expectation of given in ( [ eq : mean_y ] ) and .meanwhile , due to the fact that the fading gains and are independent if or , we could easily obtain for , as : \nonumber\\ & = & \frac{1}{k^2}e\left[e\left(\sum_{p=1}^{l_{i}}\alpha_{p}^{(i)}\sum_{q=1}^{l_{i+h}}\alpha_{q}^{(i+h)}\bigg|l_{i}l_{i+h } \right)\right ] \nonumber\\ & = & \frac{\mu_{\alpha}^{2}}{k^2}e(l_{i}l_{i+h})\end{aligned}\ ] ] assume that and have the same correlation coefficient if and . the correlation matrix is given as note that the following main results can be also derived for other correlation matrices .then ( [ eq : gammaoh ] ) is simplified as where is a non - negative correlation coefficient , and are the mean and variance of the random variable , respectively . 
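the gaussian approximation for the independent - attack case above can be evaluated as follows ; the expressions for e(y ) and var(y ) follow the standard identities for random sums and are assumptions in so far as the exact constants are concerned .

```python
import numpy as np
from math import erf, exp, sqrt

def std_normal_cdf(x):
    """Phi(x), the standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def parallel_outage_gaussian(N, K, r, lam, mu_alpha=1.0, var_alpha=1.0):
    """CLT (Gaussian) approximation of the outage probability of N parallel dying
    sub-channels under independent Exp(lam) attacks; r is the rate per unit cost
    R/P, each codeword spans K blocks, and the fading power gain has mean mu_alpha
    and variance var_alpha (Rayleigh: both equal to 1)."""
    l_vals = np.arange(K + 1)
    # Pr[L = l] for L = min(K, floor(T)), T ~ Exp(lam)
    p_l = np.append(np.diff(1.0 - np.exp(-lam * l_vals)), exp(-lam * K))
    e_l = float(l_vals @ p_l)
    var_l = float((l_vals ** 2) @ p_l) - e_l ** 2
    e_y = mu_alpha * e_l / K                                  # threshold on r
    var_y = (var_alpha * e_l + mu_alpha ** 2 * var_l) / K ** 2
    outage = std_normal_cdf((r - e_y) * sqrt(N) / sqrt(var_y))
    return outage, e_y
```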
substituting ( [ eq : mean_y ] ) , ( [ eq : var_y ] ) , and ( [ eq : gammaoh simplified ] ) into ( [ eq : vm ] ) , we have according to theorem [ thm : m dep clt ] , we have by simple manipulation , we have where is given in ( [ eq : mean_y ] ) and is given in ( [ eq : cov ] ) .hence , the outage probability for the -dependent random attack case can be approximated as follows when is large , as we see from ( [ eq : vm sigma ] ) that , comparing ( [ eq : apprx outage indp ] ) and ( [ eq : apprx outage mdp ] ) , we conclude that the outage probability of the independent attack case is smaller than that of the -dependent case given the same setting when the rate per unit cost is less than and the number of sub - channels is large .as we learn from the previous sections , the outage probability over parallel multiple channels goes to zero as increases if for both of the two attack cases . in this section ,we investigate how fast the outage probability decreases as increases for both cases , which is measured by the outage exponent defined as where .according to the results in , we could derive the outage exponent for the independent attack case as for , where and \nonumber\\ & = & \log m_y(s),\end{aligned}\ ] ] with the moment generating function of . according to theorem 7.5 in , we have where and are the probability generating function of the discrete random variable and the moment generating function of the continuous random variable , respectively . :if rayleigh fading is assumed , is exponentially distributed ; hence the corresponding moment generating function is , where is the parameter for the distribution of the . assuming that the random attack time has an exponential distribution , is an integer random variable with following distribution : thus , we have and . then we can derive the outage exponent numerically by solving ( [ eq : outage exp ] ) for a given . for the -dependent attack case , the techniques used in deriving the outage exponent for the independent attack case does not apply any more since here s are not independent .in this case , since the outage probability has an approximate normal distribution , we have therefore , an approximate outage exponent can be quantified from the upper bound as where is given in ( [ eq : cov ] ) . the outage exponent obtained by ( [ eq : outage exp ] )is derived by using the large deviation techniques .thus , it is exact while the outage exponent given by ( [ eq : exponent dep ] ) for the -dependent attack case is approximate .however , when , this approximation is accurate since the exponential bound is tight for the -function when its argument is large .numerical results are provided here to validate our analysis for the parallel multi - channel case .we choose the random attack time to be exponentially distributed with parameter and is chosen to be 5 .rayleigh fading is assumed and the fading gain is exponentially distributed with parameter 1 and the noise has unit power .first , we demonstrate the convergence of the outage probability for the independent attack case and the -dependent attack case , where the value of according to the above simulation setup is 0.571 . for the independent attack case , as shown in fig .[ fig : convergence indp ] , the solid and dashed curves are derived by ( [ eq : apprx outage indp ] ) while the circles and crosses are obtained by simulations .we also observe similar convergence for the -dependent attack case in fig .[ fig : convergence dep ] . 
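for the m - dependent case , the approximate outage exponent obtained from the chernoff bound on the q - function can be sketched as below ; the covariance model used here is an assumption that mirrors the correlation matrix introduced above .

```python
import numpy as np

def outage_exponent_mdep(K, r, lam, m, rho, mu_alpha=1.0, var_alpha=1.0):
    """Approximate outage exponent for the m-dependent attack case, from the bound
    Q(x) <= exp(-x^2/2): E_out ~ (E[Y] - r)^2 / (2 v_m), where v_m augments Var(Y)
    by 2*m identical covariance terms Cov(Y_i, Y_{i+h}) = rho*Var(L)*mu_alpha^2/K^2
    (assumed form)."""
    l_vals = np.arange(K + 1)
    p_l = np.append(np.diff(1.0 - np.exp(-lam * l_vals)), np.exp(-lam * K))
    e_l = float(l_vals @ p_l)
    var_l = float((l_vals ** 2) @ p_l) - e_l ** 2
    e_y = mu_alpha * e_l / K
    var_y = (var_alpha * e_l + mu_alpha ** 2 * var_l) / K ** 2
    v_m = var_y + 2.0 * m * rho * var_l * mu_alpha ** 2 / K ** 2
    return (e_y - r) ** 2 / (2.0 * v_m) if r < e_y else 0.0
```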
in both figures ,the outage probability goes to 0 if , or goes to 1 if .we see that the accuracy of gaussian approximations is acceptable with reasonably large values .= 0.571 and p=2.,title="fig:",scaledwidth=50.0% ] + -dependent case : =0.571 , m=1 , =0.8 , and p=2.,title="fig:",scaledwidth=50.0% ] + second , we compare the outage probability performance between the independent case and the -dependent case . here and nats / s .as shown in fig .[ fig : outage dep vs indp ] , the outage performance of the -dependent case is worse than that of the independent case even when and .this is due to the fact that when , the independent attack case is expected to have a smaller outage probability as we discussed at the end of section [ subsec : m - dep case ]. however , the outage probability of the -dependent case still decreases to 0 but at a slower rate as the number of sub - channels increases , which is caused by the fact that the -dependent attack case has a smaller outage exponent .= 0.8.,title="fig:",scaledwidth=50.0% ] + in fig .[ fig : exponent with lambda ] , we compare the various outage exponent values between these two cases over the rate per unit cost with the simulation setup as follows : , , and .first , we see that the outage exponent for the independent attack case is larger than that of the -dependent attack case when the average attack time is the same .second , for both of the independent attack case and the -dependent attack case , a larger average attack time results in a larger outage exponent .-dependent random attack cases : m=1 , =0.8 , and k=5.,title="fig:",scaledwidth=50.0% ] +in this paper , we considered a new type of channels called dying channels , where a random attack may happen during the transmission .we first investigated a single dying channel by modeling it as a -block bf - awgn channel with a random delay constraint .we obtained the optimal coding length that minimizes the outage probability when uniform power allocation was assumed .next , we investigated the general properties of the optimal power allocation for a given .for some special cases , we cast the optimization problem into convex ones which can be efficiently solved .as an extension of the single dying channel result , we investigated the case of parallel dying channels and studied the asymptotic outage behavior with two different attack models : the independent - attack case and the -dependent - attack case .it has been shown that the outage probability diminishes to zero for both cases as the number of sub - channels increases if the target _ rate per unit cost _ is less than a given threshold . moreover ,the outage exponents for both cases were studied to reveal how fast the outage probability improves over the number of sub - channels .when , the outage probability is as we see from the above equation , if , we have .hence , we can achieve a smaller by swapping and , since the last term in is not affected by such a swapping while the second term is decreased . when , for any , if , by swapping and , all the terms containing both and , i.e. , all the probability terms in the form of will not be affected. however , the probability terms containing but not can be decreased by such a swapping .thus , we could achieve a smaller outage probability in total . when the coding length , the outage probability is when we choose any other arbitrary values for , i.e. 
, and , according to ( [ eq : expand outage ] ) , the outage probability is due to the concavity of the function , we have .hence , moreover , it is obvious that summing over only a portion of the blocks yields an even smaller value , i.e. , , with . if , for , the strong inequality holds .therefore , we have noting that , and considering ( [ eq : pn greater than p1 ] ) and ( [ eq : increasig relation of prob ] ) , the following inequality can be derived for ( [ eq : pn ] ) : from ( [ eq : total pout greater ] ) , we see that has the smallest outage probability when fading gains are the same , which means that the optimal coding length is with . therefore , ( [ eq : hessian pout ] ) as the summation of all the terms is positive semi - definite .hence is a convex function in terms of .in addition , lies in a convex cone as shown in theorem .[ thm : i.i.d fading no - increasing ] . hence the problem is a convex problem . [ [ app : log - normal opt ] ] sufficient conditions for the convexity of the optimization problem in ( [ eq : log - normal opt ] ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ let since is convex and is convex for and nondecreasing , according to lemma 1 , if , is convex over .hence , the objective function given by ( [ eq : outage explicit form ] ) is convex under the conditions given in the lemma .
|
in wireless networks , communication links may be subject to random fatal impacts : for example , sensor networks under sudden power losses or cognitive radio networks with unpredictable primary user spectrum occupancy . under such circumstances , it is critical to quantify how fast and reliably the information can be collected over attacked links . for a single point - to - point channel subject to a random attack , named a _ dying channel _ , we model it as a block - fading ( bf ) channel with a finite and random delay constraint . first , we define the outage capacity as the performance measure , followed by studying the optimal coding length k such that the outage probability is minimized when uniform power allocation is assumed . for a given rate target and a coding length k , we then minimize the outage probability over the power allocation vector , and show that this optimization problem can be cast into a convex optimization problem under some conditions . the optimal solutions for several special cases are discussed . furthermore , we extend the single point - to - point dying channel result to the parallel multi - channel case where each sub - channel is a dying channel , and investigate the corresponding asymptotic behavior of the overall outage probability with two different attack models : the independent - attack case and the m - dependent - attack case . it can be shown that the overall outage probability diminishes to zero for both cases as the number of sub - channels increases if the _ rate per unit cost _ is less than a certain threshold . the outage exponents are also studied to reveal how fast the outage probability improves over the number of sub - channels . index terms : asymptotic outage probability , convex optimization , dying channel , fading channel , outage capacity , optimal power allocation , parallel channel , random delay constraint .
|
since their inception inside of the last decade p systems have spawned a variety of hybrid systems .one such hybrid , that of spiking neural p systems , results from a fusion with spiking neural networks .it has been shown that these systems are computationally universal . here the time / space computational complexity of spiking neural p systems is examined .we begin by showing that counter machines simulate standard spiking neural p systems with linear time and space overheads .fischer et al . have previously shown that counter machines require exponential time and space to simulate turing machines .thus it immediately follows that there is no spiking neural p system that simulates turing machines with less than exponential time and space overheads .these results are for spiking neural p systems that have a constant number of neurons independent of the input length . extended spiking neural p systems with exhaustive use of ruleswere proved computationally universal in .zhang et al . gave a small universal spiking neural p system with exhaustive use of rules ( without delay ) that has 125 neurons .the technique used to prove universality in and involved simulation of counter machines and thus suffers from an exponential time overhead when simulating turing machines . in an earlier version of the work we present here , we gave an extended spiking neural p system with exhaustive use of rules that simulates turing machines in _polynomial time _ and has _ 18neurons_. here we improve on this result to give an extended spiking neural p system with exhaustive use of rules that simulates turing machines in _ linear time _ and has only _ 10 neurons_. the brief history of small universal spiking neural p systems is given in table [ tab : small_snp ] .note that , to simulate an arbitrary turing machine that computes in time , all of the small universal spiking neural p systems prior to our results require time that is exponential in .an arbitrary turing machine that uses space of is simulated by the universal systems given in in space that is doubly exponential in , and by the universal systems given in in space that is exponential in ..small universal sn p systems .the `` simulation time '' column gives the overheads used by each system we simulating a standard single tape turing machine .indicates that there is a restriction of the rules as delay is not used and indicates that a more generalised output technique is used . *the 18 neuron system is not explicitly given in ; it is however mentioned at the end of the paper and is easily derived from the other system presented in .also , its operation and its graph were presented in . [ cols="^,^,^,^,^ " , ] h. chen , m. ionescu , and t. ishdorj . on the efficiency of spiking neural p systems . in m.a .gutirrez - naranjo et al . ,editor , _ proceedings of fourth brainstorming week on membrane computing _ ,pages 195206 , sevilla , feb .2006 .m. ionescu and d. sburlan .some applications of spiking neural p systems . in george eleftherakiset al . , editor , _ proceedings of the eighth workshop on membrane computing _ , pages 383394 , thessaloniki , june 2007 .a. leporati , c. zandron , c. ferretti , and g. mauri . on the computational power of spiking neural p systems . in m.a .gutirrez - naranjo et al . , editor , _ proceedings of the fifth brainstorming week on membrane computing _ , pages 227245 , sevilla , jan .a. leporati , c. zandron , c. ferretti , and g. mauri .solving numerical np - complete problems with spiking neural p systems . 
in george eleftherakis et al . , editor , _ proceedings of the eighth workshop on membrane computing _ , pages 405 - 423 , thessaloniki , june 2007 . t. neary . on the computational complexity of spiking neural p systems . in _ unconventional computation , 7th international conference , uc 2008 _ , volume 5204 of _ lncs _ , pages 189 - 205 , vienna , aug . 2008 . springer . t. neary and d. woods . p - completeness of cellular automaton rule 110 . in michele bugliesi et al . , editor , _ international colloquium on automata , languages and programming 2006 , ( icalp ) part i _ , volume 4051 of _ lncs _ , pages 132 - 143 , venice , july 2006 . springer . d. woods and t. neary . on the time complexity of 2-tag systems and small universal turing machines . in _ 47th annual ieee symposium on foundations of computer science ( focs ) _ , pages 439 - 448 , berkeley , california , oct . 2006 . x. zhang , y. jiang , and l. pan . small universal spiking neural p systems with exhaustive use of rules . in _ 3rd international conference on bio - inspired computing : theories and applications ( bicta 2008 ) _ , pages 117 - 128 , adelaide , australia , oct . 2008 . ieee .
|
it is shown that there is no standard spiking neural p system that simulates turing machines with less than exponential time and space overheads . the spiking neural p systems considered here have a constant number of neurons that is independent of the input length . following this we construct a universal spiking neural p system with exhaustive use of rules that simulates turing machines in linear time and has only 10 neurons .
|
the discovery of powerful , fast quantum algorithms launched new efforts to implement such quantum algorithms in real physical systems .quantum algorithms simultaneously exploit two characteristic features of quantum theory .namely , the fundamental phenomenon of quantum interference and the fact that for distinguishable quantum systems the dimension of the hilbert space increases exponentially with the number of systems .therefore , to implement a quantum algorithm in a real quantum system we must be able to create and manipulate arbitrary superpositions of quantum states and to preserve quantum coherence during a computation . unfortunately , quantum coherence is very fragile .typically , any coupling to an environment leads to decoherence so that quantum mechanical superpositions are rapidly destroyed .the urgent need to develop efficient methods to protect quantum coherence has led to the study of very general classes of quantum error - correcting codes .the main idea is to restrict the dynamics of a quantum algorithm to a subspace of the hilbert space , in which errors can be identified uniquely by suitable measurements and where the error operators can be inverted by unitary operations .typically , this is achieved by an encoding of the logical information and by a suitable choice of quantum gates .for some special cases it is also possible to design a passive error - correcting quantum code .such a passive quantum code relies on a subspace of the hilbert space which is not affected by any errors at all .in this situation the unitary recovery operation is the identity operation so that an active correction of the errors is not necessary . in principle, any type of error can be corrected by these strategies as long as enough physical qubits are available to achieve the required redundancy and one can make a large number of control measurements and perform the rapid recovery operations .however , in view of current - day experimental possibilities it is generally difficult to achieve both requirements .therefore it is desirable to develop alternative error - correcting strategies which possibly correct a restricted class of errors only , but which tend to minimize both redundancy and the number of recovery operations .recently , the first steps in this direction have been taken by defining a new class of one _ detected jump - error correcting quantum codes _ which are capable of stabilizing distinguishable qubits against spontaneous decay processes into statistically independent reservoirs .these codes are constructed by embedding an active error - correcting code in a passive code space and by exploiting information available on error positions .this embedding procedure leads to a significant reduction of redundancy and the number of control measurements and recovery operations . in this paperthe physical principles underlying detected jump - error correcting quantum codes are explored and generalized , motivated by the practical need for quantum error - correcting codes which minimize both redundancy and the number of recovery operations . based on these physical principles an upper bound is established on the number of logical states of a general embedded detected jump - error correcting quantum code . from thisbound it is apparent that the recently discovered one detected jump - error correcting quantum codes have minimal redundancy . 
based on this family of optimal one _ detected jump - error correcting quantum codes _ , we establish links with the general notions of combinatorial design theory .for this purpose the new concept of a _ spontaneous emission error design _ is introduced .this is a powerful tool for constructing multiple detected jump - error correcting quantum codes capable of stabilizing distinguishable qubits against spontaneous decay processes . as an example, we present a new embedded three detected jump - error correcting quantum code .this paper is organized as follows . in sec .[ masterequation ] basic physical aspects concerning the spontaneous emission of photons by qubit - systems are summarized . in sec .[ codedesign ] the physical principles are explored which lead to the construction of one detected jump - error correcting quantum codes .the conditions for general detected jump - error correcting quantum codes are given in sec .[ generaldj ] .the links with combinatorial design theory are established in sec .[ designs ] . finally , in sec .[ nonideal ] numerical examples are presented which exhibit basic stability properties of the optimal one detected jump - error correcting quantum codes .in this section we summarize basic facts about the dynamical description of a quantum system interacting with initially unoccupied modes of the electromagnetic field .these considerations are the starting point for the development of optimal strategies of error correction , which we pursue in the subsequent sections .we consider a model of a quantum computer in which two - level atoms ( qubits ) interact with external laser pulses which synthesize the quantum gates underlying a quantum algorithm .these qubits are assumed to be arranged in an array with well defined positions ( see fig . [ iontrap2 ] ) .in addition , these qubits are assumed to be distinguishable , which requires that their mean nearest neighbor distance is large in comparison with the optical wave lengths involved .their distinguishability guarantees that the dimension of their associated hilbert space is and thus scales exponentially with the number of qubits .in addition , it is assumed that these qubits couple to the unoccupied modes of the electromagnetic field .this coupling causes spontaneous decay processes of the qubits from their excited states , to their stable lower lying states . withinthe born , markov , and the rotating wave approximations the resulting dynamics of the reduced density operator of this -qubit system are described by the master equation with the non - hermitian effective hamiltonian thereby , the coherent dynamics of the -qubit system in the absence of the coupling to the vacuum modes of the electromagnetic field are described by the hamiltonian which incorporates the influence of the external laser pulses . in addition , we assume that the mean distance between the qubits is much larger than the wave lengths of the spontaneously emitted radiation . therefore , to a good degree of approximation each qubit couples to a different set of modes of the radiation field so that these sets constitute statistically independent reservoirs . in eq .( [ master equation ] ) the coupling of qubit to its reservoir and the resulting spontaneous decay process is characterized by the lindblad operator where denotes the identity on every except the -th qubit , and is the associated spontaneous decay rate . 
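as a small consistency check of the conditional ( no - jump ) dynamics generated by the effective hamiltonian above , the following sketch propagates a single decaying qubit and verifies that the squared norm of the unnormalized conditional state equals the no - emission probability ; the single - qubit setting and the chosen numbers are illustrative assumptions .

```python
import numpy as np
from scipy.linalg import expm

# single decaying qubit: H = 0, L = sqrt(kappa)|0><1|, so H_eff = -(i/2) kappa |1><1|
kappa, t = 1.0, 0.7
L = np.sqrt(kappa) * np.array([[0.0, 1.0], [0.0, 0.0]])   # |0><1| in the basis (|0>, |1>)
H_eff = -0.5j * (L.conj().T @ L)

psi0 = np.array([0.0, 1.0], dtype=complex)                # excited state |1>
psi_t = expm(-1j * H_eff * t) @ psi0                      # unnormalized conditional state
p_no_jump = float(np.linalg.norm(psi_t) ** 2)             # probability of no emission in [0, t]
assert np.isclose(p_no_jump, np.exp(-kappa * t))
```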
provided that initially the -qubit system is in a pure state , say , a formal solution of the master eq .( [ master equation ] ) is given in with the pure quantum state and with the probabilities it can be shown that each pure state describes the quantum state of the -qubit system at time conditioned on the emission of precisely photons at times by qubits .thus , each of the pure quantum states of eq .( [ unravel ] ) corresponds to a possible measurement record in an experiment in which each qubit is observed continuously by photodetectors . in the subsequent discussion it is important to note that due to the large separation between the qubits ideally this measurement record not only determines the spontaneous decay times , but also the associated positions ( ) of the qubits which have been affected by these decay processes .the measurement record is observed with probability . according to eq .( [ unravel ] ) the quantum state resulting from a particular measurement record is determined by two types of effects .first , the time evolution between two successive photon emission events is characterized by the non - hermitian hamiltonian of eq .( [ effective hamiltonian ] ) .thus , even in the absence of any spontaneous photon emission process in a given time interval ] , the quantum state at time is identical with the unperturbed state ( compare with eq .( [ unravel ] ) ) .thus , if one can find such a sufficiently high dimensional decoherence free subspace , the dynamics taking place between successive spontaneous photon emission events are stabilized perfectly without the need for control measurements and recovery operations . in practiceit is desirable to choose the dimension of the decoherence free subspace to be as large as possible .an important special case occurs when all the qubits have identical spontaneous decay rates , i.e. .in this situation it follows that and any subspace formed by basis states involving an equal number , say , of excited qubits is a decoherence free subspace . for a given number of qubits the dimension of such a decoherence free subspaceis given by which is maximal if .( denotes the largest integer smaller or equal to . ) in general , the first spontaneous emission of a photon will affect the quantum state of the -qubit system in an irreversible way . according to eq .( [ unravel ] ) the spontaneous emission of a photon by qubit , for example , is described by the application of the lindblad operator which induces a quantum jump .this lindblad operator is not invertible over the decoherence free subspace so that this quantum jump can not be corrected . in order to correct for this quantum jumpactively we have to restrict the dynamics to a still smaller subspace in which a unitary operator , say , can be found having the property therefore , if we still want to take advantage of passive error correction between successive photon emission events we have to construct an active error - correcting quantum code within the relevant decoherence free subspace .we now construct a one detected jump - error correcting embedded quantum code in the special case of identical spontaneous decay rates considered above . according to the criterion given in , the orthogonal basis states of a subspace constitute an active error - correcting quantum code with respect to the set of error operators if and only if for all possible values of and .( [ knill ] ) states the necessary and sufficient conditions for the existence of unitary recovery operations which fulfill eq . 
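the dimension count of the decoherence free subspaces discussed above can be tabulated directly ; the helper name below is illustrative .

```python
from math import comb

def dfs_dimension(n, m=None):
    """Dimension of the decoherence-free subspace spanned by the n-qubit basis
    states with exactly m excited qubits (identical decay rates); maximal at
    m = n // 2."""
    m = n // 2 if m is None else m
    return comb(n, m)

# e.g. dfs_dimension(4) == 6: the six two-excitation states of four qubits
assert dfs_dimension(4) == 6 and dfs_dimension(8) == 70
```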
( [ recovery ] ) for the error operators . in the physical settingthis criterion states that : ( i ) different orthogonal quantum states remain orthogonal under the action of error operators ; and ( ii ) all basis states are affected by a given pair of errors and in a similar way .the latter condition necessarily implies that the scalar products between the states and are state - independent .it is plausible that a larger set of error operators leads to a more restrictive set of conditions of the type of eq .( [ knill ] ) .furthermore , we also expect that more restrictive conditions lead to a higher redundancy of an active quantum code .as an example , consider the situation where continuous observation of the -qubit system by photodetectors does not reveal which qubit has emitted the registered photon .this implies that the error operators which could induce a spontaneous decay process are in the set .it has been shown by plenio et al . that when the error positions are unknown , eight physical qubits are needed to encode two orthogonal logical states by an embedded quantum code .this should be compared with the optimal active one - error correcting code using five qubits .thus , the advantage offered by using an embedded quantum code , capable of passively correcting errors between successive photon emission events leads to a significant increase of redundancy in comparison to purely active methods .however , this disadvantage can be overcome if besides knowing the error time , information about the error position is also available . in principle, this information can be obtained from continuous observation of the -qubit system by photodetectors as long as the mean distance between adjacent qubits is large in comparison with the wave length of the spontaneously emitted radiation . for this purposeit is important that each photon which is emitted by one of the qubits can be detected .how can we construct a one detected jump - error correcting embedded quantum code which exploits information about the error position in an optimal way so that its redundancy is minimized ?let us concentrate again on our previously introduced example of identical spontaneous decay rates . in thissetting we have a decoherence free subspace which involves excited qubits .this stabilizes the dynamics between successive photon emission events passively .for example , in the simple case of and , the orthogonal basis states of the decoherence free subspace are given by latexmath:[ ] , the mean number of required recovery operations is of the order of . to stabilize any quantum algorithm against spontaneous decay processes using an embedded one detected jump - error correcting quantum code three requirements have to be met .first , one has to be able to register the time and position of each spontaneous decay event which takes place during the performance of the quantum algorithm .as indicated schematically in fig .[ iontrap2 ] this can be achieved by continuous observation of the -qubit system with photodetectors . in principle , an identification of the perturbed qubit is possible provided the mean nearest neighbor spacing of the qubits is large in comparison with the wave lengths of the radiation emitted spontaneously . 
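the four-qubit example discussed above can be checked numerically. the code words below pair each two-excitation basis state with its bitwise complement, which is one natural reading of the "complementary pairings" construction referred to later in the text (the qubit ordering and normalization are assumptions), and the script verifies the known-position jump condition <c_i|L_a^dag L_a|c_j> proportional to delta_ij for every qubit a:

import numpy as np

# single-qubit operators: |0> = ground, |1> = excited
I2 = np.eye(2)
sm = np.array([[0.0, 1.0],      # |0><1| : spontaneous-decay (jump) operator
               [0.0, 0.0]])

def embed(op, pos, n=4):
    """Tensor-product embedding of a single-qubit operator at qubit `pos`."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, op if q == pos else I2)
    return out

def basis_state(bits):
    """Computational basis state |b0 b1 b2 b3> as a length-16 vector."""
    v = np.zeros(16)
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

# candidate code words: each two-excitation state paired with its bitwise complement
pairs = [((0, 0, 1, 1), (1, 1, 0, 0)),
         ((0, 1, 0, 1), (1, 0, 1, 0)),
         ((0, 1, 1, 0), (1, 0, 0, 1))]
code = [(basis_state(a) + basis_state(b)) / np.sqrt(2) for a, b in pairs]

kappa = 1.0                                   # equal decay rates assumed
L = [np.sqrt(kappa) * embed(sm, a) for a in range(4)]

# known-position jump condition: <c_i| L_a^dag L_a |c_j> = lambda_a * delta_ij
for a in range(4):
    M = np.array([[ci @ L[a].T @ L[a] @ cj for cj in code] for ci in code])
    assert np.allclose(M, M[0, 0] * np.eye(3)), f"condition violated for qubit {a}"
    print(f"jump on qubit {a}: <c_i|L^dag L|c_j> = {M[0, 0]:.2f} * identity")

since only products of the form L_a^dag L_a for a known jump position a enter the criterion, passing this check is what licenses the unitary recovery operation for each detected jump.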
however , in practice the error position might not be determined so easily due to imperfect detection efficiencies of the photodetectors .therefore , in actual applications shelving techniques might be useful which amplify each spontaneously emitted photon to such an extent that it can be detected with an efficiency arbitrarily close to unity .second , we have to ensure that each spontaneous decay event is corrected immediately by application of an appropriated unitary transformation which inverts the effect of the lindblad operator . in practice, this inversion has to be performed on a time scale which is small in comparison with the natural time scale of the quantum algorithm and with the mean spontaneous decay time .third , one has to ensure that the sequence of quantum gates which constitute the quantum algorithm does not leave the code space at any time .this can be done by encoding the logical information within the code space , and to develop a universal set of quantum gates which leaves this code space invariant .ideally these quantum gates are implemented by suitable hamiltonians .this ensures that the code space is left invariant even during the application of one of these universal quantum gates .such universal sets of hamiltonian - induced quantum gates have already been developed for decoherence free subspaces of the kind discussed above .but in general , unitary gates based on swapping hamiltonians need not be universal on the embedded quantum code , or the swapping hamiltonians do not leave the embedded quantum code invariant .the solution of this intricate and yet unsolved problem is beyond the scope of the present work .however , some preliminary results have already been obtained recently .so far we have shown that any lindblad operator of the form of eq .( [ jump operator ] ) can be inverted by our one detected jump - error correcting quantum codes .we provide an example of a unitary transformation which achieves this inversion in the case of the one detected jump - error correcting quantum code involving four physical qubits .a possible sequence of quantum gates capable of inverting a spontaneous decay process affecting qubit , for example , is depicted in fig .this example demonstrates the basic fact that it is indeed possible to perform a unitary inversion of the lindblad operator provided eq .( [ knill ] ) is fulfilled for .to define general detected jump - error correcting quantum codes , we introduce some notation . for a set of positions ,we denote by the operator the associated error times are no longer mentioned explicitly , but it is understood that they are known .note that the operators commute , because the are pairwise different . since by eq .( [ probability ] ) , the errors which involve two equal indices , say , can not occur . 
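the operator introduced just above was stripped in extraction; a hedged reconstruction, for a set of jump positions S = \{\alpha_1,\dots,\alpha_s\} \subseteq \{1,\dots,N\} with pairwise different entries, is

\[
  C_S \;=\; L_{\alpha_1} L_{\alpha_2}\cdots L_{\alpha_s}
      \;=\; \prod_{\alpha\in S} \sqrt{\kappa_{\alpha}}\,\lvert 0\rangle_{\alpha}\langle 1\rvert_{\alpha},
\]

the factors commuting because they act on different qubits; a repeated index would give C_S = 0 (a qubit cannot decay twice without re-excitation), consistent with the statement that such error records occur with vanishing probability.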
as discussed in sec .[ codedesign ] , for all the states which are superpositions of states with a constant number of excited qubits are common eigenstates of the non - unitary effective time evolution ( [ heffective ] ) between quantum jumps .a subspace of such a decoherence free subspace with orthonormal basis is called a -detected jump - error correcting quantum code , and is denoted by if the following condition holds for sets of jump positions with at most elements and for all basis states and : the notation is motivated by classical coding theory .similarly , the notation -jc is motivated by notations from design theory ( see ) .the validity of this statement follows from the general conditions on quantum error - correcting codes ( cf . ) . since we know on which positions the jump operator acts , only products of the form have to be considered .there is a natural connection with combinatorics . for a basis state of qubits , the positions which are in state define a subset of , a collection of such subsets corresponds to an equally weighted superposition of basis states .let be disjoint sets of subsets , where each subset contains elements . identifying the set and the binary word where if and otherwise , we define the states where denotes the number of elements ( cardinality ) of the set .these orthonormal states span a -detected jump - error correcting quantum code provided that for all sets of jump positions with no more than elements and all sets the following condition holds : we note that the disjointness of the sets implies condition ( [ eq : djccondition ] ) for . rewriting the operator as shows that for and the states ( [ eq : combdjc ] ) the expectation value equals the expression in ( [ eq : combdjccondition ] ) .in this section we will show that -detected jump - error correcting quantum codes are naturally connected with -designs .these are combinatorial structures which have been extensively studied for many decades , cf . . to denote this class of combinatorial structures we introduce some notations using the language of finite incidence structures .let be a set of elements , called _points _ , say where is an integer and . for class of -subsets of containing elements will be denoted by . in a suggestive way its cardinality ,i.e. is just the binomial coefficient an incidence structure in is specified by a distinguished class of subsets of .the elements of are called _ blocks _ ( or sometimes _ lines _ ) of the incidence structure . if , we say that has constant block size . as an example, any undirected graph is an incidence structure of block size two , if we choose as the set of points of the graph and the points which are directly connected by an edge as blocks . for any point class denotes the class of blocks containing the point ( or : `` the lines through '' ) .if is constant for all the incidence structure is called regular . for a graph, is the degree of the vertex , i.e. , the number of edges on which lies .the incidence structure as well as the graph itself is called regular , if is constant . if there exists a constant such that for all the class of all blocks containing the set of points has size ,the incidence structure is called _ -regular_. 
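to make the notions of block size, regularity and t-regularity concrete, the short script below checks them for the fano plane, the standard small example of a 2-regular incidence structure with constant block size 3 (an outside example, not one taken from this paper):

from itertools import combinations

points = range(1, 8)
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]   # Fano plane

def regularity(points, blocks, t):
    """Return the set of multiplicities lambda_T over all t-subsets T of the points.

    The incidence structure is t-regular iff this set contains a single value."""
    counts = {T: sum(1 for B in blocks if set(T) <= B)
              for T in combinations(points, t)}
    return set(counts.values())

print("block sizes:", {len(B) for B in blocks})           # constant block size 3
print("lambda_1 values:", regularity(points, blocks, 1))  # each point lies on 3 blocks
print("lambda_2 values:", regularity(points, blocks, 2))  # each pair lies on exactly 1 block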
a _-design _ is a -regular incidence structure with constant block size .it is denoted by --design or as .graphs which correspond to -designs are depicted in fig .[ fig : designs ] .the preceding discussion leads to the notion of -spontaneous emission error designs which we denote by - .the essential property of these combinatorial objects is the _ local multiplicity _ of a subset of containing at most elements which is defined by where are disjoint subsets of .any - produces a -detected jump - error correcting quantum code using the encoding defined in eq .( [ eq : combdjc ] ) .we conclude this section by constructing of a three - jump correcting code .the permutation group of order acts on the -element subsets of .the orbits under of the sets , , and are mutually disjoint .direct calculation shows that they fulfill the local multiplicity condition ( [ eq : localmult ] ) .hence the sets , , and define an jump code .the corresponding ( not normalized ) basis states are given by : further examples of - are discussed in . in that paper , there are also general bounds on the parameters of jump codes derived . in particular , the dimension of a -detected jump - error correcting code is bounded above by for completeness, we repeat the main ideas of the proof .the dimension of the space spanned by the basis states of qubits where of them are in the excited state is .this implies the bound .a jump on positions reduces the number of excitations to . after the jump ,the positions where the jump occurred are zero .there are such basis states .a jump must not reduce the dimension of the code , hence . for possiblequantum jumps the lowest upper bound is achieved for , as to obtain the second upper bound in eq .( [ eq : gen_upperbound ] ) , we note that starting with a jump code , applying to all qubits yields a jump code .note that , interchanges ground and excited state , hence the code lies in the decoherence free subspace with excitations .also , the linear space spanned by the operators for all subsets with no more than elements is invariant under conjugation by on all qubits .this holds as .thus , for the code we obtain the bound . if there is no restriction on the number of excited states , choosing maximizes the upper bound of eq .( [ eq : gen_upperbound ] ) , i.e. as mentioned above , the upper bound of eq .( [ maxupperbound ] ) is achieved for and for an even number of qubits . a table of lower bounds ( obtained by constructions from - ) and upper bounds for small values of and is provided in .the detected jump - error correcting quantum codes constructed in the previous sections can stabilize quantum algorithms provided three conditions are fulfilled .first , the decay rates of the qubits are equal .second , the time and position of each quantum jump is detected with hundred percent efficiency .third , the appropriate unitary recovery operations are applied perfectly and instantaneously immediately following the detection of an error .experimentally these requirements can only be approximated .therefore , the natural question arises how non - ideal conditions influence the robustness of the embedded error - correcting quantum codes . in this section several types of imperfections affecting the ideal performance of the one detected jump - error correcting embedded quantum codes of sec .[ codedesign ] are studied numerically . 
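the numerical studies referred to here rely on monte-carlo simulation of quantum trajectories. as a minimal sketch of that method — a single driven, decaying qubit rather than the paper's four-qubit code, with invented parameter values — one can write:

import numpy as np

rng = np.random.default_rng(1)

# two-level atom: |0> = ground, |1> = excited
Omega, kappa = 1.0, 0.2                                          # Rabi frequency, decay rate
H = 0.5 * Omega * np.array([[0, 1], [1, 0]], dtype=complex)      # resonant drive
L = np.sqrt(kappa) * np.array([[0, 1], [0, 0]], dtype=complex)   # jump operator |0><1|
H_eff = H - 0.5j * (L.conj().T @ L)                              # non-Hermitian evolution

dt, steps, ntraj = 0.01, 2000, 500
pop = np.zeros(steps)

for _ in range(ntraj):
    psi = np.array([1, 0], dtype=complex)            # start in the ground state
    for k in range(steps):
        pop[k] += abs(psi[1]) ** 2
        dp = dt * np.real(psi.conj() @ (L.conj().T @ L) @ psi)   # jump probability in dt
        if rng.random() < dp:
            psi = L @ psi                            # detected quantum jump
        else:
            psi = psi - 1j * dt * (H_eff @ psi)      # no-jump (non-unitary) evolution
        psi /= np.linalg.norm(psi)

print("steady-state excited population ~", pop[-200:].mean() / ntraj)

the paper's simulations follow the same pattern but propagate the full four-qubit state, apply the appropriate recovery operation after each detected jump, and then introduce the imperfections (false detection positions, unequal decay rates, recovery delay, detector dead time) one at a time.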
for this purposewe investigate the stabilization of a quantum memory and of a simple hamiltonian dynamics against spontaneous decay processes .the effective two - level rabi hamiltonian considered , i.e. can be viewed as modeling the quantum dynamics of the ideal grover search algorithm in the limit of a large number of qubits .thereby , the rabi frequency can be related to the characteristic time required for performing an oracle - operation and to the number of qubits according to .here , denotes an equally weighted superposition of all ( orthonormal ) code words which may be used as an initial state in grover s quantum search algorithm .the final state we are searching for is denoted . for this choice of states and are not orthogonal because .however , if the number of qubits becomes large their overlap tends to zero . according to the hamiltonian eq .( [ grovham ] ) and consistent with grover s quantum algorithm , after the interaction time , the initial state is transformed to the final state .let us first of all consider situation where the jump position can be detected with a given non - zero error rate only .such an imperfection might occur if , for example , a photon emitted by a particular trapped ion is detected by the photodetector associated with a different ion ( see with fig .[ fehldetekt ] ) . the probability to detect the emitted photon at the correct positionis denoted by .the probability that an emitted photon is detected falsely by the next nearest neighbor is given by .analogously , the probability of detecting the photon by the -th nearest neighbor is with the normalization condition . the influence of this type of imperfection on a quantum memory , i.e. in eq.([master equation ] ), is depicted in fig .[ detposit ] .a state of the jump code is propagated according to eq .( [ unravel ] ) with a monte - carlo simulation of the quantum - trajectories .if a jump is detected , the appropriate recovery operation is applied . in the case of a correct detection, the quantum state of the memory is recovered perfectly . in the case of a false detection, the quantum state of the memory leaves the code space .therefore , in this simulation the full hilbert space of four physical qubits has to be taken into account . as a measure of fidelity the squared absolute value of the overlap between the state after a time and the initial state of the memoryis plotted as a function of the error parameter .for the orthonormal basis states of the code are degenerate eigenvectors of the operator appearing in the effective hamiltonian of eq .( [ effective hamiltonian ] ) .this property ensures that these states form a passive code for the effective time evolution between successive quantum jumps .the existence of such degenerate eigenstates of the operator relies on the assumption that the decay rates of all qubits are equal .although this physical situation can be realized in a laboratory , it is of interest to investigate what happens if this condition of equal decay rates is violated . in this latter caseour code does not correct errors between successive quantum jumps passively . for this purposelet us consider the rabi - hamiltonian of eq .( [ grovham ] ) which describes the ideal quantum dynamics .in addition , we assume that the decay rates of the physical qubits are selected randomly according to a gaussian distribution whose mean value is equal to the characteristic rabi frequency . to study the resulting time evolution we choose the code based on complementary pairings ( see sec . 
[ codedesign ] ). in fig .[ deltakappa ] the fidelity of the quantum state is depicted as a function of the variance of the gaussian distribution .the fidelity is defined as the overlap between the ( mixed ) system state at time and the desired state which would result from the ideal dynamics at this particular interaction time . in this numerical simulationthe master equation ( [ master equation ] ) was integrated up to time , whereas each jump operator was replaced by a sequence consisting of and an immediately applied unitary recovery operation ( see eq.([recovery ] ) ) . in this simulationit was assumed that the recovery operations are performed perfectly .it is apparent that the code stabilizes the quantum dynamics successfully , despite the fact that the code is not a perfect one detected jump - error correcting quantum code for this situation . immediately after the detection of a spontaneous emission eventthe qubits are described by a quantum state belonging to a subspace involving one excited qubit less than the original code space .this subspace also constitutes a passive error - correcting code .therefore , a time delay between the detection and the application of a recovery operation does not lead to an additional error caused by the effective time evolution provided the ideal quantum dynamics characterized by is not affected .nevertheless , this time delay must be short in comparison with the mean time between two successive spontaneous emission events .otherwise , a second spontaneous emission may map the state of the system onto another subspace , from which a recovery is no longer possible .[ detdelay ] demonstrates that , as long as the delay between detection and correction is not too large compared with the mean decay time , error correction is still possible .another important condition for correct implementation of a detected jump - error correcting quantum code is the ability to observe the environment of each qubit _ continuously_. however , immediately after the detection of a spontaneous emission event , typically the detector is not able to respond to another photon . during the latent response time of a photodetectora second spontaneous emission event can take place which may destroy quantum coherence . in fig .[ totzeit ] the dependence of the fidelity on the response time of a photodetector is depicted for various decay rates .it is apparent from fig .[ totzeit ] that the detected jump - error correcting quantum code can stabilize an algorithm as long as the response time of the photodetectors is small in comparison with the average time between successive spontaneous emission events .we have studied quantum error - correcting codes that exploit additional information about the locations of the errors .this information is obtained by continuously monitoring the system .errors caused by the resulting non - unitary dynamics are corrected passively by embedding the error - correcting code in a decoherence free subspace . to construct such codeswe have established connections to design theory .the numerical simulations presented demonstrate that the jump codes discussed can stabilize quantum systems even in cases of imperfect detections and recovery operation .this work is supported by the dfg ( spp ` quanteninformationsverarbeitung ' ) and by ist-1999 - 13021 and ist-2001 - 38869 of the european commission .
|
the recently introduced detected-jump correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. these embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. the construction of a family of one detected jump-error correcting quantum codes is shown, and the optimal redundancy, encoding and recovery, as well as general properties of detected jump-error correcting quantum codes, are discussed. by the use of design theory, multiple jump-error correcting quantum codes can be constructed. the performance of one jump-error correcting quantum codes under non-ideal conditions is studied numerically by simulating a quantum memory and grover's algorithm.
|
consider the general spacetime geometry it is then a standard result that the only nonzero component of the ricci tensor is restricting attention to vacuum plane waves gives us the form : \ , h_+(u ) + 2xy \ ,h_\times(u ) \right\ } \ , \d u^2 + \d x^2 + \d y^2.\ ] ] in this form of the metric the two polarization modes are explicitly seen to decouple . by choosing and appropriately we can construct any general polarization state .the `` most general '' form of the rosen metric is where .the only non - zero component of the ricci tensor is : though relatively compact , because of the implicit matrix inversions this is a grossly nonlinear function of the metric components . in particular , in this form of the metric the and linear polarizations do not decouple in any obvious way .consider the strong - field gravity wave metric in the linear polarization .that is , set .it is found most useful to put the resulting metric in the form then in vacuum we have the general vacuum wave for polarization in the form from the polarization , by rotating the plane through a fixed but arbitrary angle , we can easily deal with linear polarization modes along any desired axis .now take an arbitrary , possibly dependent , polarization and consider the following metric ansatz : \d x^2 \right . \nonumber \\ & & \qquad \left .\vphantom{\big| } + 2 \sin(\theta(u ) ) \sinh(x(u ) ) \d x \ , \d y + [ \cosh(x(u ) ) - \cos(\theta(u ) ) \sinh(x(u ) ) ] \d y^2 \right\}. \;\ ; \end{aligned}\ ] ] note setting corresponds to linear polarization .the vacuum field equations imply let us introduce a dummy function and split this into the two equations the first of these equations is just the equation you would have to solve for a pure ( or in fact any linear ) polarization .the second of these equations can be rewritten as and is the statement that can be interpreted as distance in the hyperbolic plane .compare this to maxwell electromagnetism , where polarizations can be specified by with no additional constraints .thus an electromagnetic wavepacket of arbitrary polarization can be viewed as an arbitrary `` walk '' in the plane .we could also go to a magnitude - phase representation where so an electromagnetic wavepacket of arbitrary polarization can also be viewed as an arbitrary `` walk '' in the plane , where the plane is provided with the natural euclidean metric in contrast for gravitational waves in the rosen form we are now dealing with an arbitrary `` walk '' in the hyperbolic plane , . furthermore , because of the nonlinearity of general relativity , there is still one remaining differential equation to solve .as an important example , we consider strong - field circular polarization .circular polarization corresponds to a fixed distortion with a linear advancement of : then \d x^2 \right .\nonumber \\ & & \qquad \left .\vphantom{\big| } + 2 \sin(\omega_0 u ) \sinh(x_0 ) \d x \ , \d y + [ \cosh(x_0 ) - \cos(\omega_0 u ) \sinh(x_0 ) ] \d y^2 \right\}. \end{aligned}\ ] ] so the only nontrivial component of the ricci tensor is solving the vacuum equations gives this now describes a spacetime that has good reason to be called a strong - field circularly polarized gravity wave . 
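the two claims at the start of this section — that the pp-wave ansatz has a single nonvanishing ricci component, and that the (x^2 - y^2) h_+(u) + 2xy h_x(u) profile is vacuum — can be checked symbolically. the metric ordering and sign conventions used below are assumptions, since the displayed line elements did not survive extraction, so treat this as an illustrative check rather than a transcription of the paper's equations:

import sympy as sp

u, v, x, y = sp.symbols('u v x y')
coords = [u, v, x, y]
H = sp.Function('H')(u, x, y)

# pp-wave ansatz (conventions assumed): ds^2 = H du^2 + 2 du dv + dx^2 + dy^2
g = sp.Matrix([[H, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
ginv = g.inv()
n = len(coords)

def christoffel(a, b, c):                       # Gamma^a_{bc}
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d])) for d in range(n))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(n)]
          for b in range(n)] for a in range(n)]

def ricci(b, c):                                # R_{bc}
    return sp.simplify(
        sum(sp.diff(Gamma[a][b][c], coords[a]) for a in range(n))
        - sum(sp.diff(Gamma[a][b][a], coords[c]) for a in range(n))
        + sum(Gamma[a][a][d] * Gamma[d][b][c] for a in range(n) for d in range(n))
        - sum(Gamma[a][c][d] * Gamma[d][b][a] for a in range(n) for d in range(n)))

for b in range(n):
    for c in range(b, n):
        R = ricci(b, c)
        if R != 0:
            print(f"R_{coords[b]}{coords[c]} =", R)   # only the uu component survives

# vacuum plane-wave profile: H = (x^2 - y^2) h_plus(u) + 2 x y h_cross(u)
hp, hx = sp.Function('h_plus')(u), sp.Function('h_cross')(u)
print("vacuum check:", sp.simplify(
    ricci(0, 0).subs(H, (x**2 - y**2) * hp + 2 * x * y * hx).doit()))

with these conventions the script reports a single nonzero component proportional to the transverse laplacian of H, which vanishes for the h_+, h_x profile above, in line with the brinkmann-form discussion.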
note the weak - field limit corresponds to so for an arbitrarily long interval in we have , and without loss of generality we can set , as expected , we obtain + 2 \sin(\omega_0\ ; u ) \;\d x \;\d y \right\}.\ ] ]let us return to considering the metric in general rosen form where , represent any arbitrary number of dimensions ( ) transverse to the plane .it is easy to check that the only non - zero component of the ricci tensor is still let us now decompose the matrix into an `` envelope '' and a unit determinant related to the `` direction of oscillation '' .that is , let us take where .( a related discussion can be found in . ) calculating the various terms of the ricci tensor using the relations = 0 , \qquad [ \hat g^{ab } \ ; \hat g_{ab } '' ] - [ \hat g^{ab } \ ; \hat g_{bc } ' \ ; \hat g^{cd } \ ; \hat g_{da } ' ] = 0,\ ] ] it is found that .\ ] ] note that we have now succeeded in decoupling the determinant ( ; effectively the `` envelope '' of the gravitational wave ) from the unit - determinant matrix .now consider the set of all unit determinant real symmetric matrices , and on that set consider the riemannian metric ^{-1 } \;\d[\hat g ] \ ; [ \hat g]^{-1 } \ ; \d[\hat g ] \right\}.\ ] ] then this means an arbitrary polarization vacuum rosen wave is an arbitrary walk in , with distance along the walk related to the envelope function as in the discussion above .arbitrary polarizations , while trivial in the brinkmann form , are difficult to implement in the rosen form . to address this puzzlewe have re - analyzed the rosen strong - field gravity wave in terms of an `` envelope '' function and two freely specifiable functions .the vacuum field equations can be interpreted in terms of a single differential equation governing the `` envelope '' , coupled with an arbitrary walk in polarization space . in particularwe have indicated how to construct a circularly polarized rosen form gravity wave , and how to generalize this central idea beyond ( 3 + 1 ) dimensions .further detailed calculations and discussions of these rosen form polarizations can be found in .this research was supported by the marsden fund administered by the royal society of new zealand .8 hans stephani , dietrich kramer , malcolm maccallum , cornelius hoenselaers , and eduard herlt , _ exact solutions of einstein s field equations_. ( cambridge university press , cambridge , 2003 ) .see especially section 24.5 .roger penrose , a remarkable property of plane waves in general relativity " , rev .* 37 * ( 1965 ) 215220 .roger penrose , any spacetime has a plane wave as a limit " . in _ differential geometry and relativity _ , edited by m. cahen and m. flato .( kluwer / riedel , dordrecht , 1976 ) .pages 271275 .h. w. brinkmann , einstein spaces which are mapped conformally on each other " , mathematische annalen * 94 * ( 1925)119 .a. einstein and n. rosen , `` on gravitational waves '' , j. franklin inst . * 223 * ( 1937 ) 43 .cropp b and visser m 2010 `` general polarization modes for the rosen gravitational wave '' _ class .* 27 * 165022 . l. d. landau and e. m. lifschitz , _ the classical theory of fields _ , [ fourth revised english edition ] , ( pergamon press , 1980 ) .
|
strong-field gravitational plane waves are often represented in either the rosen or brinkmann forms. these forms are related by a coordinate transformation, so they should describe essentially the same physics, but the two forms treat polarization states quite differently. both deal well with linear polarizations, but there is a qualitative difference in the way they deal with circular, elliptic, and more general polarization states. in this article we will describe a general algorithm for constructing arbitrary polarization states in the rosen form.
|
the spectroscopy modeling analysis and reduction tool ( smart ) is a software package written in idl for the analysis of data acquired with the infrared spectrograph(irs ) on the spitzer space telescope .the code has been developed for the unix / linux operating systems .the irs comprises four separate spectrograph modules covering the wavelength range from 5.3 to 38 m with spectral resolutions , r and 600 .the modules are named after their wavelength coverage and resolution as short - low ( sl ) , short - high ( sh ) , long - low ( ll ) and long - high(lh ) .the sl includes two peak - up imaging cameras that have band - passes centered at 16 m ( `` blue '' ) and 22 m ( `` red '' ) . for details of the irs instrument see and chapter 7 of the spitzer observers manual(som7 ) .smart has been designed specifically for irs data and in particular to extract spectra from observations of faint or extended sources .it has been written with an understanding of both the available spitzer irs observing modes and a knowledge of how the contents of various files generated by the spitzer science center ( ssc ) irs pipeline can be used to maximize the signal - to - noise in the extracted spectrum .these three design factors make it a comprehensive and powerful software package for the extraction and analysis of irs data .smart is primarily intended to operate on the basic calibrated data ( bcd , see som7 ) delivered by the ssc pipeline , but will also operate on the browse quality data ( bqd , including both images and wavelength and flux calibrated spectral tables ) and 2-d data products from intermediate stages of the ssc pipeline , for example , the un - flatfielded data .smart aims to provide the routines necessary for the processing and scientific analysis of irs data .the main goal is to simplify the tasks of visualizing , organizing , optimally combining and extracting data .the result of this processing are fully flux and wavelength calibrated spectra .further analysis is available within smart .additionally , the spectra can be easily exported ( in either fits , ascii or idl save set format ) to other analysis packages written , for example , in idl or iraf .smart includes software developed by two of the spitzer legacy teams .the molecular cores to planet - forming disks ( c2d ) team has developed a code to remove fringes caused by interference in the detector substrate material .this software is an enhanced version of the code developed for the infrared space observatory ( iso ) short wavelength spectrometer .the formation and evolution of planetary systems ( feps ) team have adapted the idp3-nicmos package to analyze image data from the irs peak - up cameras .the spectral analysis code is based on the inherited iso spectral analysis package ( isap) . the isap software is available at http://www.ipac.caltech.edu/iso/isap/isap.html the present paper is as an introduction to smart .a smart web site at http://isc.astro.cornell.edu/smart/ serves as the repository for the full listing of all functions available in smart and details of the algorithms used .this website includes a comprehensive smart users guide ( sug ) and a set of data reduction recipes aimed at the new user .each recipe outlines the steps required to produce wavelength and flux calibrated spectra .both the website and the software will be publicly available in december 2004 . 
in the following section we introduce the main graphical user interfaces ( guis ) used for the interactive analysis of irs data and brieflydescribe the experienced user and batch - mode capabilities .before starting a smart analysis session the observers need to fetch their irs data from the spitzer archive .smart is designed to operate on the ssc pipeline basic calibrated data ( bcd ) fits files .a bcd is a calibrated , flatfielded 2-d image .the observer obtains a bcd image for each irs exposure ( for observing mode and pipeline details , see som7 ) .in addition to the bcd file smart also needs two associated files for each exposure .these files are the uncertainty data and the bad pixel mask .the bad pixel mask has a 16 bit integer assigned to each pixel .each bit corresponds to a given warning / error condition detected during the pipeline processing .for example a pixel may have suffered a cosmic ray hit or may be saturated .a perl script searches the local directory for these 3 files and builds a new fits file ( ` * bcd3p.fits ' ) for each exposure .each new fits file contains the data plane and two extensions : the uncertainty plane and the bad pixel mask plane .ssc pipeline products are read into the project manager and either proceed directly or via image analysis into the isap - based data evaluation and analysis gui ( idea ) .figure 1 presents a flow chart outlining the main graphical user interfaces ( guis ) available in smart . at first glancesome of the guis may appear complex , but we remind the reader that the sug will be available at the smart website . figures 2a & 2b show the project manager and dataset guis .these form the base for launching different applications and storing the resulting data products .the project manager allows the spitzer observer to load files by browsing a local directory containing the 2-d images and spectral table files from the archive or import files from an existing local data base .it is designed to handle large data sets by grouping them into sets called `` projects '' .bcds within one project may or may not come from the same irs module or astronomical target .for example , consider the simple case of a low - resolution observation of a point source covering the full wavelength range ( 5.3 38 m ) .this requires the use of both the sl and ll modules .each module covers its nominal spectral range in two orders via two sub - slits .the default observing mode will obtain two spectra of the target source per sub - slit resulting in eight separate exposures .the resulting spectra from this observation would consist of two sets of four spectra from 5.2 - 8.7 m ( sl2 ) , 7.4 - 14.5 m ( sl1 ) , 14.0 - 21.3 m ( ll2 ) and 19.5 - 38.0 m ( ll2 ) of the target source .the irs low resolution observations always obtain data simultaneously in the two sub - slits , so there are an additional two sets of four spectra , with the same wavelength coverage as above , of the background sky .entire projects can be saved to disk as idl save sets , which can be imported into new smart projects .alternatively individual files can be exported from the project to disk in either fits or ascii table format .the main applications launched from the project manager are described in the following sections .we have enhanced a version of the image display program atv , to work with our irs spectral 2-d images .the atv code was developed to visualize both 2-d and 3-d images .figure 3 shows an example of data displayed in atv - irs .we have enhanced the code so that an over - 
plot tracing the curved spectral orders and the boundaries of the individual resolution elements can be displayed .the cursor position is reported in terms of pixel position ( x , y ) and flux ( pixel value ) as well as sky coordinates ( right ascension , declination ) and spectral wavelength .this is very useful for assessing whether weak features in a spectrum are emission lines or are caused by cosmic - ray hits to the detectors .the viewer can also display the uncertainty and bad pixel mask planes , returning the same information for the cursor position .additional pixels may be flagged at this stage by editing the bad pixel mask .in addition to displaying a single bcd , one can make a movie of a stack of images or make a single mosaiced image .a table containing the pixel information can also be inspected .if a stack of images is selected the statistics on the cube of pixel data can be displayed in a table .images from an external archive can also be added to the project manager and viewed in atv - irs .the image display paradigm 3 ( idp3 ) is a sophisticated photometry software package .it is written in idl and is designed for the analysis of the hubble nicmos data .the irs has imaging capabilities provided by the two peak - up cameras , each with an 1 arcmin field of view .the blue irs peak - up camera fills a gap in the wavelength coverage of the spitzer imagers at 16 m .both the red ( 22 m ) and blue cameras are used for many observations .elizabeth stobie at the university of arizona provided a modified version of idp3 , known as idp3-irs , which is optimized for analyzing sources observed with the irs peak - up mode .quick look takes a bcd file and collapses the spectral orders along the dispersion direction to produce an average intensity profile across the source .this provides a convenient tool to search for either extended emission or weak secondary point sources in the low resolution data , which have slits that are 57 ( sl ) and 168 ( ll ) in the cross - dispersion direction .figure 4 shows an example of a serendipitous detection of a weak source located close to the slit center in ll1 .the negative stripes in ll2 are caused by the sky subtraction in ll1 using data which has the target source in ll2 , see section 2.4 for a discussion of sky subtraction .the standard image operations - averaging , median filtering , division , addition and subtraction - are available in smart .in addition to operations weighted by the uncertainty data , pixels can be discarded according to their bad pixel mask value .this offers a powerful means for discarding corrupted data and improving the signal to noise .for example , consider the median of ten bcds .first the pixels flagged by a bad pixel mask value are excluded from the calculation of the median value for each pixel in the 128x128 array .a new bad pixel mask is generated for the median data using the or operator on all the bad pixel mask values associated with the data used to estimate the median .automatic co - adding and differencing are available for sl and ll data .the images are sorted by slit and position within the slit ( i.e. 
, nod position ) and then co - added .the co - added `` on - source '' and `` off - source '' images may then be differenced .there are currently four extraction routines available in smart .all methods use a look - up table supplied by the ssc , which traces the spectral orders on the bcd image .this table is converted into an idl structure known as the `` wavesamp '' .the extraction routines use the wavesamp in order to sub - set the relevant group of pixels in each resolution element on the array .an example of the wavesamp trace is shown in figure 3 , where the curved spectral order has been sub - divided into spectral resolution elements .the curvature results in fractional pixels being assigned to a given resolution element .the value of each fractional pixel is scaled by its geometrical area .the bcd data is in units of electron / sec .each routine estimates the total number of electron / sec in each individual resolution element .the final step is to apply the flux calibration ( i.e. , electron / sec to janskys ) and stitch the orders together into a single spectrum , using the pipeline `` fluxcon '' tables .this is the standard extraction method for sh and lh data .it can also be used for extended objects that fill the sl and ll apertures .the pixel values in each resolution element , as defined by the wavesamp , are summed , while accounting for fractional pixels .prior to extraction , pixel values set to nan ( not a number ) in the pipeline or flagged by the bad pixel mask are replaced with an average value , which is estimated from the values of the pixels in the same resolution element as the bad pixel . for sl and ll observations of point sourcesthe standard extraction method is column extraction .a column of pixels centered on the point source is extracted .figure 5 is an example of the aperture used for a column extraction in ll2 .the column traces the spectral order and its width in the cross - dispersion direction is scaled with the instrumental point spread function .the user should over - ride the default width only after careful consultation of the manual and help pages ( see http://isc.astro.cornell.edu/smart/ ) .the user defined width is scaled with wavelength , and requires additional `` on - the - fly '' calibration , see section 3 .the pixels in the column in each resolution element are summed , while accounting for fractional pixels .prior to extraction pixel values set to nan or flagged by bad pixel mask values are replaced with an average value , which is estimated from the pixels in the column within the same resolution element as the bad pixel .a column of pixels centered on the extended source is extracted .the column traces the spectral order and its width is constant with wavelength . 
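the bad-pixel-aware median combine and the OR-merged output mask described above can be pictured with a short sketch. the snippet below is written in python/numpy purely for concreteness — smart itself is an idl package and none of its routines are reproduced here — and the bit-flag value is a placeholder:

import numpy as np

def masked_median_stack(images, masks, reject_bits=0xFFFF):
    """Median-combine a stack of 2-D frames, excluding flagged pixels.

    images      : (n, 128, 128) array of BCD frames
    masks       : (n, 128, 128) unsigned-integer bad-pixel masks (bit flags)
    reject_bits : bit pattern of conditions to reject (placeholder value;
                  the real flag definitions live in the pipeline documentation)
    """
    bad = (masks & reject_bits) != 0
    stack = np.ma.masked_array(images, mask=bad)
    combined = np.ma.median(stack, axis=0).filled(np.nan)

    # one reading of the OR-combined output mask described in the text
    out_mask = np.bitwise_or.reduce(masks, axis=0)
    return combined, out_mask

# toy usage with random data standing in for ten BCD exposures
imgs = np.random.normal(size=(10, 128, 128))
msks = np.zeros((10, 128, 128), dtype=np.uint16)
msks[3, 40, 40] = 1 << 9            # e.g. flag one cosmic-ray hit
med, mask = masked_median_stack(imgs, msks)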
again, the pixels in each resolution element within the column are summed , while accounting for fractional pixels .pixel values set to nan or flagged by bad pixel mask values are replaced using the method outlined above for column extraction .gaussian extraction should only be used with care as it requires additional on - the - fly calibration , see section 3 .the data from the pixels in each individual resolution element are collapsed in the dispersion direction .the resulting 1-d trace is fit with a gaussian profile .the gaussian center and width can be frozen to aid the extraction of weak sources .pixel values set to nan or flagged by bad pixel mask values are excluded from the fit .the bcd images include sky emission and possible detector artifacts , which can be removed in smart .the first method is applied before extraction and the remaining two methods are akin to removing a baseline before measuring a line flux .if no sky image data are available a zodiacal model can be subtracted from the spectra in idea .the sky emission is removed by differencing an `` on - source '' and `` super - sky '' image prior to extraction .a super - sky image can be created by co - adding multiple `` off - source '' ( i.e. , sky ) bcds together .this can be done in smart using either the image operations gui or using one of the available scripts .this method is applicable to all four modules . for low resolution datathe `` super - sky '' image may simply be the median of the `` off - source data '' , which is acquired as part of the standard staring mode observation . for high resolution dataseparate sky observations are required .the sky emission is removed during the extraction process .an `` off - source '' bcd is used for the sky estimate .this can be done in two ways .the first method calculates the median sky pixel value in each resolution element of the off - source bcd . in the second method ,the pixel values in each resolution element of the off - source bcd are plotted as a function of cross - dispersion distance from the center of the slit .a first - order polynomial is fit to the resulting intensity profile to estimate the sky level . for full aperture extractionthe sky value is scaled with the area of the resolution element . 
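similarly, the gaussian extraction step — collapse each resolution element along the dispersion direction, fit a gaussian to the cross-dispersion profile, optionally with frozen centre and width for weak sources — can be sketched as follows. this is an illustrative scipy-based sketch, not smart's idl implementation, and the helper names are invented:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma, sky):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + sky

def gaussian_extract(cutout, center=None, sigma=None):
    """Illustrative Gaussian extraction for one resolution element.

    cutout : 2-D array (dispersion x cross-dispersion), bad pixels set to NaN
    center, sigma : if both are given, the Gaussian centre/width are frozen
    """
    profile = np.nanmean(cutout, axis=0)          # collapse along the dispersion axis
    xpix = np.arange(profile.size, dtype=float)
    good = np.isfinite(profile)
    sky0 = np.median(profile[good])
    amp0 = profile[good].max() - sky0

    if center is not None and sigma is not None:
        # frozen centre and width: fit only amplitude and sky pedestal
        model = lambda x, amp, sky: gaussian(x, amp, center, sigma, sky)
        (amp, sky), _ = curve_fit(model, xpix[good], profile[good], p0=[amp0, sky0])
        cen, sig = center, sigma
    else:
        p0 = [amp0, xpix[good][np.argmax(profile[good])], 1.5, sky0]
        (amp, cen, sig, sky), _ = curve_fit(gaussian, xpix[good], profile[good], p0=p0)

    flux = amp * abs(sig) * np.sqrt(2.0 * np.pi)  # integrated Gaussian area
    return flux, sky

the returned sky value plays the role of the locally estimated background that is scaled and subtracted from each resolution element, as described in the sky-subtraction discussion above.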
for column extraction the sky valueis scaled to the area of the column with in a given resolution element .for gaussian extraction the sky value is scaled to the width of the gaussian .the scaled sky value is then subtracted from each respective summed resolution element , column or integrated gaussian profile in the source spectrum .the single - sky methods are applied to the data .however in this instance the sky is calculated from the same bcd that contains the source data .the pixels in a given resolution element that are not part of the source column or gaussian are used to estimate the sky value .an example of a selection of suitable sky regions is shown in figure 5 .idea , the isap - based data evaluation and analysis program , is a comprehensive 1-d spectral analysis package .the code includes the inherited isap software .isap was developed for the analysis of spectral data from iso ( sws / lws / pht - s / cam - cvf ) and provides a wealth of routines embedded in an easy - to - use graphical environment .the ability to analyse iso spectra has been preserved so that direct comparisons may be made between data from the two satellites .figure 6a shows the idea gui , which has been enhanced to fit the special needs of irs data .the applications gui is shown in figure 6b .processing routines include shifting , zapping low signal - to - noise or corrupted data , defringing , re - binning , unit conversion , combining spectra with weighted means or medians , filtering and smoothing .analysis routines include line fitting , line identification , continuum - fitting , synthetic photometry ( including the iras , iso and spitzer filter profiles ) , zodiacal light modeling , blackbody fitting , de - reddening and template fitting routines .the spectra can be imported / exported as fits , idl save sets or ascii tables .smart is supplied with a default set of calibration files from the ssc , which can be inspected in the calibration gui .however the experienced user can create her / his own set of calibration files and import them into smart via the calibration gui .the extraction routines can be tailored to specific source profiles to maximize the signal to noise .intermediate pipeline products , for example , the un - flatfielded data , can be substituted for the bcd image files .this can be beneficial for the extraction of faint sources and sources that have weak features .when un - flatfielded data are used the flux calibration is performed on - the - fly using a default set of calibration sources .both the target and the flux calibrator are extracted using the same parameters .the extracted calibration source is then used to flux calibrate the extracted target spectrum .only an experienced user should over - ride the default calibration file selection .smart is also designed for efficient batch mode processing .many of the gui functions are available in batch mode and we are developing a suite of scripts for the most commonly used functions .for example , both full aperture , column and gaussian extraction are available in batch mode .the script includes two sky removal options .the first option is to subtract the sky during extraction .alternatively , the sky can be removed using the super - sky method .the data from a given slit and nod position are co - added and then the co - added `` on - source '' and `` off - source '' images are differenced .a single spectrum is then extracted from the median filtered , sky - subtracted image .smart is currently being tested by the irs team and the 
participating legacy teams . in december 2004we will have a public release of smart .the code will be available at our website , http://isc.astro.cornell.edu/smart/ , where the irs observer can find detailed instructions for downloading and installing smart .the site includes a smart users guide and recipes for reducing irs data .we plan to update and add new functionality to the code as our understanding of the irs data analysis evolves .for example , we are currently working on an optimized extraction algorithm for both high and low resolution data .updates will be posted at the web site .we would like to thank the following people , the irs team and the ssc for their dedicated work in generating the pipeline processed data and for ongoing calibration work ; the isap team for allowing us to inherit and modify the isap code and the referee , eckhard sturm , for his swift endorsement of this paper .this work is based [ in part ] on observations made with the spitzer space telescope , which is operated by the jet propulsion laboratory , california institute of technology under nasa contract 1407 .support for this work was provided by nasa through contract number 1257184 issued by jpl / caltech .barth , a. j. 2001 , in asp conf .238 , astronomical data analysis software and systems x , eds . f. r. harnden , jr . , f. a. primini , & h. e. payne ( san francisco : asp ) , 385 + houck , j. r. , et al .2004 apjs , 154 ( in press ) + kester , d. j. m. , beintema , d. a. & lutz , d. 2003 , in the calibration legacy of the iso mission , proceedings of a conference held feb 5 - 9 , 2001 , ed . l. metcalfe , a. salama , s.b .peschke and m.f .kessler . published as esa publications series , esa sp-481 .european space agency , 2003 , 375 + lahuis , f. & boogert , a.sfchem 2002 : chemistry as a diagnostic of star formation , proceedings of a conference held august 21 - 23 , 2002 at university of waterloo , waterloo , ontario , canada n2l 3g1 .ed charles l. curry and michel fich .to be published by nrc press , ottawa , canada , 63 + schneider , g. & stobie , e. , 2002 , astronomical data analysis software and systems xi asp conference series , vol 281 , 2002 d.a .bohlender , d. durand , and t.h .handley eds `` pushing the envelope : unleashing the potential of high contrast imaging with hst '' 2002 , 382 - 386 + sturm , e. et .1998 , astronomical data analysis software and systems vii , a.s.p .conference series , vol .145 , 1998 , eds .r. albrecht , r.n .hook and h.a .bushouse,,1998 , 161 + werner , m. w. et al , 2004 , apjs , 154 ( in press ) +
|
smart is a software package written in idl to reduce and analyze spitzer data from all four modules of the infrared spectrograph, including the peak-up arrays. the software is designed to make full use of the ancillary files generated in the spitzer science center pipeline so that it can either remove or flag artifacts and corrupted data and maximize the signal-to-noise in the extraction routines. it may be run in both interactive and batch mode. the software and users guide will be available for public release in december 2004. we briefly describe some of the main features of smart including: visualization tools for assessing the data quality, basic arithmetic operations for either 2-d images or 1-d spectra, extraction of both point and extended sources, and a suite of spectral analysis tools.
|
in the upcoming 5th generation ( 5 g ) wireless networks , non - orthogonal multiple access ( noma ) has been recognized as a promising consideration of multiple access scheme to accommodate more users and to improve the spectral efficiency .a preliminary version of noma , multiuser superposition transmission ( must ) scheme , has been proposed in the 3rd generation partnership project long - term evolution advanced ( 3gpp - lte - a ) networks .the principal idea of noma is to exploit the power domain for multiuser multiplexing and to utilize successive interference cancellation ( sic ) to harness inter - user interference ( iui ) .in contrast to conventional orthogonal multiple access ( oma ) schemes , noma enables simultaneous transmission of multiple users on the same degrees of freedom ( dof ) via superposition coding with different power levels .meantime , by exploiting the received power disparity , advanced signal processing techniques , e.g. , sic , can be adopted to retrieve the desired signals at the receiver .it has been proved that noma can increase the system spectral efficiency substantially compared to the conventional oma schemes . as a result , noma is able to support massive connections , to reduce communication latency , and to increase system spectral efficiency .most of existing works focused on downlink noma systems .however , noma inherently exists in uplink communications , where electromagnetic waves are naturally superimposed with different received power at a receiving base station ( bs ) . besides , sic decoding is generally more affordable for bss than mobile users .the authors in compared noma and oma in the uplink from the perspective of spectral - power efficiency .most recently , the authors in designed a resource allocation algorithm based on the maximum likelihood ( ml ) receiver at the bs . on the other hand ,another key feature of noma is to offer fairness provisioning in resource allocation .in contrast to oma systems where users with poor channel conditions may temporarily suspended from service , noma allows users with disparate channel conditions being served simultaneously . in ,a power allocation scheme was proposed to provide the max - min fairness to users in an uplink noma system . in ,the authors studied a proportional fair based scheduling scheme for non - orthogonal multiplexed users . in , power allocation with fairness consideration was investigated for single antenna and multiple antennas noma downlink systems , respectively . despite some preliminary works have already considered fairness in resource allocation , it is still unclear why and when noma offers a more fair resource allocation than that of oma . 
in this paper , we aim to compare the fairness in resource allocation of uplink between noma and oma .to this end , a selection criterion is proposed for determining whether noma or oma should be used given current channel state information .through characterizing the contribution of achievable data rate of individual users to the system sum rate , we explain the underlying reasons that noma is more fair in resource allocation than that of oma in asymmetric channels .furthermore , for two - user noma systems , we propose a closed - form fairness indicator metric to determine when noma is more fair than oma .in addition , a simple hybrid noma - oma scheme which adaptively chooses noma and oma according to the proposed metric is proposed to further enhance the users fairness .numerical results are shown to verify the accuracy of our proposed metric and to demonstrate the fairness enhancement of the proposed hybrid noma - oma scheme .the rest of the paper is organized as follows . in sectionii , we present the uplink noma system model and discuss the capacity regions of noma and oma . in section iii , the reason of noma being more fair than oma is analyzed .besides , a closed - form fairness indicator metric and a hybrid noma - oma scheme are proposed .simulation results are presented and analyzed in section iv .finally , section v concludes this paper .notations used in this paper are as follows . the circularly symmetric complex gaussian distribution with mean and variance is denoted by ; stands for distributed as " ; denotes the set of all complex numbers ; denotes the absolute value of a complex scalar ; denotes the probability of a random event .in this section , we present an uplink noma system model and introduce the capacity regions of noma and oma .users.,width=192 ] we consider an uplink noma system with one single - antenna bs and single - antenna users , as shown in figure [ noma_uplink_model ] .all the users are transmitting within a single subcarrier users multiplexing on a single subcarrier .this case will generalized to the case of multi - carrier systems in section [ hybrid ] and section [ simulation ] . ] with the same maximum transmit power .for the noma scheme , users are multiplexed on the same subcarrier with different received power levels , while for the oma scheme , users are utilizing the subcarrier via the time - sharing strategy . for the noma scheme ,the received signal at the bs is given by where denotes the channel coefficient between the bs and user , denotes the modulated symbol for user , denotes the transmit power of user , and denotes the additive white gaussian noise ( awgn ) at the bs and is the noise power . without loss of generality , we assume that . dbm . for the curve of , we have db . for the curve of , we have db and db.,width=336 ] it is well known that the oma scheme with the optimal dof allocation and the noma scheme with the optimal power allocation can achieve the same system sum rate in uplink transmission , as shown in figure [ capacityregion ] . here , the optimal resource allocation for both noma and oma schemes is in the sense of maximizing the system sum rate . to facilitate the following presentation, we define as a time - sharing factor for user , where .particularly , the optimal dof allocation of the oma scheme , i.e. 
, point c and point f in figure [ capacityregion ] , can be achieved by : note that can also be interpreted as the normalized channel gain of user .in other words , the optimal dof allocation for the oma scheme is to share the subcarrier with the time duration proportional to their normalized channel gains , whereas it relies on adaptive time allocation according to the instantaneous channel realizations .we note that the optimal dof allocation is obtained with all the users transmitting with their maximum transmit power since there is no iui in the oma scheme .on the other hand , power allocation of noma that achieves the corner points , i.e. , point a , point b , point d , and point e in figure [ capacityregion ] , can be obtained by simply setting , and performing sic at the bs .any rate pairs on the line segments between the corner points can be achieved via a time - sharing strategy .it can be observed from figure [ capacityregion ] that noma with a time - sharing strategy always outperforms oma , both in the sense of spectral efficiency and user fairness , since the capacity region of oma is a subset of that of noma .we note that noma without the time - sharing strategy can only achieve the corner points in the capacity region , which might be less fair than oma in some cases . in this paper , we study the users fairness of the noma scheme without time - sharing and the oma scheme with an adaptive dof allocation .both schemes achieve the same system sum rate but results in different users fairness. intuitively , in figure [ capacityregion ] , for symmetric channel with , oma at point c is more fair than noma since both users have the same individual data rate .however , for an asymmetric channel with , it can be observed that noma at the optimal point d is more fair than oma at the optimal point f. therefore , it is interesting to unveil the reasons for fairness enhancement of noma in asymmetric channels and to derive a quantitative fairness indicator metric for determining when noma is more fair than oma .in this section , we first present the adopted jain s fairness index for quantifying the notion of resource allocation fairness .then , we characterize the contribution of individual user data rate to the system sum rate and investigate the underlying reasons of noma being more fair than oma . subsequently , for a two - user noma system , a closed - form fairness indicator metric is derived from jain s index to determine whether using noma or oma for any pair of users on a single subcarrier .furthermore , a hybrid noma - oma scheme is proposed which employs noma or oma adaptively based on the proposed metric . 
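since the displayed rate expressions were lost in extraction, the following is a standard reconstruction of the two-user uplink (multiple-access) region referred to in the capacity-region discussion above, with common maximum transmit power P, channel gains |h_1|^2 \ge |h_2|^2 and noise power \sigma^2 (the letter labels of the corner points in the figure are not reproduced here):

\begin{align}
 R_1 &\le \log_2\!\left(1+\frac{P|h_1|^2}{\sigma^2}\right), \qquad
 R_2 \le \log_2\!\left(1+\frac{P|h_2|^2}{\sigma^2}\right), \\
 R_1 + R_2 &\le \log_2\!\left(1+\frac{P\left(|h_1|^2+|h_2|^2\right)}{\sigma^2}\right).
\end{align}

a corner point is reached by noma with sic at full power: decoding user 1 first gives R_1 = \log_2\!\big(1 + P|h_1|^2/(P|h_2|^2+\sigma^2)\big) and R_2 = \log_2\!\big(1+P|h_2|^2/\sigma^2\big), and reversing the decoding order gives the other corner; the oma time-sharing points lie strictly inside this pentagon except where they touch the sum-rate face.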
in this paper , we adopt jain 's index as the fairness measure in the following , where denotes the individual rate of user . note that . a scheme with a higher jain 's index is more fair , and the index achieves its maximum when all the users obtain the same individual data rate . [ figure caption : the sum rates of the noma scheme and the oma scheme are denoted by the green double - sided arrow ; the individual rates of the noma and oma schemes are denoted by the red and black line segments , respectively . ] for the optimal resource allocation of both the noma and oma schemes discussed in section [ resourceallocation ] , it is easy to obtain the sum rate and individual data rates for both schemes as follows : where and denote the system sum rate for the noma and oma schemes with the optimal resource allocation , respectively , and and denote the individual data rate of user in the noma and oma schemes , respectively . for the noma scheme , we first define the accumulative normalized channel gain as , , , and then rewrite the achievable rate of user as the first term in denotes the sum rate of a system with users and the second term denotes the counterpart of a system with users . in other words , the contribution of user to the system sum rate depends on the difference of a logarithm function with respect to ( w.r.t . ) and . for notational simplicity and without loss of generality , we define the logarithm function as with on the other hand , for the oma scheme , it can be observed from that has a linear relationship with and the slope w.r.t . the system sum rate is determined by the normalized channel gain . similarly , the contribution of user to the system sum rate depends on the difference of a linear function of and , where the linear function is given by figure [ linearlog ] illustrates the linear and logarithmic increments of the system data rate w.r.t . the accumulative channel gain for oma and noma , respectively , with uplink users . it can be observed that the noma and oma schemes have the same system sum rate , but it is contributed by different data rates of individual users . in particular , the noma scheme achieves a more fair resource allocation than the oma scheme since all the users are allocated similar individual rates . in fact , the fairness of resource allocation in noma is inherited from the logarithmic mapping of w.r.t . the accumulative channel gain . the first and second derivatives of are increasing and decreasing w.r.t . , respectively . the larger the normalized channel gain , the slower increases with , which results in a smaller individual rate compared to that of the oma scheme . on the other hand , a smaller normalized channel gain results in a higher rate of increase of with , and thus a higher individual rate is obtained compared to that of the oma scheme . for instance , considering the weakest user and the strongest user with normalized channel gains and , respectively , is raised by the logarithm function compared to , while is reduced compared to .
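the jain 's index comparison described in this section can be reproduced with a few lines on top of the uplink_rates helper from the previous sketch ; the index formula below is the standard definition , and the illustrative channel gains are arbitrary .

```python
import numpy as np

def jains_index(rates):
    """Jain's fairness index: bounded between 1/K and 1, reaching 1 only
    when all individual rates are equal."""
    rates = np.asarray(rates, dtype=float)
    return rates.sum() ** 2 / (rates.size * (rates ** 2).sum())

# an asymmetric two-user example: NOMA spreads the common sum rate more evenly
noma, oma = uplink_rates(np.array([1.0, 0.2]), p_max=1000.0)
print(jains_index(noma), jains_index(oma))    # typically J_noma > J_oma here
```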
note that for symmetric channels , linear mapping of the oma scheme is more fair than the noma scheme .however , the probability that all the users have the same channel gains is quite small , especially for a system with a large number of users .in practice , most of noma schemes assume that there are at most two users multiplexing via the same dof , which can reduce both the computational complexity and decoding delay at the receiver .therefore , we focus on the fairness comparison of noma and oma with in this section .we aim to find a simple metric to determine when noma is more fair than oma for any pair of users , which is fundamentally important for user scheduling design in the system with multiple dof and multiple users .the fairness indicator metric is proposed in the following theorem .[ theorem1 ] given a pair of users with their channel realizations , the noma scheme is more fair in the sense of jain s fairness index if and only if where and is the lambert w function . in the high snr regime ,i.e. , , we have the high snr approximation of as since both the noma and oma schemes have the same sum rate , we need to compare the sum of square of individual rates ( ) , i.e. , , in the denominator of .the scheme with a smaller would be more fair in terms of jain s index .for the oma scheme , we have where since we assume . for the noma scheme, the can be given by note that a trivial solution for is given with , which corresponds to a single user scenario .in addition , at , i.e. , , we have as observed from the capacity region in figure [ capacityregion ] .further , is a monotonic decreasing function of within , while is a monotonic decreasing function of within and it is increasing with within .also , from figure [ capacityregion ] , we can observe that for an arbitrary small positive .therefore , there is a unique intersection of and at in the range of . before the intersection ,i.e. , , noma is more fair , while after the intersection , i.e. , , oma is more fair . solving the equation of within , we obtain furthermore , with , we have , which completes the proof for the sufficiency of the proposed fairness indicator metric . for the necessity , since the intersection of and within is unique , the only region within where is .in other words , noma is more fair only if , which completes the proof for the necessity of the proposed metric .[ remark1 ] note that the proposed fairness indicator metric only depends on the parameter defined in . as a result , the metric depends on the instantaneous channel gains . compared to the jain s index ,our proposed metric is more insightful which connects oma and noma .particularly , for the high snr approximation , we can observe that decreases with the increasing maximum transmit power since the lambert w function in the numerator increases slower than that of the denominator .therefore , the probability of noma being more fair will decrease when increasing the maximum transmit power , which will be verified in the simulations .the proposed fairness indicator metric in theorem [ theorem1 ] provides a simple way to determine if noma is more fair than oma , and would serve as a criterion for user scheduling design for systems with multi - carrier serving multiple users . in particular , for an arbitrarily user scheduling strategy , we propose an adaptive hybrid scheme which decides each pair of users on each subcarrier in choosing either the oma scheme or the noma scheme to enhance users fairness . 
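since the closed - form threshold of theorem [ theorem1 ] ( the expression with the lambert w function ) is not reproduced in this extraction , the sketch below checks the underlying condition of the proof directly : with equal sum rates , the scheme with the smaller sum of squared individual rates is the more fair one in the sense of jain 's index . this is a numerical stand - in for the proposed metric , not the closed form itself .

```python
import numpy as np

def noma_more_fair(h1, h2, p_max, noise_power=1.0):
    """Two-user decision used in the proof above: both schemes have the same
    sum rate, so the one with the smaller sum of squared individual rates
    has the larger Jain's index (uses uplink_rates from the earlier sketch)."""
    noma, oma = uplink_rates(np.array([h1, h2]), p_max, noise_power)
    return (noma ** 2).sum() < (oma ** 2).sum()

print(noma_more_fair(1.0, 0.2, p_max=1000.0))   # strongly asymmetric pair: NOMA fairer
print(noma_more_fair(1.0, 1.0, p_max=1000.0))   # symmetric pair: OMA (time sharing) fairer
```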
instead of using the noma scheme or the oma scheme across all the subcarriers , this hybrid noma - oma scheme can enhance the user fairness substantially . note that the fairness performance can be further improved if it is jointly designed with the user scheduling ; this will be considered in future work . in this section , we adopt simulations to verify the effectiveness of the proposed metric and to evaluate the proposed hybrid noma - oma scheme . a single cell with a bs located at the center and a cell radius of m is considered . there are subcarriers in the system and users are randomly paired on all the subcarriers . all the users are randomly and uniformly distributed in the cell . we set the noise power in each subcarrier at the bs as dbm . the 3gpp path loss model for the urban macro cell scenario is adopted in our simulations . figure [ predictedactual ] depicts the probability of noma being more fair than oma versus the maximum transmit power , . it can be observed that matches well with . in other words , our proposed fairness indicator metric can accurately predict if noma is more fair than oma . also , for the high snr approximation in , closely matches the simulation results . in addition , we can observe that the noma scheme has a high probability ( ) of being more fair than the oma scheme in terms of jain 's index . this is due to the fact that the probability of asymmetric channels is much larger than that of symmetric channels . on the other hand , the probability of noma being more fair decreases with the maximum transmit power , as discussed in remark 2 . this is because the noma scheme is interference - limited in the high transmit power regime . specifically , the strong user ( with higher received power ) faces a large amount of interference , while the weak user ( with lower received power ) is interference - free owing to the sic decoding . as a result , in the high transmit power regime , the weak user can achieve a much higher data rate than the strong user , which may result in a less fair resource allocation than that of oma . even so , noma is still more fair than oma with a probability of about in the high transmit power regime . figure [ pdf_noma_oma ] shows the probability density function ( pdf ) of the user rate for a multi - carrier system with a random pairing strategy . three multiple access schemes are compared : the noma scheme , the oma scheme , and the proposed hybrid noma - oma scheme . it can be observed that the individual data rate distribution of the noma scheme is more concentrated than that of the oma scheme , which means that the noma scheme offers a more fair resource allocation than the oma scheme . further , the individual rate distribution of the hybrid noma - oma scheme is more concentrated than that of the noma scheme . in fact , our proposed hybrid noma - oma scheme can better exploit the channel gain relationships via the adaptive selection between noma and oma according to the fairness indicator metric . actually , for the three multiple access schemes , we have , and , where denotes the jain 's index for the hybrid noma - oma scheme .
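the probability curves of figure [ predictedactual ] can be mimicked with a small monte carlo experiment . the sketch below uses plain rayleigh fading instead of the cell geometry and 3gpp path loss model of the simulations , so the numbers will differ , but the qualitative trend ( the probability of noma being more fair decreasing with the maximum transmit power ) should be reproduced ; the same per - pair test is what the hybrid noma - oma scheme applies on each subcarrier .

```python
import numpy as np

def prob_noma_more_fair(p_max_db, n_pairs=5000, seed=1):
    """Monte Carlo estimate of how often NOMA is the fairer choice for a
    random user pair (Rayleigh-fading stand-in for the cell model above;
    uses noma_more_fair from the previous sketch)."""
    rng = np.random.default_rng(seed)
    p_max = 10.0 ** (p_max_db / 10.0)
    h = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    wins = sum(noma_more_fair(h[i, 0], h[i, 1], p_max) for i in range(n_pairs))
    return wins / n_pairs

for p_db in (0, 10, 20, 30):
    print(p_db, prob_noma_more_fair(p_db))
# the hybrid NOMA-OMA scheme simply runs the same test per subcarrier and
# applies whichever of the two schemes it favours for that user pair
```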
in addition, the cumulative distribution function ( cdf ) of user rate is more of interest in practice , which is illustrated in figure [ cdf_noma_oma ] .we can observe that the 10th - percentile the user rate , which is closely related to fairness and user experience , increased about bit / s / hz compared to that of the noma scheme .this shows that our proposed hybrid noma - oma scheme can significantly improve the performance of low - rate users and therefore elevate the quality of user experience .in this paper , we investigated the resource allocation fairness of the noma and oma schemes in uplink .the fundamental reason of noma being more fair than oma in asymmetric multiuser channels was analyzed through characterizing the contribution of data rate of each user to the system sum rate .it is the logarithmic mapping between the normalized channel gains and the individual data rates that exploits the channel gains asymmetry to enhance the users fairness in the noma scheme .based on this observation , we proposed a quantitative fairness indicator metric for two - user noma systems which determines if noma offers a more fair resource allocation than oma .in addition , we proposed a hybrid noma - oma scheme that adaptively choosing between noma and oma based on the proposed metric to further improve the users fairness .numerical results demonstrated that our proposed metric can accurately predict when noma is more fair than oma .besides , compared to the conventional noma and oma schemes , the proposed hybrid noma - oma scheme can substantially enhance the users fairness .l. dai , b. wang , y. yuan , s. han , i. chih - lin , and z. wang , `` non - orthogonal multiple access for 5 g : solutions , challenges , opportunities , and future research trends , '' _ ieee commun . mag ._ , vol .53 , no .9 , pp . 7481 , sep . 2015 .z. ding , y. liu , j. choi , q. sun , m. elkashlan , c. l. i , and h. v. poor , `` application of non - orthogonal multiple access in lte and 5 g networks , '' _ ieee commun . mag ._ , vol .55 , no . 2 ,185191 , feb .z. wei , j. yuan , d. w. k. ng , m. elkashlan , and z. ding , `` a survey of downlink non - orthogonal multiple access for 5 g wireless communication networks , '' _ zte communications _14 , no . 4 , pp . 1725 , oct . 2016 .w. shin , m. vaezi , b. lee , d. j. love , j. lee , and h. v. poor , `` non - orthogonal multiple access in multi - cell networks : theory , performance , and practical challenges , '' _ arxiv preprint arxiv:1611.01607 _ , 2016 .d. w. k. ng , e. s. lo , and r. schober , `` energy - efficient resource allocation in ofdma systems with large numbers of base station antennas , '' _ ieee trans .wireless commun ._ , vol . 11 , no . 9 , pp . 32923304 , sep. 2012 .z. ding , z. yang , p. fan , and h. poor , `` on the performance of non - orthogonal multiple access in 5 g systems with randomly deployed users , '' _ ieee signal process ._ , vol . 21 , no . 12 , pp15011505 , dec . 2014 .z. yang , z. ding , p. fan , and g. k. karagiannidis , `` on the performance of non - orthogonal multiple access systems with partial channel information , '' _ ieee trans .commun . _ , vol .64 , no . 2 ,654667 , feb .y. sun , d. w. k. ng , z. ding , and r. schober , `` optimal joint power and subcarrier allocation for full - duplex multicarrier non - orthogonal multiple access systems , '' _ ieee trans ._ , 2017 , accepted for publication .m. al - imari , p. xiao , m. a. imran , and r. 
tafazolli , `` uplink non - orthogonal multiple access for 5 g wireless networks , '' in _ proc .ieee intern .sympos . on wireless commun .systems _ , aug . 2014 , pp . 781785 .m. al - imari , p. xiao , and m. a. imran , `` receiver and resource allocation optimization for uplink noma in 5 g wireless networks , '' in _ proc .ieee intern . sympos . on wireless commun .systems _ , aug .2015 , pp . 151155 .d. diamantoulakis , k. n. pappi , z. ding , and g. k. karagiannidis , `` wireless - powered communications with non - orthogonal multiple access , '' _ ieee trans .wireless commun ._ , vol . 15 , no . 12 , pp .84228436 , dec .2016 .m. s. ali , h. tabassum , and e. hossain , `` dynamic user clustering and power allocation for uplink and downlink non - orthogonal multiple access ( noma ) systems , '' _ ieee access _ ,vol . 4 , pp . 63256343 , aug .
|
in this paper , we compare the resource allocation fairness of uplink communications between non - orthogonal multiple access ( noma ) schemes and orthogonal multiple access ( oma ) schemes . through characterizing the contribution of the individual user data rate to the system sum rate , we analyze the fundamental reasons that noma offers a more fair resource allocation than that of oma in asymmetric channels . furthermore , a fairness indicator metric based on jain s index is proposed to measure the asymmetry of multiuser channels . more importantly , the proposed metric provides a selection criterion for choosing between noma and oma for fair resource allocation . based on this discussion , we propose a hybrid noma - oma scheme to further enhance the users fairness . simulation results confirm the accuracy of the proposed metric and demonstrate the fairness enhancement of the proposed hybrid noma - oma scheme compared to the conventional oma and noma schemes .
|
wyner introduced the wiretap channel , in which a legitimate transmitter wants to have secure communications with a legitimate receiver in the presence of an eavesdropper , and determined its capacity - equivocation region for the degraded case . csiszar and korner extended this result to the general , not necessarily degraded , wiretap channel . leung - yan - cheong and hellman determined the capacity - equivocation region of the gaussian wiretap channel . this line of research has subsequently been extended to many multi - user settings . here , we are particularly interested in models with multiple independent legitimate transmitters , e.g. , the interference channel with confidential messages , the interference channel with external eavesdroppers , the multiple access wiretap channel , the wiretap channel with helpers , and the relay - eavesdropper channel with deaf helpers . since in most multi - user scenarios it is difficult to obtain the exact secrecy capacity region , recently there has been significant interest in studying the asymptotic performance of these systems at high signal - to - noise ratio ( snr ) in terms of their secure degrees of freedom ( d.o.f . ) . the achievable secure d.o.f . has been studied for several channel structures , such as the -user gaussian interference channel with confidential messages , the -user interference channel with external eavesdroppers in the ergodic fading setting , the gaussian wiretap channel with helpers , the gaussian multiple access wiretap channel in the ergodic fading setting , the multiple antenna compound wiretap channel , and wireless networks . the exact sum secure d.o.f . was found for a large class of one - hop wireless networks , including the wiretap channel with helpers , the two - user interference channel with confidential messages , and the -user multiple access wiretap channel in , and for all two - unicast layered wireless networks in . in this paper , we revisit the gaussian wiretap channel with helpers , see fig . [fig : gwc_helper_general ] . the secrecy capacity of the gaussian wiretap channel with no helpers is the difference between the individual channel capacities of the transmitter - receiver and the transmitter - eavesdropper pairs . this difference does not scale with the snr , and hence the secure d.o.f . of the gaussian wiretap channel with no helpers is zero , indicating a severe penalty due to secrecy . it has been known that the secrecy rates can be improved if there are helpers which can transmit independent signals ; however , if the helpers transmit i.i.d . gaussian signals , then the secure d.o.f . is still zero . it has also been known that a positive secure d.o.f . could be achieved if the helpers sent structured signals , but the exact secure d.o.f . was unknown . references determined the exact secure d.o.f . of the gaussian wiretap channel with helpers to be . this result was derived under the assumption that the eavesdropper 's csi was available at the transmitters . in the present paper , we show that the same secure d.o.f . can be achieved even when the eavesdropper 's csi is unknown at the legitimate transmitters . this result is practically significant because , generally , it is difficult or impossible to obtain the eavesdropper 's csi . since the upper bound developed in is valid for this case also , we thus determine the exact secure d.o.f . of the gaussian wiretap channel with helpers with no eavesdropper csi to be . the achievable scheme in the case of no eavesdropper csi here is significantly different from the achievable scheme with eavesdropper csi developed in .
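the statement above that the secrecy capacity without helpers does not scale with snr ( and hence gives zero secure d.o.f . ) is easy to visualise numerically . the sketch below uses the standard real - valued gaussian channel capacities ; the specific channel gains are arbitrary and only illustrate the saturation .

```python
import numpy as np

def secrecy_rate_no_helpers(p, h_legit, g_eve, noise=1.0):
    """Secrecy capacity of the Gaussian wiretap channel without helpers:
    difference of the two point-to-point capacities (real-valued channel
    convention), zero when the eavesdropper has the stronger channel."""
    c_b = 0.5 * np.log2(1.0 + p * h_legit ** 2 / noise)
    c_e = 0.5 * np.log2(1.0 + p * g_eve ** 2 / noise)
    return np.maximum(c_b - c_e, 0.0)

p = np.logspace(1, 7, 7)                         # transmit power sweep
rate = secrecy_rate_no_helpers(p, h_legit=1.0, g_eve=0.5)
print(rate)                                      # saturates instead of growing with log(P)
print(rate / (0.5 * np.log2(p)))                 # secure d.o.f. estimate tends to zero
```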
in particular , in ,the legitimate transmitter divides its message into sub - messages and sends them on different _ irrational dimensions_. each one of the helpers sends a cooperative jamming signal .the message signals and the cooperative jamming signals are sent in such a way that : 1 ) the cooperative jamming signals are aligned at the legitimate receiver in the same irrational dimension , so that they occupy the smallest possible space at the legitimate receiver to enable the decodability of the message signals , and 2 ) each cooperative jamming signal is aligned exactly in the same irrational dimension with one of the message signals at the eavesdropper to protect it .this scheme is illustrated in fig .[ fig : gwc_no_csi_one_helper_ia ] for helpers . in , we used insights from to show that , when a cooperative jamming signal is aligned with a message signal in the same irrational dimension at the eavesdropper , this alignment protects the message signal , and limits the information leakage rate to the eavesdropper by a constant which does not depend on the transmit power . meanwhile , due to the alignment of the cooperative jamming signals in a small space at the legitimate receiver , the information rate to the legitimate receiver can be made to scale with the transmit power .we use this real interference alignment based approach to achieve a secure of for _ almost all channel gains _ , and develop a converse to show that it is in fact the secure capacity .the achievable scheme in the present paper again divides the message into sub - messages .each one of the helpers sends a cooperative jamming signal . as a major difference from the achievable scheme in , in this achievable scheme , the legitimate transmitter also sends a cooperative jamming signal .this scheme is illustrated in fig .[ fig : gwc_no_csi_one_helper_no_csi_ia ] for helpers . in this case , the message signals and the cooperative jamming signals are sent in such a way that : 1 ) all cooperative jamming signals are aligned at the legitimate receiver in the same irrational dimension , and 2 ) all cooperative jamming signals span the _ entire space _ at the eavesdropper to limit the information leakage to the eavesdropper .we use insights from , which developed a new achievable scheme that achieved the same secure as in without eavesdropper csi , to show that the information leakage to the eavesdropper is upper bounded by a function , which can be made arbitrarily small . on the other hand , since the cooperative jamming signals occupy the smallest space at the legitimate receiver , the information rate to the legitimate receiver can be made to scale with the transmit power . in this achievable scheme ,we let the legitimate transmitter and the helpers _ blindly _ cooperative jam the eavesdropper .because of the inefficiency of _ blind _ cooperative jamming , here , we had to use more cooperative jamming signals than in , i.e. , in we use a total of cooperative jamming signals from the helpers , while here we use cooperative jamming signals , one of which coming from the legitimate transmitter .the gaussian wiretap channel with helpers , see fig . 
[fig : gwc_helper_general ] , is defined by where is the channel output of the legitimate receiver , is the channel output of the eavesdropper , is the channel input of the legitimate transmitter , , for , are the channel inputs of the helpers , is the channel gain of the transmitter to the legitimate receiver , is the channel gain of the transmitter to the eavesdropper , and and are two independent zero - mean unit - variance gaussian random variables .all channel inputs satisfy average power constraints , \le p ] as . to satisfy the power constraint at the transmitters , we can simply choose ^{-1},|h_2| , |h_3| , \cdots , chain , we know that where and are fixed , and is the little- function .this means that next , we need to bound the second term in , where and is the upper bound on defined at the beginning of this section , and is due to the fact that given and , the eavesdropper can decode with probability of error approaching zero since are rationally independent for all channel gains , except for a set of lebesgue measure zero .then , by fano s inequality , similar to the step in . combining and, we have where again is the little- function .if we choose arbitrarily small , then we can achieve secure for this model where there is no eavesdropper csi at the transmitters .we studied the gaussian wiretap channel with helpers without any eavesdropper csi at the transmitters .we proposed an achievable scheme that achieves a secure of , which is the same as the secure reported in when the transmitters had perfect eavesdropper csi .the new achievability scheme is based on real interference alignment and _ blind _ cooperative jamming . while aligned cooperative jamming signals with the information symbols at the eavesdropper to protect the information symbols , which required eavesdropper csi , here we used one more cooperative jamming signal to span the _ entire space _ at the eavesdropper to protect the information symbols . as in , here also , we aligned all of the cooperative jamming signals in the same dimension at the legitimate receiver , in order to occupy the smallest space at the legitimate receiver to allow for the decodability of the information symbols .therefore , we aligned the cooperative jamming signals carefully only at the legitimate receiver , which required only the legitimate receiver s csi at the transmitters .j. xie and s. ulukus .secure degrees of freedom of the gaussian wiretap channel with helpers . in _ 50th annual allerton conference on communication , control and computing _ , monticello , il , october 2012 .x. he and a. yener . -user interference channels : achievable secrecy rate and degrees of freedom . in _ieee information theory workshop on networking and information theory _ ,volos , greece , june 2009 . j. xie and s. ulukus . real interference alignment for the -user gaussian interference compound wiretap channel . in_ 48th annual allerton conference on communication , control and computing _ , monticello , il ,september 2010 .x. he and a. yener .secure degrees of freedom for gaussian channels with interference : structured codes outperform gaussian signaling . in _ieee global telecommunications conference _ , honolulu , hawaii , december 2009 .j. xie and s. ulukus .sum secure degrees of freedom of two - unicast layered wireless networks . submitted to _ ieee journal on selected areas in communications - signal processing techniques for wireless physical layer security _ ,september 2012 .a. s. motahari , s. oveis - gharan , m. a. maddah - ali , and a. k. 
khandani .real interference alignment : exploiting the potential of single antenna systems ., submitted november 2009 . also available at [ arxiv:0908.2282 ] .
|
we consider the gaussian wiretap channel with helpers , where no eavesdropper channel state information ( csi ) is available at the legitimate entities . the exact secure of the gaussian wiretap channel with helpers with perfect csi at the transmitters was found in to be . one of the key ingredients of the optimal achievable scheme in is to align cooperative jamming signals with the information symbols at the eavesdropper to limit the information leakage rate . this required perfect eavesdropper csi at the transmitters . motivated by the recent result in , we propose a new achievable scheme in which cooperative jamming signals span the _ entire space _ of the eavesdropper , but are not exactly aligned with the information symbols . we show that this scheme achieves the same secure of in but does not require any eavesdropper csi ; the transmitters _ blindly _ cooperative jam the eavesdropper .
|
ultrasound wave scattering provides a non - perturbative tool for probing a hydrodynamic velocity field . propagating sound waves are modified by their interaction with the flow , which results in amplitude and phase distortions and in scattering at different angles . the sound scattering by a flow has been studied for the last 50 years , theoretically as well as experimentally , with the goal of developing a reliable technique to directly study the dynamics of vorticity and velocity fields . once developed and made reliable , this tool can provide a rather unique possibility to obtain information about the dynamics of turbulence . however , in spite of these efforts , the goal of measuring the dynamical structure factor of the vorticity in a turbulent flow remains rather far from reach . in this paper we attempt to address a much less ambitious but rather important experimental problem : is it feasible to obtain reliable information about the velocity and vorticity fields of a large axisymmetric vortex , either stationary or time - dependent , from the signal of a finite width sound beam scattered onto a finite width receiver placed rather close to the scattering region ? + the full structure of the scattered waves ( amplitude and phase ) can be related to a structure function of the vorticity via an analytical relation only in the born approximation and in a far - field limit . this theory , however , considers a beam , a sound emitter , and a receiver of infinite width . only ref . discusses the influence of a finite width gaussian sound beam on scattering from a point vortex in a far - field limit . a finite width of these constituents , particularly if they are smaller than the flow region , leads to additional diffraction effects , which are mixed with interference patterns resulting from the scattering and refraction . these unavoidable nuisance effects should be unravelled from the intrinsic scattering signal , from which the vorticity could be determined . the aim of the studies presented in this paper is to develop an analysis that allows determining the vorticity and velocity fields without the limitations described above , and to apply it to experimental data on the sound scattering from a single vortex , either stationary or time - dependent . + the majority of past experiments were concerned with sound propagation in turbulent flows and with determination of the scattering cross - section , which can be compared with the theoretically predicted one . such an approach can not provide an unambiguous answer about the scattering by a turbulent velocity field .
later on itwas realized that the scattering waves may be useful for remote probing of turbulent flows .using this idea the scattering problem was reformulated as the inverse problem to determine an instantaneous velocity field within a scattering region ( laminar or turbulent ) from measurements of the acoustic scattering at different angles similar to the light scattering problem .it was shown that in the approximation of a plane sound wave propagating through a velocity field with a continuous distribution of vorticity in a finite scattering domain there is a linear relation between the fourier component of the scattering sound wave amplitude and the spatial and temporal fourier transform of the vorticity component normal to the plane of the wave propagation in a far - field limit .+ during the years , several experiments were conducted to study either stationary or time dependent vortex flows by sound scattering .earlier experiments by gromov et al and later and more precise by baudet et al investigated the von karman vortex street behind a cylinder at low reynolds numbers , .the experiments showed rather good qualitative agreement with the theoretical predictions .later on the experiment by oljaca et al on the ultrasound scattering from a stationary random vortex flow formed along the axis of a swirling jet at , showed a good quantitative agreement between measured and computed from separate velocity field measurements of the scattering sound amplitude .the main reason for the success was a satisfaction of rather strict limitations of the theory , which leads to the linear analytical relation between fourier transform of the vorticity and the scattering acoustic signal .in particular , it is required for an emitter to be perfectly flat and wider than the flow region , and for a detector to be positioned in a far field region . in the experiment the characteristic size of the jet was about the sound wavelength and much smaller than the receiver and the sound beam widths .hence , the requirements were perfectly met for the applicability of the theory . in this connectionone can mention also refs where just structure functions of vorticity in a frequency domain were measured .another approach was undertaken in a series of papers by fink and collaborators , where the velocity field and vortex dynamics were reconstructed from a phase shift measured by a transducer array .time - reversal mirror ( trm ) method is used to amplify the effect of the flow on the sound phase distortion leading to a high resolution in the velocity detection .the acoustic technique is based on a geometrical approximation , which is justified for the flow field and sound characteristics used in the experiments . in a forward scattering geometrythe phase shift possesses the same information as a sound scattering but only in the strictly geometric acoustics limit .diffraction processes , which are relevant for a finite wave length as well as finite width receiver and sound beam , require deeper understanding of sound scattering particularly in a projection of an acoustic signal from a near - field to a far - field region . 
on the other hand, there was a very recent attempt to achieve a goal similar to the formulated above , by using direct numerical simulations to analyze the whole structure of the scattered wave ( amplitude and phase ) and to compare it with the experimental measurements of the sound scattering on a vortex in a turbulent flow .the agreement found is not great , particularly for the amplitudes .our approach is rather different : it is straightforward in acquisition of the phase and amplitude and simple to apply especially at higher frequencies . +the experiments were performed in a flow , which was produced inside a closed cylinder either with one rotating disk or in the gap between two coaxial either co- or contra - rotating disks ( the so - called von karman swirling flow ) . as detection techniques we use ultrasound scattering and particle image velocimetry ( piv ) to directly measure velocity field .+ the hydrodynamic cell consists of a vertical plexiglass ( perspex ) cylinder of 29 cm inside diameter and 32 cm in height ( see fig.1 ) .a swirling water flow is produced between two rotating plates driven independently by two motors with a maximum continuous torque of 13 nm .the motors are brushless sinusoidal ones controlled by velocity mode drivers via optical encoders .each shaft is centered and is fixed by two bearings and sealed with uniten rotating rings .the base of the device is suspended on shock absorbers and the shafts are isolated from vibration by the motor couplings ( rotex gs designed for servo systems ) .the acoustic components ( emitters and a receiver array ) are mounted flush to avoid obstacles to the flow ( see fig.2 ) .the cell is completely filled with deionized water with addition of 10ppm of surfactant .small amount of surfactant facilitates removing of air bubbles from water .several steering configurations have been used : ( i ) two steering plates of 100 mm diameter are positioned 100 mm apart , each plate has four blades of 20 mm width and 5 mm thickness ; ( ii ) the same kind of plates with 40 mm diameter and 4 mm blades thickness ; ( iii ) upper plate is a metal disk of 280 mm diameter , a lower plate is absent , and only 25 mm diameter shaft is present with upper plane located at 50 mm from the bottom of the cell ; ( iv ) two elongated shafts of 25 mm diameter are set 86 mm apart , grinded at the edge to a shape of one blade of 12 mm width and 6 mm thickness .the measurement plane where the emitter and the detector array are placed , is chosen at the middle plane between the disks .the sound detects the velocity field in the plane , or complimentarily , the vorticity field perpendicular to that plane .ultrasound pulses are sent by a specially designed emitter ( see below ) and detected by a linear array of 64 acoustic detectors with 1 mm spacing and 64x10 mm overall active area ( from blatek ) .the acquisition system is built in a heterodyne scheme meaning that 64 lock - in amplifiers are utilized on the incoming analog signals .the outgoing low frequency signals are passed to sample and hold components , which integrate the signal of each pulse and hold the value to be scanned by two pc acquisition cards ( national instruments pci6071 ) recording 128 channels , hence providing amplitude and phase components of the signal for 64 acoustic channels .the lock - in amplifiers were compared to a sr844 rf lock - in amplifier ( stanford research systems ) and were found linear and reliable .the acquisition cards also function as controllers of timing with precision 
of 0.05 .the detector and electronics are optimized for 5.5mhz .each of 64 preamplifiers is based on vca2612 chips from texas instruments , and it is built directly around the detector .the entire circuitry consists of about 8,000 components .a special care was taken to block electromagnetic interference emitted from the motors : ( i ) installing filters on the motors and the power supply ; ( ii ) the circuitry and the emitter cable are shielded in a triplex scheme where the outer shell is connected to chassis ground , the intermediate shell connected to a one point analog ground .+ we are able to measure amplitude and phase of the sound pressure simultaneously at 64 positions , at various doppler frequencies .the required transmitter activating signal is only 30mvrms ( on certain piezocrystals ) , where the response of sound picked after propagation and detection is around 500mvrms on the acquisition card .+ the time sequence of the measurement process is implemented with two national instruments pci6071 cards installed on a pc running on labview , and it goes as follows : a software key initiates once the sequence , in which a rectangular pulse is sent repeatedly and activates a switch that connects the main signal to the acoustic emitter .the pulse repetition rate is 505 and its duration 18 .the main signal is sinusoidal with implied frequency between 220khz and 6mhz .the emitted sinusoidal signal must remain coherent between pulses and this will not be provided in burst mode of some function generators .the rising edge of the pulse triggers additional three clocks ( two event clocks are available in each card ) .1 ) a sample pulse is sent to engage the sample and hold components in the 64 lock - in amplifiers .the pulse width is 8 , which defines the integration time ;delay time of 204 is chosen according to the acoustic propagation time with additional 5 to allow for geometrical delays , time shifts and sound from secondary sources to arrive on the detectors .the end of the sample pulse triggers the acquisition scan by the cards .2 ) a gain variation pulse is sent to the preamplifier during the sample pulse and 1 before in order to increase the gain to a preset maximum value .such step is taken to decrease the chance for a preamplifier to saturate and hang during the sample time in response to sudden strong noise ( e.g. reflection of side pointing beam from the wall ) .3 ) a clear pulse is sent to reset the sample and hold components by opening a path to discharge their capacitors .the discharge starts after that sufficient time is allowed for the acquisition cards to scan the channels ( scan rate of 500khz was used , and a waiting time was set assuming the cards work in sequence ) .the discharge time was set to 40 ( twenty times the characteristic discharge period ) .+ in order to avoid errors by some arbitrary offset in the sampled signal a strobe mechanism was utilized .the frequency of the two reference signals , which enter the product detectors in the lock - in amplifiers , was shifted by 100hz in relation to the main signal ( coherence remains since the three function generators were connected on the same 10mhz time - base ) .the sequence of pulses is not synchronized with any of the function generators hence an initial arbitrary phase has to be resolved . 
in addition , since labview on a pc is not a real - time software it is not promised that the two cards begin acquisition in the same pulse .the arbitrary phases are resolved by sacrificing two channels in each card .thus , 62 lock - in amplifiers are used for acoustic detectors and another two are connected to the main signal ( attenuated ) . during analysisthe phase of the latter is subtracted from the former depending on which card the lock - in amplifier is sampled .the direct hashing of the acquired data that provides the amplitude and phase of sound is based on summing the signal around the peak in fourier domain .the center frequency is 100hz as expected and the characteristic width is 50hz .a summing window of about 100hz is used , which is enough to include doppler shifts by the time changing flow .note that a zero doppler shift is expected for steady state flow( although locally the wavelength is changed by the velocity of the flow ) .the given pulse repetition rate determines that only frequency shifts larger than 1800hz could alias with the main frequency signal .+ significant part of the efforts in building the probe were devoted to design a suitable acoustic emitter .the important parameters are the length of the transducer in the axis parallel to the measurement plane , and the smoothness of the phase and amplitude .a long transducer is necessary to produce a beam of parallel rays , to minimize the effect of side lobes , and to reduce the edge diffraction on the detector .the transducer lengths are varied between 3 mm and 145 mm .the transducer width is around 10 mm , close to the width of the detector and determines the thickness of a layer , in which the flow is effectively averaged .+ a composite piezoelectric material was used in order to decrease lamb waves distortion . 
however , by using piezo - composite in the largest sizes a smooth phase still could be achieved but not the amplitude .the following types of transducers were built : + ( t1 ) this type was used for the transducers of 3.5 mm and 10 mm lengths .a piezoelectric plate ( spc - y02 ) was covered by 4 mm flat lens of perspex and matched by conductive epoxy ( circuitworks ) that was used also for wiring .the backing was air .+ ( t2 ) a piezoelectric plate ( pz34 from ferroperm ) was covered by a flat lens made of perspex with groves for the soldered wires .the aperture size was 35 mm , and the backing was air .+ ( t3 ) two or three pieces of pz34 plates were adjusted together side - by - side in a frame to make a length of 96 mm and 145 mm , respectively .the backing was air , and the cover was a thin layer of epoxy .wires were soldered on the edges .+ ( t4 ) a better transducer compared with t3 was built from polarized pvdf , of 125x10 mm an active area , mounted on balsa wood and covered by a thin , conductive layer of epoxy and a thin layer of acrylic .+ in all types of the emitters the enclosure was made of black delrin , with 30db / cm absorption .types t1 and t4 were levelled with a round wall .the latter was made using a large opening made in the cell with house filled of water and covered by 10 m mylar film .+ calibration of the array detector response in each acoustic element to a plane sound wave was accomplished by moving a small emitter ( 19x6 at 2.5 mhz from automation industries ) slowly and precisely , paralleled to the array and taking measurements in steps of 1 mm .summation of the complex wave function over all steps yielded the response to an ideal large emitter .+ the standard part of the piv system was built by oxford lasers .a double exposure 1 mb ccd camera and a 15mj pulse infrared laser providing light sheet are used to measure the velocity field of the flow in two dimensions , with particles as markers .the seeding particles are 20 m polyamide from dantec , dissolved in a 50% mixture with a surfactant ( polyoxyethylene - sorbitan monolaurate from sigma ) .the plane of the velocity measurement is chosen to overlap the acoustic measurement plane .the analysis is based on cross correlation between images by fast fourier transformation ( fft ) of divided regions in the frame ( usually by 32x32 pixels ) .analysis of each region provides one velocity vector , but the algorithm can take overlapping regions up to 75% , so that for regions of 32x32 pixels from a 1 mb pixels camera a mesh of 128x128 vectors is obtained . 
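the core of the piv analysis described above ( one velocity vector per interrogation region via fft - based cross correlation ) can be sketched as follows ; the window normalization and the sub - pixel refinement of the commercial software are simplified away , and the conversion to velocity with the quoted pixel size and pulse separation is left as a comment .

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Displacement of the particle pattern between two exposures, taken from
    the peak of the FFT-based cross correlation of one interrogation region
    (typically 32x32 pixels), as in the analysis described above."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2   # (dy, dx) in pixels, up to sign convention

# velocity = displacement * pixel_size / pulse_separation, with the pixel
# size and pulse separation quoted in the text
```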
in the implementation , the pixel equivalent size was 80x210 m , the laser pulse duration was set to 60 and the pulse separation was set to 2130 .thus , correlation between consecutive frames could fetch reliably velocities between 0.1 - 1.1 m/s ( in a worse case , using 32x32 pixels ) .there is a tradeoff between mesh resolution and velocity accuracy and limits .our analysis expands the velocity limit by combining results of analysis with different area sizes of a correlation region .we limit our search to low velocity gradients and thus we can interpolate higher velocities in a fine mesh that reveals the structure of the lower velocities .+ the system was modified to use it in the narrow field of view available between the rotating plates .since the design of the experimental set - up ( in two disks configuration ) does not allow the camera to look right above the light sheet plane , we were accounted with three problems .( i ) the view angle should minimize pickup reflections from the wall .( ii ) the converging effect of the circular wall of the water filled cell as well as reduced optical clarity can be corrected by using windows made of flat pockets of perspex filled with water outside the cell . for reducing reflections separate windows for the laser sheet and the camerawere found preferable .( iii ) focusing on a plane in a perspective view should use the scheimpflug condition , in a way known as a perspective control .the perspective control requires breaking the alignment between the camera and the lens so that the plane of the ccd ( image plane ) meets the principle plane of the lens at the same level , where the object plane meets with the other principle plane of the lens .the required tilting of the lens comes on the expense of reduced light intensity entering the ccd camera . in order to increase the light intensity we used two lenses .the tilted lens chosen was mamiya rz67 75mm/4.5 short barrel lens , capable for the perspective control .on the digital camera , we fixed a micro nikkor - nikon 50mm/1:1.4 lens through a f - c mount , which was shorten down to 11 mm .this configuration allows us to focus on almost 15 cm of the plane from 30 cm distance , subject to setting obtuse viewing angle .following a derivation of an equation for sound scattering and refraction by a flow mostly due to kraichnan and lighthill one gets in the first approximation of a small mach number , , the following equation for the sound scattering caused by the velocity field alone : here is the complex wave function that represents the sound pressure oscillations generating at frequency by the emitter , is the sound wave number , and are the flow velocity and the velocity oscillations due to sound propagation , respectively , and is the uniform fluid density .eq.(1 ) was obtained in the approximation where low frequency sound generated by the flow and the sound attenuation due to viscosity were disregarded .further , we assume 2d symmetry for simplicity , and due to the fact that the sound frequency is much larger than any frequency in the flow a quasi - stationary case is considered .the influence of , e.g. , moving vortices in the flow shows up in a frequency shift ( doppler shift ) relatively to the incident wave frequency .the former usually is smaller or comparable with characteristic frequency of fluid flow and is not considered here . 
in order to evaluate eq.(1 ) , we choose the axis along a local incident wavefront direction , thus , and expand the source term .there is no influence of the curvature in the wavefront direction on the derivatives within the first order in the mach number approximation . then taking into account that the wave length is the smallest scale in the flow one gets for an incompressible flow the following equation ( in a plane wave approximation ) \psi / c,\eqno(2)\ ] ] where is replaced by and is used to define the local ray direction .+ one can define the sound scattering field as , where is the complex wave function in the absence of a flow that satisfies the following wave equation without a source term : by applying further the green function method the solution of the scattering problem is obtained for a two - dimensional geometry in the following form ( in approximation ) : in a far - field the solution can be drastically simplified and reduced to the analytical relation between the scattering field and the fourier transform either of the velocity or the vorticity fields .indeed , in the far - field , and the integral in eq.(4 ) becomes the fourier transform of the velocity and the velocity gradient fields .the variable in fourier domain is the scattering wave vector and , where is the unit vector from the center of the scattering region toward the detector .the fourier transform in 2d is defined as taking into account that , one obtains in the far - field limit where and .we present in eq.(5 ) the scattering field as a convolution of the fourier transforms of the velocity field and the known beam function , . only in an ideal case of an infinite planar wave front , when , there exists a simple relation between the fourier transforms of the velocity and the vorticity fields : which leads to a direct linear relation between the sound scattering field and the fourier transform of the vorticity field as first pointed out by lund and rojas .this general approach can be applied in a case of a single rigid body rotation vortex with a core azimuthal velocity distribution as and outside the core up to the cell wall the azimuthal velocity component decays as where is the angular speed , and are the cell and the core radii , respectively .+ in the axisymmetric case one gets the following result for the fourier transform of the velocity component , : , \eqno(9)\ ] ] where is the bessel function , which was introduced via the expression ( 9 ) exhibits even for the unbounded beam width and detector length two maxima in the scattering signal at some value of due to a final size of the flow region , .it is clear from eq.(9 ) that in a general case the location of the maxima depends on two parameters , and .so in the case when , the peaks are located at .this value alters by change of .so the numerical calculations based on eq.(9 ) give the following results ( for mm ) : at mm the peak position is at , at mm the value is , at mm the expected value is , but at mm the location value is already shifted to .+ the peak height , , also provides useful information about the flow . 
indeed , in the case of and of the infinite beam width, is proportional to , to a core circulation , , and independent of , or sound frequency .the dependence on and independence of are rather non - trivial results taking into account rather complicated functional dependence of on the parameters , , and .+ in the limit the peak locations approaches zero angle asymptotically but the value of the amplitude of the fourier transform of the velocity at remains zero . in the limit of and at one can derive an asymptotic expression for the bessel functions in eq.(9 ) and get . from this expressionit is easy to see that the amplitude at small angles increases proportionally to , , and , and approaches zero at .+ for a finite width beam ( or emitter ) or a finite width detector the fourier transform of the scattering field wave function from eq.(5 ) can be expressed as a convolution . in a case of the finite width , , ( either of a beam or a detectorwhatever is the smallest ) let us consider first the gaussian beam model .the beam function is defined as )}$ ] . for behaviour of the peak height here is similar to the infinite width beam considered above .+ the location of the peaks in a finite width gaussian beam depends on the beam width : at mm the peak position is at ( at ) .when increases this relation is changed : at mm the peaks are located at , and at mm they are found at . the dependence of the peak heights on the parameters for a finite width beam is more elaborate , since here three characteristic lengths exist in the problem : , , and . in spite of this fact the approximate scaling law was found from numerical calculations .it follows that is independent of and proportional to with the scaling function ( where ) that is different for two regions of small and large beam width : ( a ) for , and ( b ) for .+ in the case of a finite rectangular beam ( or a finite detector whatever is the smallest ) the peak location is defined by as in a finite gaussian beam considered above . at onegets ( e.g. for mm ) . for larger values of the peak locationis shifted towards smaller values .so , at mm the estimated peak position is at , and at and 50 mm one gets and , respectively .the peak height similar to the case of the gaussian finite width beam , is independent of and proportional to with the scaling function that is slightly different from the former case .it is determined for the range of arguments values at mm and mm and has the following form : ( a ) for , and ( b ) for . + in order to demonstrate the influence of a finite beam width on the scattering signal from a single vortex with a core radius , we present in fig.3 the results of numerical simulations for the structure functions of the velocity field for the vortex of mm and three beam widths of 20 , 60 , and 120 mm ( but all of them smaller than the cell size , mm ) compared with an infinite beam width for the same cell size . 
only for mmthe wave number of the structure function peak location becomes close to the peak location for the infinite width beam ( see fig.3a ) .the wave number of the peak location in the infinite width beam case is defined by the cell size ( see explanation above after eq.(9 ) ) .one can see that in spite of the fact that , the finite width of the beam drastically alters the resulting scattering signal .thus in order to get correct results on the structure functions of the velocity and vorticity fields one should use either a beam width exceeding the flow size , , or to perform a deconvolution based on a provided constrain , which fills out the missing information about the velocity field outside the beam extent .while the study of scattering data requires to extract from sound signals both the amplitude and the phase variations , one can find information about the velocity field and the circulation from the phase shift .we consider two cases in a phase shift analysis . in the first case of a large emitter, rays remain parallel in propagation , and the phase shift as function of ( assuming it can be measured avoiding interferences between the paths ) can be defined in the limit as : lindsay has found that the phase shift induced by a point vortex has a linear dependence on an angle .this result can be obtained if we consider rays passing outside the core , and assume a large emitter at and velocity profile , namely : phase shift is measured on a screen at , with , then the slope can be found , outside the singularity at , with the following constant value : our idea is to look in particularly on rays passing near the center of the vortex at and try to get information on the flow from the phase shift slope . for an infinite source at and detectors at : thus , substituting the velocity field projected on via eqs.(7,8 ) , one obtains the phase shift slope as comparing eq .( 14 ) and eq.(12 ) we find that a rigid body rotation core induces a larger slope by an order compared with the exterior of the vortex .thus for a point vortex at there is a step in the phase shift , having an infinite slope , revealed therefore as the berry s phase .a total change in the phase shift between two points on the acoustic screen can be calculated by a loop integral over the velocity field .since a velocity along the detector and the emitter is zero , there is a simple relation between the circulation and the step in the phase shift followed from eq .( 10) : where and are the phase shift differences on two sides of the detector , respectively .+ in the second case we consider rays diverging out of a single point on a small emitter ( and there is an axisymmetric single vortex ) .we can show that the phase shift slope , calculated from the phase shift due to a vortex at the cell center between a detector placed at , and sound emitter located at , can be written as the angle is measured between the radius from the center of the vortex and the beam direction .the phase difference used to derive eq .( 16 ) can be written as where is the refraction index in the first approximation in the mach number , is the unit vector in the ray direction , and is the coordinate in the ray direction. then from simple geometrical considerations one gets eq .( 16 ) . substituting the expression for the velocity profile eqs.(7,8 ) into eq .( 16 ) we obtain : that is equal exactly a half of the result for a large emitter . 
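to make the phase - shift analysis above concrete , the following sketch evaluates the line - of - sight integral of eq . ( 10 ) for the vortex model of eqs . ( 7,8 ) and recovers the circulation enclosed between the two outermost rays from the step in the phase shift , in the spirit of eq . ( 15 ) . the rigid - core - plus - two - cylinder profile and the first - order - in - mach - number prefactor ( sound angular frequency over the sound speed squared ) are reconstructions from the surrounding description , since the extracted text does not reproduce the formulas themselves .

```python
import numpy as np

def azimuthal_velocity(r, omega, r_core, r_cell):
    """Vortex model of eqs. (7,8): rigid-body rotation inside the core and a
    two-cylinder (Couette-like) decay between the core and the cell wall
    (reconstructed from the description in the text)."""
    r_safe = np.maximum(r, 1e-12)
    outer = omega * r_core ** 2 / (r_cell ** 2 - r_core ** 2) * (r_cell ** 2 / r_safe - r)
    v = np.where(r <= r_core, omega * r, outer)
    return np.where(r < r_cell, v, 0.0)

def phase_shift_profile(y_det, omega, r_core, r_cell, sound_freq, c=1480.0):
    """Geometric-acoustics phase shift on the detector line for a beam
    travelling along x (eq. (10)); the prefactor 2*pi*f/c^2 is the assumed
    first-order-in-Mach-number result."""
    x = np.linspace(-r_cell, r_cell, 4001)
    phi = np.empty(len(y_det))
    for j, y in enumerate(y_det):
        r = np.hypot(x, y)
        u_x = -azimuthal_velocity(r, omega, r_core, r_cell) * y / np.maximum(r, 1e-12)
        phi[j] = 2.0 * np.pi * sound_freq / c ** 2 * np.trapz(u_x, x)
    return phi

y = (np.arange(64) - 31.5) * 1.0e-3               # 64 detector elements, 1 mm spacing
dphi = phase_shift_profile(y, omega=2 * np.pi * 2, r_core=0.02, r_cell=0.145, sound_freq=5.5e6)
gamma = 1480.0 ** 2 / (2 * np.pi * 5.5e6) * (dphi[-1] - dphi[0])
print(gamma)   # circulation enclosed between the outermost rays, in the spirit of eq. (15)
```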
probably , the most severe limitation of the theory to be applicable to an experiment is the far - field approximation . in this respectthe relevant question is how to extrapolate sound scattering measurements obtained at a distance not far away from a scattering region into a far - field region to be compared with the theory and to reliably extract information about the vorticity structure function .our validation of the far - field scattering result is based on comparison with calculations , which use the velocity field measured by piv technique .a far - field construction of the scattering field from the acoustic field ( either pressure or scattering wave function ) given at a certain plane in 3d case and at a certain line in 2d case , is based on the mathematical description of a huygens principle typically used to describe radiation from a curved surface .this problem is similar to the problem of diffraction by a thin screen of finite dimensions ( or for a finite dimension wave beam) .thus our goal is to consider the propagation of a wave into unbounded source - free half - space , when certain conditions in the initial plane are specified .then the scattering field in 2d at the location can be defined from the following rayleigh - sommerfeld integral : }dy',\eqno(18)\ ] ] where is the green function of the helmholtz equation in two dimensions(2d ) , , and is the hankel function of zero order .equation(18 ) follows from the integral helmholtz theorem , where use of the green s function is made . and denote the two sides of the diffraction screen ( ) and ( ) , respectively .the presence or absence of the screen is not essential to the derivation of eq.(18 ) but in the absence of the screen one does not need to apply the kirchhoff approximation . to get a more simplified expression in a far - field region the asymptotic expression for the hankel function can be used .then the far - field construction of the scattered sound wave function can be calculated in the limit of and as where and are the distances measured from the cell center till the detector and the far - field region , respectively , and are the scattering wave functions at the detector ( as measured ) and at the far - field , respectively . since at the array detectorthere are just 64 elements , a sum instead of the integral is used to calculate the wave function in the far - field . + the method presented is found advantageous over other inverse solution methods . in measurements as well as in numerical simulationsit is also desirable to apply tapering of the beam edge at the emitter to avoid an oscillating edge diffraction pattern .in order to get information about the scattering sound amplitude one needs to find a phase difference between an incident and a scattered signals . our way to find the scattering wave is simply to measure and subtract the complex sound signals with a flow and at a rest . at high enough frequencies we can concern ourself only with the main change in the wave , which is the phase shift due to change in time of flight by the flow .the phase shift is found from the phase information as .the amplitude of scattering wave on the array detector is determined as .both wave functions , and are the central frequency component of a sound extracted from a bank of pulses entered to a fourier type filter , as described above . information obtained during a period when the bank of pulses is collected , is called a frame . in a steady flowwe average the complex results between available frames . 
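the far - field projection of eq . ( 18 ) , with the integral replaced by a sum over the 64 array elements as stated above , can be sketched as follows . the kernel below is the normal derivative of the 2d free - space green function ( i/4 ) h0(kr ) ; its overall sign and phase depend on the time convention , so this is an illustrative implementation rather than the exact expression used by the authors .

```python
import numpy as np
from scipy.special import hankel1

def far_field_from_line(psi_meas, y_elem, k, x_meas, x_far, y_far):
    """Propagate a complex field sampled on the detector line at x = x_meas
    to points (x_far, y_far) with the 2d Rayleigh-Sommerfeld integral of
    eq. (18), the integral replaced by a sum over the array elements."""
    dy = y_elem[1] - y_elem[0]
    psi_far = np.empty(len(y_far), dtype=complex)
    for j, yf in enumerate(y_far):
        r = np.hypot(x_far - x_meas, yf - y_elem)
        # -2 * d/dn of the Green's function G = (i/4) H0^(1)(kr); sign convention assumed
        kernel = -0.5j * k * hankel1(1, k * r) * (x_far - x_meas) / r
        psi_far[j] = np.sum(psi_meas * kernel) * dy
    return psi_far

# e.g. project the 64-element scattered field to a far-field arc before
# comparing it with the Born-approximation prediction, as described above
```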
for the dynamic study of a flowwe compare the wave function in a flow with the average on many frames to avoid suspecting additional sources of fluctuations .another step before substitution of into the last equations is to normalize it by a factor .the phase correction is the average in time ( between frames ) of the values that minimize the expression ( average on y refers to channels of the array ) .such step is used to specifically regularize results of scattering from a single vortex and it also compensates for occasionally uncontrolled conditions that change the overall refraction ( for example raising of seeding particles by the flow , when they are used ) . at the end , we obtain the scattering wave function on the transducer array plane as : the presented results of scattered and incident fields are normalized so that the incident field on the emitter plane is of a unit amplitude . to calculate the incident field at a distance from its plane we use the huygens construction ( see eq.(18 ) ) that is a propagation transform of the window function ( with 5% apodization at the edges to reduce signal corruption due to noise ) : then can be calculated from eq.(4 ) .+ we focus on a single vortex , created between co - rotating plates , and find its average properties in time . since the vortex is axisymmetric we can extract from piv analysis the profile of an azimuthal velocity versus the radius .the center of the vortex is found according to a minimum in the orthogonal projection of a velocity vector on x and y axes ( except for the smallest vortices produced by 25 mm rods , where we designated the center by eye recognition ) .the profile is built by averaging on 64 piv maps , such as shown in fig.4a .the profiles for the three inspected flows are shown in fig.4b ( co - rotating 25 mm rods ) , fig.5 ( co - rotating 40 mm plates ) , and fig.6 ( co - rotating 100 mm plates ) . in general , we see that rotating plates with blades produce a core of a rigid body rotation , i.e. , the core velocity increases linearly with the radius . outside the core , the velocity decreases like in a flow between two cylinders .this solution ( eqs .( 7,8 ) ) was used to fit the measured velocity profile , and the flow parameters were extracted from the parameters of the fit . thus we find from the fits mm , mm and mm , respectively . in all casesgenerally the core rotation frequency is the same as of the motor .the piv maps extend to about third of the cell due to geometrical limitations in a field of vision and a limited power of the laser ( nominally 15mj / pulse ) .therefore , the velocity profile is extrapolated using the model of a flow between two cylinders .the distance from the cell center to the detector s plane and the wall is mm .we used an average on 64 velocity maps of the flow for 40 mm plates ( extrapolated between radii of 50 mm and 145 mm ) to evaluate the integral of the orthogonal velocity projection on x - axis .the result can be compared to the ultrasound phase shift measured at 5.5mhz , as shown in fig.7 . 
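the fit of the model profile of eqs . ( 7 , 8 ) to a measured azimuthal velocity profile can be sketched as follows . the functional form ( rigid - body rotation inside a core of radius r_c , couette - type flow between two cylinders outside , vanishing at the cell wall at 145 mm ) is our reading of the flow model quoted above ; the synthetic data , noise level and starting guesses are placeholders standing in for the averaged piv profiles of figs . 4 - 6 .

```python
# fit a rigid-core + couette-exterior profile to an azimuthal velocity profile
# (a sketch; the data below are synthetic stand-ins for the averaged piv maps).
import numpy as np
from scipy.optimize import curve_fit

R_WALL = 0.145   # m, distance from the cell centre to the wall (value quoted in the text)

def model(r, omega0, rc):
    """v_theta(r): omega0*r inside the core, a*r + b/r outside, continuous at
    r = rc and zero at the wall (flow between two co-axial cylinders)."""
    a = -omega0 * rc**2 / (R_WALL**2 - rc**2)
    b = omega0 * rc**2 * R_WALL**2 / (R_WALL**2 - rc**2)
    return np.where(r < rc, omega0 * r, a * r + b / np.maximum(r, 1e-12))

rng = np.random.default_rng(0)
r_data = np.linspace(0.005, 0.10, 40)            # piv field of view ~ a third of the cell
v_data = model(r_data, 60.0, 0.022) + 0.02 * rng.standard_normal(r_data.size)

(omega0_fit, rc_fit), _ = curve_fit(model, r_data, v_data, p0=[50.0, 0.02])
print(omega0_fit, rc_fit)        # recovered core rotation rate and core radius
# beyond the piv field of view the fitted model provides the extrapolation of
# v_theta(r) out to the wall, as used for the comparison in fig. 7.
```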
as follows from eq.(15 ) the measurement of the phase shift provides direct information about the circulation .this result can be obtained from the data in fig.7 , where the spatial dependence of the phase shift obtained from the sound and piv ( here the velocity field measurements were converted into the phase shift via eq .( 10 ) ) measurements are presented .we would like to emphasize here that all experiments presented and analyzed were realized in the limit , so that the expression for the phase shift delivered in eq.(10 ) is applicable .it follows that for the maximum phase difference measured in fig.7 of about , the maximum circulation is about . as one can find from fig.7 the value of the maximum phase difference andcorrespondingly the maximum circulation coincide with those found from the piv measurements within 1 - 2% .+ the phase shift patterns due to the flow based on averaging of 10 frames of 512 pulses for various emitter sizes were examined .it appears remarkably for two types of flow considered ( both a single vortex flow with a rigid core rotation , produced by 100 mm plates and 25 mm rods ) that the slope of the phase shift around the center of the beam versus scattering angle ( or a distance along the detector ) is constant .we describe in detail the flow produced between two co - rotating 100 mm plates , which provides stable phase shift plots . in fig.8we demonstrate how a value of the phase shift slope are obtained : the limit is chosen to give a constant slope with the smallest linear regression error . according to eq.(17 )the phase shift slope provides direct information about the vorticity of the vortex , .the results for emitters of different sizes and at various frequencies are shown in figs.9a - d .the phase shift slopes for all frequencies and all emitters appear to be linearly dependent on the rotation speed , .the data presented can be either scaled by or presented via the derivative on , . then the data on the proportionality coefficient as a function of frequency , calculated from the plots presented in figs.9a - d , are summarized in fig.10 .the proportionality coefficient , , depends linearly on the frequency in the range between 0.5 and 5.5mhz , and the results are sharply separated in two groups : ( i ) small emitters ( 3.5 , 10 mm ) , and ( ii ) large emitters ( 35 , 145 mm ) compared with the size of the detector array of mm .the rotation speed of the plates is used to define the flow parameter and is converted to the angular speed of the vortex core obtained from the fits of the piv data . according to eq.(17 ) in the theoretical section ,we expect a point - like emitter to have the frequency dependence of the proportionality coefficient , , on the plots of fig.10 as follows : the experimental value of for 3.5 mm and 10 mm emitters is sec(mhz) .the value of from the plot in fig.10 for 35 mm and 145 mm emitters is sec(mhz) .as explained in section iiib the theoretically predicted value of for a small emitter should be twice smaller than for a large one , i.e. the theoretically expected value for the large emitters is (mhz) according to eq .. however , specifically for these measurements the large emitters blocked 12% of the total diameter of the exterior of the flow . 
assuming effective reduction in the cell radius the expected theoretical value should be corrected down to sec(mhz) ,and it becomes rather well comparable with the experimental value presented above .using eq.(4 ) derived in the theoretical section we were able to calculate the scattered signal amplitude from a given velocity field assuming that the emitter wavefront is known exactly .we used a velocity profile of a flow produced by 100 mm rotating plates extracted from piv ( fig.6 ) .the emitter of 35 mm length was used and modelled for a flat amplitude ( a plane wave ) as the input , to obtain the result in fig.11a .the measured incident and scattered wave amplitudes at the frequency 4.0mhz are compared to the calculated ones ( they are scaled to unity value of the wave amplitude at the emitter exit)(see fig.11a , b ) . in the calculation of the sound scattering field ,a mesh of a half wavelength resolution was used in a zone limited to twice of the emitter size and the velocity field was interpolated based on the velocity profiles values .the wavefront was calculated at various cross - sections in the cell using eq.(21 ) .the measurement was performed at 5.5mhz collecting pulses in a flow produced by the 100 mm disk at the rotation speed rad / sec . in fig.12a , b similar data of the incident and scattering sound fields from 125 mm emitter at the frequency 5.5mhz are shown ( 100 mm disks , rad / sec ) .as in the previous case an agreement with the theoretical calculations is rather good .an evidence of a diffraction pattern ( side - lobe ) from the emitter of 3.5 mm length at the frequency of 5.5mhz in still water ( regarded as the incident wave ) , and in the scattering signal due to vortex flow between the rods of 25 mm diameter at / sec , is clearly seen in fig.13a , b .the direction of the initial sound beam was slightly tilted relatively to the direction to the cell center , so that the center of the beam is shifted from the center of the flow .this fact was not regarded in the fitted curve that results in a discrepancy of the measured and calculated sound amplitudes .this result clearly demonstrates that the scattering signal can be easily buried in the incident signal coming from the side - lobes at sufficiently large angles of detection .we use the huygens projection of the near - field scattering signal detected at the receiver array plane , into a synthetic far - field plane using the rayleigh - sommerfeld integral via eq.(19 ) .we choose the far - field plane at a distance of m , much larger than m .the signal is tapered 2 mm on each edge before the projection to suppress numerical instability of the diffraction pattern . by this procedure oneobtains full information of an amplitude and a phase of the scattered signal in the far - field .however , practically due to finite spatial resolution the phase information becomes corrupted .so the next step in the far - field construction is to replace the phase variation as a function of an angle ( or a wave number ) by a function toggled at each minimum point by a . in such way we reconstructed the phase field using an observation made in the simulations that at every minimum point of the correct amplitude curve the sign of the field should be inverted . 
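the sign - toggling step just described can be sketched as follows ; the procedure below is our reading of the reconstruction ( keep the projected amplitude , flip the sign of the field at every local minimum of the amplitude ) , demonstrated on a sinc - like diffraction pattern whose true field indeed changes sign at each zero .

```python
# rebuild a real far-field wave function from its modulus by flipping the sign
# at every local minimum of the amplitude (a sketch of the toggling-by-pi step).
import numpy as np

def rebuild_field(amp):
    """Return a field whose modulus is `amp` and whose sign flips at each
    local minimum of `amp`."""
    amp = np.asarray(amp, dtype=float)
    is_min = (amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])
    minima = set((np.where(is_min)[0] + 1).tolist())
    sign, out = 1.0, np.empty_like(amp)
    for i, a in enumerate(amp):
        if i in minima:
            sign = -sign
        out[i] = sign * a
    return out

# toy usage: only the modulus of a sinc pattern is assumed to be reliable
x = np.linspace(-10.0, 10.0, 401)
true_field = np.sinc(x)
reconstructed = rebuild_field(np.abs(true_field))
# up to an overall sign, `reconstructed` recovers `true_field`, because the
# amplitude minima of |sinc| coincide with its sign changes.
```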
in order to avoid some spurious minimumpoints the number of digitizing points in integration was increased , and the result was compared against finer digitization mesh .+ according to eq.(5 ) the sound scattering field in the far - field is proportional to the fourier transform , i.e. in such a way plots of the modulus of the two - dimensional fourier transform , , in the entire cell were obtained in figs.14,15 .the data projected into the far - field are compared with calculations of the scattering field based on the velocity field measurements by piv .the results are presented in a view angle observed from the center of the cell through the receiver as an aperture .this angle , , is related to the scattering wave vector , , via formulas : , , and .+ we studied properties of the modulus of the fourier transform , , at different sound frequencies and different rotation speeds .it was revealed that the angular location of the main peaks in is proportional to the sound wavelength ( fig.16 ) .the experimental value of the slope of the plot is , and is found to be in a good agreement with the value of the slope obtained from numerical simulations ( see discussion in sec.iiia ) . it was also found that the scaled value of the peak height of is proportional to the circulation ( fig.17 ) . thus , four different sets of the data for three different vortices and two beam sizes can be scaled down using the functional dependencies on the beam width , , and the reduced vortex core size , , taken from our numerical calculations ( see discussion in sec.iiia ) .as seen from the plot scaling does not work great for one set of the data , possibly , due to flow imperfection . on the other hand ,the proportionality of the scaled peak height to is not so obvious as one can decide from the first sight , since the finite beam width described by the beam function , alters significantly the function presented in eq.(9 ) .we also found from the measurements that the peak height , , is independent of the sound frequency and depends strongly ( about when ) on the beam width .when the peak height value , , is proportional to in a full agreement with the simulations .it is obvious that the finite size of either a beam ( an emitter ) or a detector ( whatever the smallest ) limits our knowledge about the scattering field and , therefore , also about the fourier transform of the velocity field at the wave numbers ( or angles ) smaller than that corresponding to the peak location .. + the projected scattering to a far - field , , does not have a direct relation to the structure function of a vorticity , . in order to get the latter, one should extract information about a 2d fourier transform of the velocity field from the measurements of . however , this transformation is a singular one , so a choice of constrains is made roughly related to the continuity and irrotaional character of the flow outside of the acoustic beam extent , where decays to zero . 
particularly for the plane beam ,a 1d fourier transformation is relevant , since in the propagation direction , , can be presented by a delta - function in the wave number domain .we performed a backward and forward 1d fourier transformation of values on a set of points in coordinate ( which span about ) , such that the values are not modified initially .next we added the filtering effect that removes by extrapolating the intermediate result as if it were one - dimensional projection of the velocity , , using our flow model for extrapolation ( see fig.18 ) .the phase in the extrapolated region of should be adjusted having continuity at and a phase difference at the tails .such construction provides the required velocity field and correspondingly its fourier transform in the wave number domain , . now using eq.(6 )one can relate in such a way the obtained to the required structure function of the vorticity ( see fig.19a ) . with a help of the wiener - khinchin theorem onecan also obtain an azimuthally averaged point by point correlation function of the vorticity , which in the case of an isotropic flow or axial symmetry gives the corresponding correlation function of the vorticity is shown in fig.19b . since the weight of errors in estimation of is more pronounce at high scattering angles , the integral in eq.(22 ) should be cut in a tight range .we used a gaussian attenuation factor ( with characteristic value ) on the far - field scattering values to filter out high frequency fluctuations .+ it is obvious that the vorticity structure function restored in such a way contained additional information reflecting our guess ( or our piv results ) .however , this additional information is mostly relevant to a low wave number range of the vorticity structure function , i.e. to the core of the fourier transform of the vorticity .therefore , at wave numbers larger than the core wave number value , the vorticity structure function provides information based on the original experimental data . the sound scattering technique can be used to study vortex dynamics .it can be studied by a phase dynamics approach to get temporal variation of a vortex location , vortex radius , and vortex circulation similar to what was done in ref. . however , we used a different approach of sound scattering and compare the results on a vortex precession with those obtained by piv .the vortex position is found from the minimum point of the scattering amplitude pattern .the periodicity in the vortex motion is characterized by the variation of the peak heights in the far - field scattering pattern .the setup is used with the upper plate of 280 mm in diameter at the angular speed / sec , and the lower plate is absent .the rate of piv tracking is one map per 0.266 seconds .the rate of ultrasound tracking based on collection of 32 pulses per frame is one wave function plot per 0.016 seconds ( sound frequency is 2.5mhz , emitter is 96 mm long ) . in the case of the rotating upper plate , a periodic precession of the vortexis found by both techniques : piv and the sound scattering ( see fig . 20 ) . to compare two sets of the data the time correlation functions of the vortex locationswere produced from both sets . 
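a sketch of that comparison follows : the normalised autocorrelation of each vortex - position record is computed and the precession period is read off its first secondary maximum . the sampling intervals are the ones quoted above ( 0.266 s per piv map , 0.016 s per ultrasound frame ) ; the synthetic test series and its noise level are placeholders .

```python
# estimate the precession period from a vortex-position time series via its
# normalised autocorrelation function (a sketch of the comparison in fig. 21).
import numpy as np

def autocorr(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c / c[0]

def precession_period(x, dt):
    c = autocorr(x)
    neg = np.where(c < 0)[0]
    if len(neg) == 0:
        return np.nan
    start = neg[0]                      # skip past the zero-lag lobe
    return (start + np.argmax(c[start:])) * dt

# toy usage: a noisy 0.9 s oscillation sampled like the ultrasound record
rng = np.random.default_rng(1)
dt_us = 0.016
t = np.arange(0.0, 20.0, dt_us)
x_us = np.cos(2 * np.pi * t / 0.9) + 0.3 * rng.standard_normal(t.size)
print(precession_period(x_us, dt_us))   # should come out close to 0.9 s
```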
a good quantitative agreement between the dynamic results of piv and ultrasound measurementswas found ( see fig.21)with a period of 0.90sec .besides , the peak height of the fourier transform of the velocity projected into a far - field is found to be twice periodic compared with the vortex position ( see fig.22a ) .it is clearly seen in the auto - correlation function presentation of the data in fig.22b .this effect occurs due to increase or decrease in the integral of over the beam area ( detector view ) , when the vortex core is shifted sideways , since for small contains a contribution from the integral of , and double periodicity shows up due to absolute value of the function .the precession frequency as a function of the rotation frequency obtained from the auto - correlation functions similar to that shown in fig.22b , is presented in fig.23 .we were not able to get such features from the piv measurements due to restrictions in accuracy .we demonstrated that our system provides rather reliable information about both the phase and the amplitude of the sound scattering signal by acquiring simultaneously 64 channels of the detector array .the spatial and temporal information on the phase of the scattered signal allows us to get values of circulation , vorticity , vortex location , and vortex core radius .we verified quantitatively the theoretical value for the slope of the proportionality coefficient , , for a single rigid body rotation vortex in a finite size cell .this method is rather comparable with the acoustic time - reversal mirror ( trm ) method .instead of vorticity ( or phase difference ) amplification due to number of crossing of the flow in the trm method , in our method the signal - to - noise ratio is amplified by averaging complex wave functions over many pulses .so the comparable resolution in the phase difference measurements can be achieved .+ at the same time in our approach we can also use the amplitude of the scattering signal to characterize the flow .the existing theories of the sound - flow interaction provide a relation between the scattering signal and the structure functions of velocity and vorticity fields of a flow .however , strong limitations imposed by the theories make an application of the sound scattering technique for flow characterization rather restricted .the theories consider the scattering signal produced by a sound emitter and obtained by a receiver of an infinite length in a far - field limit .+ in the paper presented we studied both theoretically and experimentally a possibility to obtain reliable information about velocity and vorticity fields of a single either stationary or time - dependent vortex flow by the acoustic scattering technique with a finite width sound beam of the order of the vortex size and a finite size receiver taking a scattered signal rather close to the scattering region . the experimental results and analysisare compared with the piv measurements taken simultaneously on the same vortex . from the theoretical sidethe main step in our analysis is the use of the huygens construction to obtain the acoustic scattering signal in a far field from the known sound distribution in any intermediate plane .this reconstruction allows us to overcome rather difficult experimental problem to obtain directly the scattering signal in a far - field .such procedure also helps us to realize that the only empirically relevant calculation of scattering is that of a confined flow , since an emitter and a detector define the flow perimeter . 
+another theoretical suggestion is the use of the beam function , which describes the finite width beam and allows us to use the same formalism as for an infinite width beam .+ the analysis of the experimental data based on the revised theory of sound scattering shows rather good agreement between the sound scattering and piv measurements , i.e. the sound scattering signal obtained from the velocity field measured by piv , coincides rather well with the scattering signal obtained in the ultrasound experiment .the same can be said about the fourier transform of the vorticity obtained from the scattering via the construction suggested and that reproduced from the piv measurements .we also show that the peak value of the far field scattering signal is proportional to the circulation or the angular speed of the core of a single vortex , and the angle distance between the peaks is inversely proportional to the sound frequency .further natural step is to use this approach to a turbulent flow and to study the vorticity structure function in a turbulent flow . +this work is partially supported by israel science foundation grant , by binational us - israel foundation grant , and by the minerva center for nonlinear physics of complex systems .
|
sound scattering by a finite width beam on a single rigid body rotation vortex flow is detected by a linear array of transducers ( both smaller than a flow cell ) , and analyzed using a revised scattering theory . both the phase and amplitude of the scattered signal are obtained on 64 elements of the detector array and used for the analysis of velocity and vorticity fields . due to averaging on many pulses the signal - to - noise ratio of the phases difference in the scattered sound signal can be amplified drastically , and the resolution of the method in the detection of circulation , vortex radius , vorticity , and vortex location becomes comparable with that obtained earlier by time - reversal mirror ( trm ) method ( p. roux , j. de rosny , m. tanter , and m. fink , _ phys . rev . lett . _ * 79 * , 3170 ( 1997 ) ) . the revised scattering theory includes two crucial steps , which allow overcoming limitations of the existing theories . first , the huygens construction of a far field scattering signal is carried out from a signal obtained at any intermediate plane . second , a beam function that describes a finite width beam is introduced , which allows using a theory developed for an infinite width beam for the relation between a scattering amplitude and the vorticity structure function . structure functions of the velocity and vorticity fields deduced from the sound scattering signal are compared with those obtained from simultaneous particle image velocimetry ( piv ) measurements . good quantitative agreement is found .
|
in a world where digital communications are becoming ever more prevalent , there are still services working in analog form . some examples of analog communications systems widely used today include voice communications over telephone lines , tv and radio broadcasting and radio communications ( see table [ tab : analog ] ) . although most of these services are also being gradually replaced by their digital counterparts , they will remain with us for a long time . usually the need to protect the confidentiality of the information transmitted by these means might arise . thus , there is a growing demand for technologies and methods to encrypt the information so that it is only available in intelligible form to the authorized users . in a recent paper , a secure communication system based on the chaotic baker map was presented , which is a scheme that encrypts wave signals . first , the analog signal limited in the bandwidth is sampled at a frequency to avoid aliasing . at the end of the sampling process , the signal is converted to a sequence of real values . next , the signal is quantized : the amplitude of the signal is divided into subintervals and every interval is assigned a real amplitude value , , its middle point for example . thus , a new sequence is generated by replacing each by the value associated to the subinterval it belongs to : , where each takes its value from the set . once the original wave signal is sampled and quantized , and restricted to the unit interval , a chaotic encryption signal , , is used to generate the ciphertext . this signal is obtained either by sampling a chaotic one or by a chaotic mapping . for the purposes of our analysis , the process used to generate the chaotic signal is irrelevant , since our results apply equally to any signal . finally , an ordered pair is constructed , localizing a point in the unit square . in order to encrypt , the baker map is applied times to the point to obtain : the encrypted signal is given by , where is considered as the secret key of the cryptosystem . as a result , a plaintext signal with values is encrypted into a signal which can take different values . for a more complete explanation of this cryptosystem , a thorough reading of the original proposal is highly recommended . in the following two sections , the security defects caused by the baker map realized in finite precision are discussed , and then the fact that the secret key can be directly deduced from the ciphertext is pointed out . after the cryptanalysis results , which constitute the main focus of our paper , some countermeasures are discussed on how to improve the security of the chaotic cryptosystem . the last section concludes the paper . the proposed cryptosystem uses the baker map as a mixing function . the baker map is an idealized one in the sense that it can only be implemented with finite precision in digital computers and , as a consequence , in this case it has a stable attractor at . this is easy to see when the value of is represented in binary form with significant bits . assuming ( ) , the baker map runs as follows : where denotes the left bit - shifting operation . apparently , the most significant bit is dropped during the current iteration . as a result , after iterations , . once , it is obvious that will exponentially converge to zero within a finite number of iterations , i.e.
, the digital baker map will eventually converge to the stable attractive point at , as shown in fig .[ fig : map ] .it is important to note that this result does not depend on the real number representation method , on the precision , or on the rounding - off algorithm used , since the quantization errors induced in eq .( [ equation : bakermap ] ) are always zeros in any case . considering that in today s digital computers real values are generally stored following the ieee floating - point standard , let us see what will happen when the chaotic iterations run with 64-bit double - precision floating - point numbers . following the ieee floating - point standard ,most 64-bit double - precision numbers are stored in a normalized form as follows : where represent the number bits , means a binary number and the first mantissa bit occurring before the radix dot is always assumed to be 1 ( except for a special value , zero ) and not explicitly stored in .when , assume it is represented in the following format : where .apparently , it is easily to deduce . considering , .when is generated uniformly with the standard c ` rand ( ) ` function in the space of all valid double - precision floating - point numbers , both and will approximately satisfy an exponentially decreasing distribution , and then it can be easily proved that the mathematical expectation of is about 53 .this means that the value of the secret key must not be greater than 53 . in other words, it is expected that each plaintext sample can not be correctly decrypted when is greater than 53 ( or even smaller but close to 53 ) , since the counter - iterating process is unable to get from due to the loss of precision during the forward iterations . figure [ fig : ber ] plots the recovery error obtained for different values of the secret key when a 100-sample ciphertext is decrypted .it can be appreciated how the plaintext is correctly recovered only when . for ,the system does not work at all . as a consequence ,only secret keys have to be tried to break a ciphertext encrypted with this cryptosystem .this takes a modern desktop computer less than a second for moderated lengths of the plaintext .this attack is called a brute force attack , which breaks a cipher by trying every possible key .the feasibility of a brute force attack depends on the size of the cipher s key space and on the amount of computational power available to the attacker . with todays computer technology , it is generally agreed in the cryptography community that a size of the key space is insecure . compare this figure with the key space of the cipher under study .if the value of could be arbitrarily enlarged , then the encryption process would slow down until it would be unusable in practice .thus , from any point of view , this is an impractical encryption method because it is either totally insecure or infinitely slow , without any reasonable tradeoff possible . 
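this behaviour is easy to reproduce . the sketch below ( our code , using the textbook form of the baker map b(x , y) = (2x mod 1 , (y + floor(2x)) / 2) ) iterates the map in ordinary double precision and counts how many iterations it takes for a randomly chosen orbit to land exactly on the origin .

```python
# finite-precision baker map: every double-precision orbit collapses onto the
# fixed point (0, 0) after a finite number of iterations.
import math
import random

def baker(x, y):
    two_x = 2.0 * x
    return two_x % 1.0, (y + math.floor(two_x)) / 2.0

random.seed(1)
counts = []
for _ in range(200):
    x, y = random.random(), random.random()
    n = 0
    while (x, y) != (0.0, 0.0):
        x, y = baker(x, y)
        n += 1
    counts.append(n)

print(min(counts), max(counts))
# the x coordinate loses one mantissa bit per iteration and is exactly zero
# after at most 53 steps; y is then halved each step until it underflows, so
# the whole orbit reaches (0, 0) after a finite number of iterations, in line
# with fig. [fig:map].
```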
in is said that the encryption is applied to the wave signal instead of the symbolic sequence .therefore , in table [ tab : analog ] a review of some widely used multimedia communications systems with their bandwidth and sampling frequencies is given .these are the kind of signals that might be encrypted by the system proposed in .consider for example tv broadcasting , which transmits 12,000,000 samples per second .it is impossible to iterate the baker map billions of times for 12,000,000 samples in one second with average computing power .finally , another physical limitation of the cryptosystem is that when is very large , each encrypted sample would require a vast amount of bits to be transmitted , which would require in turn a transmission channel with infinite capacity , meaning that the system can not work in practice .even assuming that the messages are encrypted with an imaginary computer with infinite precision and infinite speed , using an infinite - bandwidth channel , and an idealized version of the baker map , the cryptosystem would be broken as well because the secret key can still be derived from only one amplitude value of the ciphertext . to begin with ,let us assume that two quantization levels are used , that is , . during the encryption processa binary tree is generated in the following way : where following the decimal number denotes its binary format .the fact that the ciphertext uses discrete amplitudes constitutes its weakest point .it is possible to directly get the value of with only one known amplitude . in eq .( [ eq : tree ] ) , it is obvious that is always one value in the set as mentioned above , in the case that the real values are stored in the ieee - standard floating - point format , any amplitude value will be represented in the following form : where . from eq .( [ eq : set ] ) , one can see that .therefore , we can directly derive , by checking which bit is the least significant bit ( i.e. , the least significant 1-bit ) in all bits of . a more intuitive way to compute from a single amplitude value , , consists of two steps : i )represent this amplitude value in fixed - point binary form ; ii ) count the bits in the fixed - point format of to determine the value of an integer , which is the number of bits after the radix dot and before the least significant bit , i.e. , .obviously , .similarly , for other values of , one can easily deduce that ; and for , the value of can still be derived easily , but the calculation algorithm depends on how the binary tree shown in eq .( [ eq : tree ] ) is re - designed .although in it is hinted that the value of could be changed dynamically based on some information of the encrypted trajectory , this idea would not further increase the security of the cryptosystem as long as different amplitudes are still possible for each different value .this means that the ciphertext value , whatever , can only take values from the finite set defined in eq .( [ eq : set ] ) for the given .hence , for each the value of can be computed as described above and the security is again compromised .there are many ways to improve the security of the attacked cryptosystem .this section introduces three possible ones : changing the key , changing the 2-d chaotic map , and masking the ciphertext with a secret signal . note that only the basic ideas are given , and the concrete designs and detailed security analysis are omitted because this is not the main focus of our paper . 
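before moving to the countermeasures , the single - amplitude attack described above can be made concrete with a short sketch : the ciphertext amplitude is written as a dyadic rational and its least significant 1 - bit is located . the offset between that bit position and the secret iteration number depends on how the quantized plaintext levels are defined ( the exact relation is not reproduced in the excerpt ) , so it is kept as an explicit parameter .

```python
# recover the bit position of the least significant 1-bit of a ciphertext
# amplitude; under the scheme above this position reveals the secret key n
# up to a fixed offset set by the quantization convention (assumed here).
from fractions import Fraction

def lsb_position(c):
    """Return m such that c is an odd multiple of 2**-m (None if c == 0)."""
    f = Fraction(c)                  # exact, since floats are dyadic rationals
    if f == 0:
        return None
    den = f.denominator              # always a power of two for a float
    return den.bit_length() - 1

def recover_n(c, offset=1):
    """Guess the iteration count n from a single amplitude value."""
    m = lsb_position(c)
    return None if m is None else m - offset

# toy usage: an amplitude built as an odd multiple of 2**-(n + offset)
n_true, offset = 9, 1
c = 37 / 2 ** (n_true + offset)      # 37 is odd, so the lsb sits at n + offset
print(recover_n(c, offset), "==", n_true)
```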
as mentioned above , in addition to the above - discussed security defects of the secret key , using as the secret key has another obvious paradox : from the point of view of the security, should be as large as possible ; while from the point of view of the encryption speed , should be as small as possible .apparently , is not a good option as the secret key . instead of using , better candidates for the secret key must be chosen , such as the control parameter of the 2-d chaotic map and the generation parameter of the encryption signal .if the former is chosen , the baker map has to be modified to introduce some secret control parameters , as described in the following section . as shown above ,the multiplication factor 2 in the original baker map is the essential reason of its convergence to in the digital domain , so the baker map has to be modified to cancel this problem , or another 2-d chaotic map without this problem has to be used .a possible way is to generalize the original baker map to a discretized version over a lattice of the unit plane .for example , when , the lattice is composed of the following four points : , , and .a typical example of baker map discretized in this way can be found in , reproduced next for convenience .first , the standard baker map is generalized by dividing the unit square into vertical rectangles , , , , , such that .the lower right corner of the rectangle is located at .formally the generalized map is defined by : for .the next step consists of discretizing the generalized map .if one divides an square into vertical rectangles with pixels high and pixels wide , then the discretized baker map can be expressed as follows : where the pixel is with , .the sequence of integers , , is chosen such that each integer divides , and .the formula can be extended for rectangles ( see ) .with such a discretization , the negative convergence to zero can be removed . however ,another negative digital effect , the recurrence of the orbit , arises in this case , since any orbit will eventually become periodic within iterations .this means that the security defect caused by the small key space is not essentially improved .thus , the discretized baker map must be used when the key is changed to be its discretization parameters .another way is to use entirely different 2-d chaotic maps with one or more adjustable parameters , which can be used as the secret key instead of . an easy way to enhancethe security of the cryptosystem is to mask the ciphertext with a _ secret_ pseudo - random signal , which can efficiently eliminate the possibility to derive the estimated value of from one amplitude of the ciphertext .the secret masking sequence can be the chaotic encryption signal , and the parameters of controlling the generation process of should be added as part of the secret key . 
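a minimal sketch of the masking idea follows . the combination rule ( addition modulo 1 ) and the keystream generator ( a logistic map with a secret seed and parameter ) are our illustrative choices , not constructions taken from the text ; the point is only that the masked amplitudes no longer sit on the dyadic grid that leaks the key .

```python
# mask the ciphertext with a secret chaotic keystream added modulo 1
# (illustrative construction; generator and combination rule are assumptions).
def keystream(x0, mu, length, burn_in=100):
    x, out = x0, []
    for i in range(burn_in + length):
        x = mu * x * (1.0 - x)            # logistic map iteration
        if i >= burn_in:
            out.append(x)
    return out

def mask(values, x0=0.37, mu=3.99):
    ks = keystream(x0, mu, len(values))
    return [(v + k) % 1.0 for v, k in zip(values, ks)]

def unmask(values, x0=0.37, mu=3.99):
    ks = keystream(x0, mu, len(values))
    return [(v - k) % 1.0 for v, k in zip(values, ks)]

# usage: the original amplitudes are odd multiples of 2**-5 and would leak n;
# after masking they are no longer confined to that grid
original = [0.15625, 0.40625, 0.90625]
protected = mask(original)
print(unmask(protected))                  # recovers the originals up to rounding
```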
in this case , the ciphertext is changed from into . note that the masking can be considered as an added stream cipher on top of the original system . this is a common technique to achieve stronger ciphers . in summary , the new cryptosystem proposed in can be broken due to the limitation of computers to represent real numbers . even if an ideal computer with infinite precision were used to encrypt the messages , the cipher can still be broken due to the fact that the number and value of the possible amplitude values in the ciphertext depend directly on the secret key . furthermore , for the cryptosystem to work with large values of , an ideal computer with infinite computing speed , infinite storage capacity , and infinite transmission speed would be required . as a consequence , we consider that this cryptosystem should not be used in secure applications . some possible countermeasures are also discussed on how to improve the security of the cryptosystem under study . an important conclusion of our work is that an idealized map can not be used in a practical implementation of a chaos - based cipher .

[1] r. f. machado , m. s. baptista , and c. grebogi , `` cryptography with chaos at the physical level , '' chaos , solitons & fractals , 21(5):1265 - 1269 , 2004 .
[2] ieee computer society , `` standard for binary floating - point arithmetic , '' ansi / ieee std . 754 - 1985 , august 1985 .
[3] s. li , `` when chaos meets computers , '' arxiv : nlin.cd/0405038 , available online at http://arxiv.org/abs/nlin/0405038 , may 2004 .
[4] d. r. stinson , cryptography : theory and practice , crc press , 1995 .
[5] j. fridrich , `` symmetric ciphers based on two - dimensional chaotic maps , '' int . j. bifurcation and chaos , 8(6):1259 - 1284 , 1998 .
[6] s. li , x. mou , z. ji , j. zhang , and y. cai , `` performance analysis of jakimoski - kocarev attack on a class of chaotic cryptosystems , '' physics letters a , 307(1):22 - 28 , 2003 .

table [ tab : analog ] : multimedia communication systems and their bandwidth .

figure [ fig : map ] : orbits followed by and in a practical implementation of the baker map . as can be observed , constitutes a fixed point . the number of iterations required to converge to the origin depends on the precision used , but is always finite in a computer .
|
in recent years , a growing number of cryptosystems based on chaos have been proposed , many of them fundamentally flawed by a lack of robustness and security . this paper describes the security weaknesses of a recently proposed cryptographic algorithm with chaos at the physical level . it is shown that the security is trivially compromised for practical implementations of the cryptosystem with finite computing precision and for the use of the iteration number as the secret key . some possible countermeasures to enhance the security of the chaos - based cryptographic algorithm are also discussed . chaotic cryptosystems , baker map , cryptanalysis , finite precision computing 05.45.ac , 47.20.ky . and
|
biological studies have shown that it is possible to move molecules `` uphill '' against their electrochemical potential gradient by coupling their flow to the large downhill flow of another particle species. the coupling of fluxes occurs when the momentum of the ions is coupled as they flow through narrow multiply - occupied sub - nanometer - sized pores . in the biological cases , these pores can be ion channels or channel - like transporters. recently , however , synthetic nanometer - diameter pores have been engineered in a variety of materials including pet , silicon nitride , and polycarbonate .moreover , ion channels have been inserted in these pores to make them even more narrow , even to the point of the single - file motion of ions required for momentum coupling .the purpose of this paper is to simulate this kind of coupled transport ( co - transport ) directly with a molecular simulation method for the first time over a wide range of experimental and fabrication parameters .specifically , our goal is to understand the general mechanisms behind co - transport by studying the effect of various factors that influence coupling of movements of two ionic species in a narrow multiply - occupied pore . with a general understanding of what parameters enhance uphill ion transport it will be possible to fabricate synthetic nanopores and nanoporous materials for a wide range of applications like low - level contaminant removal , analyte concentration amplification , and energy storage . ions must be moved against their electrochemical gradient using energy . in biological cells ,this external energy can be a direct chemical energy ( atp hydrolysis in the case of atpases , for example ) used for conformational changes in the transporter proteins that transport the ions bound on one side to the other side during this structural change .pumps working this way maintain the concentration gradients of various ions ( na , h , k ) that can be harvested in co - transport as a secondary energy source .other mechanisms of co - transport have also been proposed .for example , eisenberg and coworkers suggested the possibility that ion fluxes can be coupled through the electric field . here, we investigate the narrow - pore mechanism where one uses an existing concentration gradient ( e.g. , of na ) to create a large flux to push another species ( that is usually present at much smaller concentration ) with the flow against its own concentration gradient .calculations based on various kinetic models have been performed on the basis of this model , but direct computer simulations using a molecular model that explore the co - transport mechanism in detail , to our best knowledge , are still absent .several other groups , including those of coalson , chung , and roux used various simulation methods ( including brownian dynamics , dynamic lattice monte carlo , and molecular dynamics ) to study diffusion of ions through narrow biological ion channels .although the simulation techniques used by these groups could handle microscopic coupling between ions ( interactions through intermolecular potentials or collisions ) , none of these groups studied the macroscopic phenomenon of coupled transport systematically , as is done in this paper . 
on the other hand , chou and lohse did consider co - transport in single - file pores . using a simple one - dimensional lattice exclusion model , they found single - file coupled transport through model zeolites and channels using kinetic models in a lattice simulation . however , given the simplicity of the model , the need for molecular dynamics or monte carlo ( mc ) simulations to reveal microscopic details was raised by the authors . in this work , we build a simple molecular model for a co - transporting pore , in which a narrow pore is lined with a number of structural charges to attract a sufficient number of cations into the pore . this makes the pore multiply occupied . this , and the narrowness of the pore , makes the movement of ions coupled because the ions can not pass each other . our pore model has rotational symmetry obtained by rotating the shape of [ fig1 ] around the centerline . the pore has rounded edges at the entrances of 0.5 nm curvature radius , forming vestibules to the central cylindrical region ( of length and radius ) of the pore . this central region is surrounded by rings of negative point charges ( red circles in [ fig1 ] ) . there are four charges in each ring , each at nm radial distance from the centerline . the magnitude of each partial charge is chosen so that their sum gives a predefined value , . the total thickness of the membrane is nm . we use na for the abundant species in this study . the ions are modeled as charged hard spheres , with diameters 0.19 and 0.362 nm for na and cl , respectively . the species to be co - transported ( denoted by x ) has a diameter of 0.3 nm . water is modeled as a dielectric continuum with a uniform dielectric constant throughout the system . the two regions outside the membrane on both sides represent the two baths , denoted left ( l ) and right ( r ) . in a biological situation , these compartments may be the extracellular and intracellular spaces . in a technological situation , they are rather called the feed and permeate sides . the electrochemical driving force for the passive diffusion of the ions is the difference of concentrations in the two control cells . since we enforce electroneutrality in these cells on average , the mean electrical potential across the membrane is 0 v ( see appendix [ sec : appendix ] ) . this is an approximation of a large , well - stirred bath . equivalently , it is as if an electrode were keeping the membrane potential at 0 v , except that our baths remove the excess charges instead of the electrodes . the transport of ions is simulated by the dynamic monte carlo ( dmc ) method , in which ions are randomly displaced within a maximum displacement . each move is accepted or rejected according to the usual mc acceptance criterion ( 50 - 150 billion such moves were attempted ) .
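a sketch of a single dmc trial move for this reduced model is given below ( our illustration , not the authors ' code ) . the bjerrum - length form of the coulomb energy assumes water - like screening ( the dielectric constant is elided above ) , the maximum displacement is a placeholder , and inside_pore_or_bath is a hypothetical geometry test for the pore / membrane shape of [ fig1 ] .

```python
# one dmc trial move: displace a randomly chosen hard-sphere ion within a cube
# of half-width r_max, reject overlaps and wall penetrations, and otherwise
# accept with the metropolis probability min(1, exp(-dE / kT)).
import math
import random

R_MAX = 0.1          # nm, maximum displacement per trial move (illustrative)
BJERRUM = 0.71       # nm, Bjerrum length of water at room temperature (approx.)

def coulomb_energy(ions):
    """Pairwise energy (in kT) of charged hard spheres in a uniform dielectric.
    ions: list of dicts with 'pos' (nm tuple), 'charge' (in e), 'diam' (nm)."""
    e = 0.0
    for i in range(len(ions)):
        for j in range(i + 1, len(ions)):
            r = math.dist(ions[i]["pos"], ions[j]["pos"])
            if r < 0.5 * (ions[i]["diam"] + ions[j]["diam"]):
                return math.inf                      # hard-sphere overlap
            e += BJERRUM * ions[i]["charge"] * ions[j]["charge"] / r
    return e

def dmc_step(ions, inside_pore_or_bath):
    """Attempt one displacement; return True if it was accepted."""
    k = random.randrange(len(ions))
    old_pos, e_old = ions[k]["pos"], coulomb_energy(ions)
    ions[k]["pos"] = tuple(c + random.uniform(-R_MAX, R_MAX) for c in old_pos)
    if not inside_pore_or_bath(ions[k]["pos"]):      # ion placed inside the wall
        ions[k]["pos"] = old_pos
        return False
    d_e = coulomb_energy(ions) - e_old               # inf for new overlaps
    if d_e <= 0.0 or random.random() < math.exp(-d_e):
        return True
    ions[k]["pos"] = old_pos
    return False
```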
in the crowded pore , the dominant mechanism of coupling is momentum exchange between clashing particles governed by short - range repulsion , which is handled in dmc simulations by rejecting configurations where two hard spheres overlap . we have also performed molecular dynamics simulations in some cases to verify that dmc correctly captures this phenomenon ( data not shown ) . it has been shown that dmc is an appropriate method to compute relative fluxes in mixtures , as demonstrated in previous dmc simulations for ion channels . therefore , we will plot the flux ratios throughout this paper , which makes sense intuitively because characterizes the efficiency of co - transport . more details can be found in previous papers and in the appendix [ sec : appendix ] . we first performed a detailed analysis regarding the length , the radius , and the charge of the pore to assess what values are necessary to produce coupled transport of x and na . we started by fixing the length of the pore at nm and changing the charge of the pore at two different pore radii , and 0.48 nm . there is a large concentration gradient for na from l to r ( from 150 mm to 10 mm ) . at the same time , there are smaller concentrations of x in the system ( 1 mm and 3.16 mm on the l and r sides , respectively ) . this means that the x concentration gradient is in the opposite direction of that for na .

table [ tab1 ] : the rates ( in percent ) of various occupancy combinations of x and na in the cylindrical pore for different channel charges , , for filter radius nm . other parameters are the same as in [ fig2 ] . ion combination ( 1st column ) means that there are x and na in the cylindrical pore . only those rows are shown where the rate is larger than 0.1% .

the net flux is a sum of fluxes flowing from r to l ( r ) and from l to r ( l ) . ( these fluxes correspond to unidirectional fluxes that have been reasonably measured in transport physiology experiments using radioactive tracers . ) the r component is identified with the usual diffusive transport driven by the concentration gradient of x . the l component is identified with co - transport driven by coupling to the large flux of na ions from l to r . the net effect , therefore , is a result of two competing effects , which is important for our understanding of the phenomenon . ( it should be noted that these identifications are artificial . even when only diffusion is present without coupling , there are fluxes in both directions . these concepts , however , promote understanding and serve the discussion by showing where co - transport dominates . ) in [ fig2 ] ( and in later figures ) , therefore , we plot the l and r components in addition to their sum . co - transport occurs when the two ionic species move in the same direction . in this case , the l component is larger ( in absolute value ) than the r component , co - transport dominates over diffusion , and the sign of the net flux ratio is positive . no net co - transport was observed for the wider pore ( nm , [ fig2]a ) . the l component is small for every . in this case , there is enough space for the ions to travel past each other in opposite directions . for the narrow pore ( nm , [ fig2]b ) , on the other hand , we observed co - transport when the charge of the pore is large enough . when there is not enough charge around the pore ( ) , there is not enough attraction to draw cations into the pore .
in this case , the pore is not multiply occupied and the movement of na and x can not be coupled through momentum exchange . the l component is zero for . increasing the pore charge , coupling appears , the l component becomes non - zero , and the r component vanishes . the maximum in the l component at appears because the na flux is small in this case , so we normalize with a smaller number . increasing further , the na flux increases , so the flux ratio decreases . the fact that co - transport requires a multiply - occupied pore is supported by an analysis in which we computed the probabilities of finding various combinations of ions in the pore ( see [ tab1 ] ) . for pore charge , for example , the pore is empty 97.66% of the time , it contains one na 2.33% of the time , and it contains one x 0.01% of the time . obviously , no coupling is possible in this case . for charge , on the other hand , the pore contains only na ions most of the time ( 97.68% ) , but in the remaining time it contains one x ion next to one or more na ions . in this case , coupling is possible and , together with the confining small radius of the pore , it results in an x flux in the direction opposite to its concentration gradient . another interesting aspect is that the flux ratio saturates ; it does not increase further when the pore charge is increased ( in absolute value ) to extreme values . once coupling is established , the flux ratio can not be increased further by increasing . in the next step , we investigated how this coupling is established . as seen above , the narrowness of the pore is necessary . it is believed , because of an experiment by hodgkin and keynes , that coupled transport requires transport in a long , narrow pore where single filing of ions is forced . but how long does the pore have to be to produce coupling ? [ fig3 ] shows the x/na flux ratio ( and its components ) as a function of the length of the narrow cylindrical part of the pore for and nm . it is seen that co - transport occurs even if the narrow cylindrical part is absent ( nm ) . the observed flux ratio is smaller , but the phenomenon is definitely present .
for small , there is more `` leakage '' of x ions in the `` wrong '' ( r ) direction , but the l component is sufficient to more than balance it . examination of the concentration profiles ( [ fig4 ] ) shows that the pore region is crowded even if the cylindrical part is absent ( nm ) ; this is ensured by the pore charge ( ) . the important thing is that the ions must crowd in a bottleneck of the pore so that the ion present in abundance ( na ) can obstruct the diffusion of the other ion ( x ) normally driven by its own concentration gradient ; that is , na ions stand as an obstacle to the movement of x from r to l no matter whether the pore is long or not . we find , therefore , that single - filing in a long narrow pore is not necessary to establish coupling between the ions taking part in co - transport . this result should be taken into consideration when protein structures are analyzed from the point of view of co - transport . one does not need a long classical channel ; a short but narrow opening suffices to produce coupled transport . next , we investigated how the co - transport depends on the x and na concentration ratios . we fixed the na concentration on the r side at 10 mm and changed it on the l side in the range 75 - 300 mm . we also fixed the x concentration on the l side at 1 mm and changed it on the r side in the range 0.316 - 31.6 mm , thereby varying the concentration gradient x ions must fight against . [ fig5]a shows the net x/na flux ratio as a function of the x concentration ratio , [x]^r/[x]^l , for different values of the na concentration ratio , [na]^l/[na]^r . for a given [na]^l/[na]^r , we observe co - transport if [x]^r/[x]^l is not too large . obviously , if it is too large , the r diffusion dominates over the l co - transport . the net flux ratio is largest as [x]^r/[x]^l approaches 1 from above and decreases as the x concentration ratio grows ( far left points of [ fig5]a ) , so there is a limiting value of [x]^r/[x]^l above which net co - transport is lost . this limiting value decreases with increasing na concentration ratio . to understand why , consider the concentration profiles as [na]^l increases ( [ fig6 ] ) . these show that the concentration of x in the pore decreases substantially with higher [na]^l . apparently , there is a competition between x and na ions for the pore ; increasing the na concentration in the l bath , from where it arrives , favors adsorbing more na and less x in the pore . also , we varied the l - side x concentration while keeping the na concentrations unchanged . our results ( [ fig7 ] ) show that the flux ratio increases approximately linearly with [x]^l , because more x ions are adsorbed in the pore as the l - side x concentration increases . combined , these results of varying [na]^l and [x]^l independently indicate that their ratio determines how much x is in the pore and therefore how much x is conducted .
new experimental evidence from fluorescencespectroscopy and electrophysiology , however , shows that current flowing through co - transporters can be orders of magnitudes larger than that possible on the basis of the alternating access model .these currents are in the range of currents carried by ion channels .moreover , it was reported that the number of the dominant ions far exceeds ( 10 to a 100 times ) the number of co - transported substrate molecules , in contrast to the fixed stoichiometric mechanism .our simulations are consistent with these properties and show this same range of downhill to uphill ion flux ratio. our results also have implications for engineered materials .synthetic nanopores and nanoporous materials have the advantage over biological that their properties ( e.g. , length , radius , and charge ) can be more easily manipulated .therefore , our systematic study gives insights into how to optimize these parameters .our simulations also show that only a very small segment of the pore needs to be single - filing .this is important for membranes that are usually several microns thick ; having sub - nanometer - wide pores spanning the entire membrane would not only be difficult to make , but would also dramatically increase the resistance to ion flow .in addition , we showed that large uphill ion concentration gradients ( more than 10-fold ) can still sustain co - transport .that can be important for applications like amplifying the concentration of a low - concentration analyte molecule .one can use a co - transporting membrane to shuttle analyte molecules to higher and higher concentrations to make it easier to analyze them or detect their presence .similarly , one can accumulate an ion concentration gradient for energy storage .or , one can potentially remove contaminating ions ( e.g. , radioactive ions ) with this mechanism . nanopores in membranes are becoming small enough that these applications will be possible soon .currently , nanopores can be made to have diameters of nm ( reviewed by howorka and siwy ) , which is almost small enough to force single - filing of ions .moreover , biological ion channels are now being incorporated into these synthetic nanopores , including gramicidin a which is a single - file channel. therefore , we suggest that co - transport of ions may be a new possible application of nanopores .in conclusion , we have performed dmc simulations for a reduced model of coupled transport in a narrow multiply occupied pore .the simplified model made it possible to obtain simulation results with good statistics ; the error bars in the figures are about in the size of the symbols . 
at the same time, our model includes the relevant physics to ensure that our results and interpretations are valid .we found that x ions can travel uphill using momentum coupling with na ions that are present at high concentration ( compared to x ) , driven by their own concentration gradient with normal diffusion .co - transport occurs because thermal motion produces momentum - coupling between x and na ions on the microscopic level .macroscopic parameters influence this coupling in various ways .geometrical parameters ( and ) and pore charge ( ) determine how strong this coupling is in the crowded bottleneck of the pore .na and x concentrations both influence the number of x ions in the pore , and , therefore , the x flux .we found that coupling can be established without single filing in a long narrow pore ; a short , but narrow opening , where ions are crowded by strong electrostatic attraction exerted by pore charges , is enough .voltage was not applied in this study , so the passive diffusion of ions is driven by only the concentration differences between the control cells .this is the first logical step in studying the phenomenon of co - transport .a voltage driving the na ions in the direction of their concentration gradient would facilitate co - transport because it would increase the flux of na .due to momentum transfer , co - transport would be stronger as the drift velocity of na ions increases .in addition , any voltage favoring na conduction would similarly aid x conduction .a voltage of opposite sign , on the other hand , would work against co - transport by decreasing na current and moving x away from the membrane .simulations of these effects would be interesting ( using one of the electrostatic algorithms of refs . ) and can be the topic of future studies .however , the main conclusions of this study would not change . moreover ,one of the main goals of our work is new low - energy applications for nanopores and , therefore , we show that co - transport can occur in systems that are completely passive ( i.e. , where the membrane potential is not fixed and only concentration differences drive the process ) .we want to thank lou defelice for the conversations that inspired this work .we are grateful for the valuable discussions with bob eisenberg and for drawing our attention to the importance of unidirectional fluxes .we acknowledge the financial support of the hungarian state and the european union under tmop-4.2.2/b-10/1 - 2010 - 0025 and tmop-4.1.1/c-12/1/konv-2012 - 0017 . the support of the hungarian national research fund ( otka k75132 ) is acknowledged .we have chosen the dmc method as introduced by rutkai et al . to simulate the movement of x and na ions through our model pore .the concentrations of these ions are maintained on the two sides of the membrane with grand canonical monte carlo ( gcmc ) simulations in control cells .thus , we simulate steady - state flux .dmc provides an alternative with many advantages over molecular dynamics and brownian dynamics , for example , shorter computation time , easier handling of hard sphere forces . in dmc, the flux of a given ionic species , , can be computed by counting the particles crossing a predefined reference plane ( from l to r and from r to l ) in a given mc time interval ( time is expressed as the number of trial mc steps in dmc ) . 
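a sketch of this bookkeeping follows ( our illustration ) : signed crossings of a reference plane are counted for each ion and the x / na flux ratio is formed after the square - root - of - mass scaling described in the next paragraph ; the masses use the factor of 7.7 quoted there , and the reference - plane position is a placeholder .

```python
# count signed crossings of a reference plane along the pore axis and form the
# relative flux ratio j_x / j_na (a sketch of the dmc flux bookkeeping).
import math

def net_crossings(x_track, x_ref=0.0):
    """x_track: x coordinate of one ion after each mc step; returns the net
    number of left-to-right crossings of the plane x = x_ref."""
    net = 0
    for x_prev, x_next in zip(x_track[:-1], x_track[1:]):
        if x_prev < x_ref <= x_next:
            net += 1
        elif x_next < x_ref <= x_prev:
            net -= 1
    return net

def flux_ratio(tracks_x, tracks_na, mass_na=22.99, mass_x=7.7 * 22.99):
    """Relative flux j_x / j_na from per-ion x(t) records of the two species."""
    j_x = sum(net_crossings(t) for t in tracks_x) / math.sqrt(mass_x)
    j_na = sum(net_crossings(t) for t in tracks_na) / math.sqrt(mass_na)
    return j_x / j_na if j_na else float("nan")
```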
in the procedure developed by rutkai and kristóf to simulate mixtures with dmc, the computed number of crossings is divided by the square root of the mass of the given component. this is the only place where particle masses enter the calculation. the choice of the mass of the x ions influences the flux ratios, but it does not influence the qualitative trends in the figures. the x ions are 7.7 times heavier than the na ions in our calculations. the basic dmc step is a random particle displacement, in which one particle is chosen at random (with equal probability) from the particles available in the system and moved from its old position, $\mathbf{r}^{\mathrm{old}}$, to a new position, $\mathbf{r}^{\mathrm{new}} = \mathbf{r}^{\mathrm{old}} + r_{\max}\,\boldsymbol{\xi}$, where $r_{\max}$ is the maximum displacement and $\boldsymbol{\xi}$ is a vector whose three coordinates are uniformly generated random numbers drawn from a symmetric interval around zero. the move is accepted with the metropolis probability $\min\left[1,\exp(-\Delta U/kT)\right]$, where $\Delta U$ is the energy change associated with the movement. $\Delta U$ may increase, for example, if the electrostatic energy becomes unfavorable or if two ions come too close and overlap, resulting in rejection of that new configuration. the dmc method is based on the assumption that the sequence of configurations generated by these steps can be considered a dynamic evolution of the system in time. dmc does not generate deterministic trajectories; it reproduces average dynamic properties such as the mean-square displacement. compared to molecular dynamics, dmc does not guarantee an absolute measure of physical time; it only ensures proportionality, which is why it directly provides only relative fluxes. the choice of the maximum displacement, $r_{\max}$, is a central problem in the dmc method. for systems where every species is modeled explicitly, the value of $r_{\max}$ can be determined from the average free path that a molecule can travel toward its neighbors before collision, as described in detail in the paper of rutkai and kristóf. in this case, the key property determining the value of $r_{\max}$ is the density of the fluid. the algorithm of rutkai and kristóf was justified by comparison with results of molecular dynamics simulations. when we simulate particles moving in an implicit solvent, on the other hand, $r_{\max}$ can be chosen to mimic the stochastic random walk of particles colliding with the solvent molecules. in this case, the dmc method is more reminiscent of brownian dynamics simulations. consequently, by tuning the parameter $r_{\max}$, dmc simulations can mimic both molecular dynamics and brownian dynamics depending on the presence of implicit degrees of freedom. although we observed some (slight) sensitivity of our quantitative results to the value of $r_{\max}$, we did not change its value in our calculations because we are interested in the general qualitative behavior. changing $r_{\max}$ did not influence our qualitative conclusions. to maintain the concentrations on the two sides of the membrane, we apply gcmc in the large, bulk-like containers (called ``control cells'') on the two sides of the membrane. this is the dual control volume (dcv) method for maintaining a steady-state flux. note that the dcv method was applied in the case of ionic systems by im et al.
for the first time. the control cells are charge neutral on average, although the charge can fluctuate in them as individual ions are inserted or deleted in the gcmc steps. the electrochemical driving force for the passive diffusion of ions is the gradient of the electrochemical potential, $\tilde{\mu}_{i} = \mu_{i}^{0} + kT\ln c_{i} + \mu_{i}^{\mathrm{EX}} + z_{i}e\Phi$, where $\mu_{i}^{0}$ is a reference chemical potential, $k$ is boltzmann's constant, $T$ is the temperature, $\Phi$ is the electrical potential, $c_{i}$ is the concentration, $\mu_{i}^{\mathrm{EX}}$ is the excess chemical potential, and $z_{i}e$ is the charge of ionic species $i$. the electrochemical driving force basically has two components: the gradient of the chemical potential ($\mu_{i} = \mu_{i}^{0} + kT\ln c_{i} + \mu_{i}^{\mathrm{EX}}$) and the gradient of the electrical potential ($z_{i}e\Phi$). in this study, we imposed an applied voltage (i.e., an electrical potential difference between the baths) of 0 v by enforcing charge neutrality in each of the control cells. this is shown in figure [fig8] with the electrostatic potential profiles for several cases (computed by inserting test charges in the system uniformly). the same effect would have been achieved with electrodes at the ends of the baths (but without the charge neutrality condition), something done in many labs with biological ion channels and synthetic nanopores. without the explicit electrodes, each control cell approximates a large, well-stirred bath. if only a small number of pores are included in the membrane, then this is an excellent approximation because only a small amount of excess charge is moved in that case. moreover, the ion current through these pores will always be small because they are very narrow and thus have high resistance. in applications where many pores are in a membrane, this approximation may break down. however, the man-made membranes we have in mind are generally micrometers thick and thus have very low capacitance; this would not be true of 3-nm-thick biological membranes. moreover, flowing fresh electrolyte solution on the side of ion accumulation is an option for some applications. thus, while in principle enough ions can accumulate to create a membrane voltage that stops the co-transport, our large, well-stirred bath approximation is reasonable for many engineering applications. we used this well-stirred bath approximation because simulating very long pores and large baths is challenging with any simulation technique. however, using short pores and small baths introduces artifacts of charge accumulation and, with it, a transmembrane potential. while this can happen in biological situations, where cell membranes have a large capacitance and the cytoplasm is not at all well stirred, the applications we have in mind with man-made membranes do not have these properties; the baths are generally large so that the ions can quickly diffuse away.
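For completeness, here is a minimal sketch of the elementary DMC displacement move described earlier in this section. The energy callback, the symmetric displacement interval, and the particle representation are our illustrative assumptions and stand in for whatever the authors' actual code does.

import math
import random

def dmc_trial_move(positions, r_max, delta_energy, kT=1.0):
    """One DMC trial step: pick a particle at random, displace it uniformly
    within a cube of half-width r_max, and accept with the Metropolis
    probability min(1, exp(-dU/kT)).  delta_energy(i, new_pos) is assumed to
    return the energy change of moving particle i; hard-sphere overlaps can
    simply return float('inf'), which guarantees rejection."""
    i = random.randrange(len(positions))
    old = positions[i]
    new = tuple(x + r_max * random.uniform(-1.0, 1.0) for x in old)
    d_u = delta_energy(i, new)
    if d_u <= 0.0 or random.random() < math.exp(-d_u / kT):
        positions[i] = new          # move accepted
        return True
    return False                    # move rejected; configuration unchanged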
by using the control cells the way we did, we show the general principle of co - transport ( which will occur in longer pores as well ) , but avoid the artifact of charge accumulation produced by using thin membranes and small baths .periodic boundary conditions were used for the control volumes in directions perpendicular to the direction of the transport ( and ) , while the cell was confined between hard in the dimension .gcmc simulations use the chemical potentials as independent variables and apply particle insertion / deletion steps thus simulating a system with fluctuating particle numbers .this fluctuation , however , occurs around well - defined average values , so the composition ( the concentrations of the various species ) in the control cells is well - defined .the chemical potentials of the various species have been determined with the adaptive gcmc method .the dmc technique coupled to control cells ( called dmc+dcv ) was used to study transport through ion channels and carbon nanopores .the assumption that makes gcmc simulation in the control cells possible is that they are separately in equilibrium .it is possible , however , to apply gcmc simulations in the transport region too using a local electrochemical potential as introduced in the local equilibrium monte carlo ( lemc ) method .we also demonstrated that the dmc technique can be coupled to the lemc method .hodgkin , a. l. ; keynes , r. d. _ j. physiol . _ * 1955 * , _ 128 _ , 6188 defelice , l. j. ; goswami , t. _ ann .physiol . _* 2007 * , _ 69 _ , 87112 hille , b. _ ion channels of excitable membranes _ , 3rd ed . ;sinauer associates : sunderland , 2001 howorka , s. ; siwy , z. _ chem .rev . _ * 2009 * , _ 38 _ , 23602384 hall , a. r. ; scott , a. ; rotem , d. ; mehta , k. k. ; bayley , h. ; dekker , c. _ nature nanotechnology _ * 2010 * , _ 5 _ , 874877 kocer , a. ; tauk , l. ; djardin , p. _ biosens . bioelectron ._ * 2012 * , _ 38 _ , 110 balme , s. ; janot , j .- m . ;berardo , l. ; henn , f. ; bonhenry , d. ; kraszewski , s. ; picaud , f. ; ramseyer , c. _ nano letters _ * 2011 * , _ 11 _ , 712716 rogers , b. ; pennathur , s. ; adams , j. _ nanotechnology : understanding small systems _ , 2nd ed .; crc press : boca raton , fl , 2011 baker , r. _ membrane technology and applications _ ; wiley , 2012 chen , d. p. ; eisenberg , r. s. _ biophys . j. _ * 1993 * , _ 65 _ , 727746 eisenberg , r. s. in _ new developments and theoretical studis of proteins _ ; elber , r. , ed . ; world scientific : philadelphia , 1996 ; chapter atomic biology , electrostatics , and ionic channels , pp 269357 defelice , l. j. ; adams , s. v. ; ypey , d. l. _ biosystems _ * 2001 * , _ 62 _ , 5766 graf , p. ; nitzan , a. ; kurnikova , m. g. ; coalson , r. d. _ j. phys . chem .b _ * 2000 * , _ 104 _ , 1232412338 graf , p. ; kurnikova , m. g. ; coalson , r. d. ; nitzan , a. _ j. phys . chem . b _ * 2004 * , _ 108 _ , 20062015 cheng , h. y. ; coalson , r. d. _ j. phys . chem .b _ * 2005 * , _ 109 _ , 488498 hoyles , m. ; kuyucak , s. ; chung , s. h. _ computer phys .comm . _ * 1998 * , _ 115 _ , 4568 chung , s. h. ; hoyles , m. ; allen , t. ; kuyucak , s. _ biophys . j. _ * 1998 * , _ 75 _ , 793809 chung , s. h. ; allen , t. w. ; hoyles , m. ; kuyucak , s. _ biophys . j. _ * 1999 * , _ 77 _ , 25172533 corry , b. ; kuyucak , s. ; chung , s. h. _ biophys . j. _ * 2000 * , _ 78 _ , 23642381 corry , b. ; allen , t. w. ; kuyucak , s. ; chung , s. h. _ biophys .j. _ * 2001 * , _ 80 _ , 195214 corry , b. ; hoyles , m. ; allen , t. w. ; walker , m. 
; kuyucak , s. ; chung , s. h. _ biophys .j. _ * 2002 * , _ 82 _ , 19751984 chung , s. h. ; kuyucak , s. _ biochim .acta - biomembr ._ * 2002 * , _ 1565 _ , 267286 corry , b. ; kuyucak , s. ; chung , s. h. _ biophys . j. _ * 2003 * , _ 84 _ , 35943606 corry , b. ; vora , t. ; chung , s .- h . _. acta _ * 2005 * , _ 1711 _ , 7286 i m , w. ; seefeld , s. ; roux , b. _ biophys .j _ * 2000 * , _ 79 _ , 788801 noskov , s. y. ; i m , w. ; roux , b. _ biophys. j. _ * 2004 * , _ 87 _ , 22992309 allen , t. w. ; andersen , o. s. ; roux , b. _ j. gen .physiol . _ * 2004 * , _ 124 _ , 679690 luo , y. ; egwolf , b. ; walters , d. e. ; roux , b. _ j. phys . chem .b _ * 2010 * , _ 114 _ , 952958 egwolf , b. ; luo , y. ; walters , d. e. ; roux , b. _ j. phys . chem .b _ * 2010 * , _ 114 _ , 29012909 lee , k .- i . ; jo , s. ; rui , h. ; egwolf , b. ; roux , b. ; pastor , r. w. ; i m , w. _ j. comp .chem . _ * 2012 * , _ 33 _ , 331339 chou , t. _ j. chem .phys . _ * 1999 * , _ 110 _ , 606615 chou , t. ; lohse , d. _ phys . rev .lett . _ * 1999 * , _ 82 _ , 35523555 rutkai , g. ; kristf , t. _ j. chem . phys . _* 2010 * , _ 132 _ , 124101 rutkai , g. ; boda , d. ; kristf , t. _ j. phys . chem .* 2010 * , _ 1 _ , 21792184 csnyi , e. ; boda , d. ; gillespie , d. ; kristf , t. _ biochim .et biophys .acta - biomembranes _ * 2012 * , _ 1818 _ , 592600 eisenberg , r. s. ; klosek , m. m. ; schuss , z. _ j. chem . phys . _* 1995 * , _ 102 _ , 17671780 eisenberg , b. _ chem . phys .lett . _ * 2011 * , _ 511 _ , 16 jacquez , j. a. _ compartmental analysis in biology and medicine _ , 2nd ed . ; the university of michigan press : ann arbor , 1988 rakowski , r. _ biophys . j. _ * 1989 * , _ 55 _ , 663671 lester , h. a. ; cao , y. ; mager , s. _ neuron _ * 1996 * , _ 17 _ , 807810 sonders , m. s. ; amara , s. g. _ curr .neurobiol . _ * 1996 * , _ 6 _ , 294302 alberts , b. ; johnson , a. ; lewis , j. ; raff , m. ; roberts , k. ; walter , p. _ molecular biology of the cell _ , 5th ed .; garland science : new york , 2008 defelice , l. j. _ nature _ * 2004 * , _ 432 _ , 279 lee , k .- i . ;rui , h. ; pastor , r. w. ; i m , w. _ biophys .j. _ * 2011 * , _ 100 _ , 611 619 kwon , t. ; harris , a. l. ; rossi , a. ; bargiello , t. a. _ j. gen .physiol . _* 2011 * , _ 138 _ , 475493 de biase , p. m. ; solano , c. j. f. ; markosyan , s. ; czapla , l. ; noskov , s. y. _ j. chem .comp . _ * 2012 * , _ 8 _ , 25402551 boda , d. ; gillespie , d. _ j. chem .comput . _ * 2012 * , _ 8 _ , 824829 huitema , h. e. a. ; van der eerden , j. p. _ j. chem . phys . _ * 1999 * , _ 110 _ , 32673274 martin , m. g. ; thompson , a. p. ; nenoff , t. _ j. chem . phys . _ * 2001 * , _ 114 _ , 71747181 berthier , l. _ phys .e _ * 2007 * , _ 76 _ , 011507 binder , k. _ monte carlo methods in statistical physics _ ; springer : heidelberg , 1979 pohl , p. ; heffelfinger , g. ; smith , d. _ mol .phys . _ * 1996 * , _ 89 _ , 17251731 enciso , e. ; almarza , n. g. ; murad , s. ; gonzalez , m. a. _ mol . phys . _ * 2002 * , _ 100 _ , 23372349 heffelfinger , g. s. ; van swol , f. _ j. chem .phys . _ * 1994 * , _ 100 _ , 7548 lsal , m. ; brennan , j. k. ; smith , w. r. ; siperstein , f. r. _ j. chem .phys . _ * 2004 * , _ 121 _ , 4901 malasics , a. ; boda , d. _ j. chem . phys . _ * 2010 * , _ 132 _ , 244103 seo , y. g. ; kum , g. h. ; seaton , n. a. _ j. membr .sci . _ * 2002 * , _ 195 _ , 6573 hat , z. ; boda , d. ; kristf , t. _ j. chem .phys . _ * 2012 * , _ 137 _ , 054109
|
dynamic monte carlo simulations are used to study coupled transport (co-transport) through sub-nanometer-diameter pores. in this classic hodgkin-keynes mechanism, an ion species uses the large flux of an abundant ion species to move against its own concentration gradient. the efficiency of co-transport is examined for various pore parameters so that synthetic nanopores can be engineered to maximize this effect. in general, the pore must be narrow enough that ions cannot pass each other, and the charge of the pore must be large enough to attract many ions so that they exchange momentum. co-transport efficiency increases as the pore length increases, but even very short pores exhibit co-transport, in contradiction to the usual perception that long pores are necessary. the parameter ranges where co-transport occurs are consistent with current and near-future synthetic nanopore geometries, suggesting that co-transport of ions may be a new application of nanopores. * keywords : * cotransport, diffusion, simulation, dynamic monte carlo, modeling
|
gravitation rules .it is what forms dark matter halos and giant molecular clouds .it is also what compresses these clouds of gas to such an extent that stars form out of them . andthese newly - formed stars are often bound to each other by the means of this force , forming binary or multiple stellar systems .although the details of binary star formation are still not fully understood ( e.g. ) , it is now acknowledged that stellar multiplicity is more the rule than the exception .observations suggest that over of the stars in our galaxy are part of double , triple , quadruple , or even sextuple systems .because stars grow in size considerably as they evolve , it is estimated that those binaries with a period of less than days will inevitably interact at some point of their life .when such close interactions occur , material is transferred from one star to the other and the course of evolution of the stars is irreversibly altered .likewise , such close interactions can also be triggered by stellar encounters in dense stellar environments ( e.g. captures and exchanges ; ) .the first realization of the importance of binary interactions may have been by , who suggested an interesting solution to the paradox of the algol system , in which the more evolved star is also the least massive . suggested that the close proximity of the two components must have led to significant mass transfer from the initially more massive star to the least massive one until the mass ratio was reversed .this discovery opened the way to many more types of stars , such as cataclysmic variables and x - ray binaries , helium white dwarfs , and blue stragglers , whose very existence could now be understood in terms of close binary evolution . even for stars that are not transferring mass , a close companion can have all sorts of effects on their observable properties , such as increased chromospheric and magnetic activity ( e.g. rs canum venaticorum stars ; ) . since the work of , much effort has been put into better understanding binary evolution and its by - products .however , the usual roche lobe formalism used to study binary stars applies only to the ideal case of circular and synchronized orbits .the addition of simple physics such as radiation pressure , for example , is enough to significantly modify the equipotentials of binary systems and estimates of fundamental parameters such as the roche lobe radius consequently become uncertain . since the evolution of close binaries depends , among other things , on the rate at which mass transfer proceeds _ and _ , analytical prescriptions for the mass transfer rate also become uncertain .the determination and characterization of the mass transfer rate in binary stars therefore represents a key issue that needs to be addressed , especially given that surveys of binary stars have shown that a non - negligible fraction ( in the sample of ; see also ) of semi - detached or contact systems have eccentric orbits .recent analytical work by , who investigated the secular evolution of eccentric binaries under episodic mass transfer , have shown that , indeed , eccentric binaries can evolve quite differently from circular ones . however , because mass transfer can occur on short dynamical timescales , it is necessary to use other techniques to characterize it fully .in particular , hydrodynamics has shown to be useful for studying transient phenomena and episodes of stable mass transfer .simple ballistic models ( e.g. 
) and two-dimensional hydrodynamical simulations of semi-detached binaries (e.g. ) have generally been used in the past to study the general characteristics of the flow between two stars and the properties of accretion disks. later, three-dimensional models with higher resolution allowed for more realistic studies of coalescing binaries (e.g. ) and accretion disks (e.g. ) in semi-detached binaries, all mainly focusing on the secular and hydrodynamical stability of binaries and on the structure of the mass transfer flow. more recently, and used grid-based hydrodynamics to simulate the coalescence of polytropes representative of low-mass main-sequence stars ($M \lesssim 0.5\,M_{\odot}$). they also investigated the onset of dynamical and secular instabilities in close binaries and were able to obtain good agreement with theoretical expectations. characterization of the accretion process and of the behaviour of the accreted material on the secondary star requires that the secondary be modeled realistically. in most simulations to date, the secondary has often been modeled using point masses or simple boundary conditions to approximate the surface accretion. moreover, as pointed out by , the use of polytropes instead of realistic models may lead to significantly different internal structures for collision products, an argument that arguably also applies to mass-transferring binaries. therefore, more work remains to be done in order to better understand how mass transfer operates in binary systems. here, we present our alternate hydrodynamical approach to the modeling of mass transfer in binaries. we first discuss the usual assumptions made when studying binary stars in [sect:theory]. our computational method, along with our innovative treatment of boundary conditions, is introduced and tested in [sect:method]. we discuss initial conditions for binary stars in sph and the applicability of our technique to binary stars in [sect:binaries]. our first mass transfer simulation is analyzed in [sect:transfer]. the work presented here is aimed at better understanding the process of mass transfer and will be applied to specific binary systems in another paper ( ; hereafter, paper ii). our understanding of interacting binary stars is prompted by observations of systems whose existence must be a result of mass transfer. these observations and simple physical considerations have driven the theoretical framework we use to model these systems. under the assumptions of point masses ($M_1$ and $M_2$) and circular and synchronous orbits, the gravitational potential in a rotating reference frame gives rise to equipotentials with particular shapes, as described in . the equipotential that surrounds both stars and intersects itself at a point between the two stars is the roche lobe; it forms a surface within which a star's potential dominates over its companion's. approximates the equivalent roche lobe radius ($R_{\mathrm{L}}$) to an accuracy of about 1% over the whole range of mass ratios by $$\frac{R_{\mathrm{L}}}{a} = \frac{0.49\,q^{2/3}}{0.6\,q^{2/3} + \ln\!\left(1 + q^{1/3}\right)},$$ where $a$ is the semi-major axis and $q$ is the mass ratio of the star to its companion. analytical studies of binary stars usually rely on this definition of the roche lobe radius but, as discussed in [sect:intro], such estimates of $R_{\mathrm{L}}$ may not always be reliable. once a star overfills its roche lobe, mass is transferred to the companion's potential well. the rate at which material is transferred is, in general, a strong function of the degree of overflow of the donor, $\Delta R = R - R_{\mathrm{L}}$, where $R$ is the radius of the star.
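To make this estimate easy to reuse, the sketch below evaluates Eggleton's fitting formula for the Roche lobe radius. It is an illustrative helper in our own notation, not part of the authors' code, and the example values are arbitrary.

import math

def roche_lobe_radius(m_star, m_companion, a):
    """Eggleton-type approximation to the volume-equivalent Roche lobe
    radius of the star of mass m_star, accurate to about 1% for all mass
    ratios.  a is the orbital semi-major axis (same units as the result)."""
    q = m_star / m_companion
    q13 = q ** (1.0 / 3.0)
    q23 = q13 * q13
    return a * 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q13))

# example: the 0.8 M_sun star of a 0.8 + 0.6 M_sun binary with a = 5 R_sun
print(roche_lobe_radius(0.8, 0.6, 5.0))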
in general ,mass is assumed to be transferred through the l point and , using simple physical assumptions such as an isothermal and inviscid flow and an ideal gas pressure law , shows that the mass transfer rate can be expressed as here , is the photospheric radius , h is the pressure scale height of the donor star , and is the mass transfer rate of a star exactly filling its roche lobe .this mass transfer rate depends rather strongly on the degree of overflow and goes to zero exponentially if the star is within its roche lobe . provides estimates for of about 10 m yr and for low - mass main - sequence stars .this model of mass transfer has been successfully applied to cataclysmic variables where the photosphere of the donor is located about one to a few inside the roche lobe . for cases where the mass transfer rates are much larger , ( see also and ) derives , in a similar way , the mass transfer rate for donors that can be approximated by polytropes of index . in such cases , the mass transfer rate is where is a canonical mass transfer rate which depends on m , m , and .the dependence of the mass transfer rate on is again found , although somewhat different than that of .this rate is also zero when and is applicable when the degree of overflow is much larger than the pressure scale height and mass transfer occurs on a dynamical timescale . in any case ,once a star fills its roche lobe , the mass transfer is driven by the response of the star s and roche lobe radii upon mass loss .stars with deep convective envelopes tend to expand upon mass transfer whereas radiative stars tend to shrink upon mass transfer .the ensuing response of the roche lobe radius , which may expand or shrink , will therefore dictate the behaviour of the mass transfer rate .if the mass of the stars changes , then both the period and separation ought to readjust . this behaviour can be shown by taking the time derivative of the total angular momentum of a system of two point mass orbiting each other with an eccentricity : equation [ eq : jdot3 ] shows that as the masses of the stars change and as mass and angular momentum are being lost from the system , both the orbital separation and the eccentricity change .the exact behaviour of these quantities depends of course on the degree of conservation of both total mass and angular momentum . for the usual assumptions of circular orbits and conservative mass transfer, we can further impose that , and , therefore reducing equation [ eq : jdot3 ] to assuming to be the donor and more massive ( i.e. and ) we find that the separation decreases until the mass ratio is reversed , at which point the separation starts increasing again . for main - sequence binaries , where the most massive star is expected to overfill its roche lobe first, the separation is therefore expected to decrease upon mass transfer .the theoretical framework derived in this section has generally been applied to the study of close binaries . however , strictly speaking , it is not valid in most instances .close binaries are not all circular and synchronized , and the roche lobe formalism therefore does not apply .this , in turn , makes estimates of mass transfer rates rather uncertain .moreover , conservative mass transfer is more an ideal study case than a realistic one and the secular evolution of binary system becomes a complex problem . 
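For reference, the orbital angular momentum budget invoked above can be written out explicitly in standard notation (ours, since the paper's equations are not reproduced here); the reduction below is consistent with the behaviour described in the text.

$$J_{\mathrm{orb}} = M_1 M_2 \sqrt{\frac{G\,a\,(1-e^2)}{M_1+M_2}}
\quad\Longrightarrow\quad
\frac{\dot{J}_{\mathrm{orb}}}{J_{\mathrm{orb}}}
= \frac{\dot{M}_1}{M_1} + \frac{\dot{M}_2}{M_2}
- \frac{\dot{M}_1+\dot{M}_2}{2\,(M_1+M_2)}
+ \frac{\dot{a}}{2a}
- \frac{e\,\dot{e}}{1-e^2}.$$

For conservative transfer on a circular orbit ($\dot{J}_{\mathrm{orb}} = 0$, $\dot{M}_2 = -\dot{M}_1$, $e = \dot{e} = 0$) this reduces to
$$\frac{\dot{a}}{a} = -2\,\dot{M}_1\,\frac{M_2 - M_1}{M_1 M_2},$$
so the separation shrinks while the donor ($\dot{M}_1 < 0$) is the more massive star and grows once the mass ratio is reversed.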
to circumvent these difficulties , approximations to the mass transfer and accretion rates as well as to the degree of mass loss have to be made .however , to better constrain these approximation or avoid over - simplifications , one can use hydrodynamics , which is well suited for modeling and characterizing episodes of mass transfer .we therefore discuss our hydrodynamics technique and show how it can be used to better constrain mass transfer rates in binary systems .smoothed particle hydrodynamics ( sph ) was introduced by and in the context of stellar astrophysics .its relatively simple construction and versatility have allowed for the modeling of many different physical problems such as star formation ( ) , accretion disks ( e.g. ) , stellar collisions ( ) , galaxy formation and cosmological simulations ( e.g. ) .our code derives from that of , which is based on the earlier version of and . here, we only emphasize on the main constituents of our code .the reader is referred to these early works for complementary details .sph relies on the basic assumption that the value of any smooth function at any point in space can be obtained by averaging over the known values of the function around this point .this averaging is done using a so - called ` smoothing kernel ' to determine the contribution from neighbouring particles .the smoothing kernel can take many forms ( see e.g. ) ; here we use the compact and spherically symmetric kernel first suggested by . to prevent the interpenetration of particles in shocks andallow for the dissipation of kinetic energy into heat , we include an artificial viscosity term in the momentum and energy equations .the artificial viscosity can also take various forms ( e.g. ) ; we use the form given by with and .we allow for the smoothing length to change both in time and space , and we use individual timesteps for the evolution of all the required quantities . in this work , we assume an equation of state for ideal gases of the form , where is the ratio of the heat capacities . finally , we use the parallelized version of our code ( openmp ) , which scales linearly up to cpus for simulations of particles . as discussed by ,the inner parts of stars in close binaries generally remain unaffected by the presence of a companion , and only the structure of the outermost layers is modified by close tidal interactions .this result prompted us to model only the outer parts of the stars with appropriate boundary conditions .such an approach effectively reduces the total number of sph particles in our simulations without decreasing the spatial resolution .conversely , for the same amount of cpu time , modeling only the outermost layers of stars allows for the use of more particles , therefore enhancing the spatial and mass resolutions .moreover , cpu time is spent solely on particles actually taking part in the mass transfer or being affected by the companion s tidal field .sph codes calculate hydrodynamical quantities by averaging over a sufficiently large number of neighbours . 
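Although the paper does not spell out the kernel, the compact, spherically symmetric kernel referred to above is commonly the cubic spline of Monaghan and Lattanzio; the sketch below gives that standard form with the usual 3D normalization, under the assumption that this is the kernel intended.

import math

def w_cubic_spline(r, h):
    """Standard cubic spline SPH kernel in three dimensions, with compact
    support of two smoothing lengths.  r is the interparticle separation and
    h the smoothing length; the normalization convention is the usual 1/pi."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0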
for particles located close to an edge or a boundary, two things happen .first , since there are no particles on one side of the boundary , a pressure gradient exists and the particles tend to be pushed further out of the domain of interest .second , if the number of neighbours for each particle is kept fixed by requirements , as it is in our code , then the smoothing length is changed until enough neighbours are enclosed by the particle s smoothed volume .this lack of neighbours therefore effectively decreases the spatial resolution at the boundary and underestimates the particle s density . in such circumstances ,the implementation of boundary conditions is required .boundary conditions have often been implemented using the so - called _ ghost _ particles , first introduced by ( see also ) .ghost particles , like sph particles , contribute to the density of sph particles and provide a pressure gradient which prevents the latter from approaching or penetrating the boundary .ghost particles can be created dynamically every time an sph particle gets within two smoothing lengths of the boundary .when this occurs , the position of each ghost is mirrored across the boundary from that of its parent sph particle ( along with its mass and density ) .therefore , the need for ghosts ( and a boundary ) occurs only when a particle comes within reach of the boundary . here, however , we use a slightly different approach , based on the work of and .the approach of these authors differs from the mirrored - ghost technique in that the ghosts are created once , at the beginning of the simulation , and their relative position remains fixed in time during the simulation .we further improve upon this technique in order to model the outer parts of self - gravitating objects .starting from our relaxed configurations , we identify any particles as ghosts if they are located within three smoothing lengths of the boundary , which , at this point , is arbitrarily determined .particles located above the boundary are tagged as sph particles , whereas the remaining ones are erased and replaced by a central point mass whose total mass accounts for both the particles removed and those tagged as ghosts .point masses interact with each other _ and _ sph particles via the gravitational force only .point masses are also used when modeling massive or giant branch stars whose steep density profile in the core is hard to resolve with sph particles .figure [ fig : ghosts ] illustrates how our boundary conditions are treated in our code . 
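The setup step just described can be summarized in a short sketch. The particle container, attribute names, and the spherical boundary are illustrative assumptions; the logic follows the recipe in the text.

def set_up_boundary(particles, r_boundary, h):
    """Partition a relaxed star into live SPH particles, ghosts, and removed
    particles: particles outside the boundary stay SPH, those within three
    smoothing lengths below it become fixed ghosts, and the rest are replaced
    by a central point mass that also carries the ghosts' mass."""
    sph, ghosts, removed = [], [], []
    for p in particles:
        if p.r >= r_boundary:
            sph.append(p)
        elif p.r >= r_boundary - 3.0 * h:
            ghosts.append(p)
        else:
            removed.append(p)
    point_mass = sum(p.mass for p in removed) + sum(p.mass for p in ghosts)
    return sph, ghosts, point_mass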
here, we use three smoothing lengths of ghosts as a first safety check in order to prevent sph particles from penetrating the boundary .we also enforce that no particle goes further than one smoothing length inside the boundary by repositioning any such particle above the boundary .we further ensure conservation of momentum by imparting an equal and opposite acceleration to the central point mass ( since the point mass and the ghosts move together ) , which we write as where is the hydrodynamical acceleration imparted to particle from ghost .this term is added to the usual gravitational acceleration of the central point mass .ghosts are moved with the central point mass and are given a fixed angular velocity .note that at this point , has already been calculated by our code so that this calculation requires no extra cpu time .ghost particles are also included in the viscosity calculations to realistically mimic the interface .we now show that our new boundary condition treatment is well suited for the modeling of stars in hydrostatic equilibrium .we first relax a 0.8-m star with rotation ( ; solar units ) .the relaxation of a star requires a fine balance between the hydrodynamical and gravitational forces , and therefore allows to assess the accuracy of our code .we model our stars using theoretical density profiles as given by the yale rotational evolution code ( yrec ; ) .sph particles are first spaced equally on a hexagonal close - packed lattice extending out to the radius of the star .the theoretical density profile is then matched by iteratively assigning a mass to each particle .typically , particles at the centre of the stars are more massive than those located in the outer regions , by a factor depending on the steepness of the density profile .as discussed by , this initial hexagonal configuration is stable against perturbations and also tends to arise naturally during the relaxation of particles .stars are relaxed in our code for a few dynamical times to allow for the configuration to redistribute some of its thermal energy and settle down .once the star has reached equilibrium , we remove the particles in the central regions and implement our boundary conditions .we give the star , the ghosts and the central point mass translational and angular velocities .the final relaxed configuration of our star , with the boundary set to of the star s radius , is shown in figure [ fig : star+ghosts1 ] along with the density and pressure profiles in figure [ fig : star+ghosts2 ] . by using our initial configuration for the setup of ghosts ,we ensure that the ghosts position , internal energy , and mass are scaled to the right values and that the ensuing pressure gradient maintains the global hydrostatic equilibrium .the total energy ( which includes the gravitational , kinetic , and thermal energies ) for the model of figure [ fig : star+ghosts1 ] evolved under translation and rotation is conserved to better than over the course of one full rotation .these results suggest that our treatment of boundaries is adequate for isolated stars in translation and rotation . of the equatorial planeare plotted .the profiles of the ghost particles exactly match that of the theoretical profiles , showing that the overall profiles are very close the the theoretical ones . 
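One plausible reading of the momentum-conservation step described earlier in this section is sketched below: the net hydrodynamical force that the ghosts exert on the SPH particles is applied with the opposite sign to the central point mass, which carries the ghosts with it. The per-particle accumulation of ghost contributions and the data layout are our assumptions.

def point_mass_back_reaction(sph_particles, ghost_accels, m_point):
    """Return the extra acceleration added to the central point mass so that
    momentum is conserved.  ghost_accels[i] is assumed to hold the total
    hydrodynamical acceleration that the ghosts impart to SPH particle i
    (already available from the normal SPH force loop)."""
    fx = fy = fz = 0.0
    for p, (ax, ay, az) in zip(sph_particles, ghost_accels):
        fx += p.mass * ax
        fy += p.mass * ay
        fz += p.mass * az
    # equal and opposite to the total force the ghosts exert on the gas
    return (-fx / m_point, -fy / m_point, -fz / m_point)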
]we now discuss the modeling of binary stars with our new boundary conditions .in particular , we develop a self - consistent technique for relaxing binary stars and show that our code , along with our new boundary conditions , can accurately follow and maintain two stars on relatively tight orbit for many tens of dynamical times .we emphasize that the location of the boundary is , at this point , arbitrary .we will discuss this issue in more details in [sect : eccentric ] . as discussed in [sect : stars ] , stars must be relaxed prior to being used in simulations .this is also true for binary systems since tidal effects are not taken into account when calculating the theoretical density profiles of the individual stars .therefore , care must be taken when preparing binary systems for sph simulations . to account for the different hydrostatic equilibrium configuration of binary stars , we use the fact that the stars should be at rest in a reference frame that is centered at the centre of mass of the binary and that rotates with the same angular velocity as the stars .a centrifugal term is added to the acceleration of the sph particles , which we find by requiring that it cancels the net gravitational acceleration of the stars ( e.g. ) . by assuming an initial orbital separation, we find , at each timestep , the necessary angular velocity that cancels the stars net gravitational acceleration .note that we do not account for the coriolis force since the system is assumed to be at rest in the rotating frame .the net acceleration of the centre of mass of star is calculated in the following way : where , and are respectively the total mass and the hydrodynamical and gravitational accelerations of star , and the summation is done over particles that are bound to star .we therefore get the following condition for the angular velocity : where is the distance of particle to the axis of rotation of the binary .we find the angular velocity for both stars and take the average value and add it to the acceleration of each sph particle . to ensure that the orbital separation is kept constant during the relaxation, we also reset the stars to their initial positions after each timestep by a simple translation .an example of such a detached relaxed binary is shown in the left panel of figure [ fig : detached ] .after the initial relaxation , the stars are put in an inertial reference frame by using the value of the angular velocity obtained from the binary relaxation procedure . to assess the physicality and numerical integration capabilities of our hydrodynamical code ,we have evolved different wide binaries on circular orbits for a small number of orbits .this is important for simulations of mass transfer as any changes in orbital separation must be driven by the mass transfer itself and not the initial conditions and/or numerical errors inherent to our method .figure [ fig : panels0606 ] shows the normalized orbital separation and different energies as a function of time for a m binary with each star fully modeled ( i.e. 
without a boundary ) .each star contains sph particles and the initial separation is r .our relaxation procedure yields an angular velocity of which , because of the large initial separation , is identical to the keplerian value .the different forms of energy are all well conserved , as well as the orbital separation , which remains constant to better than for over six orbits .the slight ( anti - symmetric ) variations observed in the orbital separation and the kinetic energy are an indication that the system is on a slightly eccentric orbit . at this level , however , we estimate that our code can properly ( and physically ) evolve two stars orbiting around each other .circular binary modeled with ,000 particles initially relaxed using the method of [sect : binaryrelax ] with fixed orbital separation . ]figure [ fig : panels0808 ] shows the comparison between the normalized energies and orbital separation of a m binary modeled with a different number of particles ( the high resolution simulation is also shown in figure [ fig : detached ] ) .the solid and dotted lines correspond , respectively , to the systems modeled without and with boundary conditions . in the simulations shown here ,the boundary is set at of the star s radius .figure [ fig : panels0808 ] shows that introducing the boundary conditions smoothes the oscillations seen for the fully modeled binaries .this is explained by the fact that the calculation of the gravitational force on the central point mass , and hence most of the star s mass , is done using a direct summation ( instead of a binary tree ) , therefore improving the accuracy and reducing the oscillations in the orbital separation .circular binary modeled with ( dotted line ) and without ( solid line ) boundary conditions .the upper two panels are for a low - resolution simulation containing ( full star ) particles and the lower two panels are for a high - resolution simulation containing particles ( full star ) . ]figure [ fig : panels0808 ] also shows that increasing the spatial resolution increases the quality of the evolution of the orbits .for comparison , the orbital separation of our high - resolution simulation varies by less than , compared to for the low - resolution ( figure [ fig : panels0808 ] ) .moreover , in all cases , the total energy is conserved to over the whole duration of the simulations and the gravitational and thermal energies ( not shown ) remain constant to better than .similarly , figure [ fig : panels08048 ] shows the evolution of a m binary system over four orbits and a relatively low number of particles .note that only the primary is modeled with the use of our boundary conditions for reasons that are discussed in [sect : summary ] .again , the orbital separation remains fairly close to its initial value , to within at the end of four orbits .the total energy is conserved to better than and only the kinetic energy oscillates significantly .we note that , when comparing our results of circular orbits with those of other authors , our binary relaxation procedure yields quantitatively comparable orbital behaviours .the results of show oscillations of in the orbital separation over three orbits whereas the simulations from of two unequal - mass binaries showed a constant orbital separation for many tens of orbits with an accuracy of . 
and , using a specifically designed grid - based hydrodynamics code , both maintain equal- and unequal - mass binaries on circular orbits with an accuracy of over five orbits .circular binary modeled with ( dotted line ) and without ( solid line ) boundary conditions . ]finally , we note that although our boundary condition treatment does require more calculations ( e.g. the contribution from all the particles to the point mass acceleration ) , the simulations of binary systems can be sped up by up when modeled with boundary conditions .this depends of course on the location of the boundary , which at this point is arbitrary but chosen wherever it makes the most sense for the problem at hand .note , however , that the boundary should be placed at least a few smoothing lengths from the surface of the star .the case of an eccentric orbit is interesting since tidal forces are time - dependent and can vary greatly depending on the orbital phase .here , we show that despite the large tidal forces at periastron , our boundary conditions are well suited for the modeling of such systems .the system we model consists of two main - sequence stars with masses of and m with an eccentricity of and evolved for over four orbits ( code units ) .the total number of particles nears and the location of the boundary is at of the stars radius , which , as shown in figure [ fig : encmass ] , is deep inside the star so that the effects of tidal force are negligible . indeed ,figure [ fig : encmass ] shows the radius of the primary enclosing different fractions of the total bound mass ( in sph particles ) as a function of time for our binary system . for example , the radii containing to of the total bound mass are shown to not change significantly during the whole duration of this simulation .in fact , only the outer radius of the star , containing over of the bound mass , oscillates during each orbit .therefore , in this case , the choice of the location of the boundary ( dotted line ) is well justified and figure [ fig : encmass ] shows that the use of our method for eccentric binaries is adequate . replacing the core of a star with a central point mass anda boundary remains a valid approximation as long as the boundary is deep enough inside the envelope of the star .m binary with .the dotted line represents the location of the boundary .we now present the results from the simulation presented in [sect : eccentric ] . in particular , we are interested in the mass transfer rates observed along the eccentric orbit .we also assess the physicality and limits of our approach later in [sect : summary ] .figure [ fig : densr014 ] shows the logarithm of the density for particles close to the orbital plane .the use of our boundary conditions can be seen as the centre of the stars is devoid of particles , except for the central point masses ( not shown ) .short episodes of mass transfer are observed close to periastron while mass transfer stops as the stars get further apart .the material being transferred hits the secondary and disturbs its outer envelope such that the latter loses some material . in the end , the secondary is surrounded by a relatively thick envelope .m binary with .the orbital period is about time units and the central point masses are not shown . 
] to estimate the mass transfer rates , we have to determine the component to which every sph particle is bound .we use a total - energy ( per unit mass ) criterion , as presented by , and determine whether a particle is bound to the primary , the secondary , or the binary as a whole . in particular , given that most of the mass of the two stars is contained in the point masses , we use the latter as the main components to which particles are bound . for a particle to be bound to any of the components , we require its total energy relative to the component under consideration to be negative . the total energy of any particle with respect to both the point masses and the binary s centre of mass is defined as where and are the relative velocity and separation , respectively , between particle and component .moreover , we require the separation to be less than the current separation of the two centres of mass of the stars ( in this case , the point masses ) . for particles that satisfy both of these criteria for both stellar components ,we assign them to the stellar component for which the total energy is most negative .if only the energy condition is satisfied for the stellar components , or the energy with respect to the binary is negative , the particle is assigned to the binary component . finally , if the total relative energy is positive , the particle is unbound and is assigned to the ejecta .using the total mass bound to the stellar components as a function of time , we can determine the mass transfer rates for the system modeled .we use a simple approach to determine the instantaneous mass transfer rates based on the difference of the total mass bound of each component between two successive timesteps , i.e. where refers to the _ bound _ mass and the indices refer to component and timesteps and .figure [ fig : mdots ] shows the mass transfer rate of the primary as a function of time . distinct episodes of mass transfer are observed to occur once per orbit and peak around a few m yr .the number of particles transferred during each episodes is which , given the masses of the sph particles , may limit our ability to resolve lower mass transfer rates .indeed , sph requires a minimum number of neighbouring particles to calculate the density and in cases where only a handful of particles are transferred , the sph treatment may not be adequate . given the masses of the particles , low numbers of particles therefore set lower limits to the mass transfer rates that our simulations can resolve . using more particles of smaller masses would definitely allow for the resolution of lower mass transfer rates , as discussed in [sect : summary ] .apart from the main episodes of mass transfer , we also observe secondary peaks occurring before the main episodes of mass transfer .our simulation shows that some of the material lost both by the primary and the secondary falls back onto both components and this is what is observed here .m binary with .the number of particles transferred during each episode is , which represents an approximative lower limit to our ability to adequately resolve mass transfer due to the statistical nature of sph . ]analytical approaches used to study the evolution of binaries usually rely on prescriptions or approximations when dealing with mass transfer rates . to relax some of these approximations , hydrodynamicalmodeling can be used , although it remains a hard task for many physical and numerical reasons . 
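Before moving on, the bound-mass bookkeeping described above can be condensed into a sketch. The energy criterion written here keeps only kinetic and gravitational terms, the decision tree is slightly simplified, and the attribute names are ours; treat it as our reading of the procedure rather than a transcription of the authors' code.

import math

G = 1.0   # gravitational constant in code units (assumed)

def classify_particle(p, stars, binary_com, a_current):
    """Assign an SPH particle to 'primary', 'secondary', 'binary', or
    'ejecta' with a total-energy criterion relative to each point mass and
    to the binary centre of mass; p and the components are assumed to carry
    .pos, .vel, and .mass attributes."""
    def rel_energy(comp):
        dv2 = sum((pv - cv) ** 2 for pv, cv in zip(p.vel, comp.vel))
        dr = math.sqrt(sum((px - cx) ** 2 for px, cx in zip(p.pos, comp.pos)))
        return 0.5 * dv2 - G * comp.mass / dr, dr

    bound = []
    for label, comp in stars.items():              # {'primary': ..., 'secondary': ...}
        e, dr = rel_energy(comp)
        if e < 0.0 and dr < a_current:             # bound and closer than the separation
            bound.append((e, label))
    if bound:
        return min(bound)[1]                       # most negative energy wins
    e_bin, _ = rel_energy(binary_com)
    return 'binary' if e_bin < 0.0 else 'ejecta'

def mass_transfer_rate(m_bound_new, m_bound_old, dt):
    """Instantaneous rate from the change in bound mass between two outputs."""
    return (m_bound_new - m_bound_old) / dt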
to circumvent some of these difficulties, we introduced an alternate technique using boundary conditions and ghost particles to model only the outermost parts of both the donor and the accretor stars .the location of the boundary is arbitrarily set . since only the surface materialis involved in mass transfer , our approach allows for better spatial and mass resolutions in the stream of matter .moreover , our method allows for the modeling of both the accretor and the donor simultaneously while using less cpu time and maintaining realistic density profiles , taken from stellar models .our code was shown to work particularly well for stars of equal mass and stars that are centrally condensed . indeed , replacing the dense core of a massive star by a point mass is a good approximation .this is generally true for stars with masses m , although the evolutionary stage of the star may modify its density profile .low - mass stars have shown to be more difficult to properly model with our approach ( see figure [ fig : panels08048 ] ) and we suggest that limiting our new boundary conditions to centrally condensed stars will in general yield better results .we also discussed the setup of proper initial conditions for modeling binary stars , which we have tested on both equal- and unequal - mass binaries .we demonstrated that our relaxation procedure is consistently implemented in our code and that it allows for the evolution over many orbits of equal - mass detached circular binaries and maintain their orbital separation to within . in light of the results from our first simulation of mass transfer, we establish that typical mass transfer rates that can be modeled with our new boundary conditions ( i.e. m yr ) are consistent with the estimates of who investigated the formation of blue stragglers through episodes of mass transfer onto main - sequence stars .we therefore suggest that our method consisting of modeling both stars simultaneously with appropriate boundary conditions can be applied to the problem of mass transfer in main - sequence binaries and help clarify the origin of blue stragglers . using particles of lowermass allows for the modeling of lower mass transfer rates , although this requires the use of more particles and cpu time .pushing the boundary further out or using a point mass to model the secondary ( as in ) can also allow for the use of more particles and a better mass transfer rate resolution .using a point mass as the secondary would however counter the benefit of our method to be able to model two interacting stars simultaneously .the physicality of our simulations depends of course on the physical ingredients we put in our code . as such, we do not include the effects of radiation pressure and energy transport mechanisms by radiation or convection .these effects may have significance especially when studying the long - term evolution of mass - transferring binaries , where radiative cooling in the outer layers of the stars and envelope might be more important .moreover , like any numerical technique , our method has some of its own limitations , and we discuss them now . 
by construction ,our boundary is `` semi - impermeable '' , in that it does not allow particles to go through it .we set three smoothing lengths of ghosts and enforce that particles be artificially repositioned if they happen to cross the boundary .however , it is possible that these conditions fail when dealing with large mass transfer rates and if some particles find themselves inside the boundary , around the central point mass , our code has to be stopped .this particle penetration limits our ability to adequately model episodes of extremely ( and unrealistically ) large mass transfer rates ( m yr ; see paper ii ) .however , we do not think that our boundary should be so particle - tight since we do expect some mixing in the envelope of the secondaries .although moving the boundary to a smaller radius could fix the issue of particle penetration , it would counter the use and benefits of our approach . instead , we suggest using sink particles at the centre of the stars in order to account for deep mixing .sink particles are like point masses but , in addition , their mass and momentum are allowed to increase as sph particles get accreted .also , as discussed in [sect : boundary ] , the relative position of our ghost particles is fixed in time , thus imitating a solid boundary .the boundary is not allowed to change its shape and/or provide a time - variable pressure gradient on the sph particles . as a first approximation ,this is a valid treatment ( e.g. ) .however , when the gravitational potential changes significantly along the orbit , like on eccentric orbits , tidal forces may become non - negligible . butmodeling the effect of tidal forces on the boundary is costly , in terms of cpu time , as it involves calculating the gravitational force on the ghosts .this calculation is not done in our code as of now .we showed , however , that placing the boundary deep inside the star decreases the effect of tidal forces on the boundary and validates the use of our method .finally , we note that the angular velocity of the ghost particles is maintained fixed during our simulations .this is a valid assumption as synchronization occurs over timescales that are much longer ( years ) than the duration of our simulations .our relaxation procedure for binary stars has proven to be especially efficient for equal - mass binaries . however , for unequal - mass binaries , we do not quite achieve the same level of accuracy for the evolution of orbital separation .we think the reason for this difference comes from the fact that the equal - mass systems we model are perfectly symmetric , i.e. the two stars are exact replicas of each other , whereas in the case of unequal - mass binaries , symmetry is broken . for equal - mass binaries ,the gravitational acceleration calculations are exactly equal and opposite and the two stars are evolved identically . but given the adaptive nature of our code , stars ( and particles ) of different masses may be evolved on different timesteps and care should be taken if the two stars ( and particles ) are to be evolved consistently .we performed test runs during which we forced our code to use a smaller common timestep , but this approach does not improve the results . 
on the other hand , using a more direct summation approach for the calculation of the gravitational force ( by using a smaller binary tree opening angle ) is found to improve the results .indeed , results from test runs using such an approach show much improvements in evolving stars on circular orbits .however , doing so also makes our simulations significantly longer to run .therefore , we think the observed behaviour of our unequal - mass binaries is the result of our calculation of gravitational acceleration through a binary tree , although more work remains to be done .the method presented in this paper has been shown to be well suited for modeling the hydrodynamics of interacting binary stars . here , we used it to model roche lobe overflow , during which only the outermost layers of the stars are actively involved .we propose that the approach presented in this work can also be applied to many different situations , such as wind accretion and interstellar medium accretion , and that it can help better understand how stars react to mass loss and mass accretion in general .in particular , we have emphasized that the roche lobe formalism is not applicable in the case of eccentric and asynchronous binaries .we think our alternate method can be used to better understand and characterize the onset of mass transfer in such systems .this is the subject of a subsequent paper in which we model eccentric systems of different masses , semi - major axes , and eccentricities .we wish to thank the anonymous referee as well as doug welch and james wadsley for useful comments and discussions about this project .this work was supported by the natural sciences and engineering research council of canada ( nserc ) and the ontario graduate scholarship ( ogs ) programs , and made possible in part by the facilities of the shared hierarchical academic research computing network ( sharcnet : www.sharcnet.ca ) .abt , h.a & levy , s.g .1976 , , 30 , 273 bate , m.r . ,bonnell , i.a . , & price , n.m .1995 , , 277 , 362 bate , m.r .1995 , ph.d . thesis , university of cambridge , uk benz , w. 1990 , in buchler j.r . , ed .the numerical modeling of nonlinear stellar pulsations : problems and prospects .kluwer , dordrecht , p.26 9 benz , w. , bowers , r.l . ,cameron , a.g.w . , & press , w. 1990 , , 348 , 647 bisikalo , d.v .1998 , , 300 , 39 blondin , j.m . ,richards , m.t ., & malinowski , m.l .1995 , , 445 , 939 chen , x. , & han , z. 2008 , , 387 , 1416 church , r.p . ,dischler , j. , davies , m.b . ,tout , c.a ., adams , t. , & beer , m.e .2009 , , 395 , 1127 crawford , j.a .1955 , , 121 , 71 cummins , s.j , & rudman , m. 1999 , , 152 , 584 dsouza , m.c.r . ,motl , p.m. , tohline , j.e . , & frank , j. 2006 , , 643 , 381 dan , m. rosswog , s. , & brggen , m. 2008 , arxiv:0811.1517 dermine , t. , jorissen , a. , siess , l. , & frankowski , a. 2009 , , 507 , 891 deupree , r.g . , &karakas , a.i .2005 , , 633 , 418 duquennoy , a. & mayor , m. 1991 , , 248 , 485 eggleton , p.p .1983 , , 268 , 368 eggleton , p.p .2006 , evolutionary processes in binary and multiple stars .cambridge univ . press , cambridge fischer , d.a .& marcy , g.w .1992 , , 396 , 178 flannery , b.p .1975 , , 201 , 661 gaburov , e. , lombardi , j.c .jr , & portegies zwart , s. 2010 , , 402 , 105 gingold , r.a . , & monaghan , j.j . 1977 , , 181 , 375 gokhale , v. , peng , x.m . , &frank , j. 2007 , , 655 , 1010 governato , f. et al .2009 , , 398 , 312 guenther , d.b . ,demarque , p. , kim , y .- c . 
, & pinsonneault , m.h .1992 , , 387 , 372 halbwachs , j.l . , mayor , m. , udry , s. , & arenou , f. 2003 , , 397 , 159 iben , i. jr 1991 , , 76 , 55 lajoie , c .- p . , & sills , a. 2010 , , submitted lombardi , j.c .rasio , f.a . , & shapiro , s.l .1995 , , 445 , l117 lombardi , j.c .jr . , sills , a. , rasio , f.a . , & shapiro , s.l .1999 , , 152 , 687 lombardi , j.c .jr . , proulx , x.f . ,dooley , k.l ., theriault , e.m . ,ivanova , n. , & rasio , f.a .2006 , , 640 , 441 lubow , s. h. , & shu , f. h. 1975 , , 198 , 383 lucy , l.b .1977 , , 82 , 1013 mashchenko , s. , couchman , h.m.p . , &wadsley , j. 2006 , nature , 442 , 539 mayer , l. et al .2007 , , 661 , l77 monaghan , j.j . &lattanzio , j.c .1985 , , 149 , 135 monaghan , j.j .1989 , , 82 , 1 monaghan , j.j .1994 , , 110 , 399 morris , j.p . , fox , p.j ., & zhu , y. 1997 , , 136 , 214 morton , d.c . 1960 , , 132 , 146 motl , p.m. , tohline , j.e . , & frank , j. 2002 , , 138 , 121 paczyski , b. 1965 , , 15 , 89 paczyski , b. 1971 , , 9 , 183 paczyski , b. , & sienkiewicz , r. 1972 , , 22 , 73 petrova , a.v . , & orlov , v.v . 1999 , , 117 , 587 pooley , d. , & hut , p. 2006, , 646 , l143 price , d. 2005 , phd thesis , univ .cambridge ( astro - ph/0507472 ) price , d. , & bate , m.r .2009 , , 398 , 33 raguzova , n.v .& popov , s.b .2005 , astron .trans . , 24 , 151 rasio , f.a . , & shapiro , s.l .1994 , , 432 , 242 rasio , f.a . , &shapiro , s.l .1995 , , 438 , 887 ritter , h. 1988 , , 202 , 93 rodono , m. 1992 , in kondo , sistero , & polidan , eds , iau symp . 151 , evolutionary processes in interacting binary stars , kluwer - dordrecht , p.71 rosswog , s. , speith , r. , & wynn , g.a .2004 , , 351 , 1121 sawada , k. , matsuda , t. , & hachisu , i. 1986 , , 219 , 75 sepinsky , j.f . , willems , b. , & kalogera , v. 2007a , , 660 , 1624 sepinsky , j.f . , willems , b. , kalogera , v. , & rasio , f.a .2007b , , 667 , 1170 sepinsky , j.f . , willems , b. , kalogera , v. , & rasio , f.a .2009 , , 702 , 1387 sills , a.i . , & lombardi , j.c.jr 1997 , , 484 , l51 sills , a. , lombardi jr . , j.c . ,bailyn , c.d ., demarque , p. , rasio , f.a . , & shapiro , s.l .1997 , , 487 , 290 sills , faber , j.a ., lombardi jr . , j.c . ,rasio , f.a . , & warren , a.r .2001 , , 548 , 323 stinson , g.s .2009 , , 395 , 1455 takeda , h. , miyama , s. m. , & sekiya , m. 1994 , prog .phys . , 92 , 939 tohline , j.e .2002 , , 40 , 349 warner , b. , & peters , w.l . 1972 , , 160 , 15
|
close interactions and mass transfer in binary stars can lead to the formation of many different exotic stellar populations , but detailed modeling of mass transfer is a computationally challenging problem . here , we present an alternate smoothed particle hydrodynamics approach to the modeling of mass transfer in binary systems that allows a better resolution of the flow of matter between main - sequence stars . our approach consists of modeling only the outermost layers of the stars using appropriate boundary conditions and ghost particles . we arbitrarily set the radius of the boundary and find that our boundary treatment behaves physically and conserves energy well . in particular , when used with our binary relaxation procedure , our treatment of boundary conditions is also shown to evolve circular binaries properly for many orbits . the results of our first simulation of mass transfer are also discussed and used to assess the strengths and limitations of our method . we conclude that it is well suited for the modeling of interacting binary stars . the method presented here represents a convenient alternative to previous hydrodynamical techniques aimed at modeling mass transfer in binary systems since it can be used to model both the donor and the accretor while maintaining the density profiles taken from realistic stellar models .
|
the emergence of collective behavior among coupled oscillators is a rather common phenomenon. in nature, one typically finds interacting chaotic oscillators which, through the coupling scheme, form small and large networks. surprisingly, even though chaotic systems possess an exponential divergence of nearby trajectories, they can synchronize due to the coupling, while still preserving the chaotic behavior. indeed, synchronization phenomena have been found in a variety of fields such as ecology, neuroscience, economics, and lasers. in recent years several types of synchronization have been reported. a rather interesting kind is a weak form of synchronization, namely phase synchronization (ps), which does not reveal itself directly in the trajectory, but as a boundedness of the phase difference between the interacting oscillators. in such a synchronization the trajectories can be uncorrelated, and therefore the oscillators retain some independence of their amplitudes while still preserving the collective behavior. this phenomenon can arise from a very small coupling strength. it has been reported that it mediates processes of information transmission and collective behavior in neural and active networks, and communication processes in the human brain. its presence has been found in a variety of experimental systems, such as electronic circuits, electrochemical oscillators, plasma physics, and climatology. in order to state the existence of ps, one has to introduce a phase for the chaotic oscillator, which is not straightforward. even though the phase is expected to exist for a general attractor, due to the existence of the zero lyapunov exponent, its explicit calculation may be impossible. actually, even for the simple case of coherent attractors, it has been shown that phases can be defined in different ways, each one being chosen according to the particular case studied. however, all of them agree for sufficiently coherent attractors. in spite of the large interest in this field, there is still no general, systematic, and easy way to detect the existence of this phenomenon, mainly due to the fact that the phase is rather difficult (often unknown) to calculate. the calculation becomes even harder if the oscillators are non-coherent, e.g. the funnel oscillator. therefore, in order to present a general approach to detect ps, with practical applications, we must overcome the need for a phase. in many cases the phase can be estimated via the hilbert transform or a wavelet decomposition. supposing that it is possible to get a phase, the approach developed in ref. gives rather good results. it is grounded on the idea of conditional observations of the oscillators: whenever the phase of one of the oscillators is increased by $2\pi$, we measure the phase of the other oscillator. the main idea is that if one has ps, the distribution of these conditional observations of the phase presents a sharp peak, and therefore ps can be detected. there are a few approaches that try to overcome the difficulty of not having a general phase.
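the conditional-observation idea described above (measuring the phase of one oscillator each time the phase of the other has grown by $2\pi$) can be sketched numerically as follows, assuming two scalar time series x1 and x2 recorded from the oscillators; the use of scipy's hilbert transform and the function names are our own illustrative choices, not part of the original method's implementation.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Unwrapped phase of an oscillatory signal, via the analytic signal (Hilbert transform)."""
    return np.unwrap(np.angle(hilbert(x - np.mean(x))))

def conditional_phases(x1, x2):
    """Phase of oscillator 2, observed each time the phase of oscillator 1 grows by 2*pi."""
    phi1 = np.maximum.accumulate(instantaneous_phase(x1))  # enforce monotonicity (coherent case)
    phi2 = instantaneous_phase(x2)
    targets = np.arange(phi1[0] + 2 * np.pi, phi1[-1], 2 * np.pi)
    idx = np.searchsorted(phi1, targets)
    return np.mod(phi2[idx], 2 * np.pi)

# a sharply peaked histogram of conditional_phases(x1, x2) indicates phase synchronization;
# a flat histogram indicates its absence.
```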
for periodically driven oscillators, there is an interesting approach, very useful and easy to implement, that overcomes the need for a phase: the stroboscopic map technique. it consists in sampling the chaotic trajectory at times $t_n = nT$, where $n$ is an integer and $T$ is the period of the driver. the stroboscopic map was used to detect ps. the basic idea is that if the stroboscopic map is localized in the attractor, ps is present. actually, the stroboscopic map is a particular case of the approach of ref. indeed, since the driver is periodic, observing the trajectory of the chaotic oscillator at times $t_n = nT$ is equivalent to observing it at every increase of $2\pi$ in the phase of the driver. furthermore, if the chaotic oscillator presents a sharp conditional distribution, this means that the stroboscopic map is localized. the advantage of such an approach is that it does not require the introduction of a phase either in the periodic oscillator or in the chaotic one. in the case of two or more coupled chaotic oscillators, the stroboscopic map technique can no longer be applied. however, if the oscillators are coherent and have a proper rotation, a generalization of the stroboscopic map has been recently developed. instead of observing the oscillators at fixed time intervals, multiples of the period, one can define a poincaré section in one oscillator and then observe the other every time the trajectory crosses that section. if the oscillators are in ps, these observations give rise to a localized set. another approach that is relevant to the present problem is the one developed in ref. this approach consists of defining a point and a small neighborhood of it, composed of points of the trajectory visited at certain times; one then observes the other oscillator at those times, which gives rise to a corresponding cloud of points. again, the idea is that if the oscillators present synchronization, the cloud of points occupies an area much smaller than the attractor area. further, estimators have been introduced to quantify the amount of synchronization. even though intuition says that a localized set implies the presence of synchronization, there is a lack of theoretical analysis showing such a result for a general oscillator. moreover, as far as we know, there are no results that guarantee that such an approach works for multiple time-scale oscillators. in addition, it is not clear what kind of points (events) could be chosen, and finally, how one should proceed in the case that the small neighborhood of the point has infinitely many neighboring points. in this work, we extend the ideas of ref. we show that all these approaches can be put in the framework of localized sets. our results demonstrate that for general coupled oscillators, if one defines a typical event in one oscillator and then observes the other oscillator whenever this event occurs, these observations give rise to a localized set in the accessible phase space if ps exists. these results can be applied to oscillators that possess multiple time scales as well as to neural networks.
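before moving on, the stroboscopic-map idea recalled above can be sketched as follows for a driven system; the rössler equations with a weak sinusoidal forcing, the forcing amplitude and the integration settings are illustrative assumptions of ours only, not the systems studied in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def driven_roessler(t, u, omega=1.0, eps=0.1, a=0.15, b=0.2, c=10.0):
    """Roessler oscillator with a weak periodic forcing added to the x-equation (toy choice)."""
    x, y, z = u
    return [-y - z + eps * np.sin(omega * t),
            x + a * y,
            b + z * (x - c)]

T = 2 * np.pi / 1.0                          # period of the driver
t_obs = np.arange(1, 1501) * T               # stroboscopic observation times t_n = n*T
sol = solve_ivp(driven_roessler, (0.0, t_obs[-1]), [1.0, 1.0, 0.0],
                t_eval=t_obs, rtol=1e-8, atol=1e-10)
strobe_xy = sol.y[:2].T                      # (x, y) sampled once per driver period

# if strobe_xy occupies only a small region of the (x, y) projection of the attractor,
# the stroboscopic map is localized and ps between driver and oscillator is indicated;
# if the points spread over the whole attractor, there is no ps.
```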
as an application, we analyze the onset of ps in neural networks .we show that in general neural networks one should expect to find clusters of phase synchronized neurons that can be used to transmit information in a multiplexing and multichannel way .finally , we relate the localized sets from our theory to the information exchange between the coupled chaotic oscillators .the paper is organized as follows : in sec .[ setup ] we define the dynamical systems we are working on . in sec .[ localized ] we give a result that enables the identification of ps without having to measure the phase .we illustrate these findings with two coupled rssler oscillators in sec .[ ross ] . for oscillators possessing multiple time - scalesour main results are discussed in sec .[ timescales ] , and then illustrated in sec .[ neus ] for bursting neurons coupled via inhibitory synapses .our results are also applied to neural networks of excitatory neurons in sec .we briefly discuss how to apply these ideas into high dimension oscillators and experimental data series in sec .finally , we analyze the relation between the localized sets and the transmission of information in chaotic oscillators in sec .moreover , in appendix [ proof ] we prove the main theorem of sec . [ localized ] about the localization of sets in ps .we consider oscillators given by first order coupled differential equations : where , , and , is the output vector function , and is the coupling strength between and .note that could also depend on the coordinates and on time . from now on , we shall label the coupled oscillator by subsystem .next , we assume that each has a stable attractor , i.e. an inflowing region of the phase space where the solution of lies .further , we assume that the subsystem admits a phase .therefore , the condition for ps between the oscillators and can be written as : where and are integers , and the inequality must hold for all times , with being a finite number . for a sake of simplicity , we consider the case where , in other words ps . herein , we suppose that a frequency can be defined in each subsystem , such that : where is a continuous function bounded away from zero .furthermore there is a number such that .this phase is an abstract phase in the sense that it is well defined , but we are not able to write the function for a general oscillator .we also consider the frequencies not to be too different , such that , in general , through the coupling ps can be achieved .in this section we present our main result . the basic ideaconsists in the following : given two subsystems and , we observe whenever an event in the oscillator happens . as a consequence of these conditional observations , we get a set . depending on the properties of this set one can state whether there is ps .the conditional observations could be given by a poincar section , if it is possible to define a poincar section with the property that the trajectory crosses it once per cycle in a given direction .we wish to point out that in this case , one is able to have more information about the dynamics and the phase synchronization phenomenon . as an example , one can introduce a phase , and estimate the average frequency of the oscillators .however , these techniques based on the poincar section can not be applied to attractors without a proper rotation , where such a section can not be well defined . our main resultovercomes the need of a poincar section . 
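before turning to the event-based construction, the setting above can be made concrete with the following sketch: two diffusively coupled rössler oscillators with a small frequency mismatch are integrated, and the ps condition is checked directly by monitoring the boundedness of the phase difference, using the geometric phase arctan(y/x), which is adequate only for phase-coherent attractors such as this one; all parameter values are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_roessler(t, u, eps=0.05, w1=0.98, w2=1.02, a=0.15, b=0.2, c=10.0):
    """Two Roessler oscillators with mismatched frequencies, diffusively coupled in x."""
    x1, y1, z1, x2, y2, z2 = u
    return [-w1 * y1 - z1 + eps * (x2 - x1), w1 * x1 + a * y1, b + z1 * (x1 - c),
            -w2 * y2 - z2 + eps * (x1 - x2), w2 * x2 + a * y2, b + z2 * (x2 - c)]

t = np.linspace(0.0, 1000.0, 100000)
sol = solve_ivp(coupled_roessler, (t[0], t[-1]), [1, 1, 0, -1, 1, 0], t_eval=t, rtol=1e-8)
x1, y1, _, x2, y2, _ = sol.y

phi1 = np.unwrap(np.arctan2(y1, x1))         # geometric phase (coherent attractor only)
phi2 = np.unwrap(np.arctan2(y2, x2))
dphi = phi1 - phi2
print("max excursion of the phase difference:", np.max(np.abs(dphi - dphi[0])))
# a bounded excursion (a few radians) indicates ps; a drift growing with time indicates its absence.
```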
we show that one can use any typical event to detect ps .such events may be the crossing of the trajectory with a small piece of a poincar section ( when it is possible to defined such a section ) , the crossing of the trajectory with an arbitrary small segment , the entrance of the trajectory in an -ball , and so on .the only constraint is that the event must be typical ( we shall clarify what we mean by typical , later on ) and the region where the event is defined must have a positive measure. let be the time at which the event in the subsystem happens .then , we construct the set : where is the initial point within the attractor of .next , we define what we understand by localized set .let be a subset of .the set is localized in if there is a cross section and a neighborhood of such that [ loc ] an illustration of the definition is given in fig .[ defi ] . under the assumptions of sec .[ setup ] , the following result connects the existence of phase synchronization with the localization of sets the : given a typical event , with positive measure , in the oscillator , generating the times .the observation of at generates a localized set if there is ps .this result constitutes a direct generalization of approaches of refs . . as a consequence ,this result shed a light into the problem of ps detection , which turned out to be a rather difficult task , depending on the system faced .therefore , ps can be detected in real - time experiments and in data analysis by verifying whether the sets are localized , without needing any further calculations . in this sectionwe investigate the mechanism for the non localization of the sets .we let the event definition be an entrance in an -ball in both subsystems , with being the radius . when is small enough, we can demonstrate that ps leads to the locking of all unstable periodic orbits ( upo ) between the subsystems . if the set is localized , then all upos between and are locked .* proof : * we demonstrate this result by absurd .let us assume that there is ps ; as a consequence the set is localized .suppose that there is an upo , regarded as in , and another upo , regarded as in , and that they are not locked ( there is no rational number that relates both frequencies ) .so , there is a mismatch between the frequencies of the two upos . given an -ball around ( resp . ) , where ( resp . ) , any point distant from , where , follows \le \varepsilon_j ] is a metric .an initial condition inside the -ball is governed by the upo till a time , see fig .[ upo ] for an illustration .next , we construct the set by sampling the trajectory of whenever the trajectory enters in the -ball , which is equivalent to observe every period of the upo . there is an one - to - one correspondence ( isomorphism ) between the dynamics of the conditional observations and the dynamics of the irrational rotation in the unitary circle , , , where is the frequency mismatch between the two upos , here given by : where is the angular frequency of .this means that the points of will be dense around the upo , and therefore , the set is not localized ; there is no ps , what contradict our assumption . indeed , since , it is impossible to bound the phase difference between and by a finite number .thus , in order to have localized sets , all upos must be locked. 
this shows that the mechanism for the non - localization of the sets will be the existence of unlocked upos between and .similar results have been pursued for periodically driven oscillators , .right at the desynchronization some upos become unlocked and the stroboscopic map becomes non - localized , and some phase slips happen , generating an intermittent behavior .the duration of the phase slips are related to the number of unlocked upos .of course , in this regime the set is a non - localized set . however , if one looks for finite time intervals the set may be apparently localized .we first illustrate this result for two coupled rssler oscillators , given by : with , and .in such a coherent oscillator , we can simply define a phase , where , which provides an explicity equation for it .indeed , taking the derivative with respect to time : which can be written as , which provides : noting that . in a more compact notation, we consider , then eq . ( [ phase ] ) can be written as where represents the vectorial product .equation ( [ ph ] ) can be used to calculate the phase of the oscillators , and there is ps if remains bounded as . in order to apply our resultswe may define an event occurrence in both oscillators .we define the event in oscillator to be the trajectory crossing with the segment : the crossings generate the times .the event in the oscillator happens whenever its trajectory crosses the segment : the crossings generates the times .then , the set is constructed by observing the oscillators at times for and , the set spreads over the attractor of [ fig [ event ] ( a ) ] , and spreads over the attractor of [ fig . [ event](b ) ] .therefore , there is no ps , i.e. the phase difference diverges [ fig [ event ] ( e ) ] .indeed , a calculation of the frequencies shows that and .as we increase the coupling , ps appears . in particular , for and , the sets and are localized [ figs [ event ] ( c ) and ( d ) , respectively ] .hence , the phase difference is bounded [ fig . [ event](f ) ] .the average frequency is .our main goal is to state the existence of ps , however , we can also estimate the synchronization level between and by means of the localized sets .this can be done by introducing an estimator .one way to estimate the amount of synchrony is to define : where vol denotes the volume .if there is no ps , the set spreads over the attractor of , see fig .[ event](a , b ) , then , . as the oscillators undergo a transition to ps , becomes smaller than .the lower is the stronger the synchronization level is . for attractors with the same topology as the rssler oscillator , can be easily calculated . instead of computing the volume, we calculate the area occupied by the attractor in the plane .the area of the attractor of can be roughly estimated by the area of the disk with radii and , see fig .thus , . 
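a minimal numerical version of the construction just described, reusing the trajectories x1, y1, x2, y2 of the coupled-rössler sketch given earlier: the event in the first oscillator is taken here to be the upward crossing of the half-line y = 0, x > 0 (a stand-in for the segment used in the text, whose exact end points are not reproduced), the conditional set is the second oscillator observed at those instants, and its spread is quantified crudely by the fraction of angular bins it visits, a rough substitute for the volume-ratio estimator discussed around this point.

```python
import numpy as np

def event_indices(x, y):
    """Indices where the trajectory crosses the half-line {y = 0, x > 0} from below."""
    s = np.sign(y)
    return np.where((s[:-1] < 0) & (s[1:] >= 0) & (x[1:] > 0))[0] + 1

def angular_occupancy(x, y, idx, nbins=64):
    """Fraction of angular bins visited by the conditional observations (1.0 = fills the attractor)."""
    theta = np.mod(np.arctan2(y[idx], x[idx]), 2.0 * np.pi)
    hist, _ = np.histogram(theta, bins=nbins, range=(0.0, 2.0 * np.pi))
    return np.count_nonzero(hist) / float(nbins)

idx = event_indices(x1, y1)                     # event occurrences in the first oscillator
occupancy = angular_occupancy(x2, y2, idx)      # where the second oscillator is at those times
print("angular occupancy of the conditional set:", occupancy)
# a value close to 1 means the set spreads over the attractor (no ps);
# a value well below 1 means the set is localized, indicating ps.
```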
on the other hand ,the set is confined into an angle [ fig .[ hjk ] ] .therefore , the area of the set can be estimated as .thus , the estimator can be written as : we have used eq .( [ est_h ] ) to estimate the amount of synchronization between the two coupled rssler of eq .( [ rossler ] ) .we fix and vary the mismatch parameter within the interval ] , connected via excitatory chemical synapses .the mismatch parameter is the intrinsic current .since the meaningful parameter is , for which the hr neuron best mimics biological neurons , we introduce mismatches around this value for all the neurons within the network .thus , given a random number uniformly distributed within the interval ] , there is the formation of clusters .this scenario of cluster formation is neither restricted to this hr model nor to the synapse model .it can also be found in square - wave and parabolic bursters , and it is in general achieved quite before the onset of complete synchronization .for example , we use a more simplified hr model given by : , with the parameters : ; being the connectivity matrix and a fast threshold modulation as synaptic input given by ,\ ] ] with and . as before , is the synaptic strength and the reversal potential in order to have an excitatory synapse . for a homogeneous random network of identical hr neurons , with [ fig . [nets](b ) ] , the theory developed in ref . predicts the onset of complete synchronization at , while we found that ps in the whole network is already achieved at .clusters of ps , however , appear for a much smaller value of the coupling strength , actually at .next , we apply the same procedure as before and we compute the variance of the average bursting time on the ensemble of neurons within the network .the result is depicted in fig .[ hasler_ps ] , the inset numbers indicate the amount of clusters . as we have pointed out ,such clusters are rather suitable for communication exchanging mainly for two reasons : they have different frequencies , therefore , each cluster may be used to transmit information in a particular bandwidth , which may provide a multiplexing processing of information . clusters of phase synchronous neurons provide a multichannel communication , that is , one can integrate a large number of neurons ( chaotic oscillators ) into a single communication system , and information can arrive simultaneously at different places of the network .this scenario may have technological applications , e.g. in digital communication , and it may also guide us towards a better understanding of information processing in real neural networks .it is easy to say whether the set is localized in a two dimensional plane ; this could be done for example by visual inspection . in multi - dimensional systemit might not be obvious whether the set is localized .this is mainly due to the fact that in a projection of a higher dimensional system onto a low dimensional space , the set might fulfill the projected attractor .therefore , the analysis of the localization might have to be realized in the full attractor of the subsystem .the analysis is also relatively easy if we bring about a property of the conditional observation . whenever there is ps , the conditional observation , given by , is not topologically transitive in the attractor of , i.e. 
is localized .the conditional observations are topologically transitive in the attractor of if for any two open sets , to check whether is localized , we do the following .if there is ps , for it exists infinitely many such that where is an open ball of radius centered at the point , and is small .we may vary and to analyze whether it is possible to fulfill eq .( [ practice_basic_sets ] ) .whenever this is possible , it means that the set does not spreads over the attractor of , and therefore , there is ps . for analysis of ps basing on experimental data where the relevant dynamical variables can be measured , so that the phase space is recovered , our approach can be used straightforwardly . if one just has access to a bivariate time series , one first has to reconstruct the attractors , and then proceed the ps detection by our approachin this section , we analyze the relationship between the sets and the capacity of information transmission between chaotic oscillators . in order to proceed such an analysis, we may assume that the oscillators are identical or nearly identical .such that the synchronized trajectories are not far from the synchronization manifold , i.e. the subspace where .next , for a sake of simplicity we consider only oscillators whose trajectory possess a proper rotation and are coherent , e.g. the standard rssler oscillator .however , the ideas herein can be extended to other oscillators as well . the amount of information that two systems and can exchange is given by the mutual information : where is the entropy of the oscillator and is the conditional entropy between and , which measures the ambiguity of the received signal , roughly speaking the errors in the transmission . as pointed out in ref . the mutual information can be also estimated through the conditional exponents associated to the synchronization manifold .the mutual information is given by : where are the positive conditional lyapunov exponents associated to the synchronization manifold , the information produced by the synchronous trajectories , and are the positive conditional lyapunov exponents transversal to the synchronization manifold , related with the errors in the information transmission . in ps can be small , which means that one can exchange information with a low probability of errors .so , ps creates a channel for reliable information exchanging . in general, we expect , where are the positive lyapunov exponents .thus .in order to estimate an upper bound for , we need to estimate , what can be done directly from the localized sets .the conditional transversal exponent can be estimated from the localized sets by a simple geometric analysis . at the time the oscillator reaches the poincar plane at while the oscillator is at .the initial distance between the trajectories is .this distance evolves until the time when the oscillator reaches the poincar plane at , while the trajectory of is at .the new distance is .therefore , we have : so , the local transversal exponent is given by : where we use the convention .of course , we only estimate the conditional exponent close to the poincar plane . hence, if we change the poincar plane the conditional exponent may also change , i.e. there are some events that carry more information than others .we illustrate this approach for two coupled rssler oscillators .we set the parameters to , , , , and . as shown in ref . 
at , the two oscillator undergo a transition to ps .in particular , for we have .we estimate at this situation by means of eq .( [ lambda ] ) .we set the poincar section at , and compute for cycles , i.e. 65,000 crossing of the trajectory with and .we get .note that we are not computing , but rather , the maximum , namely .therefore , it is natural to expect to be smaller than .however , the upper bound to the information exchange can be estimated by , that is , the maximum amount of information that can flow through the coupled oscillators if we encode the trajectory using the poincar plane .furthermore , it seems that when the level of synchronization is large , the estimation of , by means of eq .( [ lambda ] ) , might become problematic , due to strong fluctuations in $ ] .we have proposed an extension of the stroboscopic map , as a general way to detect ps in coupled oscillators .the idea consists in constraining the observation of the trajectory of an oscillator at these times in which typical events occur in the other oscillator .this approach provides an efficient and easy way of detecting ps , without having to explicitly calculate the phase .we have shown that if ps is present , the maps of the attractor appear as a localized set in the phase - space .this has been illustrated in coherent oscillators , the coupled rsslers , as well as in non - coherent oscillators , spiking / bursting neurons of hr type coupled with chemical synapses . as we have shown in neural networks ,the appearance of clusters of ps is rather common , which may be relevant for communication mainly due to two aspects : the clusters provide multiplexing information processing , namely each cluster may be used to transmit information within a bandwidth . provide a multichannel communication , that is , a large number of neurons is integrated into a single communication system .moreover , we have analyzed the relation between the information exchanging and the localized sets .we have roughly estimated the errors in the information transmission from the localized sets .* acknowledgment * we would like to thank m. thiel , m. romano , c. zhou , and l. pecora for useful discussions .this work was financially supported by the helmholtz center for mind and brain dynamics , eu cost b27 and dfg spp 1114 .in this appendix we prove the theorem 1 .it is instructive to give a sketch of the proof , in order to have a better understanding of the result .we split the demonstration into the following four steps : we show that the increasing of in the phase defines a smooth section on , which does not intersect itself . we show that observing the oscillator whenever oscillators crosses gives place to a localized set . , we show that the observation of whenever crosses a piece of the section also gives place to a localized set. using these results we show that , actually , the localized sets can be constructed using any typical event . to show this , we only note that given a typical event with positive measure , we can choose to be close to the event occurrence , implying that shortly before or shortly after of every event occurrence , a crossing of the trajectory with will happen .thus , if we observe whenever the event occurs in we will have a set that a close the , and therefore , localized .next , we formalize the heuristic ideas .let us introduce .thus , we construct a section . 
is smooth since both and are smooth .indeed , given two points , with , there is a such that , and furthermore , we can construct a continuous section , by conveniently choosing points . the fact that does not intersect itself comes from the uniqueness of , and from the fact that the , which implies that the phase is an one - to - one function with the trajectory .note that , obviously , this section depends on the initial conditions . _ proof : _ let be the poincar map associated to the section , such that given a point , so = , where = . from now on , we use a rescaled time , with . for a slight abuse of notationwe omit the .there are numbers such that where , by time reparametrization , . if both oscillators are in ps , then , and so : with .now , we analyze one typical oscillation , using the basic concept of recurrence .given the following starting points and , we evolve both until returns to .let us introduce bringing the fact that , we have : now , by using the fact that , we can write : so , given a point evaluated by the time when the trajectory of returns to the section , the point returns near the section , and vice - versa .therefore , it is localized . for a general case, we have to show that a point , in the section , evolved by the flow for an arbitrary number of events in the oscillator , still remains close to , in other words , it is still localized .this is straightforward , since .so , we demonstrated that the ps regime implies the localization of the set .now , we show that the localization of the set implies ps .supposing that we have a localized set , so , eq . ( [ diferenca_temporal ] ) is valid , by the above arguments .therefore , we just have to show that eq .( [ diferenca_temporal ] ) implies ps . with effect, we have which is equal to .this may be written as .next , noting that , we get : where .therefore , if the time event difference is bounded it implies the boundedness in the phase .thus , we conclude our result . let be the times at which the trajectory crosses a piece of .if there is ps , then the observation of the trajectory of at times gives place of a localized set ._ proof : _ note that the observation of the trajectory of at times gives place to a set , while the observations at times give place to a subset of .therefore , whenever is localized , it implies the localized of . proof : _ let the event be the entrance in an -ball , such that the event occurrence produces the time series , in .there is , at least , one intersection of this ball with the section . since depends on the initial conditions , we can choose an initial condition right at the -ball event .next , we choose such that it is completely covered by the -ball . since the measure of the -ball is small , , the time difference between crossings of the trajectory with and the -ball is small , thus , there is a number such that : therefore , if we observe the trajectory of at times , we have a localized set in .thus , we conclude our result : the observation of the trajectory of whenever typical events in occurs generates localized sets if , and only if , there is ps . a. pikovsky , m. rosenblum , and j. kurths , _ synchronization : a universal concept in nonlinear sciences _ , ( cambridge university press , 2001 ) ; s. boccaletti , j. kurths , g. osipov , d. valladares , and c. zhou , phys .rep . * 366 * , 1 ( 2002 ) ; j. kurths , s. boccaletti , c. 
grebogi , et al . , chaos * 13 * , 126 ( 2003 ) . we consider vol to be the euclidean volume that contains all the points of the set. thus, if the set is spread over a sphere of radius $r$, the volume of the set will be that of the sphere, $4\pi r^{3}/3$. the same is valid for the attractor volume. note that this choice is somewhat arbitrary. we could also compare the euclidean distance between the points of the attractor and the points of the localized set; this would lead to the approach of ref. a chaotic set is always transitive under the flow. so, given a set of initial conditions, its evolution through the flow eventually reaches arbitrary open subsets of the original chaotic attractor. however, the conditional observations might not possess the transitive property.
|
we present an approach which makes it possible to identify phase synchronization in coupled chaotic oscillators without having to explicitly measure the phase. we show that if one defines a typical event in one oscillator and then observes another one whenever this event occurs, these observations give rise to a localized set. our result provides a general and easy way to identify ps, which can also be applied to oscillators that possess multiple time scales. we illustrate our approach in networks of chemically coupled neurons. we show that clusters of phase-synchronous neurons may emerge before the onset of phase synchronization in the whole network, producing a suitable environment for information exchange. furthermore, we show the relation between the localized sets and the amount of information that coupled chaotic oscillators can exchange.
|
p2p live streaming applications like p2p iptv are emerging on the internet and will be massively used in the future. the p2p traffic counts already for a large part of the internet traffic and this is mainly due to p2p file - sharing applications as bittorrent or edonkey .video streaming services like youtube appeared only a few months ago but contribute already to an important part of the internet traffic .it is expected that p2p iptv will largely contribute to increase the overall internet traffic .it is therefore important to study p2p iptv traffic and to characterize its properties .+ the characterization of p2p iptv traffic will allow us to understand its impact on the network .p2p iptv applications have stringent qos constraints ( e.g. bandwidth , delay , jitter ) and their traffic characterization will enable to understand their exact needs in network resources .the knowledge of the traffic properties enables the development of synthetic traffic generation models that are key input parameters when modeling or simulating these systems .indeed , the modeling or simulating steps are necessary to design judiciously applications . from a traffic engineering point of view, well understanding p2p iptv traffic is essential for internet service providers to forecast their internal traffic and ensure a good provisioning of their network . andlast but not least , global knowledge of the traffic properties will highlight some drawbacks of the applications and will make it possible to improve the design of new p2p iptv architectures .for instance , an important concern of these systems is the scalability .the traffic characterization may help estimate the impact of overhead traffic generated by the signaling .+ in this paper , we present a multiscale analysis of the structure of the traffic generated by the most popular p2p iptv applications , namely pplive , ppstream , sopcast and tvants .+ during the 2006 fifa world cup , we performed an extensive measurement campaign .we measured the network traffic generated by broadcasting soccer games by the previously mentioned applications .the multiscale behavior of the collected traffic is analyzed using a wavelet transform based tool . in this paper, we characterize the network traffic of p2p iptv systems at different time scales and compare their properties .to the best of our knowledge , this is the first work that does a comparative multiscale characterization of p2p iptv traffic .+ our multiscale p2p iptv traffic analysis shows significant differences in the scaling behaviors of tcp and udp traffic .the tcp traffic presents periodic behavior while the udp traffic is stationary and presents long - range dependency characteristics , which will affect the quality of the video reception .the signaling traffic has an impact on the download traffic but it has negligible impact on the upload traffic .the upload traffic generated by p2p iptv systems have different scaling characteristics compare to the download traffic and both sides of the traffic has to be taken into account to design judiciously p2p iptv traffic models .moreover , the traffic granularity has to be considered while using traffic models to simulate these systems .the rest of the paper is organized as follows .firstly , we present the related work in section [ sec : related ] . in section [ sec : experiments ] , we give an overview of the measured applications and describe our measurement experiment setup . 
in section [ sec : metho ] , we present our methodology to analyze the traffic at different time scales .we present our p2p iptv traffic analysis in section [ sec : results ] and discuss the results in section [ sec : discussion ] .finally , we conclude the paper and give perspectives in section [ sec : conclusion ] .nowadays , an increasing number of p2p iptv measurement studies is conducted to analyze the mechanisms of such systems .+ zhang et . al present the first measurement results about their protocol donet , which were deployed on the internet and called coolstreaming .they provide network statistics , like user s behavior in the whole system and the quality of video reception .+ hei et al . made a complete measurement of the popular pplive application .they made active measurements by instrumentalizing their own crawler and give many architecture and overlay details like buffer size or number of peers in the networks .vu et al . made also active measurements of the pplive system and derive mathematical models for the distributions of channel population size or session length .+ in our previous work , we passively measured the network traffic generated by several popular applications during a worldwide event .we compared the measured applications by inferring their underlying mechanisms and highlight their design differences and similarities .ali et al . made passive measurements of pplive and sopcast applications and analyze the performance and characteristics of such systems .+ still in their previously mentioned works , ali et al . provide their own methodology to study the data exchanges of such p2p applications . based on their measurement studies , hei et al . developed also a methodology to estimate the overall perceived video quality throughout the network .+ all these works studied p2p iptv systems by measuring the traffic and tried to infer their mechanisms , but they did not characterize the correlation structure of the generated traffic at different time scales to understand its properties and its impact on the network .for our p2p iptv traffic measurement experiments , we chose four applications , namely pplive , ppstream , sopcast and tvants because they were very popular on the internet . whenever these applications are freely available , their source codes are not open and their exact implementation details and used protocols are still widely unknown .we can only rely on reverse engineering to understand their transmission mechanisms .+ all these applications claim to use swarming protocol like donet . similarly to bittorrent , video data flows are divided into data chunks and each peer downloads the chunck of data to other peers concurrently .the peers know how to download the video data chunks by exchanging randomly with other peers information about the data chunks they have or neighbor peers they know . with this signaling traffic, each peer discovers iteratively new peers , new available data chunks and is able to download video from several peers . 
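the chunk-swarming behaviour sketched above can be illustrated with a deliberately oversimplified toy model, given below; it is not a reconstruction of the actual protocols of pplive, ppstream, sopcast or tvants, which are proprietary, but only shows the generic buffer-map/pull mechanism in which, in each round, every peer asks one random neighbour for a chunk it is missing.

```python
import random

def gossip_round(buffer_maps, neighbours):
    """One round of a toy pull-based swarming scheme: each peer requests one missing chunk
    from a randomly chosen neighbour that already holds it."""
    for peer, neigh_list in neighbours.items():
        partner = random.choice(neigh_list)
        missing = buffer_maps[partner] - buffer_maps[peer]
        if missing:
            buffer_maps[peer].add(random.choice(sorted(missing)))

# tiny example: 6 peers on a ring, peer 0 is the source and holds all chunks of the stream window
n_peers, n_chunks = 6, 30
buffer_maps = {p: set() for p in range(n_peers)}
buffer_maps[0] = set(range(n_chunks))
neighbours = {p: [(p - 1) % n_peers, (p + 1) % n_peers] for p in range(n_peers)}
for _ in range(300):
    gossip_round(buffer_maps, neighbours)
print({p: len(chunks) for p, chunks in buffer_maps.items()})
```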
in these p2p protocols ,there are two kinds of traffic : video traffic where peers exchange data chunks with each other and signaling traffic where peers exchange information to get the data .+ as we show in , all the applications transport video and signaling traffics differently : ppstream uses exclusively tcp for all traffics while pplive adds udp for some signaling traffic .sopcast uses almost entirely udp and tvants is more balanced between tcp ( 75% ) and udp for all kinds of traffic .+ in the next section , we will present the measurement experiments platform we used to collect the p2p iptv traffic ..packet traces summary [ cols="^,^,^,^,^,^,^,^,^,^ " , ] we already show in that all the applications do not implement the same mechanisms to download the video .according to the applications , the video can be received from only a few provider peers at the same time or from many peers and the video peers session durations are various .this explains why the amount of data transported by the top flows are different for all the applications .+ the top download flows sent by the top peer to our nodes will be analyzed by using wavelet based transform method with ldestimate , similarly to the previous experiments .due to space limitation , we only present on fig .[ fig : topflows ] , the energy spectra for a single tcp application ( pplive fig [ subfig : pplive_flow ] and [ subfig : pplive_flow_video ] ) and for udp application ( sopcast fig .[ subfig : sopcast_flow ] and [ subfig : sopcast_flow_video ] ) .the other tcp applications plots are similar to the presented tcp application and can be found in the appendix .+ we notice for the two applications that their video energy spectra look similar to their overall energy spectra .this was expected because these flows are sent by the top contributor peers and transport almost entirely video packets and not signaling packets .removing signaling traffic on these flows can only have a limited impact , depending on the signaling packets in the flows .+ for example , the top download flow of sopcast transport 15,867 packets of signaling traffic ( ) counting for 2.71 mbytes whereas pplive top download flow transport only 5,731 signaling packets ( ) counting for 0.33 mbytes . + regarding tcp applications fig .[ subfig : pplive_flow ] and [ subfig : pplive_flow_video ] , until time scale , the energy spectra of the top flow look similar to the aggregate traffic . beyond this time scale ,the energy spectra of the top download flow are different from the aggregate traffic because the energy spectra of the top flow are increasing .+ with udp applications , until time scale , the energy spectra of the top flow are different from the aggregate traffic . fig .[ subfig : sopcast_flow ] and [ subfig : sopcast_flow_video ] show an energy bump at time scale , then the energy spectra increase slightly from to . beyond , we observe the linear increase usually observed for the udp energy spectra . in this experiment, we observe that the top flows in the download traffic do not have the same scaling properties as the aggregate download traffic .we did the same experiments for the 10th top download flows ( i.e. 
the 10th flow according to data volume transported ) .the 10th top flows present the same scaling properties as the top flows .the plots for the 10th top flows can be shown in the appendix .+ the aggregate traffic is not only the mix of every single flow .the granularity of the p2p iptv traffic has to be taken into account when designing p2p iptv traffic models .in this work , we analyzed the p2p iptv traffic by using a wavelet based transform method .this allows us to characterize this traffic and to understand its properties and impact on the network .thanks to our original p2p iptv traffic analysis , we have many new findings and observations that have to be summarized and discussed . + first of all , we observed that the energy spectra of tcp applications are different from the energy spectra of udp applications ( section [ sec : versus ] ) .one of the most relevant difference is the energy bump observed in the spectra of tcp applications at time scale ( 5.12s ) , which indicates a possible periodic behavior in the traffic . intuitively , we could believe these differences come from the two different transport protocols used. however , a 5 seconds periodic behaviors is a very long duration for tcp mechanisms and tcp should not be the responsible of this periodic behavior . with a simple application design difference, the scaling properties of the generated traffic do not have the same impact on the network .+ secondly , for all the applications , the signaling traffic represents a larger part in the download traffic than the upload traffic ( section [ subsec : signaling ] ) . the signaling traffic has clearly an impact on the scaling properties of the download traffic and has no impact on the upload traffic .this observation is important since signaling traffic is necessary to coordinate the data exchanges in such p2p systems . for scalability reasons , the amount of signaling traffic has to be kept as low as possible .the download signaling traffic comes from other peers on the internet that request the video data .efforts have to be made to reduce the number of packets sent by the signaling protocol to get the video data and to preserve the scalability of these systems in the network .+ then , the previous observation highlights an important point when modeling p2p iptv traffic : the download traffic has not the same properties as the upload traffic .the differences between both sides of the traffic ( i.e. upload and download ) have to be taken into account carefully when designing synthetic traffic generation models .+ the generated traffic of tcp applications is not stationary beyond time scale . on the contrary ,the traffic of udp applications is stationary .as shown in section [ subsec : stationarity ] , the stationnarity experiment proves that the signaling traffic of udp application involves long - range dependency in the download traffic .the udp application experiments also long - range dependency in the upload traffic . in presence of traffic lrd, the network conditions are always changing and it becomes a hard task to provide qos parameters as delay for users to get good quality video .+ this finding highlights the not so trivial choice of transport protocols for p2p iptv traffic .it is usually admitted that the non - elastic data transfer -as video- has to rely on udp but we show that udp traffic may lead to trouble in the network traffic . 
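the multiscale analysis referred to throughout this section relies on the wavelet-based logscale diagram (the ldestimate code of veitch and abry cited in the references); as a rough, self-contained substitute, the sketch below uses the pywavelets package to compute the log2 energy of the detail coefficients per octave for a packet-count series and to fit a slope over a chosen range of octaves, from which a scaling exponent (and, for stationary lrd traffic, a hurst estimate) can be read. the wavelet choice, the octave range and the unweighted fit are simplifying assumptions and do not reproduce the confidence intervals of the original tool.

```python
import numpy as np
import pywt

def logscale_diagram(counts, wavelet="db3"):
    """log2 of the mean squared detail coefficient at each octave (crude logscale diagram)."""
    counts = np.asarray(counts, dtype=float)
    level = pywt.dwt_max_level(len(counts), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(counts, wavelet, level=level)
    details = coeffs[1:][::-1]                    # finest scale (octave 1) first
    octaves = np.arange(1, len(details) + 1)
    energy = np.array([np.log2(np.mean(d ** 2)) for d in details])
    return octaves, energy

def scaling_exponent(octaves, energy, j1, j2):
    """Unweighted least-squares slope of the logscale diagram over octaves j1..j2."""
    sel = (octaves >= j1) & (octaves <= j2)
    slope, _ = np.polyfit(octaves[sel], energy[sel], 1)
    return slope

# counts: number of packets (or bytes) per fixed time bin, e.g. 10 ms, taken from a trace
# j, e = logscale_diagram(counts)
# alpha = scaling_exponent(j, e, j1=6, j2=12)     # for stationary lrd traffic, H ~ (alpha + 1) / 2
```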
+ finally , the aggregate download traffic of p2p iptv systems has not exactly the same scaling properties as the top download flow ( section [ sec : topdl ] ) .the granularity of the traffic has to be taken into account when designing p2p iptv traffic models .a p2p iptv traffic model based only on flows properties would fail to capture the global characteristics of the aggregate traffic .the use of an inappropriate traffic model would lead to wrong results when simulating new architectures with such significant input parameter .in this paper , we analyzed network traffic generated by p2p iptv applications .we performed an extensive measurement campaign during the 2006 fifa world cup and we measured the most popular p2p iptv applications on the internet .we used wavelet transform based method to study the p2p iptv traffic at different time scales and to characterize its properties .+ our multiscale traffic analysis show how different are the scaling properties of the tcp and udp traffics .for all the applications , the signaling traffic has a significant impact on the download traffic but not on the upload traffic .it involves scalability concerns regarding the p2p iptv signaling protocols used to download the video data .the udp traffic is stationary and leads to long - range dependency of the traffic .the choice of udp as transport protocol for non - elastic transfers in p2p networks becomes not so trivial since the traffic lrd indicates that the traffic is not predictable in the network .the scaling properties of the download traffic are different from the upload traffic .the traffic granularity and both traffic directions have to be taken into account to model p2p iptv traffic accurately . + currently , we are analyzing the traffic collected during other games and under different network environments to extend our observations . it will allow us to have a finer analysis of our findings and could also help to answer to the open questions introduced by this work . in a long - term work ,the characterization of the p2p iptv traffic will help us to accurately model and simulate such systems .10 [ 1]#1 url [ 2]#2 b. cohen , `` incentives build robustness in bittorrent , '' 2003 .http://www.edonkey2000.com .. http://www.pplive.com .http://www.ppstream.com .http://www.sopcast.com .http://www.tvants.com .x. zhang , j. liu , and b. li , `` on large - scale peer - to - peer live video distribution : coolstreaming and its preliminary experimental results , '' in _ proc . mmsp _ ,x. zhang , j. liu , b. li , and t. p. yum , `` coolstreaming / donet : a data - driven overlay network for peer - to - peer live media streaming , '' in _ proc .infocom _ , 2005 .x. hei , c. liang , j. liang , y. liu , and k. w. ross , `` insights into pplive : a measurement study of a large - scale p2p iptv system , '' in _ proc . of iptv workshop , international world wide web conference _ , 2006 .x. hei , c. liang , j. liang , y. liu , and k. w. ross , `` a measurement study of a large - scale p2p iptv system , '' _ to appear _ in _ ieee transactions on multimedia _ , 2007 .l. vu , i. gupta , j. liang , and k. nahrstedt , `` measurement and modeling of a large - scale overlay for multimedia streaming , '' in _ proc . of qshine07 ,international conference on heterogeneous networking for quality , reliability , security and robustness _, 2007 .t. silverston and o. fourmaux , `` measuring p2p iptv systems , '' in _ proc . 
of nossdav07 ,international workshop on network and operating systems support for digital audio & video _ , 2007 .s. ali , a. mathur , and h. zhang , `` measurement of commercial peer - to - peer live video streaming , '' in _ proc .of workshop in recent advances in peer - to - peer streaming _ , 2006 .x. hei , y. liu , and k. w. ross , `` inferring network - wide quality in p2p live streaming systems . ''[ online ] .available : http://cis.poly.edu/~ross/papers/buffermap.pdf http://content.lip6.fr / traces/. http://www.joost.com .d. veitch and p. abry , `` matlab code for the wavelet based analysis of scaling processes . ''[ online ] .available : http://www.cubinlab.ee.mu.oz.au/darryl
|
p2p iptv applications are arising on the internet and will be massively used in the future. it is expected that p2p iptv will contribute to increasing the overall internet traffic. in this context, it is important to measure the impact of p2p iptv on the networks and to characterize this traffic. during the 2006 fifa world cup, we performed an extensive measurement campaign. we measured the network traffic generated by broadcasting soccer games with the most popular p2p iptv applications, namely pplive, ppstream, sopcast and tvants. from the collected data, we characterized the p2p iptv traffic structure at different time scales by using a wavelet-based transform method. to the best of our knowledge, this is the first work that presents a complete multiscale analysis of p2p iptv traffic. + our results show that the scaling properties of the tcp traffic present periodic behavior, whereas the udp traffic is stationary and exhibits long-range dependency characteristics. for all the applications, the download traffic has different characteristics than the upload traffic. the signaling traffic has a significant impact on the download traffic but a negligible impact on the upload. both directions of the traffic and its granularity have to be taken into account to design accurate p2p iptv traffic models.
|
the _ mutual information _ ( also called _ cross entropy _ or _ information gain _ ) is a widely used information - theoretic measure for the stochastic dependency of discrete random variables .it is used , for instance , in learning _bayesian nets _ , where stochastically dependent nodes shall be connected ; it is used to induce classification trees .it is also used to select _ features _ for classification problems , i.e. to select a subset of variables by which to predict the _ class _ variable .this is done in the context of a_ filter approach _ that discards irrelevant features on the basis of low values of mutual information with the class _ _ .the mutual information ( see the definition in section [ dmi ] ) can be computed if the joint chances of two random variables and are known . the usual procedure in the common case of unknown chances is to use the _ empirical probabilities _ ( i.e. the sample relative frequencies : ) as if they were precisely known chances .this is not always appropriate .furthermore , the _ empirical mutual information _ does not carry information about the reliability of the estimate . in the bayesian framework one can address these questions by using a ( second order ) prior distribution , which takes account of uncertainty about . from the prior and the likelihood one can compute the posterior , from which the distribution of the mutual information can in principle be obtained .this paper reports , in section [ mr ] , the _ exact _ analytical mean of and an analytical -approximation of the variance . these are reliable and quickly computable expressions following from when a _ dirichlet _ prior is assumed over .such results allow one to obtain analytical approximations of the distribution of .we introduce asymptotic approximations of the distribution in section [ ba ] , graphically showing that they are good also for small sample sizes .the distribution of mutual information is then applied to feature selection .section [ tpf ] proposes two new filters that use _ credible intervals _ to robustly estimate mutual information .the filters are empirically tested , in turn , by coupling them with the _ naive bayes classifier _ to incrementally learn from and classify new data .on ten real data sets that we used , one of the two proposed filters outperforms the traditional filter : it almost always selects fewer attributes than the traditional one while always leading to equal or significantly better prediction accuracy of the classifier ( section [ ea ] ) .the new filter is of the same order of computational complexity as the filter based on empirical mutual information , so that it appears to be a significant improvement for real applications .the proved importance of the distribution of mutual information led us to extend the mentioned analytical work towards even more effective and applicable methods .section [ ta ] proposes improved analytical approximations for the tails of the distribution , which are often a critical point for asymptotic approximations .section [ etis ] allows the distribution of mutual information to be computed also from incomplete samples .closed - form formulas are developed for the case of feature selection .consider two discrete random variables and taking values in and , respectively , and an i.i.d .random process with samples drawn with joint chances .an important measure of the stochastic dependence of and is the mutual information : where denotes the natural logarithm and and are marginal chances .often the chances are unknown 
and only a sample is available with outcomes of pair .the empirical probability may be used as a point estimate of , where is the total sample size .this leads to an empirical estimate for the mutual information .unfortunately , the point estimation carries no information about its accuracy . in the bayesian approach to this problemone assumes a prior ( second order ) probability density for the unknown chances on the probability simplex . from this onecan compute the posterior distribution ( the are multinomially distributed ) and define the posterior probability density of the mutual information : denotes the mutual information for the specific chances , whereas in the context above is just some non - negative real number . will also denote the mutual information _ random variable _ in the expectation ] .expectations are _ always _ w.r.t . to the posterior distribution .] with sharp upper bound , the integral may be restricted to , which shows that the domain of is . ] and the variance =e[(i - e[i])^{2}] ] of the approximation is of the order , _ if _ and are dependent . in the opposite case , the term in the sum drops itself down to order resulting in a reduced relative accuracy of ( [ varappr ] ) .these results were confirmed by numerical experiments that we realized by monte carlo simulation to obtain `` exact '' values of the variance for representative choices of , , , and .let us now consider approximating the overall distribution of mutual information based on the formulas for the mean and the variance given in section [ mr ] .fitting a normal distribution is an obvious possible choice , as the central limit theorem ensures that converges to a gaussian distribution with mean ] . since is non - negative , it is also worth considering the approximation of by a gamma ( i.e. , a scaled ) .even better , as can be normalized in order to be upper bounded by 1 , the beta distribution seems to be another natural candidate , being defined for variables in the $ ] real interval .of course the gamma and the beta are asymptotically correct , too .we report a graphical comparison of the different approximations by focusing on the special case of binary random variables , and on three possible vectors of counts .figure [ fig1 ] compares the exact distribution of mutual information , computed via monte carlo simulation , with the approximating curves .the figure clearly shows that all the approximations are rather good , with a slight preference for the beta approximation .the curves tend to do worse for smaller sample sizes as it is was expected. higher moments computed in may be used to improve the accuracy .a method to specifically improve the tail approximation is given in section [ ta ] .classification is one of the most important techniques for knowledge discovery in databases .a classifier is an algorithm that allocates new objects to one out of a finite set of previously defined groups ( or _ classes _ ) on the basis of observations on several characteristics of the objects , called _ attributes _ or _features_. classifiers can be learnt from data alone , making explicit the knowledge that is hidden in raw data , and using this knowledge to make predictions about new data .feature selection is a basic step in the process of building classifiers .in fact , even if theoretically more features should provide one with better prediction accuracy ( i.e. 
, the relative number of correct predictions ) , in real cases it has been observed many times that this is not the case .this depends on the limited availability of data in real problems : successful models seem to be in good balance of model complexity and available information . in facts, feature selection tends to produce models that are simpler , clearer , computationally less expensive and , moreover , providing often better prediction accuracy .two major approaches to feature selection are commonly used : _ filter _ and _ wrapper _ models .the filter approach is a preprocessing step of the classification task .the wrapper model is computationally heavier , as it implements a search in the feature space . from now onwe focus our attention on the filter approach .we consider the well - known filter ( f ) that computes the empirical mutual information between features and the class , and discards low - valued features .this is an easy and effective approach that has gained popularity with time .cheng reports that it is particularly well suited to jointly work with bayesian network classifiers , an approach by which he won the _ 2001 international knowledge discovery competition _ .the `` weka '' data mining package implements it as a standard system tool ( see , p. 294 ) .a problem with this filter is the variability of the empirical mutual information with the sample .this may allow wrong judgments of relevance to be made , as when features are selected by keeping those for which mutual information exceeds a fixed threshold in order for the selection to be robust , we must have some guarantee about the actual value of mutual information .we define two new filters .the _ backward filter _ ( bf ) discards an attribute if its value of mutual information with the class is less than or equal to with given ( high ) probability .the _ forward filter _ ( ff ) includes an attribute if the mutual information is greater than with given ( high ) probability .bf is a conservative filter , because it will only discard features after observing substantial evidence supporting their irrelevance .ff instead will tend to use fewer features , i.e. only those for which there is substantial evidence about them being useful in predicting the class .the next sections present experimental comparisons of the new filters and the original filter f.for the following experiments we use the naive bayes classifier . this is a good classification model despite its simplifying assumptions , see , which often competes successfully with the state - of - the - art classifiers from the machine learning field , such as c4.5 .the experiments focus on the incremental use of the naive bayes classifier , a natural learning process when the data are available sequentially : the data set is read instance by instance ; each time , the chosen filter selects a subset of attributes that the naive bayes uses to classify the new instance ; the naive bayes then updates its knowledge by taking into consideration the new instance and its actual class .the incremental approach allows us to better highlight the different behaviors of the empirical filter ( f ) and those based on credible intervals on mutual information ( bf and ff ) .in fact , for increasing sizes of the learning set the filters converge to the same behavior . 
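the decision rules of the two filters can be sketched in a few lines of code. the exact posterior mean and the refined variance of the mutual information derived in the paper are not reproduced here (their formulas do not survive in this text), so this sketch substitutes a plug-in mean and the standard leading-order delta-method variance, fits a beta distribution by moment matching, and applies the ff / bf credible-interval tests; the default values of the threshold eps and of the credibility level are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.stats import beta as beta_dist


def mi_mean_var(counts, prior=1.0):
    """Plug-in mean and leading-order (delta-method) variance of the mutual
    information between one attribute and the class, from a contingency
    table `counts` of shape (attribute values, class values)."""
    n_ij = counts + prior                       # Dirichlet-style smoothing (assumed)
    p = n_ij / n_ij.sum()
    log_ratio = (np.log(p) - np.log(p.sum(1, keepdims=True))
                 - np.log(p.sum(0, keepdims=True)))
    mi = float((p * log_ratio).sum())
    var = max((float((p * log_ratio ** 2).sum()) - mi ** 2) / n_ij.sum(), 1e-12)
    return mi, var


def prob_mi_exceeds(counts, eps):
    """P(I > eps) under a beta fitted by moment matching on [0, I_max]."""
    m, v = mi_mean_var(counts)
    i_max = np.log(min(counts.shape))           # upper bound used to rescale I to [0, 1]
    m, v = m / i_max, v / i_max ** 2
    common = m * (1.0 - m) / v - 1.0            # moment matching (assumes v < m * (1 - m))
    a, b = m * common, (1.0 - m) * common
    return 1.0 - beta_dist.cdf(eps / i_max, a, b)


def forward_filter(tables, eps=0.003, level=0.95):
    """FF: keep attribute k iff P(I_k > eps) >= level (placeholder defaults)."""
    return [k for k, t in enumerate(tables) if prob_mi_exceeds(t, eps) >= level]


def backward_filter(tables, eps=0.003, level=0.95):
    """BF: discard attribute k iff P(I_k <= eps) >= level, keep the rest."""
    return [k for k, t in enumerate(tables) if 1.0 - prob_mi_exceeds(t, eps) < level]
```

with these helpers the traditional filter f would correspond to keeping an attribute whenever the point estimate alone exceeds eps, ignoring its uncertainty.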
for each filter , we are interested in experimentally evaluating two quantities : for each instance of the data set , the average number of correct predictions ( namely , the prediction accuracy ) of the naive bayes classifier up to such instance ; and the average number of attributes used . by these quantitieswe can compare the filters and judge their effectiveness .the implementation details for the following experiments include : using the beta approximation ( section [ ba ] ) to the distribution of mutual information ; using the uniform prior for the naive bayes classifier and all the filters ; using natural logarithms everywhere ; and setting the level of the posterior probability to .as far as is concerned , we can not set it to zero because the probability that two variables are independent ( ) is zero according to the inferential bayesian approach .we can interpret the parameter as a degree of dependency strength below which attributes are deemed irrelevant .we set to , in the attempt of only discarding attributes with negligible impact on predictions .as we will see , such a low threshold can nevertheless bring to discard many attributes .table [ tab1 ] lists the 10 data sets used in the experiments .these are real data sets on a number of different domains .for example , shuttle - small reports data on diagnosing failures of the space shuttle ; lymphography and hypothyroid are medical data sets ; spam is a body of e - mails that can be spam or non - spam ; etc .._data sets used in the experiments , together with their number of features , of instances and the relative frequency of the majority class .all but the spam data sets are available from the uci repository of machine learning data sets .the spam data set is described in and available from androutsopoulos s web page.[tab1 ] _ [ cols="^,^,^,^",options="header " , ] the remaining cases are described by means of the following figures .figure [ fig2 ] shows that ff allowed the naive bayes to significantly do better predictions than f for the greatest part of the chess data set . the maximum difference in prediction accuracyis obtained at instance 422 , where the accuracies are 0.889 and 0.832 for the cases ff and f , respectively .figure [ fig2 ] does not report the bf case , because there is no significant difference with the f curve .the good performance of ff was obtained using only about one third of the attributes ( table [ tab2 ] ) .figure [ fig3 ] compares the accuracies on the spam data set .the difference between the cases ff and f is significant in the range of instances 32413 , with a maximum at instance 59 where accuracies are 0.797 and 0.559 for ff and f , respectively .bf is significantly worse than f from instance 65 to the end .this excellent performance of ff is even more valuable considered the very low number of attributes selected for classification . 
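the incremental protocol described above (classify the new instance with the currently selected attributes, then update the counts with the instance and its actual class) can be summarised by the following sketch; the data encoding, the uniform prior value and the shape of the `select_features` callback are assumptions of this example rather than details taken from the paper.

```python
import numpy as np


class IncrementalNaiveBayes:
    """Discrete naive Bayes kept as count tables so it can learn one instance at a time."""

    def __init__(self, n_values, n_classes, prior=1.0):
        self.class_counts = np.full(n_classes, prior, dtype=float)
        # joint_counts[f][v, c] = smoothed count of attribute f taking value v in class c
        self.joint_counts = [np.full((v, n_classes), prior, dtype=float) for v in n_values]

    def predict(self, x, features):
        log_post = np.log(self.class_counts / self.class_counts.sum())
        for f in features:
            table = self.joint_counts[f]
            log_post += np.log(table[x[f]] / table.sum(axis=0))
        return int(np.argmax(log_post))

    def update(self, x, y):
        self.class_counts[y] += 1
        for f, table in enumerate(self.joint_counts):
            table[x[f], y] += 1


def prequential_run(data, labels, n_values, n_classes, select_features):
    """Classify each instance with the attributes chosen by `select_features(model)`
    (the F, FF or BF filter applied to the counts seen so far), then learn from it."""
    model = IncrementalNaiveBayes(n_values, n_classes)
    accuracy, n_used, correct = [], [], 0
    for t, (x, y) in enumerate(zip(data, labels), start=1):
        kept = select_features(model)
        n_used.append(len(kept))
        correct += int(model.predict(x, kept) == y)
        accuracy.append(correct / t)       # average accuracy up to this instance
        model.update(x, y)                 # only now is the true class revealed to the model
    return accuracy, float(np.mean(n_used))
```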
in the spam case ,attributes are binary and correspond to the presence or absence of words in an e - mail and the goal is to decide whether or not the e - mail is spam .all the 21611 words found in the body of e - mails were initially considered .ff shows that only an average of about123 relevant words is needed to make good predictions .worse predictions are made using f and bf , which select , on average , about 822 and 13127 words , respectively .figure [ fig4 ] shows the average number of excluded features for the three filters on the spam data set .ff suddenly discards most of the features , and keeps the number of selected features almost constant over all the process .the remaining filters tend to such a number , with different speeds , after initially including many more features than ff . in summary ,the experimental evidence supports the strategy of only using the features that are reliably judged as carrying useful information to predict the class , provided that the judgment can be updated as soon as new observations are collected .ff almost always selects fewer features than f , leading to a prediction accuracy at least as good as the one f leads to .the comparison between f and bf is analogous , so ff appears to be the best filter and bf the worst .however , the conservative nature of bf might turn out to be successful when data are available in groups , making the sequential updating be not viable . in this case , it does not seem safe to take strong decisions of exclusion that have to be maintained for a number of new instances , unless there is substantial evidence against the relevance of an attribute .the expansion of around the mean can be a poor estimate for extreme values or and it is better to use tail approximations .the scaling behavior of can be determined in the following way : is small iff describes near independent random variables and .this suggests the reparameterization in the integral ( [ midistr ] ) .only small can lead to small .hence , for small we may expand in in expression ( [ midistr ] ) . correctly taking into account the constraints on , a scaling argument shows that .similarly we get the scaling behavior of around . can be written as , where is the entropy . without loss of generality .if the prior converges to zero for sufficiently rapid ( which is the case for the dirichlet for not too small ) , then gives the dominant contribution when .the scaling behavior turns out to be .these expressions including the proportionality constants in case of the dirichlet distribution are derived in the journal version . in the followingwe generalize the setup to include the case of missing data , which often occurs in practice .for instance , observed instances often consist of several features plus class label , but some features may not be observed , i.e. if is a feature and a class label , from the pair only is observed .we extend the contingency table to include , which counts the number of instances in which only the class is observed (= number of instances ) .it has been shown that using such partially observed instances can improve classification accuracy .we make the common assumption that the missing - data mechanism is ignorable ( missing at random and distinct ) , i.e.the probability distribution of class labels of instances with missing feature is assumed to coincide with the marginal .the probability of a specific data set of size with contingency table given , hence , is . 
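as a small illustration of the extended contingency table, the sketch below accumulates the fully observed (feature, class) pairs and, separately, the instances in which only the class label is observed; the use of `None` to mark a missing feature value is an assumption of this example.

```python
import numpy as np


def extended_counts(pairs, n_feature_values, n_classes):
    """Counts n[i, j] for fully observed (feature, class) pairs and a separate
    vector for instances where only the class label was observed."""
    n = np.zeros((n_feature_values, n_classes), dtype=int)
    class_only = np.zeros(n_classes, dtype=int)
    for x, y in pairs:                     # x is None when the feature is missing
        if x is None:
            class_only[y] += 1
        else:
            n[x, y] += 1
    return n, class_only
```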
assuming a uniform prior bayes rule leads to the posterior .the mean and variance of in leading order in can be shown to be & = & i({\bf \hat{\pi}})+o(n^{-1 } ) , \\\mbox{var}[i ] & = & \frac{1}{n}[\tilde{k}-\tilde{j}^{2}/\tilde{q}-\tilde{p}% ] + o(n^{-2}),\end{aligned}\ ] ] where the derivation will be given in the journal version .note that for the complete case , we have , , , , , and , consistently with ( [ varappr ] ) .preliminary experiments confirm that ff outperforms f also when feature values are partially missing .all expressions involve at most a double sum , hence the overall computation time is . for the case of missing class labels , butno missing features , symmetrical formulas exist . in the general case of missing features and missing class labels estimates for have to be obtained numerically , e.g. by the em algorithm in time , where is the number of iterations of em .in we derive a closed form expression for the covariance of and the variance of to leading order which can be evaluated in time .this is reasonably fast , if the number of classes is small , as is often the case in practice .note that these expressions converge for to the exact values .the missingness needs not to be small .this paper presented ongoing research on the distribution of mutual information and its application to the important issue of feature selection . in the former case , we provide fast analytical formulations that are shown to approximate the distribution well also for small sample sizes .extensions are presented that , on one side , allow improved approximations of the tails of the distribution to be obtained , and on the other , allow the distribution to be efficiently approximated also in the common case of incomplete samples . as far as feature selectionis concerned , we empirically showed that a newly defined filter based on the distribution of mutual information outperforms the popular filter based on empirical mutual information .this result is obtained jointly with the naive bayes classifier .more broadly speaking , the presented results are important since reliable estimates of mutual information can significantly improve the quality of applications , as for the case of feature selection reported here .the significance of the results is also enforced by the many important models based of mutual information .our results could be applied , for instance , to _ robustly _ infer classification trees .bayesian networks can be inferred by using credible intervals for mutual information , as proposed by .the well - known chow and liu s approach to the inference of tree - networks might be extended to credible intervals ( this could be done by joining results presented here and in past work ) .i. androutsopoulos , j. koutsias , k. v. chandrinos , g. paliouras , and d. spyropoulos .an evaluation of naive bayesian anti - spam filtering . in g.potamias , v. moustakis , and m. van someren , editors , _ proc . of the workshop onmachine learning in the new information age _ , pages 917 , 2000 .11th european conference on machine learning .g. h. john , r. kohavi , and k. pfleger .irrelevant features and the subset selection problem . in w.w. cohen and h. hirsh , editors , _ proceedings of the eleventh international conference on machine learning _ , pages 121129 , new york , 1994 .morgan kaufmann .
|
mutual information is widely used in artificial intelligence , in a descriptive way , to measure the stochastic dependence of discrete random variables . in order to address questions such as the reliability of the empirical value , one must consider sample - to - population inferential approaches . this paper deals with the distribution of mutual information , as obtained in a bayesian framework by a second - order dirichlet prior distribution . the exact analytical expression for the mean and an analytical approximation of the variance are reported . asymptotic approximations of the distribution are proposed . the results are applied to the problem of selecting features for incremental learning and classification of the naive bayes classifier . a fast , newly defined method is shown to outperform the traditional approach based on empirical mutual information on a number of real data sets . finally , a theoretical development is reported that allows one to efficiently extend the above methods to incomplete samples in an easy and effective way . robust feature selection , naive bayes classifier , mutual information , cross entropy , dirichlet distribution , second order distribution , expectation and variance of mutual information .
|
turbulent diffusivity has been an important concept for the mean - field modeling of the interior convection and dynamo of the sun and stars ( see the review by * ? ? ?it is a substantial factor for the transport of the angular momentum and magnetic field .while the non - turbulent molecular diffusivities are much smaller , i.e. , molecular viscosity is and molecular magnetic diffusivity is in the solar convection zone , the random advective motion of gases in turbulence is considered to behave as a strong diffusion .the specific value is unknown but previous studies suggest that the value is around - ( e.g. * ? ? ?this value affects predictions of the next solar maximum .it also affects the symmetry of the global magnetic field and the strength of the polar field .the value determines the difference in rotation speed and the propagation speed of the torsional oscillation .thus , the estimation of this value is crucially important .some studies have already estimated the value of turbulent diffusivity on the solar surface through observations . investigated the evolution of active regions and derived the optimum value of turbulent diffusivity . also estimated the value of turbulent diffusivity through high resolution observations .they concluded that the turbulent diffusivity depends on the resolved scale , i.e. , the value becomes smaller with higher resolution . also found this type of dependency through observation of bright points .estimate the value of turbulent diffusion with numerical simulations of thermal convection using the test field method , which has been adopted for investigations of many different types of turbulence .a test magnetic field is passively transported by the convection flows with no back reaction . with the utilization of horizontally averaged values as a mean field ,the coefficients of the -effect and the turbulent diffusivity are measured based on the mean - field equations .report that the value of turbulent diffusivity is proportional to the square of vertical velocity and is approximately proportional to the wavelength of the test field .investigate the value of turbulent diffusivity with a realistic radiative mhd simulation and estimate the value of turbulent diffusivity from the decreasing rate of the total magnetic flux .estimate the turbulent magnetic diffusivity and kinetic viscosity in a forced isotropic turbulence using a similar way , i.e. from the decay rate of the magnetic field and the velocity field . use the cross helicity to estimate the turbulent magnetic diffusivity in a stratified medium with forced turbulence . this method to numerical calculation of thermal convection * and * observation of the sun . in this study, we introduce a new method to estimate the value of turbulent diffusivity .we investigate the development of a passive scalar whose initial condition is the gaussian function .the method is found to be well suited for a gaussian function at each time point and its peak density and spatial extent give us necessary information on the scalar s kinematics . 
a detailed explanation of the method is given in section [ model ] .the specific aims of this study are : ( 1 ) estimation of turbulent diffusivity of thermal convection with different sizes of the simulation box ; ( 2 ) investigation into the validity of approximation of turbulent diffusion in thermal convection ; ( 3 ) investigation into the dependence of turbulent diffusivity and the validity of approximation on the initial distribution scale .the three - dimensional hydrodynamic equation of continuity , equation of motion , equation of energy , and equation of state are solved in cartesian coordinates , where and denote the horizontal directions and denotes the vertical direction .the formulations are almost the same as those used by .equations are expressed as , \label{e : e1},\\ & & \frac{\partial { \bf v}}{\partial t}=-({\bf v}\cdot\nabla){\bf v}-\frac{\nabla p_1}{\rho_0 } -\frac{\rho_1}{\rho_0}g{\bf e_z}+\frac{1}{\rho_0}\nabla \cdot{\bf \pi},\label{e : e2}\\ & & \frac{\partial s_1}{\partial t}=-({\bf v}\cdot\nabla)(s_0+s_1 ) + \frac{1}{\rho_0t_0}\nabla\cdot(k\rho_0t_0\nabla s_1)+\frac{\gamma-1}{p_0}({\bf \pi}\cdot\nabla)\cdot{\bf v},\label{e : e3}\\ & & p_1 = p_0 \left(\gamma \frac{\rho_1}{\rho_0}+s_1\right),\label{e : e4}\end{aligned}\ ] ] where , , , and denote the time - independent , plane - parallel reference density , pressure , temperature , and entropy , respectively and denotes the unit vector along the -direction . is the ratio of specific heats , with the value for an ideal gas being . , , and denote the fluctuations of density , pressure and entropy from reference atmosphere , respectively .note that the entropy is normalized by specific heat capacity at constant volume .the quantity is the gravitational acceleration , which is assumed to be constant .the quantity denotes the viscous stress tensor , ,\end{aligned}\ ] ] and and denote the viscosity and thermal diffusivity , respectively . and are assumed to be constant throughout the simulation domain .we assume an adiabatically stratified polytrope for the reference atmosphere except for entropy : ^m , \\ & & p_0(z)=p_r \left [ 1-\frac{z}{(m+1)h_r } \right]^{m+1 } , \\ & & t_0(z)=t_r \left [ 1-\frac{z}{(m+1)h_r } \right],\\ & & h_0(z ) = \frac{p_0}{\rho_0g},\end{aligned}\ ] ] where , , , and denote the values of , , , ( the pressure scale height ) at the bottom boundary .the profile of the reference entropy is defined with a steady state solution of the thermal diffusion equation with constant : where is the non - dimensional superadiabaticity and is the value of at . in spite of a non - zero value of superadiabaticity , the adiabatic stratification is acceptable due to the small value of superadiabaticitiy .the strength of the diffusive coefficients and are expressed with the following non - dimensional parameters : the reynolds number , and the prandtl number , where the velocity scale . in all cases of this study , the parameters are set as , , .we calculate three cases with different box sizes ( see table [ table_param ] ) .the horizontal size is the same in all calculations , i.e. and the number of grids in , directions are set as .we adopt three different vertical sizes of box , , , and for cases 1 , 2 , and 3 respectively .the number of grids in these cases are set as , , and , respectively .the rayleigh number , which is defined as in these cases are estimated to be , , and respectively , where denotes the difference of entropy between the top and the bottom boundaries .the calculation domain is , and . 
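for reference, the adiabatically stratified polytrope defined above can be evaluated as follows; the polytropic index m = 1/(gamma - 1) appropriate for an adiabatic ideal-gas stratification and the unit reference values at the bottom boundary are assumptions of this sketch, since the corresponding numbers are not recoverable from the text.

```python
import numpy as np


def reference_atmosphere(z, rho_r=1.0, p_r=1.0, t_r=1.0, g=1.0, gamma=5.0 / 3.0):
    """Adiabatic polytropic reference profiles rho_0, p_0, T_0 and the local
    pressure scale height H_0 = p_0 / (rho_0 * g)."""
    m = 1.0 / (gamma - 1.0)                 # adiabatic polytropic index (assumed)
    h_r = p_r / (rho_r * g)                 # pressure scale height at the bottom
    f = 1.0 - z / ((m + 1.0) * h_r)         # common stratification factor
    rho0 = rho_r * f ** m
    p0 = p_r * f ** (m + 1.0)
    t0 = t_r * f
    return rho0, p0, t0, p0 / (rho0 * g)


z = np.linspace(0.0, 0.9, 10)               # heights above the bottom boundary
rho0, p0, t0, h0 = reference_atmosphere(z)
```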
the boundary conditions and the numerical method are the same as those used by .the boundary condition for the , direction is periodic for all variables , and the stress free and impenetrative boundary conditions are adopted and the entropy is fixed , i.e. at and . in this study , we calculate the evolution of passive scalar to estimate the value of turbulent diffusivity .along with the equations ( [ e : e1])-([e : e4 ] ) , we simultaneously solve the advection equation of the passive scalar as where is passive scalar density .although in eq .( [ advection ] ) , the diffusion term does not appear explicitly , we use tiny artificial viscosity on the passive scalar , a technique which is adopted in .its initial condition is set as we adopt three initial conditions , i.e. , , and for each of the different depth settings ( cases 1 - 3 ) ; hence the total number of cases is nine . in the initial condition the passive scalar does not depend on , since we focus on the turbulent diffusion in the horizontal direction . since the transport of the passive scalar is assumed to be approximated by a diffusion process with constant diffusivity , then its density should obey the two - dimensional diffusion equation as when the calculation domain is infinite , the analytical solution of eq .( [ e : diffusion ] ) with the initial condition of eq .( [ e : initial ] ) is expressed as where , . in this studywe adopt periodic boundary conditions ; thus the analytic solution is given by the periodic superposition of the above formula and can be expressed as }.\end{aligned}\ ] ] when the width of the gaussian function is narrower than box size ( ) , the analytical solution in the range , and , can be approximated as }. \label{eq_fit}\end{aligned}\ ] ] we estimate the value of turbulent diffusivity by the following steps : 1 .the advection eq .( [ advection ] ) is calculated with the obtained velocity of thermal convection .2 . the obtained passive scalar in each step is vertically averaged as note that by using this method , we will obtain an averaged turbulent diffusivity along the -direction .3 . the result of averaging , i.e. , eq . ([ e : average ] ) , is fitted with eq .( [ eq_fit ] ) .note that the fitting has only one parameter , and this parameter has information on both the height and the width of the gaussian function .according to the analytical relation , , we obtain the value of turbulent diffusivity from the slope of .figure [ conv ] shows the results of our hydrodynamic calculation .the three panels in the left , middle and right columns show the contours of entropy in cases 1 , 2 and 3 , respectively . 
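the three estimation steps can be condensed into the following sketch: the vertically averaged scalar is fitted at each time with a single-parameter gaussian whose amplitude is tied to the width by conservation of the total amount, and the diffusivity is read off the linear growth of the squared width. the width convention exp(-r^2/d^2) (and hence the factor of 4), the conserved amount q and the placement of the gaussian at the box centre are assumptions of this example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress


def fit_width(x, y, scalar_xy, q_total):
    """One-parameter Gaussian fit of the vertically averaged scalar on the (x, y) grid;
    the amplitude q_total / (pi * d^2) is fixed by conservation of the total amount."""
    xx, yy = np.meshgrid(x, y, indexing="ij")
    r2 = (xx - x.mean()) ** 2 + (yy - y.mean()) ** 2   # distance from the assumed centre

    def model(r2_flat, d):
        return q_total / (np.pi * d ** 2) * np.exp(-r2_flat / d ** 2)

    popt, _ = curve_fit(model, r2.ravel(), scalar_xy.ravel(),
                        p0=[(x.max() - x.min()) / 10.0])
    return abs(popt[0])


def turbulent_diffusivity(times, widths):
    """Slope of d^2(t) divided by 4, matching the exp(-r^2 / d^2) width convention
    (the factor would be 2 for the exp(-r^2 / (2 sigma^2)) convention)."""
    slope = linregress(times, np.asarray(widths) ** 2).slope
    return slope / 4.0
```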
due to the large rayleigh number ,the velocity with a large box size is high ( see the third row in table [ table_param ] ) .a detailed investigation of cell size distribution will be reported in our forthcoming paper ( iida et al , in preparation ) .figure [ passive ] shows the contour of the passive scalar whose width of the gaussian function at the initial condition is .we can see that the passive scalar is diffused with turbulent convection .the dependences of on are provided in figure [ fitting ] , and are shown to be almost linear .this shows the validity of the diffusive description for the turbulent transport by the convective motion .the estimated turbulent diffusivity is shown in figure [ scaling]a .it is derived through linear fittings to the curves in figure [ fitting ] in the range of where is chosen so as to reduce the fitting error ; it is given in table [ table_param ] .the scaling behavior of the obtained diffusion is studied by changing the depth of the simulation box in cases 1 , 2 and 3 . in figure[ scaling]a , the blue , green , and red lines show the values of turbulent diffusivity with , , and , respectively . the value of turbulent diffusivity scales with the size of the box , which is discussed in the next paragraph .the value of turbulent diffusivity also scales with the initial width of the gaussian function .using a wider gaussian function makes the larger size of the convection cell work more efficiently and generates a larger value of turbulent diffusivity . in the mean field model, it is thought that the coefficient of turbulent diffusion can be expressed as , where is the characteristic length scale of turbulence and is the root - mean - square ( rms ) velocity .the value of turbulent diffusivity is obtained in this study , and we can estimate the value of based on the mean field model i.e. , .the estimated horizontal rms velocities and characteristic length scale are shown in figure [ scaling]b and c , respectively .we discuss the dependence of on the box size with and . with a smaller box , i.e. ( case 2 ) and ( case 3 ) ,the characteristic lengths are almost the same as the sizes of the boxes ( the size of the box is indicated by the dashed line ) .it is natural that the largest cell size is determined by the size of the box and that the largest cell is most effective for advecting the passive scalar .although we expected that the characteristic length of case 1 would also be the same as , the obtained characteristic length scale was smaller than even with and .a possible reason for this result is that the characteristic length is restricted by the convection cell , which is also limited vertically by the pressure scale height ( ) or the density scale height ( ) .although should be evaluated in the horizontal scale , the mixture of the passive scalar may occur at approximately the same distance with the vertical scale .it should also be noted that when the narrowest gaussian function , i.e. ( red line ) , is used , the characteristic lengths are restricted by the width of the gaussian function in cases 1 and 2 .next , we discuss the validity of the approximation of turbulent diffusivity quantitatively . we calculate the estimated error of the linear fitting of as : ^ 2},\end{aligned}\ ] ] where is the number of data points along the time , is the -th estimated result and is the -th result of the fitted line . in figure [ scaling]d , we found a dependence of on , i.e. 
is larger with narrower .although the qualitative relation is not clear , it indicates that the estimated error tends to become smaller with a larger ratio of the initial width of gaussian function to the characteristic length ( )we investigated the value of horizontal turbulent diffusivity by a numerical calculation of thermal convection . in this study , we have introduced a new method , whereby the turbulent diffusivity is estimated by monitoring the time development of the passive scalar , which is initially distributed in a given gaussian function with a spatial scale .our conclusions are as follows : ( 1 ) assuming the relation where is the rms velocity , the characteristic length is restricted by the shortest one among the pressure ( density ) scale height and the region depth . ( 2 ) the value of turbulent diffusivity becomes larger with a larger initial distribution scale .( 3 ) the approximation of turbulent diffusion holds better when the ratio of the initial distribution scale to the characteristic length is larger .conclusion ( 2 ) is consistent with the results of observational study and a previous numerical study . in this study, we do not estimate the correlation length directly from the thermal convection .this will be achieved in our future work with an auto - detection technique and our characteristic length ( ) will be compared with directly estimated correlation length .we now assume that our characteristic length is an average of correlation length at each height ( iida et al . in prepareation ) .the turbulent diffusion in the horizontal directions is estimated in this work , but such estimations are also important in the vertical directions for addressing the solar dynamo problem from the viewpoint of the transport of magnetic flux from the surface to the bottom of the convection zone. such a study will be conducted in the future . although turbulent diffusivity averaged in the whole boxis estimated in this study , the dependence of this estimation on the height is important .there are , however , two reasons why it is difficult to estimate this dependence with our method .first , in our calculations the integrated passive scalar density is not conserved at each height .second , we found that it is difficult to estimate the diffusivity separately for each horizontal plane only by solving eq .( [ advection ] ) two - dimensionally in the plane the because the results show that the passive scalar density is strongly concentrated in the boundaries of the convection cells .such a spatially intermittent structure is inappropriate for obtaining a statistical property like the turbulent diffusivity .these difficulties will necessitate some substantial improvements in our method .we are also interested in the effect of feedback from the magnetic field to the convection because of its influence on the turbulent diffusivity .the authors thank n. kitagawa for helpful discussions .numerical computations were , in part , carried out on a cray xt4 at the center for computational astrophysics , cfca , of the national astronomical observatory of japan .the page charge for this paper is subsidized by cfca .this work was supported by grant - in - aid for jsps fellows .we have greatly benefited from the proofreading / editing assistance from the gcoe program . 
figure [ conv ] : contours of entropy. the left, middle, and right columns correspond to cases 1, 2, and 3, and the rows from top to bottom show three successive times.
figure [ passive ] : evolution of the passive scalar. the columns again correspond to cases 1, 2, and 3; the top row shows contours of the passive scalar and the bottom row the passive scalar density averaged over the vertical direction, as defined by eq. ( [ e : average ] ).
figure [ fitting ] : fitted squared width of the passive scalar as a function of time. panels a, b, and c correspond to the three box depths, and the blue, green, and red lines to the three initial widths of the gaussian function.
figure [ scaling ] : ( a ) turbulent diffusivity, ( b ) horizontal rms velocity, and ( c ) characteristic length versus the size of the box ( the dashed line in panel c marks the box size ); ( d ) estimated error versus the width of the gaussian function. in panels a and b the blue, green, and red lines show the three initial widths; in panel d the solid, dotted, and dashed lines correspond to the three cases.
|
we investigate the value of horizontal turbulent diffusivity by numerical calculation of thermal convection . in this study , we introduce a new method whereby the turbulent diffusivity is estimated by monitoring the time development of the passive scalar , which is initially distributed in a given gaussian function with a spatial scale . our conclusions are as follows : ( 1 ) assuming the relation where is the rms velocity , the characteristic length is restricted by the shortest one among the pressure ( density ) scale height and the region depth . ( 2 ) the value of turbulent diffusivity becomes greater with the larger initial distribution scale . ( 3 ) the approximation of turbulent diffusion holds better when the ratio of the initial distribution scale to the characteristic length is larger .
|
this paper addresses the computation of a function that is key to the evaluation of both the random coding and sphere packing error exponents .this function , often denoted , is usually expressed as a maximization problem over input distributions .consequently , it is conceptually easily bounded from below : any feasible input distribution gives rise to such a bound . in this paperwe propose to use a dual expression for an expression that involves a minimization over output distributions in order to derive _ upper _ bounds on .we shall demonstrate this approach by studying the cutoff rate of non - coherent ricean fading channels . to that endwe shall have to study the appropriate modifications to the function that are needed to account for input constraints and when the channel input and output alphabets are infinite .it should be noted that the dual expression we propose to use is not new , ( * ? ? ?* ex . 23 in ch . 2.5 ) .we merely extend it here to input constrained channels over infinite alphabets and demonstrate how it can be used to derive _ analytic _ upper bounds on the random coding and sphere packing error exponents .for _ numerical _ procedures ( for unconstrained finite alphabet channels ) see .the rest of this introductory section is dedicated to the introduction of the function for discrete memoryless channels .we first treat unconstrained channel and then introduce the modifications that are needed to account for input constraints .we describe both the `` method of types '' approach and gallager s approach .we pay special attention to the modification that gallager introduced to account for cost constraints and to the duality between the expressions derived using the two approaches .this introduction is somewhat lengthy because , while the results are not new , we had difficulty pointing to a publication that introduces the two approaches side by side and that compares the two in the presence of cost constraints . in section [ sec :continuous ] we extend the discussion to infinite alphabets and prove the basic inequality on which our approach to upper bounding is based ; see proposition [ prop : upper ] . in section [ sec : ricean ]we introduce the discrete - time memoryless ricean fading channel with and without full or partial side information at the receiver , and we describe our asymptotic results on this channel s cutoff rate .these asymptotic results are derived using duality in section [ sec : derivation ] , which concludes the paper . to motivate the interest in the function we shall begin by addressing the case where there are no input constraints .the reliability function corresponding to rate- unconstrained communication over a discrete memoryless channel ( dmc ) of capacity is the best exponential decay in the blocklength of the average probability of error that one can achieve using rate- blocklength- codebooks .that is , where denotes the average probability of error of the best rate- blocklength- codebook for the given channel .the problem of computing the reliability function of a general dmc over the finite input and output alphabets and and of a general law is still open .various upper and lower bounds are , however , known . to derive lower bounds on the reliability function one must derive upper bounds on the probability of error of the best rate- blocklength- code .this is typically done by demonstrating the existence of good codes for which the average probability of error is small .one such lower bound on is the random coding lower bound . 
by considering an ensemble of codebookswhose codewords are chosen independently , each according to a product distribution of marginal law , gallager derived the lower bound where and since the law from which the ensemble of codebooks is constructed is arbitrary , gallager obtained the bound where is gallager s random coding error exponent a different random coding lower bound on the reliability function can be derived using the ensemble of codebooks where the codewords are still chosen independently , but rather than according to a product distribution , each is now chosen uniformly over a type class , , . with this approach one obtains , the lower bound where here the minimization is over all conditional laws the term denotes the mutual information corresponding to the channel and the input distribution ; and stands for . again , since the type according to which the ensemble is generated is arbitrary , one obtains where there is an alternative form for that will be of interest to us , ( * ? ? ?* ex . 23 in ch .this form is more similar to : where and where the minimization in the latter is over the set of all distributions on the output alphabet . in general , for any dmc and any input distribution , ( * ? ?* ex . 23 in ch .2.5 ) and hence with the inequalities typically being strict .these inequalities are a consequence of the fact that the `` average constant composition code '' performs better than the `` average independent and identically distributed code '' . however ,when optimized over the input distributions , the inequalities turn into equalities , , ( * ? ? ?* ex . 23 in ch .2.5 ) and i.e. , in fact , as shown in appendix [ app : lagrange ] , the optimization problems appearing on the lhs and on the rhs of are lagrange duals .consequently , we shall henceforth denote ( ) by and refer to ( ) as the random coding error exponent and denote it by . in terms of the function the random coding error exponent is thus given by the cut - off rate is defined by the function also plays an important role in the study of upper bounds to the reliability function .in fact , the sphere packing error exponent is given by combining with and we obtain the two equivalent expressions for we refer to the former expression as the `` primal '' expression and to the latter as the `` dual '' expression .the primal expression is useful for the derivation of lower bounds on .indeed , any distribution on the input alphabet induces the lower bound on the other hand , the dual expression is useful for the derivation of upper bounds .any distribution on the output alphabet yields the upper bound before we can use the above bounds for fading channels we need to extend the discussion to cost constrained channels and to channels over infinite input and output alphabets where the method of types can not be directly used .for now we continue our assumption of finite alphabets and address the cost constraint .suppose we limit ourselves to blockcode transmissions where we only allow codewords that satisfy where is a cost function on the input alphabet , is some pre - specified non - negative number , and , as before , is the blocklength .the reliability function is defined as in with the modification that should be now understood as the lowest average probability of error that can be achieved using a rate- blocklength- codebook all of whose codewords satisfy the cost constraint . 
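before turning to the cost-constrained case, the unconstrained quantities above are easy to evaluate numerically for a toy channel. the sketch below computes e_0(rho, q) from its definition and the cutoff rate e_0(1, q) for a binary symmetric channel; the uniform input is optimal here by symmetry, whereas in general the maximisation over input distributions (and, with a cost constraint, over the parameter r) must be carried out explicitly.

```python
import numpy as np


def gallager_e0(rho, q, w):
    """E0(rho, Q) = -log sum_y ( sum_x Q(x) W(y|x)^(1/(1+rho)) )^(1+rho)."""
    inner = (q[:, None] * w ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())


def random_coding_exponent(rate, q, w, n_grid=1001):
    """max over rho in [0, 1] of E0(rho, Q) - rho * R, for a fixed input distribution."""
    return max(gallager_e0(r, q, w) - r * rate for r in np.linspace(0.0, 1.0, n_grid))


p = 0.1                                         # crossover probability of a BSC
w = np.array([[1.0 - p, p], [p, 1.0 - p]])      # w[x, y] = W(y | x)
q = np.array([0.5, 0.5])                        # uniform input, optimal by symmetry

cutoff_rate = gallager_e0(1.0, q, w)            # equals log 2 - log(1 + 2 sqrt(p(1-p)))
print(cutoff_rate, random_coding_exponent(0.2, q, w))
```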
to obtain lower bounds on gallager , modified his random coding argument in two ways .he introduced a new ensemble of codebooks and introduced an improved technique to analyze the average probability of error over this ensemble . for any probability law on the input alphabet satisfying } \leq \upsilon\ ] ] where } \triangleq \sum_{x \in { \mathcal{x } } } { \mathsf{q}}(x ) g(x)\ ] ] define } < \upsilon { \textnormal{\textsf{e}}_{{\mathsf{q}}}\!\left[{g(x)}\right ] } = \upsilon ] he considered an ensemble of codebooks where the codewords are chosen independently of each other , each according to the a - posteriori law of a sequence drawn iid according to conditional on .to prove the result when } = \upsilon ] this follows directly from . for a proof in the case } = \upsilon ] is borel measurable .finally assume the existence of an underlying positive measure on with respect to which all the probability measures are absolutely continuous .denote the radon - nykodim derivative of with respect to by thus , is the density at of the channel output corresponding to the input . for any input and any borel set as to the cost, we shall assume that the function is measurable and consider block codes that satisfy .we extend the definition to infinite alphabets as } \triangleq \int_{\x } g(x ) { \,\textnormal{d}}{\mathsf{q}}(x).\ ] ] definition is extended for any probability law on as for any input distribution satisfying the constraint } \leq \upsilon and } \\ { \displaystyle \bigl . e_{0}(\varrho , { \mathsf{q } } , r ) \bigr|_{r=0 } } & \text{otherwise } \end{cases}.\ ] ] ( note that following gallager , we allow for the optimization over only when under the law the random variable has a finite third moment . ) with this definition we can now define } \leq \upsilon } { e_{\textnormal{g,0}}^{\textnormal{m}}}(\varrho , { \mathsf{q}})\ ] ] and the cut - off rate as the random coding error exponent is achievable with block codes satisfying the constraint , .the following proposition proves in the more general case where the alphabets may be continuous .it is particularly useful for the derivation of upper bounds on .[ prop : upper ] consider as above a discrete - time memoryless infinite alphabet channel , an output measure , a measurable cost function , and some arbitrary allowed cost .let be an arbitrary density with respect to on the output alphabet .then for any distribution on satisfying the cost constraint } \leq \upsilon ] and the case where } = \upsilon ] . 
in the former case , by , and the result follows by an application of jensen s inequality and hlder s inequality : as for the case where } = \upsilon ] ) we have for any } - \upsilon \right ) \nonumber \\ & & -\ : ( 1+\varrho)\int_{x \in \x}\log \left ( \int_{y \in \y } e^{r(g(x)-\upsilon ) } w(y|x)^\frac{1}{1+\varrho } f_{r}(y)^\frac{\varrho}{1+\varrho } { \,\textnormal{d}}\mu(y ) \right){\,\textnormal{d}}{\mathsf{q}}(x ) \nonumber \\ & = & -(1+\varrho ) \int_{x \in \x } \log \left(\int_{y \in \y } e^{r(g(x)-\upsilon ) } w(y|x)^\frac{1}{1+\varrho } f_{r}(y)^\frac{\varrho}{1+\varrho } { \,\textnormal{d}}\mu(y ) \right){\,\textnormal{d}}{\mathsf{q}}(x ) \nonumber \\ & \geq & -(1+\varrho ) \log \int_{x \in \x } \int_{y \in \y } e^{r(g(x)-\upsilon ) } w(y|x)^\frac{1}{1+\varrho } f_{r}(y)^\frac{\varrho}{1+\varrho } { \,\textnormal{d}}\mu(y){\,\textnormal{d}}{\mathsf{q}}(x ) \nonumber\\ & \geq & - \log \int_{y \in \y } \left ( \int_{x \in \x } e^{r(g(x ) - \upsilon ) } w(y|x)^\frac{1}{1+\varrho } { \,\textnormal{d}}{\mathsf{q}}(x ) \right)^{1+\varrho } { \,\textnormal{d}}\mu(y ) \nonumber \\ & = & e_{0}(\varrho , { \mathsf{q } } , r).\end{aligned}\ ] ] where the second equality follows because in the case we are considering now } = \upsilon ] to obtain the lower bound : where is defined in .to derive upper bounds on we can use the above proposition by choosing some arbitrary output density to obtain } \leq \upsilon}\left\ { -(1+\varrho ) \int_{x\in \x } \log \left ( \int_{y \in \y } w(y|x)^{\frac{1}{1+\varrho } } f_{r}(y)^{\frac{\varrho}{1+\varrho } } { \,\textnormal{d}}\mu(y ) \right ) { \,\textnormal{d}}{\mathsf{q}}(x)\right\}.\end{gathered}\ ] ]the discrete - time memoryless ricean fading channel with partial receiver side information is a channel whose input takes value in the complex field and whose corresponding output constitutes of a pair of complex random variables and .we shall refer to as `` the received signal '' and to as the `` side information ( at the receiver ) '' .the joint distribution of corresponding to the input is best described using the fading complex random variable and the additive noise complex random variable .the joint distribution of , , and does not depend on the input .the additive noise is independent of the pair and has a circularly symmetric complex gaussian distribution of positive variance .the fading is of mean the `` specular component '' and it is assumed that is a unit - variance circularly symmetric complex gaussian random variable . is real and non - negative .the more general complex case can be treated by rotating the output . ]the pair and are jointly circularly symmetric gaussian random variables .we denote the conditional variance of given by .the received signal corresponding to the input is given by the case where corresponds to the case where and are independent , in which case the receiver can discard without loss in information rates .this case corresponds to `` non - coherent '' fading . in the case the receiver can precisely determine the realization of from .this corresponds to `` coherent detection '' .finally , the case corresponds to `` partially coherent '' communication . in this case carries some information about , but it does not fully determine . 
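a possible simulator for this channel model is sketched below. writing the granular fading component as sqrt(1 - eps^2) * s + eps * n, with s and n independent unit-variance circularly symmetric gaussians, is one construction that gives a unit-variance fading term whose conditional variance given the side information s equals eps^2; the symbol names d, eps and sigma2 are assumptions, since the corresponding symbols are stripped from this text.

```python
import numpy as np

rng = np.random.default_rng(0)


def c_normal(size, var=1.0):
    """Circularly symmetric complex Gaussian samples with the given variance."""
    scale = np.sqrt(var / 2.0)
    return rng.normal(scale=scale, size=size) + 1j * rng.normal(scale=scale, size=size)


def ricean_channel(x, d=1.0, eps=0.3, sigma2=1.0):
    """Return (y, s): received signal and partial side information for the inputs x."""
    s = c_normal(x.shape)                                  # side information, CN(0, 1)
    h_tilde = np.sqrt(1.0 - eps ** 2) * s + eps * c_normal(x.shape)
    h = d + h_tilde                                        # fading: specular + granular part
    z = c_normal(x.shape, var=sigma2)                      # additive noise, independent of (h, s)
    return h * x + z, s


y, s = ricean_channel(2.0 * np.ones(10000, dtype=complex))
```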
in this paper we shall only consider the case where .the case is much easier to analyze and has already received considerable attention in the literature .see for example , , , and the references in the latter .the special case of ricean fading with zero specular component is called `` rayleigh fading '' . the non - coherent ( ) capacity of this channelwas studied in , and . the coherent case ( )was studied in .the capacity of the non - coherent ricean channel ( and ) was studied in - and . unless some restrictions are imposed on the input , the capacity and cut - off rate of this channel are infinite .two kinds of restrictions are typically considered .the first corresponds to an average power constraint . hereonly blockcodes where each codeword satisfies with are allowed . in this context rather than denoting the allowed cost by we shall use the more common symbol , which stands here for the average energy per symbol .that is , we only allow blocklength- codes in which every codeword satisfies the second type of constraint is a peak power constraint . herewe only allow channel inputs that satisfy where now stands for the allowed peak power .such a constraint is best treated by considering the channel as being free of constraints but with the input alphabet now being any codebook satisfying the peak power constraint also satisfies the average power constraint hence the capacity and reliability function under the peak constraint can not exceed those under the average constraint . irrespective of whether an average power or a peak power constraint is imposed , at high snr the capacity of this channel is given asymptotically as where the correction term depends on the snr and tends to zero as the snr tends to infinity . here denotes the exponential integral function and we define the value of the function at as , where denotes euler s constant .( with this definition the function is continuous from the right at . ) here we shall study the cutoff rate in two cases .first , in the absence of side information ( ) we will show that irrespective of whether a peak or average power constraint is imposed here denotes the zero - th order modified bessel function of the first kind , which is given by and the term is a correction term that depends on the snr and that approaches zero as the snr tends to infinity .figure [ fig : plot1 ] depicts the second order term ( the constant term ) in the high snr expansion of channel capacity and of the cutoff rate as a function of the specular component in the absence of side information .for a zero specular component the difference between the two second order terms is nats ; for very large specular components ( ) this difference approaches nats .[ cc][cc]second order terms ( nats ) [ cc][bc]the specular component [ cc][cc]of [ cc][cc]of [ cc][cc] for the case where the side information is present but is not perfect ( ) we only treat the case of zero specular component ( , i.e. , rayleigh fading ) .we obtain the expansion where is the complete elliptic integral of the first kind : for the case of rayleigh fading with perfect side information ( ) see . 
for the case of `` almost perfect side information '' ( ) we note the expansion which follows from the approximation for some figure [ fig : plot2 ] depicts the second order terms of channel capacity and the cutoff rate as a function of the estimation error in estimating the fading from the side information for rayleigh fading channels ( ) .[ fig : plot ] [ cc][cc]second order terms ( nats ) [ cc][bc]the estimation error [ cc][cc]of [ cc][cc]of [ cc][cc] derive an upper bound on the cut - off rate of the ricean channel in the absence of side information we use proposition [ prop : upper ] with the density ( w.r.t .the lebesgue measure on ) here the parameters , , and can be chosen freely in order to obtain the tightest bound , and denotes the incomplete gamma function , ( this family of densities was introduced in for the purpose of studying the fading number . ) by proposition [ prop : upper ] applied with we obtain for any law under which } \leq { { \mathcal e}}\ ] ] the upper bound where and from {grad_ryz_94} ] .we next choose , as before , to be a law under which is circularly symmetric with whence by proposition [ prop : devdeal ] and applied to the ricean channel of fading mean and granular component and the tightness of the lower bound for every .the desired result now follows from and using the dominated convergence theorem and .in this appendix we prove the following lagrange duality : [ prop : lagdual ] for any discrete memoryless channel and any , the problem is a lagrange dual of the problem where is a distribution on the input alphabet . in particular , since strong duality holds , consider a discrete memoryless channel with input , and output , .we henceforth introduce the more standard , for optimization problems , vector notation for functions on discrete domains .hence , let be a probability distribution on and be a matrix whose ( i , j)-th element is given by hence , ( [ eq : expgal ] ) can be written as : where is an auxiliary vector that we introduce in this problem .the domain of this optimization problem is . for any the objective functionis convex in .furthermore , all equality and inequality constraints are affine .hence , the problem is a _ convex optimization problem_. we will perform a relaxation , which is nevertheless tight for the optimal values of and , to the constraint , namely the lagrangian function of this problem is where , , , and .since the lagrangian function is affine with respect to , we impose the dual inequality constraint , minimize the lagrangian over and obtain the lagrange dual problem this is a concave problem , with the objective function being monotonic with respect to all the optimization variables .since we maximize it in a polyhedron , the optimum will be on the boundary , of maximum distance from the hyperplane and of minimum distance from all hyperplanes that define the polyhedron .therefore , some dual constraint has to be active , i.e. , consequently , the dual problem becomes we perform the transformation of variables , where is chosen to be a probability distribution and is the appropriate normalizing scalar .optimizing over yields which , because of the fact that is concave with respect to and monotonic with respect to , concludes the proof .we begin with the case where the cost constraint is active . fix some and let and achieve }=\upsilon } \max_{r \geq 0 } e_{0}(\varrho , { \mathsf{q } } , r)\ ] ] so that }=\upsilon } \max_{r \geq 0 } e_{0}(\varrho , { \mathsf{q } } , r).\ ] ] following ( * ? ? ?* eq . 
( 7.3.26 ) ) we define with this definition we have by and }=\upsilon } \max_{r \geq 0 } e_{0}(\varrho , { \mathsf{q } } , r ) & = e_{0}(\varrho , { \mathsf{q } } _ { * } , r _ { * } ) \nonumber \\\label{eq : a17 } & = - \log \sum_{y \in { \mathcal{y } } } \alpha^{1+\varrho}(y ) .\end{aligned}\ ] ] also , by ( * ? ? ?* eq . ( 7.3.28 ) ) consider now the distribution on given by we now have by that for any distribution and if } = \upsilon ] guarantees that the introduction of the exponential term has zero net effect ; the subsequent equality by ; the subsequent inequality by ; and the final equality by. it thus follows upon taking the supremum in the above over all laws satisfying } = \upsilon ] is also small .indeed , which combines with the monotonicity of and the fact that the argument to the exponential function is negative to demonstrate that and hence that on the other hand a straightforward calculation demonstrates that where the first inequality follows from the monotonicity of and the final inequality follows from simple algebra .we thus conclude that
|
we propose a technique to derive upper bounds on gallager s cost - constrained random coding exponent function . applying this technique to the non - coherent peak - power or average - power limited discrete time memoryless ricean fading channel , we obtain the high signal - to - noise ratio ( snr ) expansion of this channel s cut - off rate . at high snr the gap between channel capacity and the cut - off rate approaches a finite limit . this limit is approximately 0.26 nats per channel - use for zero specular component ( rayleigh ) fading and approaches 0.39 nats per channel - use for very large specular components . we also compute the asymptotic cut - off rate of a rayleigh fading channel when the receiver has access to some partial side information concerning the fading . it is demonstrated that the cut - off rate does not utilize the side information as efficiently as capacity , and that the high snr gap between the two increases to infinity as the imperfect side information becomes more and more precise . keywords : asymptotic , channel capacity , cut - off rate , fading , high snr , ricean fading .
|
network operators continuously upgrade their networks due to increasing demands for ubiquitous communications caused by two reasons .the increased burstiness of high - volume traffic is the first . as popularities of cloud computing , internet of things ( iot ) , and high - speed mobile communications increase, network traffic is becoming extremely dynamic and bursty .thus , just enlarging the network capacity is not an economical choice for network operators , while neglecting it will strongly affect the qos .second , various emerging networking applications , such as online gaming , data backups and virtual machine migrations , which are aimed at ubiquitous communications , also result in heterogeneous demands in optical telecom networks .in addition , today s network customers request more customized services , such as virtual private network ( vpn ) and video on demand ( vod ) , as well as more differentiated requirements with different prices . for some of the trafficwhich is delay - insensitive and can accept some compromise in bandwidth or other aspects , it can be preempted by more important " requests when the network becomes congested .therefore , to maintain cost - effectiveness and customers loyalty , network operators can provide different grades of service besides sufficient bandwidth for these emerging demands , instead of managing to support all the traffic without distinction . to address these problems , _degraded provisioning _ is proposed for network operators to provide a degraded level of service when network congestion occurs instead of no service at all .generally , degraded provisioning refers to two different approaches : 1 ) keeping the total amount of transferred traffic constant by time prolongation or modulation level adjustment with immediate service access ( qos - assured ) , and 2 ) directly degrade request bandwidth without time or modulation compensation , or no guarantee for immediate access ( qos - affected ) .we focus on qos - assured degraded provisioning problems in this study .when performing degradation , two questions may come up : which request to degrade ? and how much to degrade ?a new service level agreement ( sla ) with the customer may answer these questions when traffic requests are sorted in different priorities in service - differentiated networks . in multi - layer networks , qos - assured degradation has different implementation methods in different layers .the basic idea of degraded provisioning in electric layer is trading time for space .this means that , for a delay - insensitive and degradation - tolerant traffic request , when congestion occurs and the bandwidth is in shortage , we degrade its transmission rate to enlarge the current available bandwidth ( space ) , and extend its service holding time accordingly on the premise that the full traffic amount is constant .note that a traffic request can not be degraded arbitrarily , and it may be constrained by a certain deadline , no matter strict or relaxed . in the elastic optical layer , degradation refers to decreasing the number of occupied spectrum slots for a lightpath and raising the modulation level accordingly to guarantee the data rate of the lightpath . in ofdm - based elastic optical networks ,modulation level can be reconfigured in the dsp and dac / adc via software , which enables the dynamic adjustment of lightpath modulation level . 
however , higher - level modulation tends to have shorter transmission reach , which is also a constraint on optical degradation . therefore , we should consider the constraints of both layers together to perform degraded provisioning in multi - layer networks . due to the flexibility enabled by degraded provisioning , there have been many studies on this topic in different kinds of optical networks in recent years . in conventional wdm networks , roy _ et al . _ studied supporting degraded service using multipath routing in a qos - affected way . studied reliable multipath provisioning , exploiting the flexibility in bandwidth and delay . andrei _ et al . _ proposed a deadline - driven method to flexibly provision services in optical networks , but this work provided equal accessibility to all requests without distinction and did not guarantee immediate service access . introduced a dynamic scheme to reduce blocking by exploiting degraded - service tolerance in telecom networks in a qos - affected way , but this work tried to degrade the minimum number of connections without considering different priorities of requests . they also applied this qos - affected degraded - service - provisioning method to increase network survivability under a certain sla . the mixed - line - rate ( mlr ) network was proposed to provide flexibility in the optical layer to reduce transmission costs . in mlr networks , vadrevu _ et al . _ proposed a qos - affected degradation scheme using multipath routing to support degraded services considering minimum - cost mlr network design . but the itu - t grid limit still puts constraints on optical - layer flexibility . as a major development of optical - layer technology , elastic optical networking enables more flexibility in optical modulation and spectrum allocation . distance - adaptive spectrum resource allocation is a similar approach to optical degradation , but its limitation is that the modulation format of a lightpath is configured only once , based on the transmission reach , and can not be adjusted according to traffic fluctuations . proposed a dynamic algorithm for joint multi - layer planning in ip - over - flexible - optical networks , but this work did not consider dynamic adjustment of the lightpath modulation format when network congestion occurs . recent progress in modulation format conversion enables all - optical ook to 16qam adjustment , and its advantages in elastic optical networking were demonstrated by yin _ et al . _ when the modulation format can be converted at an intermediate node of a lightpath . therefore , the concept of degraded provisioning can be extended to the elastic optical layer , and this important issue has not been addressed in previous studies .
in this work ,we investigate the dynamic qos - assured degraded provisioning problem in service differentiated multi - layer elastic optical networks .we summarize our contributions as follows : 1 ) to the best of our knowledge , this is the first investigation on qos - assured degraded provisioning problem in multi - layer networks with optical elasticity ; 2 ) we propose an enhanced multi - layer network architecture and solve the complex dynamic degraded routing problem ; and 3 ) we further propose novel dynamic heuristic algorithms in both layers to achieve degraded resource allocation .illustrative numerical results show that a significant reduction of network service failures can be achieved by multi - layer degradation , up to two orders of magnitude .in dynamic traffic scenario , traffic requests arrive one at a time and hold for certain durations .we design dynamic heuristic algorithms to cope with online degraded provisioning problems in realistic topology .we first decompose the problem into two subroutines : 1 ) degraded routing , and 2 ) degraded resource allocation .degraded routing solves the subproblem of degraded - route computation when conventional routing can not be performed due to resource shortage in some links of the network .optical degraded routing acts similarly as electric degraded routing , and the term _ request _ here refers to the _ lightpath request _ in optical layer , and the _ service request _ in electric layer .there are two major considerations on degraded routing : route hops and potential degraded requests .the route hops represent the overall amount of resources occupied by the new request , while the potential degraded requests represent how much the new request will affect existing requests .we define a link in any layer of the multi - layer network as a tuple : , which is the link from to . is a set that contains existing requests routed on this link , and is the available capacity of this link . is a request , and is a degraded route for in electric or optical layer .we introduce two metrics to evaluate the route .route hops ( rh ) represents the resource occupancy of a request , and potential degraded requests ( pdr ) evaluates the impact of a request on existing requests .note that the operation returns the number of elements in a set , and numbers of them are calculated as : .\mathbf{\theta}| % \vspace { -0.5em}\ ] ] to calculate a route for minimizing rh , the dijkstra algorithm is applied . however, minimizing pdr is not that easy . because the minimizing - pdr problem aims to obtain a route that has the smallest pdr among all possible routes between a given source - destination ( s - d ) pair , while existing requests on a link are time - varying and several links may support the same request together .a straight - forward idea is to list all possible routes between a given s - d pair and compare their pdr .but , the complexity of this process is ( denotes the number of nodes ) , which is not suitable for dynamic traffic , and new methods are needed . here, we propose the enhanced multi - layer network architecture by introducing the auxiliary _ request layer _ , which lies directly above the electric layer ( fig .1 ) , and solve the minimizing - pdr problem in polynomial time . 
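to make the two routing metrics concrete , the sketch below ( illustrative graph encoding and request identifiers , not the paper's implementation ; links are treated as directed for simplicity ) computes a hop - minimal route with dijkstra , so that rh is the number of hops , and evaluates pdr as the number of distinct existing requests carried on the links of a route .

```python
import heapq

def min_rh_route(links, src, dst):
    """links: dict mapping (u, v) -> set of existing request ids on that link.
    returns a hop-minimal route as a list of nodes (rh = len(route) - 1)."""
    adj = {}
    for (u, v) in links:
        adj.setdefault(u, []).append(v)
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    if dst not in dist:
        return None
    route, node = [dst], dst
    while node != src:
        node = prev[node]
        route.append(node)
    return route[::-1]

def pdr(links, route):
    """pdr = number of distinct existing requests routed on any link of the route."""
    affected = set()
    for u, v in zip(route, route[1:]):
        affected |= links[(u, v)]
    return len(affected)

# toy example
links = {("a", "b"): {1, 2}, ("b", "c"): {2}, ("a", "c"): {3, 4, 5}}
r = min_rh_route(links, "a", "c")
print(r, "rh =", len(r) - 1, "pdr =", pdr(links, r))
```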
in the enhanced multi - layer architecture ,the request layer consists of all nodes in the other two layers and directional weighed links .link weight equals to the number of existing requests between the node pairs .in fact , in our proposed architecture , it is existing requests , rather than available resources in conventional multi - layer networks , that are mapped to the upper layer .this is because the goal is to minimize the potential affected requests , not resource occupancy . with this enhanced architecture , the _ minimizing - pdr problem _ on one layer ( optical or electric layer )is transformed into a _ weighed shortest - path problem _ on the upper layer ( electric or request layer ) , and thus can be solved using a shortest - path algorithm . ' '' '' + * algorithm * : minimizing - pdr algorithm + ' '' '' + initialize topology matrix of the upper - layer ; ++ ; run dijkstra algorithm on the upper - layer topology , and a route is returned ; find a request among all with the shortest traffic hops in lower layer , and acquire its route ; combine all the acquired routes together , and return it ; ' '' '' now , we introduce two policies of degraded routing : we manage to minimize rh first , and then minimize pdr .we try to minimize pdr as a primary goal , then we minimize rh .when a degraded route ( electric ) or ( optical ) is acquired , we need to decide which request or requests to degrade , and how much to degrade them . in the multi - layer network , degraded resource allocation refers to different operations in different layers , which should be further studied separately .we propose the ed - ba algorithm based on a determined degraded route .2(a ) shows the basic principle of the ed - ba algorithm , that requests with higher priorities can preempt " those requests with no higher priorities . here , the term preempt " means that some existing requests are degraded in transmission rate due to arrival of a new request with priority no smaller than them .degradation in transmission rate will cause service - holding - time prolongation , which should not exceed the deadline of the request .meanwhile , when performing degradation , we manage to degrade the minimal number of requests to provide just - enough bandwidth for the new arriving one . for convenience , a traffic service request on electric layer is defined as a tuple : , which mean source , destination , bandwidth , arrival time point , holding time , prolongation deadline and priority of a service request , respectively .we define a function , which sorts elements in set in ascending order of . ' '' '' + * algorithm * : ed - ba algorithm + ' '' '' + current time , arriving request , = 1 ; ; _ /*potential degraded links*/ _ ; continue ; 0 ; _ /*accumulate available bandwidth*/ _degrade to its maximum extent , s.t . ; ; request routed successfully on ; break ; request blocked ; = 0 ; break ; continue ; request is routed successfully ; ' '' '' in an elastic optical network , optical degradation refers to the reduction of occupied - spectrum - slot numbers of a lightpath , and the basic idea of optical degradation is to raise some of the lightpaths modulation level to spare enough slots for a new lightpath s establishment .optical degradation should obey the modulation - distance constraint , and thus shorter lightpaths have more opportunities to be degraded .the number of spectrum slots in a fiber is . is a binary bitmask that contains bits to record the availability of each spectrum slot in fiber , e.g. , =1 ] . 
| \sum_{f = 1}^{|\mathcal{p}^o|-1}s_f[p ] \lor s_{f+1}[p]=0\ } % \vspace { -0.5em}\ ] ] we define slot border through lightpaths ( sbtl ) to evaluate whether a slot border locates inside the occupied spectrum of a lightpath . denotes index of spectrum slot borders , and there are borders , thus ] with the largest value in ; scan the spectrum to acquire and ( ) ; ; _ /*degrade to modulation level */ _ setup a new lightpath with slots , starting from ; ; _ /*degrade to modulation level */ _ setup a new lightpath with slots , starting from ; request blocked ; break ; choose smallest in , and perform sentence 7 to 17 ( here , let ) ; request blocked ; break ; ' '' '' in degraded routing stage , both minimizing - rh and minimizing - pdr problems can be solved with ( dijkstra algorithm , -node topology ) complexity with the enhanced multi - layer architecture . in degraded resource allocation stage , we further evaluate the complexity of the two proposed algorithms in different layers . the worst case for the degraded route that the it goes through almost every node of the topology , and the hops is . in ed - ba algorithm, we suppose that the maximum number of existing requests on each link is , which is related to traffic load , and the time complexity is . in od - msa algorithm ,the time complexity is , where is a constant parameter in a certain fiber configuration .hence , the complexity of the proposed dynamic degraded provisioning scheme is and can be used in online decision making for dynamic traffic accommodation .table i summarizes the relationships among modulation formats , lightpath data rate , and transmission reach based on the results reported in .we assume that the default modulation format is bpsk in the network .m95pt < m25pt < m25pt < m25pt <m25pt < modulation format & bpsk & qpsk & 8qam & 16qam + modulation level & 2 & 4 & 8 & 16 + bits per symbol & 1 & 2 & 3 & 4 + slot bandwidth ( ghz ) & 12.5 & 12.5 & 12.5 & 12.5 + data rate ( gbps ) & 12.5 & 25 & 37.5 & 50 + transmission reach ( km ) & 9600 & 4800 & 2400 & 1200 + we consider the usnet topology ( fig .3 ) for dynamic performance simulation .all fibers are unidirectional with 300 spectrum slots , and the spectrum width of each slot is 12.5 ghz .traffic requests are generated between all node pairs , and characterized by poisson arrivals with negative exponential holding times .the granularities of requests are distributed independently and uniformly from 5 gbps to 150 gbps .the maximum acceptable value of degraded transmission rate is uniformly distributed between 100 and 25 percent of their original bandwidth .there are 5 priorities with equal amount each .the lightpath establishment threshold for grooming is chosen to be 150 gbps , which is equals to the largest request bandwidth , because this threshold has been demonstrated to perform the best of blocking performance .an event - driven dynamic simulator has been developed to verify the effectiveness of the heuristic algorithms .six degrdation policies ( oe - minpdr , o - minpdr , e - minpdr , oe - minrh , o - minrh , e - minrh ) , which are combinations of two degraded routing policies ( minpdr , minrh ) , and three degraded resource allocation policies ( oe : both - layer degradation , o : optical degradation only , e : electric degradation only ) are studied .4 depicts the bandwidth blocking probability ( bbp ) advantages of our proposed scheme over the conventional scheme ( threshold - based grooming , no degradation ) with different degradation policies .4(a ) shows 
the overall performance of all requests , and we find that there is a crossing point between optical degradation and both - layer degradation . in the low - load region ( 26 - 34 ) , both - layer degradation ( oe - minpdr , oe - minrh ) performs the best , with gains of up to two orders of magnitude , while in the high - load region ( 36 - 44 ) , optical - layer degradation ( o - minpdr , o - minrh ) performs the best . the reason is that , in high - load conditions , electric degradation ( e - minpdr , e - minrh ) achieves worse bbp than no degradation , which limits the blocking reduction contributed by optical degradation in both - layer degradation . figs . 4(b ) and 4(c ) show the bbp performance of requests with the highest and lowest priorities . we conclude that all degradation policies achieve significant blocking reduction for the highest priority , while for the lowest priority the blocking performance is similar to that of requests with all priorities . we also observe some common patterns in these three graphs . first , optical degradation performs almost the same regardless of priorities , because optical degradation does not involve service priorities as electric degradation does . second , minpdr performs better in optical - related degradations ( both - layer degradation and optical degradation ) , while minrh performs better only in electric degradation . this is because the route minpdr returns tends to carry a smaller number of existing requests , which increases the elements in assi or sbtl and thus the possibility of successful optical degradation . the route minrh returns tends to carry more existing requests , which provides more requests ( thus more bandwidth and more opportunity ) for the arriving request to preempt on the electric layer , increasing the possibility of successful electric degradation . in short , the different mechanisms of optical and electric degradation determine that minpdr is more suitable for optical degradation , while minrh suits electric degradation better . the result that both - layer degradation and optical degradation perform similarly reveals that optical degradation has a stronger influence on blocking reduction , because it can enlarge the network capacity through high - order modulation , while electric degradation only performs a bandwidth - time exchange , trading time for bandwidth under a given network capacity . to study the instantaneous working mechanism and performance of the proposed degraded provisioning scheme , we conduct transient analysis on instantaneous network throughput and bbp , and the results are shown in fig . 5 . from figs . 5(a ) and 5(b ) , we obtain conclusions similar to those of the dynamic evaluations : optical - related degradation achieves better compliance with the offered load under minpdr , while electric degradation accomplishes better improvements under minrh . fig . 5(c ) shows the instantaneous bbp variation over time , and we observe that different levels of blocking reduction are achieved by different degradation policies ; both - layer degradation policies have the largest blocking reduction , and oe - minpdr performs even better ( almost zero blocking ) . in this work , we investigated the dynamic qos - assured degraded provisioning problem in service - differentiated multi - layer networks with optical elasticity .
we proposed and leveraged the enhanced multi - layer architecture to design effective algorithms for network performance improvements .numerical evaluations showed that we can achieve significant blocking reduction , up to two orders of magnitude via the new degraded provisioning policies .we also conclude that optical - related degradation achieves better performance with minpdr , while electric degradation has lower blocking with minrh due to different mechanisms of multi - layer degradation .
|
the emergence of new network applications is driving network operators to not only fulfill dynamic bandwidth requirements , but offer various grades of service . degraded provisioning provides an effective solution to flexibly allocate resources in various dimensions to reduce blocking for differentiated demands when network congestion occurs . in this work , we investigate the novel problem of online degraded provisioning in service - differentiated multi - layer networks with optical elasticity . quality of service ( qos ) is assured by service - holding - time prolongation and immediate access as soon as the service arrives without set - up delay . we decompose the problem into degraded routing and degraded resource allocation stages , and design polynomial - time algorithms with the enhanced multi - layer architecture to increase the network flexibility in temporal and spectral dimensions . illustrative results verify that we can achieve significant reduction of network service failures , especially for requests with higher priorities . the results also indicate that degradation in optical layer can increase the network capacity , while the degradation in electric layer provides flexible time - bandwidth exchange .
|
the rapid increase of mobile traffic primarily driven by data - intense applications such as video streaming and mobile web requires new wireless architectures and techniques .hcns have attracted much interest due to their potential of improving system capacity and coverage with increasing density . because of the opportunistic and dense deployment with sometimes limited site - planning , hcns have at the same time contributed to rendering interference the performance - limiting factor .base station ( bs ) cooperation , which aims at increasing the signal - to - interference ratio ( ) at victim users , is a promising technique to cope with newly emerging interference situations .bs cooperation has been thoroughly analyzed in . to address interference issues associated with heterogeneous deployments and to make use of the increased availability of wireless infrastructure , bs cooperationwas also studied for hcns . in authors demonstrated that with low - power bss irregularly deployed inside macro - cell coverage areas , bs cooperation achieves higher throughput gains compared to the macro - cell only setting , and hence as hcns create new and complex cell borders more users profit from tackling other - cell interference through bs cooperation .the applicability of coordinated scheduling / beamforming ( cs / cb ) cooperation for hcns was studied in , where it was found that practical issues such as accurate csi feedback and tight bs synchronization required for coherent cooperation may disenchantingly limit the achievable gains .such practical challenges associated with bs cooperation are by no means unique to hcns , and hence other techniques with less stringent requirements have been studied as well .one such technique is non - coherent jt , in which a user s signal is transmitted by multiple cooperating bss without prior phase - mismatch correction and tight synchronization across bss . at the user, the received signals are non - coherently combined , thereby providing opportunistic power gains .the standardization interest for non - coherent jt , is particularly due to its _ lower implementation complexity _ for both the backhaul and the csi feedback and its ability for _ balancing load _ ; features of essential importance in hcns .besides , analyzing bs cooperation in hcns entails several challenges due to the many interacting complex system parameters , e.g. , radio channel , network geometry , and interference . to make things even more difficult ,these parameters typically differ across tiers , e.g. , bs transmit power , channel fading or cell association . to address these challenges , _ stochastic geometry _ has recently been proposed and used for analyzing cooperation in cellular networks . 
in this paper , we model and analyze non - coherent jt cooperation in hcns .the contributions are summarized below .* analytical model : * a tractable model for hcns with non - coherent jt is proposed in section [ sec : model ] .the model incorporates cooperation aspects of practical importance such as user - centric clustering and channel - dependent cooperation activation , each of which with a tier - specific threshold that models the complexity and overhead allowed in each tier .other aspects such as bs transmit power , path loss , and arbitrary fading distribution are also assumed tier - specific .* coverage probability : * as the main result , the coverage probability under non - coherent jt is characterized in section [ sec : cov_prob ] for a typical user .the main result has a compact semi - closed form ( derivatives of elementary functions ) and applies to general fading distributions .we also propose a simple but accurate linear approximation of the coverage probability . * design insights : * _ load balancing : _ balancing load in two - tier hcns , by additionally pushing more users to small cells in order to let these cells assist macro bss with non - coherent jt , is favorable only to a limited extent . as small - cell cooperative clusters are increased , spectral efficiency gains grow only approximately logarithmically while cell load in those cells increases much faster . at small cluster sizes of small cells , generously stimulating cooperation by channel - dependent cooperation activation yields considerable spectral efficiency gains without consuming much radio resources . + _ intra - cluster scheduling in small cells : _ when cooperation is aggressively triggered , small cells should reuse the resources utilized by non - coherent jt , i.e. , intra - cluster frequency reuse ( fr ) , to obtain cell - splitting gains . in lightly - loaded small cells with less aggressive triggering ,not reusing these resources , i.e. , intra - cluster cs , is better to avoid harmful interference .we consider an ofdm - based co - channel -tier hcn with single - antenna bss in the downlink .the locations of the bss in the tier are modeled by a stationary planar poisson point process ( ppp ) with density .the bs point processes are assumed independent .every bs belonging to the tier transmits with power .a signal transmitted by a tier bs undergoes a distance - dependent path loss , where is the path loss exponent of the tier .[ fig : illustration ] illustrates the considered scenario .the entire set of bss , denoted by , is formed by superposition of the individual random sets , i.e. , . by , the point process is again a stationary ppp with density .we assume single - antenna users / receivers to be distributed according to a ppp . by slivnyaks theorem , we evaluate the system performance at a _ typical _receiver located at the origin without loss of generality .the transmitted signals are subject to ( frequency - flat ) block - fading .the ( power ) fading gain from the -th bs in the tier to the typical user at the origin is denoted by .we assume that the are i.i.d ., i.e. 
, the fading statistics may possibly differ across the tiers .when appropriate , we will drop the index in .we further require that =1 ] for all .heterogeneous propagation conditions might , for instance , be due to different antenna heights across tiers .thermal noise is neglected for analytical tractability but can be included in the analysis ._ bs clustering model : _ we employ a dynamic user - centric bs clustering method . in this method ,bss with sufficiently high average received signal strength ( rss ) monitored at a given user form a cooperative cluster to cooperatively serve this user .transferring this to the model , the -th bs from the tier at location belongs to the cooperative cluster of the typical user if .hereby , denotes the tier rss threshold , which depends on the allowable cooperation overhead in the tier and serves as a design parameter .the set of cooperative bss from the tier , then , has the form rcl _ k\{_ik_k|._ik()^-1/_k}[eq : cluster ] .the corresponding subset of non - cooperative bss is denoted by .practical user - centric clustering methods slightly differ from the above clustering model as the rss _ difference _ to the serving bs is considered .modeling this kind of clustering is analytically more involved and is deferred to future work .[ c][c ] [ c][c ] [ c][c ] [ c][c ] ) form a cooperative cluster for the typical user .nearby tier-2 bss ( inside dark - shaded region with radius ) join this cooperative cluster .all other nodes create out - of - cluster interference.,title="fig:",scaledwidth=47.0% ] [ fig : illustration ] rcl _ _k(s)&=&\{-_k_k^2/_k__k}[eq : lap_pk ] _ channel - dependent cooperation activation : _ whether a bs of a cooperative cluster gets engaged in a cooperative transmission to a particular user typically depends on its instantaneous channel to that user . to capture the basic impact of this channel - dependent mechanism , we use the following model : the -th cooperative bs of the tier joins a cooperative transmission to the typical user if , where and is the cooperation activation threshold corresponding to the tier . similar to ,the variable serves as a tunable design parameter to trade off performance against overhead .the subset of _ active _ cooperative bss from the tier serving the typical user is denoted as rcl _, k\{_ik_k|._ik()^-1/_k}.[eq : scheduled ] we denote by the set of cooperative bss from the tier not participating in the cooperative transmission to the typical user. these bss may remain silent ( intra - cluster cs ) or may serve other users ( intra - cluster fr ) on the resources used for the cooperative transmission . _non - coherent joint - transmission : _ in non - coherent jt , bss scheduled for cooperative transmission to a user transmit the same signal without prior phase - alignment and tight synchronization to that user . at the user ,the multiple copies are received non - coherently . at the typical user , the then be expressed as rcl & & , [ eq : sir ] where * is the received signal power , * is the intra - cluster interference , * is the out - of - cluster interference .note that in the denominator of is zero when intra - cluster cs is assumed instead of intra - cluster fr .also , the random variables , and are mutually independent .in this section , the coverage probability is derived for the typical user under non - coherent jt .it is defined as rcl _ ( ) for some threshold .note that the distributions of , and do not exhibit a closed - form expression in general . 
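a monte carlo sketch of the model described above is given below ; all numeric parameters are placeholders , rayleigh fading ( unit - mean exponential power gains ) is assumed purely for illustration although the analysis allows general fading , and intra - cluster fr is assumed so that cluster bss that are not activated still interfere .

```python
import numpy as np
rng = np.random.default_rng(0)

R = 2000.0                                    # simulation disc radius [m]
tiers = [dict(lam=1e-6, p=40.0, alpha=3.8, t_rss=2e-9, t_act=0.5),   # "macro" tier
         dict(lam=5e-6, p=1.0,  alpha=3.5, t_rss=1e-8, t_act=0.5)]   # "small" tier

def sir_sample():
    sig, interf = 0.0, 0.0
    for t in tiers:
        n = rng.poisson(t["lam"] * np.pi * R**2)        # ppp: poisson number of bss
        r = R * np.sqrt(rng.random(n))                  # uniform distances on the disc
        g = rng.exponential(1.0, n)                     # rayleigh power fading (assumed)
        rss = t["p"] * r ** (-t["alpha"])               # average received signal strength
        coop = rss >= t["t_rss"]                        # user-centric cluster
        active = coop & (g >= t["t_act"])               # channel-dependent activation
        sig += np.sum(rss[active] * g[active])          # non-coherent combining
        interf += np.sum(rss[~active] * g[~active])     # intra-cluster fr: the rest interfere
    return sig / interf if interf > 0 else np.inf

theta = 1.0
samples = np.array([sir_sample() for _ in range(2000)])
print("estimated coverage probability:", np.mean(samples > theta))
```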
to get a better handle on the in, we therefore propose an approximation of the sum interference prior to characterizing the for the considered model .[ prop : approx ] the sum interference in can be approximated by a gamma distributed random variable having distribution , where rcl = [ eq : nu ] is the _ shape _ parameter and rcl = [ eq : sigma ] is the _ scale _ parameter .since and are mutually independent across tiers and by the linearity property of the expectation , the proof follows by computing the mean and variance of using campbell s theorem and applying a second - order moment - matching , see ( * ? ? ?* appendix b ) for details . for intra - cluster cs in the tier, one has to set in and .the gamma approximation of the sum interference created by poisson distributed interferers was also previously used in , where the accuracy was found satisfactorily high .it can be applied whenever the interference has finite mean and variance .[ thm : cov_prob ] the coverage probability of the typical receiver in the described hcn setting can be bounded above and below as rcl _1-_m=0 ^ -1_s=,[eq : cov_prob ] where is given by at the top of this page .see appendix . the worst - case gap between the lower and upper bound is equal to the value of the last summand . for integer - valued , either the upper or the lowerbound becomes exact .a simple approximation to can be obtained using a linear combination of the bounds in with weights chosen according to the relative distance of to and .[ col : approx_pc ] the coverage probability can be approximated as rcl _ & & 1-_m=0 ^ -1_s= + & & -(-)_s=.[eq : pc_approx ] as will be demonstrated later , the approximation in turns out to be reasonable accurate despite its simple form. it may furthermore be interesting to study the conditioned upon a fixed number of cooperating bss in every tier .we denote by the combined received signal power from the tier conditional on cooperative -tier bss .conditioned on the fact that tier- bss belong to the cooperative set of the typical user , the conditional laplace transform of is rcl __ k|c_k(s)=(1+__k(s))^c_k.[eq : lap_pk_con ] computing the -th derivative in is quite involved since and are composite functions .generally , the -th derivative of composite functions can be efficiently obtained by fa di bruno s rule and bell polynomials , given that the derivatives of the outer and inner function are known .we next derive the -th derivative of the inner function ( i.e. , the exponent ) of .the conditional case can be obtained analogously .[ lem : lap_diff ] for , the -th derivative of the exponent of evaluated at is given by rcl __ k(-s)|_s=&=&_k_k^2/_k ( ) ^m-2/_k + & & . 
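the second - order moment matching behind proposition [ prop : approx ] can be sketched as follows ; here the first two moments are estimated from interference samples ( e.g. , collected while running the monte carlo sketch above ) , whereas in the analysis they are obtained in closed form via campbell's theorem .

```python
import numpy as np

def gamma_moment_match(interference_samples):
    """second-order moment matching: choose the gamma shape (nu) and scale
    (sigma) so that the mean and variance of the samples are reproduced."""
    mean = np.mean(interference_samples)
    var = np.var(interference_samples)
    return mean**2 / var, var / mean        # (shape, scale)

# toy interference samples, only to demonstrate the fit
rng = np.random.default_rng(1)
toy_interference = rng.exponential(2.0, 10_000) + rng.exponential(1.0, 10_000)
nu, sigma = gamma_moment_match(toy_interference)
print(nu, sigma)
```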
for the unconditioned case ,in particular , the computation of the required derivatives for obtaining can be further simplified by exploiting the exponential form of .the differentiation {s=-1/\theta\beta} ] , =4 ] in the ppp model .it can be seen that the gamma approximation of the interference from proposition [ prop : approx ] is accurate as the gap between the lower and upper bound enclosing the simulated is fairly small .also , the simple approximation from corollary [ col : approx_pc ] performs remarkably well ( here , the shape is ) .* effect of adding more tiers : * fig .[ fig : sir_tiers ] shows the impact on when adding additional tiers .interestingly , indicating the performance of non - coherent jt in terms of the number of tiers is not straightforward .for instance , the for tier-1+tier-2 hcns can be higher than for the case of three tiers .this is because , in this example , the clustering threshold in tier-3 was chosen relatively high , e.g. , due to complexity and overhead constraints , resulting in a rather unfavorable ratio of interference and cooperation .hence , adding more tiers exhibits a non - monotonic trend in terms of .* effect of load balancing : * non - coherent jt can be used also for load balancing , which is especially important in hcns to avoid under-/over - utilization of the different tiers . due to transmit power imbalance between the different tiers, this typically means to push users towards smaller cells , e.g. , by biasing cell association .balancing load using non - coherent jt is done by varying , of the corresponding small cells .importantly , imprudently stimulating more cooperation by lowering and/or increases the , however , possibly at the cost of an overwhelming load increase in the participating small cells . using the developed model ,this effect is analyzed next for the example of a -tier hcn . since describing cell load in hcns with cooperationis analytically difficult , we use a simple model for characterizing the load increase in the -tier cell due to cooperation . using itcan be seen that users closer than to a -tier bs request cooperation from that bs .second , given the stationarity of the user point process , the number of radio resources used for cooperation in a tier-2 cell is proportional to the number of cooperation requests .third , fixing an through some , , the load increase relative to measured as a function of , can then be defined as rcl _ k.[eq : load ] applying campbell s theorem for evaluating the expectations in , we obtain rcl _ k=-1 .note that does not characterize the total cell load , but rather characterizes the underlying trend as a function of and , as these two parameters strongly influence the number of radio resources used for cooperation .[ fig : sir_exp_se_load ] shows how the average spectral efficiency ] .it can be seen that ] .it can be seen that at low , switching from cs to fr does barely affect ( or ]-loss of 14.6% . in lightly - loaded cellscs should hence be used when a high is desired in order to additionally profit from muting intra - cluster interference .we developed a tractable model and derived the coverage probability for non - coherent jt in hcns , thereby accounting for the heterogeneity of various system parameters including bs clustering , channel - dependent cooperation activation , and radio propagation model . 
to the best of the authors knowledgethis is the first work to analyze cooperation in such generic hcns .the developed theory allowed us to treat important design questions related to load balancing and intra - cluster scheduling . where is the laplace transform of the combined received signal power .due to the independence property of the , we can decompose into , where is the laplace transform corresponding to the received power from tier bss .it can be obtained as where ( a ) follows from the i.i.d .property of the , ( b ) follows from the probability generating functional of a ppp , and ( c ) follows from interchanging expectation and integration and from the substitution .. then follows after partial integration .j. li _ et al ._ , `` performance evaluation of coordinated multi - point transmission schemes with predicted csi , '' in _ ieee intl .symposium on personal indoor and mobile radio commun .( pimrc ) _ , 2012 , pp .. j. g. andrews _et al . _ , `` an overview of load balancing in hetnets : old myths and open problems , '' _ submitted to ieee commun .abs/1307.7779 , 2013 , available at http://arxiv.org/abs/1307.7779 .r. tanbourgi , s. singh , j. g. andrews , and f. k. jondral , `` a tractable model for non - coherent joint - transmission base station cooperation , '' _ arxiv e - prints _ , jul .2013 , submitted to _ieee trans .wireless commun ._ , available at http://arxiv.org/abs/1308.0041 .g. nigam , p. minero , and m. haenggi , `` coordinated multi - point in heterogeneous networks : a stochastic geometry approach , '' in _ workshop on emerging technologies for lte - advanced and beyond 4 g , ieee globecom _ , 2013 , available at : www.nd.edu/~mhaenggi/pubs/globecom13b.pdf .s. singh , h. s. dhillon , and j. g. andrews , `` offloading in heterogeneous networks : modeling , analysis , and design insights , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 5 , pp .24842497 , dec .
|
base station ( bs ) cooperation is set to play a key role in managing interference in dense heterogeneous cellular networks ( hcns ) . non - coherent joint transmission ( jt ) is particularly appealing due to its low complexity , smaller overhead , and ability for load balancing . however , a general analysis of this technique is difficult mostly due to the lack of tractable models . this paper addresses this gap and presents a tractable model for analyzing non - coherent jt in hcns , while incorporating key system parameters such as user - centric bs clustering and channel - dependent cooperation activation . assuming all bss of each tier follow a stationary poisson point process , the coverage probability for non - coherent jt is derived . using the developed model , it is shown that for small cooperative clusters of small - cell bss , non - coherent jt by small cells provides spectral efficiency gains without significantly increasing cell load . further , when cooperation is aggressively triggered intra - cluster frequency reuse within small cells is favorable over intra - cluster coordinated scheduling . heterogeneous cellular networks , cooperation , non - coherent joint - transmission , stochastic geometry .
|
constraint programming ( cp ) is widely used to solve a variety of practical problems such as planning and scheduling , and industrial configuration .constraints can either be represented explicitly , by a table of allowed assignments , or implicitly , by specialized algorithms provided by the constraint solver .these algorithms may take as a parameter a _ description _ that specifies exactly which kinds of assignments a particular instance of a constraint should allow .such implicitly represented constraints are known as global constraints , and a lot of the success of cp in practice has been attributed to solvers providing them . the theoretical properties of constraint problems , in particular the computational complexity of different types of problem , have been extensively studied and quite a lot is known about what restrictions on the general _ constraint satisfaction problem _ are sufficient to make it tractable . in particular , many structural restrictions , that is , restrictions on how the constraints in a problem interact , have been identified and shown to yield tractable classes of csp instances .however , much of this theoretical work has focused on problems where each constraint is explicitly represented , and most known structural restrictions fail to yield tractable classes for problems with global constraints , even when the global constraints are fairly simple .theoretical work on global constraints has to a large extent focused on developing efficient algorithms to achieve various kinds of local _ consistency _ for individual constraints .this is generally done by pruning from the domains of variables those values that can not lead to a satisfying assignment .another strand of research has explored conditions that allow global constraints to be replaced by collections of explicitly represented constraints .these techniques allow faster implementations of algorithms for _ individual constraints _ , but do not shed much light on the complexity of problems with multiple _ overlapping _ global constraints , which is something that practical problems frequently require . as such ,in this paper we investigate what properties of explicitly represented constraints structural restrictions rely on to guarantee tractability . identifying such properties will allow us to find global constraints that also possess them , and lift well - known structural restrictions to instances with such constraints .as discussed in , when the constraints in a family of problems have unbounded arity , the way that the constraints are _ represented _ can significantly affect the complexity .previous work in this area has assumed that the global constraints have specific representations , such as propagators , negative constraints , or gdnf / decision diagrams , and exploited properties particular to that representation .in contrast , we will use a definition of global constraints that allows us to discuss different representations in a uniform manner .furthermore , as the results we obtain will rely on a relationship between the size of a global constraint and the number of its satisfying assignments , we do not need to reference any specific representation . 
as a running example, we will use the connected graph partition problem ( cgp ) , defined below .the cgp is the problem of partitioning the vertices of a graph into bags of a given size while minimizing the number of edges that span bags .the vertices of the graph could represent components to be placed on circuit boards while minimizing the number of inter - board connections .[ prob : cgp ] we are given an undirected and connected graph , as well as .can be partitioned into disjoint sets with such that the set of broken edges has cardinality or less ?this problem is -complete , even for fixed .we are going to use the results in this paper to show a new result , namely that the cgp is tractable for every fixed .in this section , we define the basic concepts that we will use throughout the paper . in particular , we give a precise definition of global constraints , and illustrate it with a few examples .let be a set of variables , each with an associated set of domain elements .we denote the set of domain elements ( the domain ) of a variable by .we extend this notation to arbitrary subsets of variables , , by setting .an _ assignment _ of a set of variables is a function that maps every to an element .we denote the restriction of to a set of variables by .we also allow the special assignment of the empty set of variables .in particular , for every assignment , we have .let be a set of assignments of a set of variables .the _ projection _ of onto a set of variables is the set of assignments .note that when we have , but when and , we have . [ def : disjoint - union ]let and be two assignments of disjoint sets of variables and , respectively .the _ disjoint union _ of and , denoted , is the assignment of such that for all , and for all .global constraints have traditionally been defined , somewhat vaguely , as constraints without a fixed arity , possibly also with a compact representation of the constraint relation .for example , in a global constraint is defined as `` a constraint that captures a relation between a non - fixed number of variables '' .below , we offer a precise definition similar to the one in , where the authors define global constraints for a domain over a list of variables as being given intensionally by a function computable in polynomial time .our definition differs from this one in that we separate the general _ algorithm _ of a global constraint ( which we call its _ type _ ) from the specific description .this separation allows us a better way of measuring the size of a global constraint , which in turn helps us to establish new complexity results .[ def : glob - const ] a _ global constraint type _ is a parameterized polynomial - time algorithm that determines the acceptability of an assignment of a given set of variables .each global constraint type , , has an associated set of _ descriptions _, .each description specifies appropriate parameter values for the algorithm .in particular , each specifies a set of variables , denoted by .a _ global constraint _ ] is mapped to 1 , and each disallowed assignment is mapped to 0 .extension _ or _constraint relation _ of ] .we also say that such assignments _ satisfy _ the constraint , while all other assignments _ falsify _ it .when we are only interested in describing the set of assignments that satisfy a constraint , and not in the complexity of determining membership in this set , we will sometimes abuse notation by writing ] . 
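as a small illustration of the definition of a global constraint type , the sketch below implements an egc - style constraint whose cardinality sets are intervals : the class plays the role of the description ( a variable set plus parameters ) , and its membership test is the polynomial - time algorithm . this is an illustrative example , not a reference implementation .

```python
from collections import Counter

class EgcIntervalConstraint:
    """description: a variable set and, for each domain value, an interval
    [lo, hi] bounding how many variables may take that value."""
    def __init__(self, variables, bounds):
        self.variables = set(variables)
        self.bounds = dict(bounds)          # value -> (lo, hi)

    def accepts(self, assignment):
        """the constraint type's polynomial-time algorithm: check an
        assignment of exactly self.variables against the intervals."""
        if set(assignment) != self.variables:
            return False
        counts = Counter(assignment.values())
        return all(lo <= counts.get(value, 0) <= hi
                   for value, (lo, hi) in self.bounds.items())

# "each of the values 0 and 1 is used at most twice over x1, x2, x3"
c = EgcIntervalConstraint({"x1", "x2", "x3"}, {0: (0, 2), 1: (0, 2)})
print(c.accepts({"x1": 0, "x2": 0, "x3": 1}))   # True
print(c.accepts({"x1": 0, "x2": 0, "x3": 0}))   # False: value 0 used three times
```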
as can be seen from the definition above , a global constraint is not usually explicitly represented by listing all the assignments that satisfy it . instead , it is represented by some description and some algorithm that allows us to check whether the constraint relation of ] decides , for any assignment of , whether whether it is _ not _ included in .observe that disjunctive clauses , used to define propositional satisfiability problems , are a special case of the negative constraint type , as they have exactly one forbidden assignment .we observe that any global constraint can be rewritten as a table or negative constraint .however , this rewriting will , in general , incur an exponential increase in the size of the description .as can be seen from the definition above , a table global constraint is explicitly represented , and thus equivalent to the usual notion of an explicitly represented constraint .an instance of the constraint satisfaction problem ( csp ) is a pair where is a finite set of _ variables _ , and is a set of _ global constraints _ such that for every \in c ] . a _ classic _ csp instance is one where every constraint is a table constraint. a _ solution _ to a csp instance is an assignment of which satisfies every global constraint , i.e. , for every \in c ] .we denote the set of solutions to by .the _ size _ of a csp instance is \in c } |\delta| ] . a _ tree decomposition _ of a hypergraph is a pair where is a tree and is a labelling function from nodes of to subsets of , such that 1 . for every , there exists a node of such that , 2 . for every hyperedge ,there exists a node of such that , and 3 . for every , the set of nodes induces a connected subtree of .let be a hypergraph .a _ width function _ on is a function that assigns a positive real number to every nonempty subset of vertices of .a width function is monotone if whenever .let be a tree decomposition of , and a width function on .the _ -width _ of is .the _ -width _ of is the minimal -width over all its tree decompositions . in other words ,a width function on a hypergraph tells us how to assign weights to nodes of tree decompositions of .let .the treewidth of a hypergraph is the -width of .let be a hypergraph , and .an edge cover for is any set of hyperedges that satisfies .the edge cover number of is the size of the smallest edge cover for .it is clear that is a width function .the generalized hypertree width of a hypergraph is the -width of .next , we define a relaxation of hypertree width known as fractional hypertree width , introduced by grohe and marx .let be a hypergraph , and .a _ fractional edge cover _ for is a function ] we have \in { \ensuremath{\gamma}} ] be a constraint. the _ projection of ] such that ) ] with . for a csp instance and define , where is the least set containing for every \in c ] .their algorithm is given as [ alg : enum - solutions ] , and is essentially the usual recursive search algorithm for finding all solutions to a csp instance by considering smaller and smaller sub - instances using constraint projections . solutions to show that [ alg : enum - solutions ] does indeed find all solutions , we will use the following property of constraint projections .[ lemma : solution - projection ] let be a csp instance . for every ,we have . 
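a minimal sketch of the recursive enumeration idea referred to above is given below for table constraints : a partial assignment is extended one variable at a time and pruned as soon as some constraint admits no allowed tuple agreeing with it . the data structures and variable ordering are illustrative choices , not those of the cited algorithm .

```python
def consistent(partial, scope, allowed):
    """a table constraint can still be satisfied under a partial assignment if
    some allowed tuple agrees with it on the already-assigned scope variables."""
    fixed = {v: partial[v] for v in scope if v in partial}
    return any(all(t[v] == x for v, x in fixed.items()) for t in allowed)

def enumerate_solutions(variables, domains, constraints, partial=None):
    partial = dict(partial or {})
    if any(not consistent(partial, scope, allowed) for scope, allowed in constraints):
        return                                    # prune: some constraint unsatisfiable
    if len(partial) == len(variables):
        yield dict(partial)
        return
    var = next(v for v in variables if v not in partial)
    for value in domains[var]:
        partial[var] = value
        yield from enumerate_solutions(variables, domains, constraints, partial)
        del partial[var]

domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
constraints = [({"x", "y"}, [{"x": 0, "y": 1}, {"x": 1, "y": 0}]),   # x != y, as a table
               ({"y", "z"}, [{"y": 0, "z": 0}, {"y": 1, "z": 1}])]   # y == z, as a table
print(list(enumerate_solutions(["x", "y", "z"], domains, constraints)))
```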
given , let be arbitrary , and let \in c \mid x \cap { \ensuremath{\mathcal{v}}}(\delta ) \not=\emptyset\} ]we have that ] , ) ] the constraint ) ] we can decide in polynomial time whether a given assignment to a set of variables is contained in an assignment that satisfies ] such that . as an example , a catalogue that contains arbitrary egc constraints ( cf .[ example : egc ] ) does not satisfy [ def : part - assignment - checking ] , since checking whether an arbitrary egc constraint has a satisfying assignment is -hard . on the other hand , a catalogue that contains only egc constraints whose cardinality sets are intervals does satisfy [ def : part - assignment - checking ] . if a catalogue satisfies [ def : part - assignment - checking ], we can for any constraint \in \gamma ] for any , in polynomial time .[ def : intersection - vertices ] let be a csp instance .the set of _ intersection variables _ of any constraint \in p ] .[ def : ind - const ] let be a csp instance .for every \in c ] _ is ) = e'[\delta'] ] for any \in c ] in any instance , we have that for every , .if a class of instances has sparse intersections , and the instances are all over a constraint catalogue that allows partial assignment checking , then we can for every constraint ] in polynomial time . while this definition considers the instance as a whole , one special case of it is the case where every constraint has few solutions in the size of its description ,that is , there is a constant and the constraints are drawn from a catalogue such that for every \in \gamma ] .[ thm : decomp - to - classic ] let be a class of csp instances over a catalogue that allows partial assignment checking .if has sparse intersections , then we can in polynomial time reduce any instance to a classic csp instance with , such that has a solution if and only if does .let be an instance from such a class . for each \in c ] from [ def : ind - const ] .since is over a catalogue that allows partial assignment checking , and has sparse intersections , computing ) ] , we have that by [ def : constraint - projection , def : intersection - vertices ] , and the assignment that assigns the value to each \in c}{\ensuremath{\mathsf{iv}}}(\delta) ] for every \in c ] with }|_{{\ensuremath{\mathsf{iv}}}(\delta ) } = \theta|_{{\ensuremath{\mathsf{iv}}}(\delta)} ] . 
by [ def : intersection - vertices ], the variables not in do not occur in any other constraint in , so we can combine all the assignments } ] and we have }(v) ] allows partial assignment checking , and has only a polynomial number of satisfying assignments .the latter implies that for any instance of the cgp , is polynomial in the size of for every subset of .furthermore , we will show that for the constraint ] is any set of variables ( called a back door set ) such that we can decide in polynomial time whether a given assignment to a set of variables is contained in an assignment that satisfies ] such that .trivially , for every constraint ] .the key point about back doors is that given a catalogue , adding to each \in \gamma ] , as long as , adding to ] .a class of csp instances over has _ sparse back door cover _ if there exists a constant such that for every instance and constraint \in c ] , then there exists a back door set for ] such that \not\in \gamma_{pac} ] in that was not in by a constraint that does allow partial assignment checking , it follows that is over a catalogue that allows partial assignment checking .one consequence of [ lemma : backdoors ] is that we can sometimes apply [ thm : decomp - to - classic ] to a csp instance that contains a constraint for which checking if a partial assignment can be extended to a satisfying one is hard .we can do so when the variables of that constraint are covered by the variables of other constraints that do allow partial assignment checking but only if the instance given by those constraints has few solutions . as a concrete example of this, consider again the encoding of the cgp that we gave in [ example : cgp - as - csp ] .the variables of constraint are entirely covered by the instance obtained by removing .as the entire set of variables of a constraint is a back door set for it , and the instance has few solutions ( cf .[ sect : cgp - app ] ) , this class of instances has sparse back door cover .as such , the constraint could , in fact , be arbitrary without affecting the tractability of this problem .in particular , the requirement that allows partial assignment checking can be dropped .in this paper , we have investigated properties that many structural restrictions rely on to yield tractable classes of csp instances with explicitly represented constraints . in particular, we identify a relationship between the number of solutions and the size of a csp instance as being one such property . using this insight, we show that known structural restrictions yield tractability for any class of csp instances with global constraints that satisfies this property .in particular , the above implies that the structural restrictions we consider yield tractability for classes of instances where every global constraint has few satisfying assignments relative to its size . to illustrate our result, we apply it to a known problem , the connected graph partition problem , and use it to identify a new tractable case of this problem .we also demonstrate how the concept of back doors , subsets of variables that make a problem easy to solve once assigned , can be used to relax the conditions of our result in some cases . as for future work , one obvious research direction to pursueis to find a complete characterization of tractable classes of csp instances with sparse intersections .another avenue of research would be to apply the results in this paper to various kinds of valued csp .aschinger , m. , drescher , c. , friedrich , g. , gottlob , g. 
, jeavons , p. , ryabokon , a. , thorstensen , e. : optimization methods for the partner units problem . in : proc .lncs , vol . 6697 , pp .419 . springer ( 2011 ) aschinger , m. , drescher , c. , gottlob , g. , jeavons , p. , thorstensen , e. : structural decomposition methods and what they are good for . in : schwentick , t. , drr , c. ( eds . ) proc .lipics , vol . 9 , pp .1228 ( 2011 ) gaspers , s. , szeider , s. : backdoors to satisfaction . in : bodlaender , h.l . , downey , r. , fomin , f.v . , marx , d. ( eds . ) the multivariate algorithmic revolution and beyond , lncs , vol .7370 , pp .springer ( 2012 ) van hoeve , w.j . ,katriel , i. : global constraints . in : rossi ,f. , van beek , p. , walsh , t. ( eds . ) handbook of constraint programming , foundations of artificial intelligence , vol . 2 , pp .elsevier ( 2006 )
|
a wide range of problems can be modelled as constraint satisfaction problems ( csps ) , that is , a set of constraints that must be satisfied simultaneously . constraints can either be represented extensionally , by explicitly listing allowed combinations of values , or implicitly , by special - purpose algorithms provided by a solver . such implicitly represented constraints , known as global constraints , are widely used ; indeed , they are one of the key reasons for the success of constraint programming in solving real - world problems . in recent years , a variety of restrictions on the structure of csp instances that yield tractable classes have been identified . however , many such restrictions fail to guarantee tractability for csps with global constraints . in this paper , we investigate the properties of extensionally represented constraints that these restrictions exploit to achieve tractability , and show that there are large classes of global constraints that also possess these properties . this allows us to lift these restrictions to the global case , and identify new tractable classes of csps with global constraints .
|
atmospheric ozone effects by solar proton events ( spes ) associated with solar eruptive events have been studied since the 1970 s .ozone depletion occurs following the production of odd hydrogen- and nitrogen - oxides .production of ( e.g. , h , oh , ) and ( e.g. , n , no , , ) and depletion of by solar proton events has been studied through both satellite observations and computational modeling ( e.g. , ) . review much of the work in this area .the solar flare of 1 september 1859 was one of the most intense white - light flares ever observed .the flare itself was observed independently by and and lasted approximately 5 minutes .the flare was followed about 17 hours later by a magnetic storm at the earth which lasted about 2 hours .the storm was of such intensity that in the united states and europe fires were started after arcing from induced currents in telegraph wires .the storm was likely caused by energetic charged particles accelerated by one or several highly energetic coronal mass ejections ( cme ) from the sun at the time of the flare .the geomagentic activity associated with the accelerated particles lasted several days at least .studies of very energetic events associated with solar activity are important in understanding how such activity impacts various earth - based systems .an event as energetic as the 1859 one has not been modeled in this way before , and it may be that events of this magnitude and larger are not uncommon over the long term .our modeling was performed using the goddard space flight center two - dimensional atmospheric model that has been used previously for modeling spes , as well as much higher energy events such as gamma - ray bursts .we briefly describe the model here .more detail on the version of the model used and its reliability for high fluence events is given by and the appendix therein .the model s two spatial dimensions are altitude and latitude .the latitude range is divided into 18 equal bands and extends from pole to pole .the altitude range includes 58 evenly spaced logarithmic pressure levels ( approximately 2 km spacing ) from the ground to approximately 116 km .the model computes 65 constituents with photochemical reactions , solar radiation variations , and transport ( including winds and small scale mixing ) as described by and .a photolytic source term is computed from a lookup table and used in calculations of photodissociation rates of atmospheric constituents by sunlight .we have employed two versions of the atmospheric model .one is intended for long term runs ( many years ) and includes all transport mechanisms ( e.g. 
, winds and diffusion ) ; it has a time step of one day and computes daily averaged constituent values .the second is used for short term runs ( a few days ) and calculates constituent values throughout the day and night , but does not include full transport .this version has a time step of 225 seconds .no direct measurements of the proton fluence are available from 1859 , but an estimate of the fluence of this event based on measurements of nitrate enhancement in greenland ice cores is given by .they use nitrate enhancements associated with events of known proton fluence ( e the 1972 and 1989 flares ) to determine a scale factor between fluence and nitrate enhancement .this allows an estimate of fluence given a measured nitrate enhancement , with a range based on possible scale factor variation .we assume a fluence of protons with energies greater than 30 mev of for the 1859 event , corresponding to the middle of the range of estimated values in .given the known fluence of the october 1989 event ( ) the 1859 event was 6.5 times more energetic in protons .we use this value to scale up the computed atmospheric ionization profiles that were used by to study effects of the october 1989 event for use in this study . this scaling is , of course , uncertain , since there is no way to know the specific proton spectrum for the 1859 event since this sort of data was not available before about 1955 , but it is a `` best guess '' approach .a linear scaling seems appropriate , given that it has been shown for photon events of large fluence ( which have similar atmospheric effects ) that the production of nitrogen oxides ( ) scales linearly with fluence , and the deposition of nitrate is directly dependent upon the production .while x - rays from the flare would be important in the upper atmosphere ( above about 70 km ) , they do not penetrate to the stratosphere and so have little or no impact on ozone .we have therefore neglected any effects of x - rays .the scaled - up ionization profiles are input to the short term version of the atmospheric model as a source of and which then go on to deplete ozone through catalytic cycles .several previous spe studies have found that the proton flux is restricted to latitudes above about by the earth s magnetic field , the structure of which is modified by the spe .we have no way of knowing the precise latitude restriction of the proton flux in the 1859 case , but we adopt the previous limit as likely .we scale the whole range of ionization rates from the october 1989 event by a factor of 6.5 , including ionizations that result from protons with energies between 1 and 30 mev .the constituents , which are especially important above 50 km , are greatly impacted but have relatively short lifetimes and their effect is gone within several hours after the event is over . constituents above 50 km are increased by these lower energy protons and can be transported downwards to the upper stratosphere ( below 50 km ) during late fall and winter .the largest stratospheric impact will be by those protons with energies greater than 30 mev , because the produced by these high energy protons will be much deeper into the stratosphere where the lifetime of the family can be quite long ( months in the middle stratosphere to years in the lower stratosphere ) . 
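to make the scaling step above concrete, the short sketch below rescales a reference ionization profile by the fluence ratio of 6.5 and zeroes it outside an assumed polar cap. the fluence value, the 60-degree cutoff and the array contents are placeholders for illustration only (the text's own fluence and latitude-restriction values are not reproduced here); this is not the gsfc two-dimensional model itself.

```python
import numpy as np

# illustrative sketch: linearly rescale a reference ionization profile from the
# october 1989 SPE to approximate the 1859 event. only the ratio ~6.5 matters;
# the fluence value, the 60-degree geomagnetic cutoff and the dummy profile are
# placeholders, not the values used in the text.

f_1989 = 1.0               # >30 MeV proton fluence of the oct 1989 event (placeholder units)
f_1859 = 6.5 * f_1989      # mid-range ice-core estimate quoted in the text
scale = f_1859 / f_1989    # linear scaling of ion-pair production rates

lats = np.linspace(-85.0, 85.0, 18)             # 18 latitude bands, pole to pole
alts = np.arange(1.0, 117.0, 2.0)               # ~2 km levels up to ~116 km (58 levels)
ion_rate_1989 = np.ones((lats.size, alts.size)) # stand-in for the 1989 ion-pair rates

polar_cap = np.abs(lats) > 60.0                 # assumed cutoff latitude
ion_rate_1859 = np.zeros_like(ion_rate_1989)
ion_rate_1859[polar_cap, :] = scale * ion_rate_1989[polar_cap, :]
```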
while the main magnetic storm associated with the 1859 flare was observed to last about 2 hours , the particle event likely occurred over several days , perhaps as many as 10 days .most spes have durations of several days ; the 1989 spe from which we are extrapolating lasted 12 days .we have chosen to input ionization from the 1859 spe over 2 days in our model .this duration may be shorter than the actual event , though we note that the magnetic storm had a short duration .this is a convenient duration for practical purposes with the model we used .it is known that long term atmospheric effects ( e ozone depletion ) from such ionization are much more strongly dependent on total fluence than on duration .therefore , we believe a difference in duration is not likely to yield significant changes to our conclusions .an expanded study could check this assertion in the context of proton events .the total ionization is distributed over the 2 day duration uniformly ( i as a step function ) in the middle of a 7 day run of the short term model .results of this run are then read in to the long term model which is run for several years to return the atmosphere to equilibrium , pre - flare conditions .our primary results are changes in and in the stratosphere . is produced in the high latitude areas where the protons enter the atmosphere .figure [ fig : noy - perchg - shortterm ] shows the generated during and shortly after the event , as the percent difference in column between a run with the effect of the spe included and one without .the maximum localized increase in column is about 240% .figures [ fig : noy - altlat ] and [ fig : o3-altlat ] show the percent difference in profile and , respectively , between a run with the effect of the spe included and one without , as a function of altitude and latitude , two months after the event .this is the point in time when the globally averaged ozone depletion is largest ( see figure [ fig : o3-glob - perchg ] ) .note that the increased extends primarily upward in altitude from about 30 km and is most widespread in altitude at latitudes above about .also , the change is contained primarily within a band around 40 km altitude and restricted to latitudes above .figure [ fig : noy - perchg - longterm ] shows the percent difference in column between a run with the effect of the spe included and one without for four years after the event . as is apparent from this plot , is transported to some degree to mid and low latitudes , but remains primarily concentrated in the high latitude regions as the atmosphere recovers to its pre - event equilibrium .figure [ fig : o3-perchg ] shows percent difference in column between a run with the effect of the spe included and one without for four years after the event .the maximum localized decrease in column density is about 14% and occurs in the high latitude areas where the increase is largest .one may notice in figures [ fig : noy - perchg - shortterm]-[fig : o3-perchg ] that there is asymmetry between northern and southern hemisphere regions .this is because levels of and vary seasonally , especially in the polar regions , due to variations in the presence and intensity of sunlight .photolysis reactions play a critical role in the balance of constituents here and strongly affect the total values. 
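the figures described above all rest on the same simple diagnostic: the percent difference of a column quantity between a run that includes the spe source and a control run without it. the sketch below spells that diagnostic out for hypothetical model output; the array names, shapes and the cosine-latitude weighting used for the global average are assumptions made for illustration, not the model's actual output format.

```python
import numpy as np

# percent change in a column quantity (NOy or O3) between a perturbed run and a
# control run, per latitude band, plus a cosine-weighted global average.
# inputs are assumed to be number densities on an (altitude, latitude) grid.

def column_percent_change(perturbed, control, layer_thickness_cm):
    """perturbed, control: number density [molecules cm^-3], shape (alt, lat)."""
    col_pert = np.sum(perturbed * layer_thickness_cm[:, None], axis=0)
    col_ctrl = np.sum(control * layer_thickness_cm[:, None], axis=0)
    return 100.0 * (col_pert - col_ctrl) / col_ctrl

def global_mean_percent_change(perturbed, control, layer_thickness_cm, lats_deg):
    # weight latitude bands by cos(latitude) so polar bands do not dominate
    w = np.cos(np.deg2rad(lats_deg))
    pc = column_percent_change(perturbed, control, layer_thickness_cm)
    return np.sum(pc * w) / np.sum(w)
```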
a detailed discussion of these effects can be found in for the case of a gamma - ray burst .figure [ fig : o3-glob - perchg ] shows the globally averaged percent difference in ozone .the maximum decrease is about 5% , which occurs two months after the event .maximum global ozone depletion is delayed as spreads and interacts with over a larger area .this is quantitatively similar to the globally averaged anthropogenically - caused ozone depletion currently observed , which is predicted to diminish slowly over several decades .by contrast , this naturally - caused ozone depletion from the 1859 solar flare is nearly gone in about four years .since we have estimated the intensity of the 1859 spe using data from nitrate deposition in ice cores , a consistency check may be done by computing the nitrate rainout in the model .we have approached such a comparison in two ways .first , the maximum localized enhancement over background in the model is about 14% , while the 1859 spike in figure 1 of is about 200% over background .this is obviously a large disagreement .however , because nitrate deposition can be spotty and takes place over a period of months , we have also computed absolute deposition values .adding up deposition in the model over three months following the flare , within the latitude band centered at north , yields a value of .a similar computation following , using the fluence value assumed above and a value of 30 for the conversion factor between fluence and nitrate deposition , gives , a factor of about 1.7 smaller than the model value .we note that the difference between modeled and observed absolute deposition values is opposite that of the percent enhancement over background .that is , the percent enhancment in the model is smaller than that from the ice core data , while absolute values from the ice core data are smaller than in the model . given the many sources of uncertainty the absolute comparison is at least reasonably close , being less than a factor of two different .it is important to note that the percent enhancement comparison looks only at the height of the peak above background , while the absolute deposition value is effectively the area under that peak .a difference in how the nitrate is deposited over time could help explain this discrepancy .if in the actual event most of the nitrate was deposited in a relatively short amount of time compared to the model then the height of the peak would be greater , even if the area under that peak is less , as seen in our comparisons .similar comparisons have been done before with this model . found that the model showed a smaller peak percentage enhancement of nitrates than did ice core results of ( around 10% from the model , compared to 400% from ice core ) .this discrepancy is of the same order as what we have found in this study and apparently indicates some scaling disagreement between the model computations and the observations .we find for the 1859 event an atmospheric impact appreciably larger than that of the most energetic flare in the era of satellite monitoring , that of october 1989 .localized maximum column ozone depletion ( see figure [ fig : o3-perchg ] ) is approximately 3.5 times greater than that of the 1989 event ( see figure 3 of ) .we note that this is a smaller factor than that relating the total energy of the two events ( 6.5 ) .ozone depletion has been seen in other contexts ( e.g. 
, gamma - ray burst impacts ) to scale less than linearly with total energy .causes for this weaker dependence include : increased removal of important depletion species such as no with increased levels of ; production of in isolated regions especially at lower altitudes which are normally shielded from solar uv which produces ; and the `` saturation '' of depletion , where most of the in a given region is removed and so can not be depleted any further .our nitrate deposition results do not show as dramatic an enhancement as the measurements by .this discrepancy does mirror a similar comparison between nitrate deposition as computed using this model and that measured in ice cores as described in .this discrepancy may reflect differences in transport or deposition efficiency . finally ,while the ozone depletion seen here is limited , even small increases in uvb can be detrimental to many life forms .flares of significantly larger energy have been observed on sun - like stars , and may occur from time to time through the long history of life on earth .( see for an overview of terrestrial extinctions . )such events would have more dramatic effects on the biosphere .knowledge of the impacts of such flares is important in understanding the history of life here and possibly elsewhere , in particular on terrestrial planets around stars which are more active than the sun .barth , c.a ., s.m bailey , & s.c .solomon ( 1999 ) , solar - terrestrial coupling : solar soft x - rays and thermospheric nitric oxide , _ geophys ., 26 _ , 1251 - 1254 .carrington , r.c .( 1860 ) , description of a singular appearance seen on the sun on september 1 , 1859 , _ mon . not .r. astron ., 20 _ , 13 - 15 .chapman , s. & j. bartels ( 1940 ) , _ geomagnetism _ , vol .328 - 337 , oxford univ . press , new york .crutzen , p. j. , et al .( 1975 ) , solar proton events : stratospheric sources of nitric oxide , _ science , 189 _ , 457 - 458 .cullen , j.j .neale , & m.p .lesser ( 1992 ) , biological weighting function for the inhibition of phytoplankton photsynthesis by ultraviolet radiation , _ science , 258 _ , 646 - 649 .ejzak , l.m .melott , m.v .medvedev , b.c .thomas ( 2007 ) , terrestrial consequences of spectral and temporal variability in ionizing photon events , _ astrophys .j. , 654 _ , 373 .fleming , e.l .jackman , r.s .stolarski , & d.b .considine ( 1999 ) , simulation of stratospheric tracers using an improved emprirically based two - dimensional model transport formulation , _j. geophys .res . , 104 _ , 23911 - 23934 .hallam , a. & p. wignall ( 2003 ) , _ mass extinctions and their aftermath _ , oxford univ . press ,oxford , uk .hodgson , r. ( 1860 ) , on a curious appearance seen in the sun , _ mon . not .r. astron ., 20 _ , 15 .heath , d. f. , et al .( 1977 ) , solar proton event : influence on stratospheric ozone , _ science , 197 _ , 886 - 889 .hines , c.o . , et al .( 1965 ) , _ physics of the earth s upper atmosphere _ , 40 pp . , prentice - hall , englewoods cliffs , new jersey .jackman , c.h .& r.d . mcpeters ( 1985 ) ,the response of ozone to solar proton events during solar cycle 21 : a theoretical interpretation , _ j. geophys .res . , 90 _ , 7955 - 7966 .jackman , c.h .douglass , r.b .rood , r.d .mcpeters , & p.e .meade ( 1990 ) , effect of solar proton events on the middle atmosphere during the past two solar cycles as computed using a two - dimensional model , _j. geophys .res . 
, 95 _ , 7417 - 7428 .jackman , c.h .( 1995 ) , two - dimensional and three - dimensional model simulations , measurements , and interpretation of the influence of the october 1989 solar proton events on the middle atmosphere , _j. geophys .res . , 100 _ , 11641 - 11660 .jackman , c.h .fleming , s. chandra , d.b .considine , & j.e .rosenfield ( 1996 ) , past , present , and future modeled ozone trends with comparisons to observed trends , _ j. geophys ., 101 _ , 28753 - 28767 .jackman , c.h .fleming , & f.m .vitt ( 2000 ) , influence of extremely large solar proton events in a changing atmosphere , _ j. geophys .res . , 105 _ , 11659 - 11670 .jackman , c.h .mcpeters , g.j .labow , e.l .fleming , c.j .praderas , & j.m . russell ( 2001 ) ,northern hemisphere atmospheric effects due to the july 2000 solar proton event , _ geophys .lett . , 28 _ , 2883 - 2886 .jackman , c.h . & r.d .mcpeters ( 2004 ) , the effect of solar proton events on ozone and other constituents , _ geophysical monograph , 141 _ , 305 - 319 .jackman , c.h .et al . ( 2005a ) , the influence of the several very large solar proton events in years 2000 - 2003 on the neutral middle atmosphere , _ advances in space research , 35 _ , 445 - 450 .jackman , c.h .( 2005b ) , neutral atmospheric influences of the solar proton events in october - november 2003 , _ j. geophys .res . , 110 _ , a09s27 , doi:10.1029/2004ja010888 .loomis , e. ( 1861 ) , on the great auroral exhibition of aug .28th to sept .4 , 1859 , and on auroras generally , _ am .j. sci . , 82 _ , 318 .mccracken , k.g .dreschhoff , e.j .zeller , d.f .smart , & m.a .shea ( 2001a ) , solar cosmic ray events for the period 1561 - 1994 1 .identification in polar ice , 1561 - 1950 , _ j. geophys .res . , 106 _ , 21585 - 21598 .mccracken , k.g .dreschhoff , d.f .smart , & m.a .shea ( 2001b ) , solar cosmic ray events for the period 1561 - 1994 2 .the gleissberg periodicity , _ j. geophys ., 106 _ , 21599 - 21609 .mcpeters , r. d. , c. h. jackman , & e. g. stassinopoulos ( 1981 ) , observations of ozone depletion associated with solar proton events , _ j. geophys .res . , 86 _ , 12071 - 12081 .reagan , j. b. , et al .( 1981 ) , effects of the august 1972 solar particle events on stratospheric ozone , _j. geophys ., 86 _ , 1473 - 1494 .rousseaux , m.c . , et al .( 1999 ) ozone depletion and uvb radiation : impact on plant dna damage in southern south america , _ proc .usa , 96 _ , 15310 - 15315 .schaefer , b.e .king , & c.p .deliyannis ( 2000 ) , superflares on ordinary solar - type stars , _ astrophys .j. , 529 _ , 1026 - 1030 .smith , d.s . & j. scalo ( 2007 ) , solar x - ray flare hazards on the surface of mars , _ planetary and space science _, in press .svestka , z. , & p. simon ( 1975 ) _ catalog of solar particle events 1955 - 1969 _ , d. reidel , hingham , mass .thomas , b. , et al .( 2005 ) , gamma - ray bursts and the earth : exploration of atmospheric , biological , climatic and biogeochemical effects , _ astrophys .j. , 634 _ , 509 - 533 .tsurutani , b.t , w.d .gonzalez , g.s .lakhina , & s. alex ( 2003 ) , the extreme magnetic storm of 1 - 2 september 1859 , _ j. geophys .res . , 108_(a7 ) , 1268 , doi:10.1029/2002ja009504 .zeller , e.j .dreschhoff , c.m .laird ( 1986 ) , nitrate flux on the ross ice shelf , antarctica and its relation to solar cosmic rays , _ geophys ., 13 _ , 1264 - 1267 .wmo ( world meteorological organization ) ( 2003 ) , scientific assessment of ozone depletion : 2002 , global ozone research and monitoring project - report no .47 , geneva .
|
we have modeled atmospheric effects , especially ozone depletion , due to a solar proton event which probably accompanied the extreme magnetic storm of 1 - 2 september 1859 . we use an inferred proton fluence for this event as estimated from nitrate levels in greenland ice cores . we present results showing production of odd nitrogen compounds and their impact on ozone . we also compute rainout of nitrate in our model and compare to values from ice core data .
|
figure 1 . an example of the filtering process we applied to an image . a ) the original image . b ) the image after processing with local oriented filters . the maximal orientation was calculated at each point . the image was converted to binary by considering `` oriented '' only the pixels that , after being filtered at their maximal orientation , exceeded a given threshold . in the figure , the maximal orientation is shown using a color code .
figure 2 . scaling behaviors for different geometrical configurations . a ) the number of co - occurrences between two segments in the relative positions within the line that the orientation of the first segment spans is shown for different orientations of the second segment . this measure was averaged over all possible orientations of the first segment . the collinear configuration is the most typical case and displays a scale invariant behavior , as indicated by the linear relationship in the log - log plot . b ) the strength of the correlations and the degree to which they can be approximated by a power law are more pronounced for the particular case in which the reference line segment is vertical . c ) the same measure when the two segments are at a line apart from the orientation of the first segment . in all three cases , black corresponds to iso - orientation , red to respect to the first segment , green to , blue to and yellow to . d ) full cross - correlation as a function of distance for laplacian filtering ( red circles ) , oriented filters in the collinear vertical direction ( black circles ) and for both cases after shuffling the images . the laplacian filtered image is de - correlated , as can be seen from the fact that it shows the same structure as its shuffled version ( cyan circles ) . the collinear configuration shows long - range correlations , which follow a power law of exponent 0.6 ( blue line , y = x^{-0.6} ) and are not present when the image is shuffled ( green circles ) .
figure 3 . the number of co - occurring pairs of segments as a function of their relative difference in orientation . these values were obtained after integrating the histogram of co - occurrences in space for different angular configurations . each point in the graph corresponds to the average and the standard deviation of the 16 different configurations obtained by choosing one of the 16 possible values for the first orientation and then setting .
figure 4 . plot of the spatial dependence of the histogram of co - occurring pairs for different geometrical configurations . a ) the probability of finding a pair of iso - oriented segments as a function of their relative position , ( b ) a pair of segments at relative orientation of , ( c ) , ( d ) or ( e ) . f ) cocircularity solution for a particular example of two segments . the solutions to the problem of cocircularity are two orthogonal lines , whose main have values or . for the example given , (red segment) , (blue segment) and the two solutions ( green lines ) are and ( all angles from the vertical axis ) .
figure 5 . quantitative analysis of the spatial maps . orientation of the axis where co - occurring pairs of oriented elements of relative orientation are maximized . the axis of maximal probability was calculated relative to the orientation of the segment in the center . this was done for the 16 possible orientations of ( and the corresponding values of ) , and we computed for each the mean and standard error . the solid line corresponds to the solution predicted by the cocircular rule .
|
to understand how the human visual system analyzes images , it is essential to know the structure of the visual environment . in particular , natural images display consistent statistical properties that distinguish them from random luminance distributions . we have studied the geometric regularities of oriented elements ( edges or line segments ) present in an ensemble of visual scenes , asking how much information the presence of a segment in a particular location of the visual scene carries about the presence of a second segment at different relative positions and orientations . we observed strong long - range correlations in the distribution of oriented segments that extend over the whole visual field . we further show that a very simple geometric rule , _ _ cocircularity _ _ , predicts the arrangement of segments in natural scenes , and that different geometrical arrangements show relevant differences in their scaling properties . our results show similarities to geometric features of previous physiological and psychophysical studies . we discuss the implications of these findings for theories of early vision . the laboratories of physics and , the rockefeller university , 1230 york avenue , new york ny 10021 address : functional neuroimaging laboratory , dept . of psychiatry , cornell university . one of the most difficult problems that the visual system has to solve is to group different elements of a scene into individual objects . despite its computational complexity , this process is normally effortless , spontaneous and unambiguous . the phenomenology of grouping was described by the gestalt psychologists in a series of rules summarized in the idea of good continuation . more quantitative psychophysical measurements have shown the existence of association fields or rules that determine the interaction between neighboring oriented elements in the visual scene . based on these rules and on the gestalt ideas , pairs of oriented elements that are placed in space in such a way that they extend on a smooth contour joining them will normally be grouped together . these psychophysical ideas have been steadily gaining solid neurophysiological support . neurons in primary visual cortex ( v1 ) respond when a bar is presented at a particular location and at a specific orientation . in addition , the responses of v1 neurons are _ modulated _ by contextual interactions , such as the joint presence of contour elements within the receptive field and in its surround . this modulation depends upon the precise geometrical arrangement of linear elements , in a manner corresponding to the specificity of linkage of cortical columns by long - range horizontal connections . thus neurons in v1 interact with one another in geometrically meaningful ways , and through these interactions , neuronal responses become selective for combinations of stimulus features that can extend far from the receptive field core . the rules of good continuation , the association field , and the connections in primary visual cortex provide evidence of interaction of pairs of oriented elements at the psychophysical , physiological and anatomical level . the nature of the interaction is determined by the geometry of the arrangement , including spatial arrangement and the orientation of segments within the visual scene . an important question is whether this geometry is related to natural geometric regularities present in the environment . 
it is well known that natural images differ from random luminance distributions but the structural studies of natural scenes have not yet addressed the existence of geometrical regularities . here , we address this issue , by studying whether particular pairs of oriented elements are likely to co - occur in natural scenes as a function of their orientation and relative location in space . our results are focused on two different aspects of the organization of oriented elements in natural scenes : scaling and geometric relationships . we will show that these two are interdependent . scaling measurements involve studying how the probability of finding a co - occurring pair changes as a function of the relative distance . a classic result in the analysis of natural scenes is that the luminance of pair of pixels is correlated and that this correlation is scale invariant . this indicates that statistical dependencies between pair of pixels do not depend on whether the observer zooms in on a small window or zooms out to a broad vista . the scale invariance results from stable physical properties such as a common source of illumination and the existence of objects of different sizes and similar reflectance properties . here we show that for particular geometries , the probability of finding a pair of segments follows a power law relation and thus is scale invariant . we show further that a very simple geometric rule , consistent with the idea of good continuation , predicts the arrangement of segments in natural scenes . images were obtained from a publicly available database ( _ http://hlab.phys.rug.nl/imlib/index.html _ ) of about 4000 uncompressed black and white pictures , 1536x1024 pixels in size and 12 bits in depth , with an angular resolution of ~1 minute of arc per pixel . this particular database was chosen because of the high quality of its pictures , especially the lack of motion and compression artifacts , which would otherwise overwhelm our statistics . to obtain a measure of local orientation we used the steerable filters of the and basis . using steerable filters , the energy value at any orientation can be calculated by extrapolating the responses of a set of basis filters . a filter is a second derivative of a gaussian and the filter is its hilbert transform . and filters have the same amplitude spectra , but they are out of phase ; that makes them a quadrature pair basis filters . the size of the filters used was 7x7 pixels . a measure of oriented energy was obtained by combining both sets of filters . this measure is repeated at every pixel of the image to obtain the energy function for each image ( _ n _ ) of the ensemble . to study the joint statistics of , we discretized the different orientations at 16 different values , as shown in the color representation of orientations of fig . 1 . with this information one can obtain a measure of the statistics of pairs of segments by calculating the correlation ( weighting the co - occurrences of segments by their energy ) where _ n _ is the total number of images and the integral is over each of the images of the ensemble . we were interested in measuring long - range correlations so we studied values of . the correlation matrix has dimensions 512x512x16x16 and each point results from averaging 4000 integrals over a 1536x1024 domain . to simplify the computations , for the general case , we decided to store at each pixel , for every image , the maximum energy value and its corresponding orientation . 
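as a rough illustration of the orientation measurement described above, the sketch below convolves an image with a second-derivative-of-gaussian kernel rotated to 16 discrete orientations and keeps, per pixel, the maximum squared response and the orientation at which it occurs. this is a simplification of the g2/h2 steerable-filter basis (no interpolation between basis responses), and the gaussian width and the use of the squared response as `` energy '' are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

# simplified local-orientation measure: rotate a 7x7 second-derivative-of-gaussian
# kernel to 16 orientations, convolve, and keep the max squared response per pixel
# together with the orientation index at which it was attained.

def gaussian_2nd_deriv_kernel(theta, size=7, sigma=1.2):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates so the second derivative is taken across orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    k = (xr**2 / sigma**4 - 1.0 / sigma**2) * g
    return k - k.mean()            # zero-mean so flat regions give no response

def max_oriented_energy(image, n_orient=16):
    thetas = np.pi * np.arange(n_orient) / n_orient
    energy = np.stack([convolve(image.astype(float),
                                gaussian_2nd_deriv_kernel(t))**2 for t in thetas])
    best = energy.argmax(axis=0)                 # orientation index per pixel
    return energy.max(axis=0), thetas[best]      # (max energy, orientation field)
```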
an energy threshold was arbitrarily set to match the visual perception of edges in a few images . pixels in an image were considered `` oriented '' if , `` non - oriented '' otherwise . this unique threshold value was applied to all images in the ensemble . thus , for each image , we extracted a binary field and an orientation field . from this binary field we can construct a histogram of co - occurrences : how many times an element at postion _ ( x , y ) _ was considered oriented with orientation and at position a segment was considered oriented with orientation . thus , formally , the histogram is obtained as , taking as the energy function if and ; in any other case . the computation is reduced to counting the co - occurrences in the histogram with . from the histogram we obtained a measure of statistical dependence . while choosing the threshold followed computational reasons , cortical neurons perform a thresholding operation and thus the measure of linear correlation ( weighting co - occurrences by their energy ) is not necessarily a more accurate measure of statistical dependence . the histogram was used for all the data shown in fig . 2a , 2b and 2c , fig . 3 , fig . 4 , and fig . 5 . for fig . 2d , for the particular case of collinear interactions we computed the full linear cross - correlation . this computation is considerably easier since it is done for fixed values of orientation and direction in space . the two measures shown ( laplacian correlation and collinear correlation ) were obtained according to the formulas : for laplacian filtering , and for collinear oriented filtering . a quantitative signature of scale invariance is given by a function of the form ( power law ) where _ c _ is the correlation , _ r _ the distance and _ a _ constant . if the scale is changed the function changes as where _ k _ is a constant . a power law is easily identified as a linear plot in the log - log graph , which is clear from the relation . the axis of maximal correlation ( fig . 5b ) was calculated as follows . for each pair of orientations , a measure of co - occurrence was calculated integrating across 16 different lines of angles of values over distances of [ -40,40 ] of the center of the histogram . thus , for an angle and orientations the measure of co - occurrence is : . we then calculated the direction of maximal correlation and we grouped all angles with common relative orientation , . we had 16 different values for each and from these 16 different values we calculated the mean and the standard error . to calculate the mean energy as a function of relative orientation ( fig . 3 ) we integrated the histogram in spatial coordinates for each pair of orientations in space , and , as before , the different pairs where grouped according to their relative difference in orientation to calculate a mean value and a standard deviation , and . the code was parallelized using mpi libraries and run over a small beowulf cluster of linux workstations . in general , horizontal and vertical directions had better statistics since there are more horizontal or vertical segments than oblique in the images ; these special orientations are also the ones most prone to artifacts from aliasing , staircasing , and the ensemble choice . since we are interested in this study in the correlations as a function of relative distance and orientations , all the quantitative measurments were performed averaging over all orientations . however , the results shown still held true for each individual orientation . 
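a naive version of the co-occurrence count defined above can be written directly from the binary field and the orientation field; the sketch below does this with a dense double loop over oriented pixels, which is only practical for small images (the computation described in the text is parallelized over the full ensemble). the window size and array conventions are illustrative choices.

```python
import numpy as np

# co-occurrence histogram H[dy, dx, phi1, phi2]: for every oriented pixel with
# orientation index phi1, count oriented pixels at relative offset (dy, dx) with
# orientation index phi2, within a +/- max_r window.

def cooccurrence_histogram(oriented_mask, orient_idx, n_orient=16, max_r=40):
    H = np.zeros((2 * max_r + 1, 2 * max_r + 1, n_orient, n_orient))
    ys, xs = np.nonzero(oriented_mask)
    coords = list(zip(ys, xs))
    for (y1, x1) in coords:
        p1 = orient_idx[y1, x1]
        for (y2, x2) in coords:
            dy, dx = y2 - y1, x2 - x1
            if abs(dy) <= max_r and abs(dx) <= max_r and (dy, dx) != (0, 0):
                H[dy + max_r, dx + max_r, p1, orient_idx[y2, x2]] += 1
    return H
```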
all 4000 images used in this study were black and white , 1536x1024 pixels in size and 12 bits in depth . we used a set of filters to obtain a measure of orientation at each pixel of every image of the database . the filters were 7x7 pixels in size and thus provided a local measure of orientation . the output of the filter was high at pixels where contrast changed abruptly in a particular direction , typically by the presence of line segments or edges , but also corners , junctions or other singularities ( fig . 1 ) . if the output of the filters were statistically independent then we would expect a flat correlation as a function of . in polar coordinates , the two problems which we address are naturally separated : the scaling properties result from studying how the histogram depends on _ r _ ( distance ) whereas the geometry does it from the dependence of the histogram on , and . we studied the number of co - occurring pairs of segments as a function of their relative distance for different geometries ( fig 2a , b , c ) . the different geometric configurations correspond to the different orientations of the segments and their relative position within an image . we first studied the number of co - occurrences as a function of distance in the line spanned by the orientation of the reference segment , averaged across all possible orientations of the reference line ( fig 2a ) . when both segments have the same orientation we observe a scale invariant behavior , indicated by a linear relationship in the log - log plot ( see methods ) . also it can be seen from this plot that collinear co - occurrences are more frequent than any other configuration . fig . 2b shows the probability of co - occurrences is higher for the vertical orientation , and scale invariance extends over a broader range . the scaling properties are qualitatively different for segments positioned side - by - side , along a line orthogonal to the orientation of the first segment ( fig . 2c ) . iso - oriented pairs were again the most frequent , but their co - occurrence in the orthogonal direction to the orientation of the first segment ( fig 2c , black line ) does not appear to be scale invariant . this is reflected by the presence of a kink as opposed to a straight line ( power law ) in the log - log plot , indicating well - defined scales with different behaviour . it is worth comparing the scale of interactions one observes by using different kinds of filters . before filtering images , the luminance shows correlations , which follows a power law behavior . after applying a laplacian filter ( equivalent to a center - surround operator , which measures non - oriented local contrast ) , the image is mostly decorrelated ( fig . 2d , red circles ) . this is seen in the exponential decay of the correlations , and the fact that the correlations show similar behavior after a pixel - by - pixel shuffling of the image ( fig . 2d , cyan circles ) . the strength and scaling of the correlations across the collinear line changes radically when one uses an oriented filter . in this example , to make a direct comparison between the various filters , we weighted each pair of segments by their energy value ( linear cross - correlation , instead of applying a threshold as was done in the earlier calculations ) . this calculation was done for the vertical reference line orientation , which showed long - range correlations ( fig . 2b , black circles ) , over much longer distances than observed with the laplacian filter . 
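the scale-invariance test used throughout these results amounts to checking for a straight line in log-log coordinates; a minimal version is sketched below, with the exponent read off as the slope of a least-squares fit. the synthetic data at the end, with exponent 0.6, is only there to show the usage and is not the paper's data.

```python
import numpy as np

# a power law c(r) ~ r^(-a) is a straight line in log-log coordinates, so the
# exponent is the (negated) slope of a linear fit of log(c) against log(r).

def power_law_exponent(r, c):
    r, c = np.asarray(r, float), np.asarray(c, float)
    keep = (r > 0) & (c > 0)
    slope, intercept = np.polyfit(np.log(r[keep]), np.log(c[keep]), 1)
    return -slope, np.exp(intercept)      # exponent a and prefactor

# usage with synthetic data of exponent 0.6 (purely illustrative)
r = np.arange(1, 200, dtype=float)
c = 5.0 * r**-0.6 * (1 + 0.05 * np.random.randn(r.size))
a, prefactor = power_law_exponent(r, c)
```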
moreover , these correlations were not present when measured in the shuffled images ( fig . 2 , green circles ) . it is clear from the above analysis that when one uses oriented filters one reveals strong correlations extending over large distances . the next question is how these correlations depend on the relative orientation of the line elements , and whether these dependencies have any underlying geometry . we first calculated the total number of co - occurrences as a function of the relative difference in orientation . co - occurrences decreased as the relative orientation between the pair of segments increased , being maximal when they were iso - oriented and minimal when they were perpendicular ( fig . 3 ) . the next observation concerns spatial structure . the probability of finding co - occurring pair of segments was not uniform but rather displayed a consistent geometric structure . if the two segments were iso - oriented , their most probable spatial arrangement was as part of a common line , the collinear configuration ( fig . 4a ) . as the relative difference in orientation between the two segments increased , two effects were observed . the main lobe of the histogram ( which in the iso - oriented case extends in the collinear direction ) rotated and shortened , and a second lobe ( where co - occurrences were also maximized ) appeared at from the first ( fig 4a - e ) . this effect progressed smoothly until the relative orientation of the two segments was , where the two lobes were arranged in a symmetrical configuration , lying at relative to the reference orientation . thus , pairs of oriented segments have significant statistical correlations in natural scenes , and both the average probability and spatial layout depend strongly upon their relative orientation . remarkably , the structure of the correlations followed a very simple geometric rule . a natural extension of collinearity to the plane is cocircularity . while two segments of different orientations can not belong to the same straight line , they may still be tangent to the same circle if they are tilted at identical but opposite angles to the line joining them . given a pair of segments tilted at angles and respectively , they should lie along two possible lines , at angles or , in order to be cocircular ( fig . 4f ) . this is the arrangement we observed in natural scenes . the measured correlations , given any relative orientation of edges , were maximal when arranged along a common circle . to quantify this we calculated the orientation of the axis where co - occurrences were maximal . we did that for different relative orientations and compared it to the value predicted by the cocircularity rule ( fig . 5 ) . this is particularly remarkable because the comparison is not a fit , since the cocircularity rule has no free parameters . we have shown that there are strong , long - range correlations between local oriented segments in natural scenes , that their scaling properties change for different geometries , and that their arrangement obeys the cocircularity rule . the filters we used for edge detection in our images were an oriented version of laplacian - like filters , in that they were local but had elongated , rather than circularly symmetric , center - surround structures . this change is analogous to the difference between filters in the lgn and simple cells in the primary visual cortex . 
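the cocircularity rule quoted above can be written down in a couple of lines: two segments with orientations theta1 and theta2 are tangent to a common circle when the line joining them lies at the mean of the two orientations, or at that mean plus 90 degrees. the sketch below returns the two candidate axes; the angle convention (degrees, modulo 180) is an assumption made for clarity, since the text's own numerical values are not shown here.

```python
# cocircular axes for two segment orientations (degrees from a common axis)
def cocircular_axes(theta1_deg, theta2_deg):
    mean = (theta1_deg + theta2_deg) / 2.0
    return mean % 180.0, (mean + 90.0) % 180.0

# e.g. segments tilted at +22.5 and -22.5 degrees are cocircular when placed
# along the 0-degree axis or along the 90-degree axis relative to each other
axis_a, axis_b = cocircular_axes(22.5, -22.5)   # -> (0.0, 90.0)
```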
thus , given that laplacian filtering decorrelates natural scenes , it was surprising to find the long - range correlations and scale invariant behavior of the collinear configuration . it is important to remark that our measure of correlation does not differ only on the type of filters used ( elongated vs. circular symmetric ) but also on the fact that we measured the correlations along a line containing the pair of segments . long contours are part of the output of the laplacian filters and thus the image should show correlations which might be hidden when integrating them across an area , essentially because a curve has zero area and thus the correlations along a curve are not significant when integrated over the two - dimensional field of view . the findings of long - range correlations of oriented elements extends the notion that the output of linear local oriented filtering of natural scenes can not be statistically independent and shows that those correlations might be very significant through global portions of the visual field for particular geometries . the cocircular rule has been used heuristically to establish a pattern of interactions between filters in computer vision , and psychophysical studies suggest that the human visual system utilizes a local grouping process , `` association field '' , with a similar geometric pattern . our finding provides an underlying statistical principle for the establishment of form and for the gestalt idea of good continuation , which states that there are preferred linkages endowing some contours with the property of perceptual saliency . an important portion of the classical euclidean geometry has been constructed using the two simplest planar curves , the line and the circle ; here we show that those are , in the same order , the most significant structures in natural scenes . we have reported the emergence of robust geometric and scaling properties of natural scenes . this raises the question of the underlying physical processes that generate these regularities . while our work was solely based on statistical analysis , we can speculate on the possible constraints imposed by the physical world . in a simplyfing view , we can think of a natural image as composed by object boundaries or contours , and textures . collineal pairs of segments are likely to belong to a common contour ; thus , our finding of scale invariance for collineal correlations is in agreement with the idea that scale - invariance in natural images is a consequence of the distribution of apparent sizes of objects . parallel segments , on the contrary , may be part of a common contour as well as a common texture , which would explain the two scaling regimes we observed . cocircularity in natural scenes probably arises due to the continuity and smoothness of object boundaries ; when averaged over objects of vastly different sizes present in any natural scene , the most probable arrangement for two edge segments is to lie on the smoothest curve joining them , a circular arc . these ideas , however , requiere an investigation which is beyond the scope of this paper . the geometry of the pattern of interactions in primary visual cortex parallels the interactions of oriented segments in natural scenes . 
long - range interactions tend to connect iso - oriented segments and interactions between orthogonal segments , which span a short range in natural scenes , may be mediated by short - range connections spanning singularities in the orientation and topographic maps in the primary visual cortex . the finding of a correspondence between the interaction characteristics of neurons in visual cortex and the regularities of natural scenes suggests a possible role for cortical plasticity early in life , in order for the cortex to assimilate and represent these regularities . this plasticity might be mediated by hebbian - like processes , reinforcing connections on neurons whose activity coincide , i.e. , their corresponding stimuli are correlated under natural visual stimulation . such plasticity could extend to adulthood to accommodate perceptual learning of novel and particular forms . while we find coincidences between the pattern of interactions in v1 and the distribution of segments in natural scenes , the sign of the interactions plays a crucial role . reinforcing or facilitation of co - occurring stimuli ( positive interaction ) results in hebbian - like coincidence detectors , while inhibiting the response results in barlow - like detectors of `` suspicious coincidences '' which ignore frequent co - occurrences . interestingly the hebbian idea and the decorrelation hypothesis represent two sides of the same coin . from our measurements of the regularities in natural scenes , and previous studies on the higher order receptive field properties in primary visual cortex , it appears that both type of operations exist . the response of a cell in v1 is typically inhibited when a second flanking segment is placed outside of its receptive field along an axis orthogonal to the receptive field orientation . this interaction is referred as side - inhibition , which is strongest when the flanking segment has the same orientation as the segment inside the receptive field . in the present study we found that iso - orientation is the most probable arrangement for side - by - side segments in natural scenes , which therefore constitutes an example , in the domain of orientation , of decorrelation through inhibition . this inhibition may mediate the process of texture discrimination . the property of end - inhibition has also been interpreted as a mechanism to remove redundancies and achieve statistical independence . the finding that responses of v1 neurons are sparse when presented with natural stimuli and models of normalization of neuronal responses in v1 tuned to the statistics of natural scenes also support the idea that the interactions in v1 play an important role in decorrelating the output from v1 . this is consistent with the general idea that one of the important functions of early visual processing is to remove redundant information and suggests that interactions in v1 may continue with the process of decorrelation which is achieved by laplacian and local oriented filtering . but the visual cortex also can act in the opposite way , reinforcing the response to the most probable configurations . this is seen in the collinear configuration , which is the one that elicits most facilitation , and therefore illustrates how v1 can enhance the regularities in natural scenes . the fact that those correlations are significant over the entire visual field and are highly structured suggests that this is not a residual , or second - order process . 
the opposing processes of enhancement of correlations and decorrelation may be mediated by different receptive field properties , which can exist within the same cell . the same flank can inhibit or facilitate depending on the contrast suggesting that v1 may be solving different computational problems at different contrast ranges or different noise - to - signal relationship . the dialectic behavior of visual cortex shows that the interplay between decorrelation ( extraction of suspicious coincidences ) and enhancement of particular set of regularities ( identification of form ) may be mediated by the same population of neurons . while the decorrelating process may be required to operate in the orientation domain to solve the problem of texture segmentation , particular sets of coincidences , which are repeated in the statistics , such as the conjunction of segments that form contours , need to be enhanced in the process of identification of form . we thank m. kapadia for suggesting connections of our work with neurophysiological data , and d.r . chialvo , r. crist , a.j.hudspeth and a. libchaber for constructive comments on the manuscript . we thank specially p. penev for stimulating input in early stages of the project . supported by nih grant ey 07968 and by the winston ( gc ) and mathers foundations ( mm ) and the burroughs wellcome fund ( ms ) . _ _ high - level vision : object recognition and visual cognition _ _ , cambridge , ma : the mit press . _ _ principles of gestalt psychology _ _ , new york : harcourt & brace . _ _ laws of organization in perceptual forms _ _ , london : harcourt , brace & jovanovitch _ vision res . _ * * 33**(2 ) , 173 - 193 . _ proc . natl . acad . sci . usa _ * * 91 * * , 1206 - 1209 . _ neuron _ * * 15**(4 ) , 843 - 856 . _ j. physiol . _ * * 160 * * , 106 - 154 . _ vision res . _ * * 16 * * , 1131 - 1139 . _ perception _ * * 14 * * , 105 - 126 . _ exp . brain res . _ * * 61 * * , 54 - 61 . _ j. neurophysiol . _ * * 57 * * , 1767 - 1791 . _ vision res . _ * * 30 * * , 1689 - 1701 . _ j. neurophysiol . _ * 67 * no . 4 , 961 - 980 . _ vision res . _ * * 18 * * , 2337 - 2355 . _ nature _ * * 378 * * , 492 - 496 . _ j. neurophysiol . _ * * 84 * * , 2048 - 62 . _ j. neurosci . _ * * 9 * * , 2432 - 2442 . _ j. neurosci . _ * * 17 * * , 2112 - 2127 . _ j. opt . soc . am . a _ * * 4 * * , 2379 - 2394 . _ phys . rev . lett . _ * * 73 * * , 814 - 817 . _ vision res . _ * * 37 * * , 3385 - 3398 . _ proc . r. soc . london b _ * * 265 * * , 359 - 366 . _ ieee trans . patt . anal . mach . intell . _ * * 13 * * , 891 - 906 . _ neural comp . _ 196 - 210 . _ j. neurosci . _ * * 16**(10 ) , 3351 - 3362 . * * 40 * * , s641 . _ ieee trans . patt . anal . mach . intell . _ * * 1**1 , 823 - 839 . _ vision res . _ * * 38 * * , 719 - 741 . _ neural comp . _ * * 10 * * , 903 - 940 . _ _ geometry and imagination _ _ , american mathematical society : providence . _ nature _ * * 399 * * , 655 - 661 . _ nat . neurosci . _ * * 3 * * , 264 - 269 . _ perception _ * * 1 * * , 371 - 394 . _ j.neurosci . _ * * 15**(2 ) , 1605 - 1615 . 35 . _ network : comput . neural syst . _ * 10 * _ nat . neurosci . _ * * 2 * * , 79 - 87 . _ science _ * * 287 * * , 1273 - 1276 . _ psychol.rev _ * * 61 * * , 183 - 193 . _ _ the coding of sensory messages _ _ , eds . thorpe , w.h . & mitchison , g.j . , cambridge : cambridge university press . _ _ adaptation and decorrelation in the cortex _ _ , eds . miall , c. , durbin , r.m . & mitchison , g.j . , cambridge , ma : addison - wesley . 
_ nature _ * * 381 * * , 607 - 610 . _ vision res . _ * * 37 * * , 3327 - 3338 . _ proc . natl . acad . sci . usa _ * * 96**(21 ) , 12073 - 12078 . _ nat . neurosci . _ * * 2 * * , 733 - 739 .
|
closeness centrality is a structural measure of the importance of a node in a network , which is based on the ensemble of its distances to all other nodes .it captures the basic intuition that the closer a node is to all other nodes , the more important it is .structural centrality in the context of social graphs was first considered in 1948 by bavelas .the classic definition measures the closeness centrality of a node as the inverse of the average distance from it and was proposed by bavelas , beauchamp , and sabidussi . on a graph with nodes , the centrality of is formally defined by where is the shortest - path distance between and in .this textbook definition is also referred to as _bavelas closeness centrality _ or as the _ sabidussi index _ . the classic closeness centrality of a node can be computed exactly using a single - source shortest paths computation ( such as dijkstra s algorithm ) . in general , however , we are interested not only in the centrality of a particular node , but rather in the set of all centrality values .this is the case when centrality values are used to obtain a relative ranking of the nodes . beyond that , the distribution of centralities captures important characteristics of a social network , such as its _ centralization _ .when we would like to perform many centrality queries ( in particular when we are interested in centrality values for all nodes ) on graphs with billions of edges , such as large social networks and web crawl graphs , the exact algorithms do not scale .instead , we are looking for scalable computation of approximate values , with small relative error .the node with maximum classic closeness centrality is known as the 1-median of the network . a near - linear - time algorithm for finding an approximate 1-medianwas proposed by indyk and thorup .their algorithm samples nodes at random and performs dijkstra s algorithm from each sampled node .they show that the node with minimum sum of distances to sampled nodes is with high probability an approximate 1-median of the network .the same sampling approach was also used to estimate the centrality values of all nodes and to identify the top centralities .when the distance distribution is heavy - tailed , however , the sample average is a very poor estimator of the average distance : the few very distant nodes that dominate the average distance are likely to be all excluded from the sample , resulting in a large expected error for almost all nodes .we present the first near - linear - time algorithm for estimating , with a small relative error , the classic closeness centralities of all nodes .our algorithm provides probabilistic guarantees that hold for all instances and for all nodes .computationally , our algorithm selects a small uniform sample of nodes and performs single - source shortest paths computation from each sampled node .we provide a high - level description , illustrated in figure [ epsh : fig ] , of how we use this information to estimate centralities of all nodes . from the single - source computations ,we know the distances from nodes in to all other nodes and therefore the exact value of for each , but we need to estimate the centrality of other nodes .as we mentioned , a natural way to use this information is _ sampling _ : estimate the centrality of a node using the sample average .as we argued , however , the expected relative error can be very large when the distribution of distances from the node to all other nodes is skewed. 
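the weakness of the plain sample average on heavy-tailed distance distributions is easy to reproduce numerically; the toy experiment below draws synthetic pareto-distributed `` distances '' and shows that a small uniform sample typically underestimates the true mean, which is dominated by a few very large values. the distribution and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
distances = rng.pareto(1.1, size=100_000) + 1.0   # heavy-tailed synthetic distances
true_mean = distances.mean()

k = 64
trials = [rng.choice(distances, size=k, replace=False).mean() for _ in range(1000)]
median_estimate = np.median(trials)                   # typically well below true_mean
share_low = np.mean([t < 0.75 * true_mean for t in trials])
# the true mean is dominated by rare, very large distances that a small uniform
# sample usually misses, so most trials come in below it
```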
a second basic approach , which we propose here , is _ pivoting _ , which builds on techniques from approximate shortest - paths algorithms .we define the _ pivot _ of a node as the node in the sample which is closest to .we can then estimate the centrality of by that of its pivot , , which we computed exactly . by the triangle inequality ,the value of is within of .a large error , however , can be realized even on natural instances : the centrality of the center node in a star graph would be estimated with an error of almost , using average distance of approximately 2 instead of 1 .if we use the _ pivoting upper bound _ as our estimator , we obtain an estimate that is about three times the value of the true average .we can show , however , that this is just about the worst case : on all instances and nodes , the pivoting upper bound estimate is , with high probability , not much less than or much more than three times the value , that is , the estimate is within a factor of of the actual value . since the argument is both simple and illuminating , we sketch it here . when the sample has size , it is likely that the distance between and its pivot is one of the closest distances from , with very high probability , is one of the closest distances to .since is the average value of a set of values such that of them are at least as large as , we obtain that we next apply the triangle inequality to obtain finally , we combine and to obtain that our estimate is not likely to be much larger than . therefore , the pivoting estimator has a bounded error with high probability , regardless of the distribution of distances , a property we could not get with the sampling estimator .neither method , sampling or pivoting , however , is satisfactory to us , since we are interested in a _ small relative _ error , for _ all _ nodes , on all instances , and with ( probabilistic ) guarantees .our key algorithmic insight is to carefully combine the sampling and pivoting approaches . when estimating centrality for a node , we apply the pivoting estimate only to nodes that are `` far '' from , that is , nodes that have distance much larger than the distance to the pivot .the sampling approach is applied to the remaining `` closer '' nodes . by doing so, our hybrid approach obtains an estimate with a small relative error with high confidence , something that was not possible when using only one of the methods in isolation .moreover , the computation needed by our hybrid algorithm is essentially the same as with the basic approaches : single - source shortest paths computation for a small value of .our hybrid estimator is presented and analyzed in section [ hybrid : sec ] .the estimator is applicable to points in a general metric space and is therefore presented in this context .an efficient algorithm which computes the hybrid centrality estimate for all nodes in an undirected graphs is presented in section [ computecc : sec ] .the effectiveness of our hybrid estimate in practice depends on setting a threshold correctly between pivoting and sampling .our analysis sets a threshold with which we obtain guarantees with respect to worst - case instances , i.e. , for any network structure and distances distribution of a node . 
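a direct, memory-heavy version of the hybrid estimate can be written against a full distance matrix, which is handy for checking the idea on small instances even though the actual algorithm only ever runs dijkstra from the k sampled nodes. the sketch below follows one reading of the far/close partition (a node is far when its distance from the pivot exceeds the pivot's own distance divided by epsilon, as formalized later in the text); it is an illustration, not the authors' reference implementation.

```python
import random
import numpy as np

def hybrid_sum_estimates(D, k=16, eps=0.25, seed=0):
    """D: full n x n matrix of shortest-path distances (small-instance sketch)."""
    n = D.shape[0]
    C = random.Random(seed).sample(range(n), k)   # uniform node sample
    sampled = set(C)                              # only rows D[c], c in C, are
                                                  # actually needed by the method
    est = np.empty(n)
    for v in range(n):
        if v in sampled:                          # sampled node: sum known exactly
            est[v] = D[v].sum()
            continue
        c = min(C, key=lambda s: D[s, v])         # pivot: closest sampled node
        thresh = D[c, v] / eps                    # far/close cut-off
        far = [u for u in range(n) if u != v and D[c, u] > thresh]
        close = [u for u in range(n) if u != v and D[c, u] <= thresh]
        h_sum = sum(D[c, u] for u in far if u not in sampled)   # pivot distances
        hc_sum = sum(D[u, v] for u in far if u in sampled)      # exact, far sampled
        close_sampled = [u for u in close if u in sampled]      # contains the pivot
        lc_sum = sum(D[u, v] for u in close_sampled)            # exact, close sampled
        est[v] = h_sum + hc_sum + len(close) / len(close_sampled) * lc_sum
    return est    # estimated distance sums; closeness ~ (n-1) / est
```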
in our implementation, we experiment with different settings .we also propose a novel _ adaptive _ approach , which estimates the error for several ( or effectively all relevant ) choices of threshold values , on a node per node basis .the sweet spot estimate which has the smallest estimated error is then used .our error estimator for each threshold setting and our adaptive approach are detailed in section [ adaptive : sec ] . in applications , we are often interested in measuring centrality with respect to a particular topic or property which has a different presence at each node .nodes can also intrinsically be heterogeneous , with different activity or importance levels .these situations are modeled by an assignment of weights to nodes .accordingly , one can naturally define _ weighted _ classic closeness centrality of a node as in section [ weighted : sec ] , we present and analyze an extension of our algorithm designed for approximating weighted centralities . the approach is based on weighted sampling of nodes , which , for any weighting , ensures a good approximations ( small relative error ) of equation .the handling of weighted nodes is supported with almost no cost to scalability or accuracy when compared to unweighted instances . in section [directed : sec ] we consider directed networks .when the graph is strongly connected , meaning that all nodes can reach all other nodes , it is often natural to consider closeness centrality with respect to _ round - trip _ distances .the round - trip distance between two nodes is defined as the sum of the shortest - paths distances .we show that a small modification of our hybrid algorithm , which requires both forward and reverse single - source shortest - paths computations from each sampled node , approximates round - trip centralities for all nodes with a small relative error .this follows because our hybrid estimator and its analysis apply in any metric space , and round - trip distances are a metric .when the graph is not strongly connected , however , classic closeness centrality is not well defined : all nodes that have one or more unreachable nodes have centrality value of .we may also want to separately consider inbound or outbound centralities , based on outbound distances from a node or inbound distances to a node , since these can be very different on directed graphs . proposed modification of classic centrality to directed graphs are based on a combination of the average distance within the outbound or inbound reachability sets of a node , as well as on the cardinalities of these sets .we therefore consider scalable estimation of these quantities , proposing a sampling - based solution which provides good estimates when the distance distribution is not too skewed .section [ sec : related ] briefly describes other relevant related work , including other important centrality measures .the results of our experimental evaluation are provided in section [ experiments : sec ] , demonstrating the scalability and accuracy of our algorithms on benchmark networks with up to tens of millions of nodes .we present our hybrid centrality estimator , which applies for a set of points in a metric space .we use parameters and , whose setting determines a tradeoff between computation and approximation quality .we sample points uniformly at random from to obtain a set .we then obtain the distances from each point to all points .the estimators we consider are applied to this set of computed distances . specifically , we consider estimators ] . 
for points , we can compute the exact value of , since the exact distances are available to all . for are interested in estimating .we define the _ pivot _ of ( closest node in the sample ) : and the distance to the pivot . in the introduction we discussed three basic estimators : the _ sample average _ the _ pivot _ estimator , , and the _ pivoting upper bound _ we argued that neither one can provide a small relative error with high probability .the hybrid estimate ] for is thus = \sum_{\mathclap{i\in h(j ) } } d_{c(j ) i } + \sum_{\mathclap{i \in { \mathit{hc}}(j ) } } d_{ji } + \frac{|l(j)|}{|l(j)\cap c| } \sum_{\mathrlap{i \in l(j)\cap c } } d_{ji}.\ ] ] since , the denominator satisfies and thus the estimator is well defined .it is easy to verify that the estimate ] has a small relative error for any point : [ hybrid : thm ] using , the hybrid estimator has a normalized root mean square error ( nrmse ) of . using ,when applying the estimator to all points in , we get a maximum relative error of with high probability .we consider the error we obtain by using ] for a point .error can be accumulated on accounting for distances to or to .the first set , , includes all non - sample points that have distance greater than from . the accumulated error on the sumis bounded by for each point in .since the distance from to a point in is at least the relative error on all of is at most = 1 / ( 1/\epsilon-1 ) = \epsilon/(1-\epsilon) ] to be the fraction of points are of distance ; the remaining have distance .the sum of distances is and the variance is we now consider the maximum over choices of and of the ratio of the variance to the square of the mean , which is } \frac{1}{k } \frac{x + ( 1-x)s^2}{(x+(1-x)s)^2}.\ ] ] this is maximized at .the maximum is .this means that the coefficient of variation ( cv ) is about .balancing the sampling cv with the pivoting relative error of we obtain . in our implementation, we worked with parameter settings of .this setting means that the relative error on the pivoting component is at most .we can typically expect it to be much smaller , however .first , because distances in can be much larger than .second , the estimates of different points are typically not `` one sided '' ( the estimate is one sided when the pivot happens to be on or close to the shortest path from to most other points ) , so errors can cancel out . for the sampling component ,the analysis was with respect to a worst - case distance distribution , where all values lie at the extremes of the range , but in practice we can expect an error of . moreover ,when the population variance of is small , we can expect a smaller relative error .in section [ adaptive : sec ] we propose adaptive error estimation , which for each point , uses the sampled distances to obtain a tighter estimate on the actual error .we now consider closeness centrality on undirected graphs , with a focus on efficient computation , both in terms of running time and the ( run - time ) storage we use , specifically , we would like to compute estimates ] given these distances is linear .the issue with this approach is a run - time storage of .we first observe that both the basic sampling and the basic pivoting estimates can be computed using only run - time storage per node . 
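Before the storage-efficient computation, the hybrid estimate itself can be sketched directly from the formula above. The version below simply keeps the full k x n matrix of distances from the sampled nodes (obtainable with k single-source shortest-path runs), which is exactly the run-time storage the algorithm in this section is designed to avoid; the function name, the variable names and the toy path-graph check are ours.

```python
def hybrid_average_distance(j, nodes, sample, dist_from, eps=0.1):
    """Estimate the average distance from node j to all other nodes, splitting
    the targets into far non-sampled nodes H(j) (use the pivot's distance),
    far sampled nodes HC(j) (exact distance) and close nodes L(j) (scaled
    sample average).  dist_from[c][v] is the distance from sampled node c to v."""
    n = len(nodes)
    if j in sample:                                     # exact value is available
        return sum(dist_from[j].values()) / (n - 1)
    d_to_sample = {c: dist_from[c][j] for c in sample}  # undirected: d(j,c) = d(c,j)
    pivot = min(d_to_sample, key=d_to_sample.get)
    threshold = d_to_sample[pivot] / eps
    far = [i for i in nodes if i != j and dist_from[pivot][i] > threshold]
    est = 0.0
    for i in far:                                       # HC(j) exactly, H(j) via pivot
        est += d_to_sample[i] if i in d_to_sample else dist_from[pivot][i]
    close_size = n - 1 - len(far)                       # |L(j)|, excluding j itself
    close_sampled = [c for c in sample if dist_from[pivot][c] <= threshold]
    est += close_size * (sum(d_to_sample[c] for c in close_sampled)
                         / len(close_sampled))          # pivot is always in L(j) and C
    return est / (n - 1)

# toy check on a path graph, where d(i, j) = |i - j| can be written down directly
n, k = 400, 16
nodes = list(range(n))
sample = nodes[:: n // k][:k]
dist_from = {c: {v: abs(c - v) for v in nodes} for c in sample}
exact = sum(abs(7 - v) for v in nodes) / (n - 1)
print(exact, hybrid_average_distance(7, nodes, sample, dist_from))
```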
with sampling , we accumulate , for each node , the sum of distances from the nodes in .we initialize the sum to for all and then when running dijkstra from , we add to each scanned node .the additional run - time storage used here is the state of dijkstra and additional storage per node .with pivoting , we initialize for all nodes . when running dijkstra from , we accumulate the sum of distances as .we also update when a node is scanned .when is updated , we also update the pivot . finally , for each node , we estimate by the precomputed .the pseudocode provided as algorithm [ bavelasu : alg ] computes the hybrid estimates for all nodes using additional storage per node .to do so with only storage , we use an additional run of dijkstra : for each node , we first compute its pivot and the distance .this can be done with a single run of dijkstra s algorithm having all sampled nodes as sources .we then run dijkstra s algorithm from each sampled node . for the sampled nodes , the sum is computed exactly ; for such cases , we have =s(u) ] .the computation of the estimate is based on identifying the three components of the partition of into , which is determined according to distances from the pivot .the pivot mapping computed in the additional run is used to determine this classification .the contributions to the sum estimates ] of sampled nodes are computed when we run dijkstra from .the contribution of is computed when we run dijkstra from the pivot of .when running dijkstra from a sampled node and visiting , we need to determine whether is in or in order to compute its contribution .if , we increase ] by ] , which depends on and , may not be available .we therefore add to ] , which tracks the cardinality .when the dijkstra runs terminate , we can compute ] by /p[v] ] is postponed : we place the pair in list ] and for each entry we use to classify and accordingly increase ] .list ] .this algorithm performs runs of dijkstra s algorithm and uses running storage that is linear in the number of nodes ( does not depend on ) .this means the algorithm has very little computation overhead over the basic estimators .network , integer , select uniformly at random nodes \gets \arg\min_{i=1,\ldots , k } d_{c_i v} ] \gets 0 ] ; \gets 0 ] ; \gets 0 ] ; ; ; thresh\gets 0 ] ] ; \gets d ] \overset{+}{\gets } ( z.d - d)^2 ] ; \overset{+}{\gets } 1 ] delete ] ; \overset{+}{\gets } 1 ] ; \gets \text{\sc list}[c[u ] ] \cup \{z\} ] ; \gets d/\epsilon ] ; \gets 0 ] \ , { \overset{+}{\gets}}\ , d ] ; ] \gets \text{\sc tailsum} ] -k + \text{\sc lcnum}[u] ] \gets\text{\sc hsum}[u ] + \text{\sc hcsum}[u ] + \text{\sc lcsum}[u]/p ]algorithm [ bavelasu : alg ] also computes , for each node , an estimate on the error of our estimate ] , we multiply by the magnitude of the set , which we know exactly . in cases when there are not enough or no samples ( when is empty ) , we instead compute the average squared difference over a `` suffix '' of the farthest nodes in .the sampling error applies to the remaining `` closer '' nodes and depends on the distribution of distances in , that is , on the population variance of , and on the sample size from this group , which is .we first estimate the population variance of the set of distances from to the set of nodes .this is estimated using the sample variance of the uniform sample , as we then divide the estimated population variance by the number of samples ( variable lcnum in the pseudocode ) to estimate the variance of the average of samples from the population . 
to estimate the variance contribution of the sampling component to the sum estimate ] is estimated by summing these two components : in order to get the most mileage from the single source shortest paths computations we performed , we would like to adaptively select the best `` threshold '' between pivoting and sampling , rather than work with a fixed value . for a node and a threshold value let the set contains all non - sampled nodes with distance from greater than , the set contains all sampled nodes with distance from greater than , and the set contains all nodes with distance from at most .we can then define an estimator with respect to a threshold , as in equation : in algorithm [ bavelasu : alg ] we used the threshold value for a node . herewe choose adaptively so as to balance the estimated error of the first and third summands .one way to achieve this is to apply algorithm [ bavelasu : alg ] simultaneously with several choices of .then , for each node , we take the value with the smallest estimated error .we propose here algorithm [ bavelaserr : alg ] , which maintains state per node but looks for the threshold sweet spot while covering the full range between pure pivoting and pure sampling .algorithm [ bavelaserr : alg ] computes estimates and corresponding error estimates as in algorithm [ bavelasu : alg ] .the estimates , however , are computed for values of the threshold which correspond to the distances from to each of the other sampled nodes . from these estimates ,the algorithm selects the one which minimizes the estimated error .the reason for considering only these threshold values ( for each pivot ) is that they represent all the possible assignments of sampled nodes to or .finally , we note that the run - time storage we use depends linearly in the sets of threshold values and therefore it can be advantageous , when run - time storage is constrained , to reduce the size further .one way to do this is , for example , to only use values of which correspond to discretized distances .select a set of sampled nodes , uniformly at random ; for , use \gets j ] \gets 0 ] \gets i ] ; \gets j ] \gets vvisited ] \gets d ] \gets d ] \gets distsumvisited-\text{\sc tailsum}[j , cvisited] ] \gets 0 ] ; - \delta[c(v),i])^2 ] ; \gets \text{\sc hcsumsqerr } \cdot ( n-1-k)/k ] ^ 2 ] ; ] ] \gets \text{\sc est} ]we now consider weighted classic closeness centrality with respect to node weights , as defined in equation .we limit our attention to estimating the denominator since the numerator can be efficiently computed exactly for all nodes by computing the sum once and , for each node , subtracting the weight of the node itself from the total .we show how to modify algorithm [ bavelasu : alg ] to compute estimates for for all nodes .we will also argue that the proof of theorem [ hybrid : thm ] goes through with minor modifications , that is , we obtain a small relative error with high probability .if the node weights are in , the modification is straightforward .we obtain our sample only from nodes with weight and account only for these nodes in our estimate of .we now provide details on the modification needed to handle general weights .the first component is the node sampling .we apply a weighted sampling algorithm ; in particular , we use stream sampling , which is a weighted version of reservoir sampling .we obtain a sample of exactly nodes so that the inclusion probability of each node is proportional to its weight .more precisely , computes a threshold value ( which depends on and on 
the distribution of values ) .a node is sampled with probability .these sampling probabilities are pps ( probability proportional to size ) , but with we obtain a sample of size exactly ( whereas independent pps only guarantees an expected size of ) . for each sampled nodewe define its _ adjusted weight _ , where is the varopt threshold .the weighted algorithm is very similar to algorithm [ bavelasu : alg ] , but requires the modification stated as algorithm [ weighted : alg ] .the contributions to ] ( accounted for in the tail sums computed in the array ) or in ] \overset{+}{\gets } \beta(c_i ) d_{c_i u} ] \overset{+}{\gets } d_{c_i u}^2 ( \tau-\beta(c_i))\tau ] ; \overset{+}{\gets } \beta(u) ] and /\overleftarrow{r}[v] ] ; \gets 0 ] ; \gets 0 ] perform pruned dijkstra from on prune dijkstra at \overset{+}{\gets } d_{vu} ] \gets t ] \gets 0 ] \gets\text{\sc count}[v] ] we extend the basic sampling approach to directed graphs using an algorithm of cohen that efficiently computes for each node a uniform sample of size from its reachability set ( for outbound centrality ) or from nodes that can reach it ( for inbound centrality ) .we modify the algorithm so that respective distances are computed as well .( we apply dijkstra s algorithm instead of generic graph searches . )this algorithm also computes distinct distances , but does so adaptively , so that they are not all from the same set of sources .the same algorithm also provides approximate cardinalities of these sets .this means that , when the distance distribution is not too skewed , we can obtain good estimates of the average distance to reachable nodes ( or from nodes our node is reachable from ) .algorithm [ directed : alg ] contains pseudocode for estimating outbound average distance ( ) and reachability ( ) for all nodes . by applying the same algorithm on instead of the reverse graph ,we can obtain estimates for the inbound quantities .the algorithm computes for each node a uniform random sample of size from its reachability set .it does so by running dijkstra s algorithm from each node in random order , adding to the sample of all nodes it reaches . since these searches are pruned at nodes whose samples already have nodes , no node is scanned more than times during the entire computation .the total cost is thus comparable to full ( unpruned ) dijkstra computations .this algorithm does not offer worst - case guarantees .however , on realistic instances , where centrality is in the order of the median distance , it performs well .\gets 0 ] ; \gets 0 ] \gets \text{\scrand()}/\beta[v] ] \overset{+}{\gets } \beta[u ] d_{vu} ] = r[u] ] ; \gets 0 ] ; \gets \text{\sc distsum}[v] ] ; \gets \frac{k-1}{t[v]}$ ] the algorithm applies a bottom- variant of the reachability estimation algorithm of cohen and also computes distances .the cardinality estimator is unbiased with coefficient of variation ( cv ) at most .the quality of the average distance estimates depends on the distribution of distances and we evaluate it experimentally . [ cols= " < , < , > , > , > , > , > , > , > , > , > , > , > " , ] we now consider centrality on arbitrary directed graphs. table [ tab : directed ] gives the results obtained by algorithm [ directed : alg ] . 
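Before turning to the numbers, here is a simplified sketch of the pruned-search scheme described above: each node receives a random rank, nodes are processed in increasing rank order, and a Dijkstra search on the reverse graph from each node v adds (d(u, v), rank(v)) to the sample of every node u it reaches, pruning at nodes whose samples already hold k entries. The bottom-k cardinality estimate (k - 1)/t mirrors the pseudocode; the per-node storage optimisations and the weighted ranks of the paper's algorithm are omitted, self-distances are kept in the sample for simplicity, and all names are ours.

```python
import heapq
import random
from collections import defaultdict

def outbound_estimates(nodes, edges, k, seed=0):
    """Estimate, for every node u, the size of its reachability set and the
    average distance from u to the sampled reachable nodes.  edges is a list
    of (u, v, w) triples for directed edges u -> v of length w > 0."""
    rnd = random.Random(seed)
    rank = {v: rnd.random() for v in nodes}
    reverse = defaultdict(list)                  # reverse[v] = [(u, w) : u -> v]
    for u, v, w in edges:
        reverse[v].append((u, w))
    sample = {u: [] for u in nodes}              # per-node list of (distance, rank)
    for v in sorted(nodes, key=rank.get):        # increasing random rank
        dist, heap, done = {v: 0.0}, [(0.0, v)], set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)
            if len(sample[u]) >= k:              # u already saturated: prune here
                continue
            sample[u].append((d, rank[v]))
            for p, w in reverse[u]:              # keep searching backwards
                nd = d + w
                if nd < dist.get(p, float("inf")):
                    dist[p] = nd
                    heapq.heappush(heap, (nd, p))
    estimates = {}
    for u in nodes:
        dists = [d for d, _ in sample[u]]
        if len(dists) < k:                       # the whole reachability set was seen
            size = float(len(dists))
        else:                                    # bottom-k cardinality estimate
            size = (k - 1) / max(r for _, r in sample[u])
        estimates[u] = (size, sum(dists) / len(dists))
    return estimates
```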
once again , we use and evaluate the algorithm with 1000 random queries .the `` exact '' column shows the estimated time for computing all outbound centralities using dijkstra computations .we then show the average relative error ( over the 1000 random queries ) and the total running time to compute all centralities using algorithm [ directed : alg ] .although this algorithm has no theoretical guarantees , its average relative error is consistently below 6% in practice .moreover , it is quite practical , taking less than three minutes even on a graph with almost 200 million edges .we presented a comprehensive solution to the problem of approximating , within a small relative error , the classic closeness centrality of all nodes in a network .we proposed the first near - linear - time algorithm with theoretical guarantees and provide a scalable implementation .our experimental analysis demonstrates the effectiveness of our solution .our basic design and analysis apply in any metric space : given the set of distances from a small random sample of the nodes to all other nodes , we can estimate , for each node , its average distance to all other nodes , with a small relative error .we therefore expect our estimators to have further applications .p. boldi , m. rosa , m. santini , and s. vigna .layered label propagation : a multiresolution coordinate - free ordering for compressing social networks . in _ proceedings of the 20th international conference on world wide web _ , pp . 587596 .2011 .m. holtgrewe , p. sanders , and c. schulz .engineering a scalable high quality graph partitioner . in _ 24th international parallel and distributed processing symposium ( ipdps10 ) _ , pp .ieee computer society , 2010 .j. leskovec , d. huttenlocher , and j. kleinberg .predicting positive and negative links in online social networks . in _ proceedings of the 19th international conference on world wide web _ , pp .acm , 2010 .j. leskovec , j. kleinberg , and c. faloutsos .graphs over time : densification laws , shrinking diameters and possible explanations . in _ proceedings of the eleventh acm sigkdd international conference on knowledge discovery in data mining _ , pp. 177187 .acm , 2005 .a. mislove , m. marcon , k. p. gummadi , p. druschel , and b. bhattacharjee .measurement and analysis of online social networks . in _ proceedings of the 7th acm sigcomm conference on internet measurement _ , pp .2942 . 2007j. yang and j. leskovec . defining and evaluating network communities based on ground - truth . in _ proceedings of the acm sigkdd workshop on mining data semantics _ , mds 12 , pp .3:13:8 , new york , ny , usa , 2012 .
|
closeness centrality , first considered by bavelas ( 1948 ) , is an importance measure of a node in a network based on the distances from the node to all other nodes . the classic definition , proposed by bavelas ( 1950 ) , beauchamp ( 1965 ) , and sabidussi ( 1966 ) , is ( the inverse of ) the average distance to all other nodes . we propose the first highly scalable ( near linear - time processing and linear space overhead ) algorithm for estimating , within a small relative error , the classic closeness centralities of all nodes in the graph . our algorithm applies to undirected graphs , as well as to centrality computed with respect to round - trip distances in directed graphs . for directed graphs , we also propose an efficient algorithm that approximates generalizations of classic closeness centrality to outbound and inbound centralities . although it does not provide worst - case theoretical approximation guarantees , it is designed to perform well on real networks . we perform extensive experiments on large networks , demonstrating high scalability and accuracy .
|
it is well known that sensitivity of optical interferometric displacement meters can be improved by using squeezed quantum states of the optical field . in particular ,in the case of the fabry - perot / michelson topology , used in contemporary laser interferometric gravitational - wave detectors ligo virgo , geo-600 , and tama , squeezed state inside the interferometer can be created by injection of squeezed vacuum into the interferometer dark port .depending on the squeeze angle , whether phase or amplitude fluctuations of light can be suppressed .the former tuning reduces the _ measurement noise _ , known also as _ shot noise _ , which spectral density is inversely proportional to the optical power circulating in the interferometer arms .however , it increases the _ back action _ , or _radiation - pressure _ noise , that is a random force acting on the test mass(es ) .this noise spectral density is directly proportional to .use of amplitude squeezed vacuum increases the measurement noise and reduces the back action one . in the contemporary laser interferometric gravitational - wave detectors ,the optical power is relatively low and thus of the two quantum noise sources , only measurement noise , that dominates at higher frequencies , affects the detectors sensitivity .the low frequency band is dominated by noise sources of non - quantum origin ( most notably by the seismic noise ) which are several orders of magnitude larger than quantum back action noise . in this case , the overall sensitivity can be improved by using light with squeezed phase fluctuations .this method is being implemented in geo-600 currently and will quite probably be implemented in ligo in a few years , thanks to the recent achievements in preparation of light squeezed in the working band of contemporary gravitational - wave detectors ( 10 - 10000hz ) . in the planned second generation detectors circulating power will be higher by several orders of magnitude , and technical noises should be reduced significantly .therefore , the second - generation detectors will be _ quantum noise limited _ : at higher frequencies , the sensitivity will still be limited by shot noise , but at lower frequencies one of the main sensitivity limitation will be radiation - pressure noise .the best sensitivity point , where these two noise sources become equal , is known as the standard quantum limit ( sql ) . in order to obtain sensitivity , better that the sql , frequency - dependent squeezed light , with phase squeezing at higher frequencies and amplitude squeezing at lower ones , can be used , as was first proposed by unruh and later discussed by several authors in different contexts . the first practical method for generating frequency - dependent squeezed lightwas proposed by kimble _et al _ .they have shown that the necessary dependence can be created by reflecting an ordinary frequency - independent squeezed vacuum ( before its injection into the interferometer ) from additional properly _ detuned _ filter cavities .this method is known as _ phase pre - filtering _ , because the resulting squeezed state in this case is characterized by the frequency - dependent squeeze angle and the constant squeeze factor .the filter cavities can also be located after the interferometer . 
in thisso - called _ phase post - filtering _ scheme , proposed in , the light exiting the interferometer through the dark port is reflected from the filter cavities and then goes to the homodyne detector .this scheme implements , in effect , frequency - dependent homodyne angle .one of the advantages of this scheme is that it does not require squeezing and thus can be used if a squeezed light source is not available . yetanother scheme , known as _ amplitude filtering _ , was proposed by corbitt , mavalvala , and whitcomb in .they suggested to use a _ resonance - tuned _ optical cavity with two partly transparent mirrors as a high - pass filter for the squeezed vacuum . in this scheme , at high frequencies , the phase squeezed vacuum gets reflected by the filter and enters the interferometer such that high - frequency shot noise is reduced ;while at low frequencies , ordinary vacuum passes through the filter and enters the interferometer , thus low - frequency radiation - pressure noise remains unchanged .later it was noted in paper , that in this scheme , some information about phase and amplitude fluctuation leak out from the end mirror of the filter cavity , thus degrading the sensitivity . in order to evade this effect ,an additional homodyne detection ( ahd ) capturing this information has to be used .this scheme was further developed in , where it was proposed to inject additional squeezed vacuum though the filter cavity end mirror and thus suppress also the low - frequencies radiation - pressure noise .recently it has been noted that combined amplitude - phase filtering scheme also is possible .in essence , it is the same amplitude filtering scheme with two partly transparent mirrors , but with the detuned filter cavity , which creates squeezed light with both squeeze amplitude and angle depending on frequency .the main technical problem of all these schemes arises due to the requirement that the filter cavities bandwidths should be of the same order of magnitude as the gravitational - wave signal frequency .the corresponding quality factors have to be as high as , where is the laser pumping frequency .therefore , long filter cavities with very high - reflectivity mirrors should be used . in particular , two filter cavities with the same length as the main interferometer arms ( 4 km ) , placed in the same vacuum chamber side - by - side with the latter ones , was discussed in the article . according to estimates made in this paper , the gain in the gravitational wave signals event rate up to two orders of magnitude is feasible is this case , providing squeezing and/or equivalent increase of the optical power circulating in the interferometer .this design is considered as one of the candidates for implementation in the third generation gravitational wave detectors . on the other hand ,it was noted in that using much less expensive scheme with single relatively short ( a few tens of meter , which is comparable with the length of the advanced ligo auxiliary mode - cleaner cavities ) filter cavity , it is possible to obtain a quite significant sensitivity gain . this scheme does not requireany radical changes in the detector design and probably can be implemented during the life cycle of the second generation detectors .the goal of the current paper is to find , which of the several proposed filter cavity options suits best for this scenario .this paper is organized as follows . in sec.[sec : num ] , the schemes to be optimized and the optimization procedure are described . 
in sec.[sec : results ] , the optimization results are presented and discussed .appendix [ app : noise ] contain the explicit equations for the quantum noises of the schemes considered in this paper .these equations are based mostly on the results obtained in the articles and provided here for the notation consistency and for the reader s convenience . in appendix[ app : lossless ] , the particular case of the lossless phase filter cavity is considered , which provides some insight into the relative performance of the two phase filtering schemes .the main notations and parameter values used in this paper are listed in table[tab : notations ] . [ cols="^,^,<",options="header " , ] we consider in this paper the following seven configurations , see also fig.[fig : topologies ] : 1 .the `` ordinary '' interferometer ( that is , without filter cavity ) with vacuum input ( no squeezing ) ; 2 . the `` ordinary '' interferometer with squeezed light injection into the dark port ; 3 .the phase post - filtering with vacuum input ( no squeezing ) ; 4 .the phase post - filtering with squeezed light injection into the dark port ; 5 .the phase pre - filtering ; 6 . the amplitude filtering ; 7 . the combined amplitude - phase filtering . the first two configurations , which do not contain filter cavity , are included into consideration in order to provide the baseline for the more advanced ones , and to compare the sensitivity gain provided by frequency - independent and frequency - dependent squeezing .the main interferometer parameters : arms length , mirrors mass , circulating optical power , and optical pump frequency , are assumed to be the same as planned for the advanced ligo , see table [ tab : notations ] .for the variants , which require squeezed light , we assume 10db squeezing . for the filter cavity ,we use the following convenient parameters : which togetether form its half - bandwidth in table [ tab : parameters ] , the parameters used in the optimization procedures for each of the configurations considered in this paper are listed .the number of the optimization parameters varies from 2 for the ordinary interferometer with vacuum input to 7 for the most sophisticated amplitude - phase filtering case . in order to avoid further increase ofthe parameters space ( which is already quite challenging from the computation time point of view ) , for some of the parameter fixed sub - optimal values , which provide smooth broadband shape of the quantum noise spectral density , are used .namely , ( i ) we suppose that the main interferometer is tuned in resonance . in the absence of squeezing and cavities, the interferometer detuning can provide some moderate sensitivity gain , but it destructively interferes with other advanced technologies ( see , _e.g. _ , ) .also , ( ii ) we suppose the squeeze angle ; this tuning provides minimum of the shot noise .we do not consider here technical noises , that is , the mirrors and the suspension thermal noises , seismics , gravity gradient noises _ etc _ , because it is virtually impossible now to predict their level at the later stages of the advanced ligo life cycle. it should be noted , however , that the methods considered here provide only relatively modest gain in the quantum noise spectral density , and they do not rely on any deep spectral minima in the quantum noise . 
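To illustrate the figure-of-merit machinery numerically, the sketch below integrates the lossless quantum-noise spectral density of eq. (s_sqz) (with homodyne angle phi = 0 and the squeeze angle fixed at the value that minimises shot noise) over the detection band and compares 10 dB frequency-independent squeezing with the unsqueezed canonical interferometer. The Kimble-type coupling factor, the SQL normalisation, the advanced-LIGO-like parameter values and the simple integral-of-1/S broadband figure of merit are our own stand-ins, not the paper's exact expressions.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054e-34
M, L = 40.0, 4e3                 # mirror mass [kg], arm length [m] (assumed values)
P_c, omega_0 = 840e3, 1.77e15    # circulating power [W], pump frequency [1/s]
r = 0.5 * np.log(10)             # 10 dB of squeezing

def K(Omega, gamma):             # optomechanical coupling factor, KLMTV-type form
    return 8 * P_c * omega_0 / (M * L**2 * Omega**2 * (gamma**2 + Omega**2))

def h_sql2(Omega):               # squared SQL strain
    return 8 * hbar / (M * Omega**2 * L**2)

def S_quantum(Omega, gamma, r):  # lossless eq. (s_sqz) with phi = 0, theta = pi/2
    k = K(Omega, gamma)
    return 0.5 * h_sql2(Omega) * (np.exp(-2 * r) / k + np.exp(2 * r) * k)

def snr_burst(gamma, r, band=(2 * np.pi * 10, 2 * np.pi * 5000)):
    # broadband figure of merit ~ integral of the inverse noise spectral density
    val, _ = quad(lambda O: 1.0 / S_quantum(O, gamma, r), *band, limit=200)
    return val

gamma_canonical = 2 * np.pi * 100
baseline = snr_burst(gamma_canonical, 0.0)          # canonical: no squeezing
for gamma in 2 * np.pi * np.array([100.0, 300.0, 1000.0]):
    gain = np.sqrt(snr_burst(gamma, r) / baseline)  # amplitude SNR gain vs canonical
    print(f"gamma/2pi = {gamma / 2 / np.pi:6.0f} Hz: SNR gain = {gain:.2f}")
```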
therefore , even equally modest gain in the thermal noise which quite probably will be achieved in the next decade will allow to reach the quantum sensitivity limitations of these schemes . on the other hand , we take into account optical losses both in the main interferometer and in the filter cavity , as well as the finite quantum efficiency of photodetectors . for the main interferometer and the photodetectors losses , we adopt model of the frequency - independent effective quantum efficiency discussed in sec.2.3 of paper , and use moderately optimistic value of .. however ,estimates show , that for reasonable values of the signal recycling factor , this frequency dependence also can be neglected . ]the filter cavity losses appear in all equations only in combination with the filter cavity length .the longer is filter cavity , the less is the influence of losses , for the same value of .therefore , it is convenient to introduce the _ effective _ cavity length as where is some fixed value of the losses per bounce . in this paper , we assume , that . therefore , given , for example , a cavity with and , the effective length will be equal to . as the criteria of the optimization , signal - to - noise ratios ( snrs ) for the burst sources and for the neutron star - neutron star binary events are used .the first one characterizes broadband sensitivity , while the second is more sensitive to low - frequency noises .it is convenient to normalize the snr values in terms of those corresponding to some canonical interferometer . herethe ordinary interferometer with vacuum input , homodyne angle ( so - called classical optimization , which minimizes the shot noise ) , half - bandwidth , and ( no optical losses ) will be used as a canonical one .thus , the explicit equation for the optimization are the following : [ snrs ] where is the quantum noise spectral density to be optimized , \biggr|_{\gamma=2\pi\times100\,{\rm s}^{-1}}\ ] ] is the `` canonical '' interferometer quantum noise spectral density , is the sql value of the quantum noise spectral density , is the optomechanical coupling factor , , ( these two values are defined by the gravitational wave detector bandwidth ) , and ( see sec . 3.1.3 of .all spectral densities are normalized as equivalent fluctuations of gravitational - wave strain amplitude .the explicit expressions for the optimized spectral densities are provided in the appendix , see eqs.([s_vac ] , [ s_sqz ] , [ s_post ] , [ s_pre ] , [ s_cmwd ] ) .the results of the numerical optimization are presented in fig.[fig : snrs ] , where the snr values are plotted as functions of the effective cavity length .the most evident conclusions which follows from these plots are : ( i ) that sensitivity of the amplitude filtering scheme is inferior to ones of the both phase - filtering variants and ( ii ) that the results for the combined amplitude - phase filtering scheme are virtually indistinguishable from those for the phase pre - filtering one , except of the very short filter cavity cases , , where it provides slightly better sensitivity . 
however , ( iii ) for such a short filter cavities , the sensitivity is close to one provided by the ordinary frequency - independent squeezing , and the minor additional gain is probably not worth the hassles associated with the filter cavities implementation ., used in this paper in the filter cavity based schemes optimization ( see table [ tab : parameters ] ) , the post - filtering scheme demonstrates even slightly worse sensitivity for , than frequency - independent squeezing . ] on the other hand , ( iv ) for longer filter cavities , , the sensitivity gain can be very significant , providing the snr increase ( in comparison with frequency - independent squeezing ) of for broadband sources and to for low - frequency ones , this is equivalent to the event rate increase by a half order of magnitude and almost one order of magnitude , correspondingly . in fig.[fig : filter_parms ] , filter cavity parameters for the combined amplitude - phase filtering scheme are plotted as functions of .these plots show , why the sensitivity of this scheme is so close to the phase pre - filtering one .if , then the optimal transmittance of the filter cavity end mirror quickly drops to zero , while the filter cavity half - bandwidth becomes close to the filter cavity detuning .these tunings correspond to the phase filtering regime .that is , for the longer cavities , the optimization procedure switches to the pure phase filtering .considering two variants of phase filtering , for the parameters values used here , mostly the pre - filtering scheme demonstrates better results . at a first sight, it looks strange , because it is well known that the post - filtering allows to completely eliminate the back action noise , while the pre - filtering only reduces it by the factor equal to .however , the post - filtering scheme is more sensitive to the interferometer losses. it can be explained in the following way .the post - filtering scheme implements frequency - dependent homodyne angle , while the pre - filtering one frequency - dependent squeeze angle , compare eqs.([s_sqz ] , [ s_post_lossless ] , [ s_pre_lossless ] ) . in both cases , it allows to measure , at each given frequency , the least noisy quadrature of the output light .however , the homodyne angle affects also the optomechanical transfer factor of the interferometer . at lower frequencies , the post - filtering scheme measures a quadrature which is close to the amplitude one , thus decreasing the transfer factor and emphasizing the additional noise introduced by optical losses .this effect is absent in the pre - filtering scheme , compare the terms proportional to the loss factor in eqs . and . as a results ,the quantum noise of the post - filtering scheme increases at low frequencies more sharply , than of the pre - filtering one , see fig.[fig : plots ] . direct comparison of the residual back - action terms ( proportional to ) in the optimized quantum noise spectral densities eqs . and for these two scheme allows to conclude , that the post - filtering scheme should be better for the lower losses and not so deep squeezing case , and _vice versa_. 
however , the difference is subtle .more detailed analysis is required here , and the final decision in the post- vs pre - phase filtering choice has to be made with account for additional factors not considered in this paper , in particular , technical noises spectral dependence at low frequencies .this work was supported by nsf and caltech grant phy-0651036 .the paper has been assigned ligo document number p0900294 .the author is grateful to stefan danilishin and thomas corbitt for useful remarks .the following notations are used in this appendix : two - photon quadrature amplitude vectors are denoted by boldface letters , and their cosine and sine components by the corresponding roman letters with superscripts `` c '' and `` s '' , for example : these components obey the following commutation relations : = [ \hat{\rm a}^s(\omega ) , \hat{\rm a}^s(\omega ' ) ] = 0 \ , , \\[ \hat{\rm a}^c(\omega ) , \hat{\rm a}^s(\omega ' ) ] = 2\pi i\delta(\omega+\omega ' ) \ , .\end{gathered}\ ] ] in ground state , they correspond to two independent noises with the one - sided spectral densities equal to 1 . using spectral representation and caves - schumaker s two - photon formalism , the input - output relations of the noiseless gravitational - wave detector can be presentes as follows : where , are the interferometer input and the output fields and is the spectrum of gravitational - wave strain amplitude .injection of a squeezed state into the interferometer dark port is described by the following equation : where operator corresponds to vacuum input field of the squeezer .optical losses in the interferometer and the finite quantum efficiency of the photodetector can be taken into account by an imaginary beamsplitter that mixes output of ideal interferometer with an additional vacuum noise , with weights and : the photodetector output signal ( _ i.e. 
_ , the differential current of the homodyne detector ) is proportional to where \ ] ] is the sum quantum noise with the spectral density equal to \\ = \frac{h_{\rm sql}^2(\omega)}{2}\biggl\ { \frac{\cosh2r + \sinh2r\cos2(\phi+\theta ) + s_{\rm loss}^2}{\mathcal{k}(\omega)\cos^2\phi } - 2\frac{\cosh2r\sin\phi - \sinh2r\sin(\phi+2\theta)}{\cos\phi } \\ + \mathcal{k}(\omega)(\cosh2r - \sinh2r\cos2\theta ) \biggr\ } .\end{gathered}\ ] ] in the particular case of vacuum input , , this spectral density is equal to .\ ] ] input / output relations for the most general case of the detuned filter cavity with two input / output ports shown in fig.[fig : filter ] are the following : [ cmwd_aq ] where are the incident fields at the filter cavity input and end mirrors , are the corresponding reflected fileds , is the additional vacuum noise created by absorption in the filter cavity , are the reflectivity matrices , is the transmittance matrix and are the loss matrices .all these matrices have the following uniform structure : \\#3 & \mathcal{m}(\omega ) + \mathcal{m}^*(-\omega)\end{pmatrix } } , \ ] ] where and [ cmwd_filter ] note the following unitarity conditions : [ cmwd_symm_2 ] in the phase filtering cases , both the post - filtering one considered in this subsection and the pre - filtering one considered in the next one , the filter cavity has only one partly transparent mirror : in the phase post - filtering case , the interferometer output is reflected from the filter cavity , , and then goes to the photodetector : + \sqrt{1-\eta}\,\hat{\bf n}(\omega ) \,.\ ] ] therefore , in this case the sum noise is equal to , ^{-1 } \\ \times \phi^+(\phi)\bigl\ { \mathbb{r}_i(\omega)\mathbb{c}(\omega)\mathbb{s}(r,\theta)\hat{\bf z}(\omega)e^{i\beta(\omega ) } + \bigl [ \mathbb{a}_i(\omega)\hat{\bf y}(\omega ) + s_{\rm loss}\hat{\bf n}(\omega ) \bigr]e^{-i\beta(\omega ) } \bigr\ } , \end{gathered}\ ] ] and its spectral density , with account of eqs ., is equal to \mathbb{r}_i^+(\omega)\phi(\phi ) + 1 + s_{\rm loss}^2 \bigr\ } .\end{gathered}\ ] ] in the pre - filtering case , squeezed light is reflected from the filter cavity , and then goes to the interferometer dark port : inserting this into eq . , we obtain , that the sum quantum noise in this case is equal to ^{i\beta(\omega ) } \\ + s_{\rm loss}\hat{\bf n}(\omega)e^{-i\beta(\omega ) } \bigr\ } , \end{gathered}\ ] ] and its spectral density , with account of eqs . 
, is equal to \mathbb{c}^+(\omega)\phi(\phi ) + s_{\rm loss}^2 \bigr\ } .\ ] ] in this case , two squeezed states with different squeeze angles are injected into the filter cavity through two partly transparent mirrors : where are two independent vacuum fields .the field then goes to the interferometer dark port : which gives the following equation for the `` naive '' sum quantum noise of interferometer , that is the one which does not take into account entanglement between two outputs of the filter cavity : ^{i\beta(\omega ) } + s_{\rm loss}\hat{\bf n}(\omega)e^{-i\beta(\omega ) } \bigr\ } .\end{gathered}\ ] ] in order to use this entanglement , the field has to be detected by an additional homodyne detector .the output signal of this detector is proportional to \propto\hat{q}(\omega ) \\ = \phi(\zeta)\bigl[ \mathbb{t}(\omega)\mathbb{s}(r,\theta_i)\hat{\bf z}_i(\omega ) + \mathbb{r}_e(\omega)\mathbb{s}(r,\theta_e)\hat{\bf z}_e(\omega ) + \mathbb{a}_e(\omega)\hat{\bf y}(\omega ) + s_{\rm loss}\hat{\bf n}_a(\omega ) \bigr]\end{gathered}\ ] ] where is the homodyne angle of the additional detector and is the noise associated with this detector quantum efficiency which we assume to be also equal to . the optimal combination of both homodyne detectors outputs give the following residual spectral density : where [ see eqs .] \mathbb{c}^+(\omega)\phi(\phi ) + s_{\rm loss}^2 \bigr\ } \ , , \end{gathered}\ ] ] \phi(\zeta ) + 1 + s_{\rm loss}^2 \,,\ ] ] \phi(\zeta)e^{i\beta(\omega ) } \end{gathered}\ ] ] are spectral densities of the noises , , and their cross - correlation spectral density .in the ideal lossless phase filtering case , the transmittance and the loss matrices vanish , and the refelectivity matrix corresponds to unitary rotation : with account of eqs . , spectral density can be presented in the form similar to , but with frequency - dependent homodyne angle : + s_{\rm loss}^2 } { \mathcal{k}(\omega)\cos^2\phi_f(\omega ) } \\ - 2\frac{\cosh2r\sin\phi_f(\omega ) - \sinh2r\sin[\phi_f(\omega)+2\theta ] } { \cos\phi_f(\omega ) } + \mathcal{k}(\omega)[\cosh2r - \sinh2r\cos2\theta ] \biggr\ } , \end{gathered}\ ] ] where if and , then \\ - 2e^{2r}\tan2\beta_f(\omega ) + \mathcal{k}(\omega)e^{2r } \biggr\ } .\end{gathered}\ ] ] this spectral density can be minized by setting with a single filter cavity , this equation can be fulfilled only asymptotically at , by the following values of the filter cavity parameters : in this case , and .\ ] ] in similar way , spectral density , with account of eqs ., can be presented in the form similar to , but with frequency - dependent squeeze angle : + s_{\rm loss}^2}{\mathcal{k}(\omega)\cos^2\phi } \\ - 2\frac{\cosh2r\sin\phi - \sinh2r\sin[\phi+2\theta_f(\omega)]}{\cos\phi } + \mathcal{k}(\omega)[\cosh2r - \sinh2r\cos2\theta_f(\omega ) ] \biggr\ } , \end{gathered}\ ] ] where if and , then \biggr\ } .\end{gathered}\ ] ] this spectral density can be minimized by setting . with single filter cavity , this equation can be fulfilled only asymptotically at , by the following filter cavity parameters : in this case , and ^{-2r } + \frac{s_{\rm loss}^2}{\mathcal{k}(\omega ) } + \frac{2\mathcal{k}(\omega)(\omega/\gamma)^4\sinh2r } { 1+\mathcal{k}^2(\omega)[1+(\omega/\gamma)^2]^2 } \biggr\ } .\ ] ]
|
sensitivity of future laser interferometric gravitational - wave detectors can be improved using squeezed light with a frequency - dependent squeeze angle and/or amplitude , which can be created using additional so - called filter cavities . here we compare the performance of several variants of this scheme proposed in recent years , for the case of a single relatively short ( tens of meters ) filter cavity suitable for implementation already during the life cycle of the second - generation detectors , such as advanced ligo . using numerical optimization , we show that the phase filtering scheme proposed by kimble et al appears to be the best candidate for this scenario .
|
the study of differential games with elliott kalton strategies in the viscosity solution framework is initiated by evans and souganidis where both players are allowed to take continuous controls .differential games where both players use switching controls are studied by yong . in ,differential games involving impulse controls are considered ; one player is using continuous controls whereas the other uses impulse control . in the final section of , the author mentions that by using the ideas and techniques of the previous sections one can study differential games where one player uses continuous , switching and impulse controls and the other player uses continuous and switching controls .the uniqueness result for the associated system of quasi - variational inequalities ( sqvi ) with bilateral constraints is said to hold under suitable non - zero loop switching - cost condition and cheaper switching condition .in all the above references , the state space is a finite - dimensional euclidean space .the infinite dimensional analogue of is studied by kocan _et al _ , where the authors prove the existence of value and characterize the value function as the unique viscosity solution ( in the sense of ) of the associated hamilton jacobi isaacs equation . in this paper , we study a two - person zero - sum differential game in a hilbert space where the minimizer ( player 2 ) uses three types of controls : continuous , switching and impulse . the maximizer ( player 1 ) uses continuous and switching controls .we first prove dynamic programming principle ( dpp ) for this problem . using dpp, we prove that the lower and upper value functions are ` approximate solutions ' of the associated sqvi in the viscosity sense .finally we establish the existence of the value by proving a uniqueness theorem for sqvi .we obtain our results without any assumption like non - zero loop switching - cost condition and/or cheaper switching - cost condition on the cost functions .this will be further explained in the concluding section .thus this paper not only generalises the results of to the infinite dimensional state space , it obtains the main result under fairly general conditions as well .the rest of the paper is organized as follows .we set up necessary notations and assumptions in the remaining part of this section .the statement of the main result is also given at the end of this introductory section .the dpp is proved in 2 . in this sectionwe also show that the lower / upper value function is an ` approximate viscosity solution ' of sqvi .section 3 is devoted to the proof of the main uniqueness result for sqvi and the existence of value .we conclude the paper in 4 with a few remarks .we first describe the notations and basic assumptions .the state space is a separable hilbert space .the continuous control set for player , , is , a compact metric space .the set ; ; is the switching control set for player .the impulse control set for the player 2 is , a closed and convex subset of the state space .the space of all -valued measurable maps on is the continuous control space for player and is denoted by : -valued measurable maps on ] , consists of the impulse times s and impulse vectors s .we use the notation similarly for switching controls and we write ( d^2)_{1 , j } & = \theta^2_j\quad \mbox { and } ~~ ( d^2)_{2 , j } = d^2_{j}.\end{aligned}\ ] ] now we describe the dynamics and cost functions involved in the game . to this end , let and . 
for and ,the corresponding state is governed by the following controlled semilinear evolution equation in : where and is the generator of a contraction semigroup on .( a1 ) we assume that the function is bounded , continuous and for all , , , note that under the assumption ( a1 ) , for each , , and there is a unique mild solution of ( [ s1e1 ] ) .this can be concluded for example , from corollary 2.11 , chapter 4 , page number 109 of .let be the running cost function , the switching cost functions , and the impulse cost function .( a2 ) we assume that the cost functions , , are nonnegative , bounded , continuous , and for all , , , , l(\xi_0 + \xi_1 ) & < & l(\xi_0 ) + l(\xi_1 ) , \ , \forall \ , \xi_0 , \xi_1 \in k,\\[.7pc ] \displaystyle\lim_{|\xi | \rightarrow \infty } l(\xi ) & = & \infty , \\[.9pc ] \displaystyle\inf_{d^i_1 \neq d^i_2 } c^i(d^i_1,d^i_2 ) & = & c^i_0>0 .-1pc\phantom{0}\end{aligned}\ ] ] the subadditivity condition is needed to prove lemma [ s3i1 ] which , in turn , is required to establish the uniqueness theorems ( and hence the existence of value for the game ) in 3 .this condition makes sure that , if an impulse is the best option at a particular state , then applying an impulse again is not a good option for the new state .let be the discount parameter .the total discounted cost functional is given by & = & \displaystyle \int_{0}^{\infty } { \rm e}^{-\lambda t } k(y_x(t),u^1(t),d^1(t),u^2(t),d^2(t ) ) ~{\rm d}t\\[1.1pc ] & & \displaystyle - \sum_{j \geq 0 } { \rm e}^{-\lambda \theta^1_j } c^1(d^1_{j-1},d^1_j ) \\[1.3pc ] & & \displaystyle + \sum_{j \geq 0 } { \rm e}^{-\lambda \theta^2_j } c^2(d^2_{j-1},d^2_j ) + \sum_{j \geq 1 } { \rm e}^{-\lambda \tau_j } l(\xi_j ) \end{array}\right\}.\ ] ] we next define the strategies for player 1 and player 2 in the elliott kalton framework .the strategy set for player 1 is the collection of all nonanticipating maps from to . the strategy set for player 2 is the collection of all nonanticipating maps from to . for a strategy of player 2 if , then we write that is , is the projection on the component of the map .similar notations are used for as well .hence , let denote the set of all switching controls for player starting at .then we define sets let denote the collection of all such that and be the collection of all such that .now using these strategies we define upper and lower value functions associated with the game .consider as defined in ( [ s1e4 ] ) .let be the restriction of the cost functional to the upper and lower value functions are defined respectively as follows : ,\\[.3pc]\label{s1e6 } v_-^{d^1,d^2}(x ) & = \inf_{\beta \in \delta^{d^2 } } \sup_{{\mathcal{c}}^{1,d^1 } } j_x^{d^1,d^2}[u^1(\cdot),d^1(\cdot),\beta(u^1(\cdot),d^1(\cdot))].\end{aligned}\ ] ] let and .if , then we say that the differential game has a value and is referred to as the value function . since all cost functions involved are bounded , value functions are also bounded . in view of ( a1 ) and ( a2 ) , the proof of uniform continuity of and is routine . hence both and belong to , the space of bounded uniformly continuous functions from to .now we describe the system of quasivariational inequalities ( sqvi ) satisfied by upper and lower value functions and the definition of viscosity solution in the sense of . 
for ,let ,\\[.3pc]\label{s1e8 } \hskip -4pc h^{d^1,d^2}_+(x , p ) & = \min_{u^1 \in u^1 } \max_{u^2 \in u^2 } [ \langle -p , f(x , u^1,d^1,u^2,d^2 ) \rangle -k(x , u^1,d^1,u^2,d^2 ) ] \end{aligned}\ ] ] and for , let (x ) & = \min_{\bar{d}^2\neq d^2 } [ v^{d^1,\bar{d}^2}(x)+ c^2(d^2,\bar{d}^2 ) ] , \\[.3pc]\label{s1e10 } m_+^{d^1,d^2}[v](x ) & = \max_{\bar{d}^1\neq d^1 } [ v^{\bar{d}^1,d^2}(x ) - c^1(d^1,\bar{d}^1)],\\[.3pc]\label{s1e11 } n[v^{d^1,d^2}](x ) & = \inf_{\xi \in k } [ v^{d^1,{d}^2}(x+\xi)+ l(\xi)].\end{aligned}\ ] ] the hji upper systems of equations associated with of the hybrid differential game are as follows : for & & \quad\ , v^{d^1,d^2}-m_-^{d^1,d^2}[v ] , v^{d^1,d^2}-n[v^{d^1,d^2 } ] \ , ) , v^{d^1,d^2}- m^{d^1,d^2}_+[v ] \ , \ } = 0 \end{array } \right\},\hskip -1pc\phantom{0 } \tag{{\rm hji1}+}\\[.6pc ] & \hskip -5.3pc \left .\begin{array } { @{}rcl } & & \max \ , \ { \ , \min \ , ( \lambda v^{d^1,d^2}+ \langle ax , dv^{d^1,d^2 } \rangle + h^{d^1,d^2}_+(x , dv^{d^1,d^2 } ) , \\[2 mm ] & & \quad\ , v^{d^1,d^2}-m_+^{d^1,d^2}[v ] \ , ) , \ ; \ ; v^{d^1,d^2}- m^{d^1,d^2}_-[v ] , v^{d^1,d^2 } - n[v^{d^1,d^2 } ] \ , \ } = 0 \end{array } \right\},\hskip -1pc\phantom{0 } \tag{{\rm hji2}+}\end{aligned}\ ] ] where and ] .(x ) , n[v_-^{d^1,d^2}](x)\} ] such that & \quad\ , + { \rm e}^{-\lambda t}v_{-}^{d^1,d^2}(y_x(t)).\end{aligned}\ ] ] 4 .let be such that strict inequality holds in ( ii ) .let .then there exists such that the following holds : + for each there exists with such that & \quad\ , + { \rm e}^{-\lambda t } v_-^{d^1,d^2}(y_x(t ) ) .\end{aligned}\ ] ] [ s2i4 ] the following results hold . 1. (x ) \leq v_+^{d^1,d^2}(x) ] .3 . let be such that strict inequality holds in ( ii ) .let .then there exists such that the following holds : + for each there exists ] in case 1 and (x) ] and solves y(0)=\hat{x } , \end{array}\ ] ] then as , & & \displaystyle + \int_0^t { \rm e}^{-\lambda s } [ \langle d\phi(y(s ) ) , v(s ) \rangle \!-\ ! \lambda \phi(y(s))]~{\rm d}s \!+\ !\small{o}(t ) , \end{array}\end{aligned}\ ] ] uniformly for all uniformly integrable on .we are now ready to prove that ( resp . ) is an approximate viscosity solution of ( hji ) ( resp . ( hji ) ) .[ s2i6 ] the lower value function is an approximate viscosity solution of ( hji ) and the upper value function is an approximate viscosity solution of ( hji ) .we prove that is an approximate viscosity solution of ( hji ) .the other part can be proved in an analogous manner .let be such that for all , , and .we first prove that is an approximate subsolution of ( hji1 ) .let and be such that has a local maximum at . without any loss of generality , we may assume that . if (\hat{x}) ] .it suffices to show that if possible , let .this implies that for every , there exists such that & \quad\ , -k(\hat{x},u^1,d^1,u^2,d^2 ) -c_r l(\psi ) \geq \frac{r}{2}.\end{aligned}\ ] ] by proposition 3.2 of , for small enough , there exists such that & \quad\ , -k(\hat{x},u^1(s),d^1,\beta^t(u^1(\cdot),d^1)(s ) ) -c_r l(\psi ) \geq \frac{r}{2}\end{aligned}\ ] ] for all ] .this yields & \quad\ , -k(y_{\hat{x}}(s),u^1(s),d^1,\beta^t(u^1(\cdot),d^1)(s ) ) -c_r l(\psi ) \geq \frac{r}{2}\end{aligned}\ ] ] for a.e . ] such that & \quad\ , + { \rm e}^{-\lambda t}v_{-}^{d^1,d^2}(y_{\hat{x}}(t)).\end{aligned}\ ] ] we now claim that . we may take to be lipschitz . 
& \leq & \small{o}(\delta ) + c_{\hat{x}}\delta + c_{\hat{x}}({\rme}^{-\lambda \delta}-1 ) .\end{array}\ ] ] therefore & \leq & \small{o}(\delta ) + c\delta + c({\rm e}^{-\lambda \delta}-1 ) . \end{array}\ ] ] this proves the claim that and hence from lemma [ c4.9 ] , it follows that as , & & \displaystyle + l(\psi ) \int_0^t { \rm e}^{-\lambda s } \| f(y_{\hat{x}}(s),u^{1,t}(s),d^1,{\beta}^t(u^{1,t}(\cdot),d^1)(s ) ) \|~{\rm d}s \\[1.3pc ] & & \displaystyle+\int_0^t { \rm e}^{-\lambda s } [ \langle d\phi(y_{\hat{x}}(s ) ) , f(y_{\hat{x}}(s),u^{1,t}(s),d^1,\\[1.5pc ] & & \quad\ , { \beta}^t(u^{1,t}(\cdot),d^1)(s ) ) \rangle - \lambda \phi(y_{\hat{x}}(s))]~{\rm d}s + \small{o}(t ) .\end{array}\!\right\}\ ] ] combining ( [ 1 ] ) , ( [ 2 ] ) and ( [ 3 ] ) , we obtain \dfrac{r}{2 } & \geq & { \rm e}^{-\lambda t } \phi(y_{\hat{x}}(t))-\phi(\hat{x } ) \\ & & \displaystyle + \int_0^t { \rm e}^{-\lambda s } k(y_{\hat{x}}(s),u^{1,t}(s),d^1,{\beta}^t(u^{1,t}(\cdot),d^1)(s))~{\rm d}s \!+\!\small{o}(t ) \\[1pc ] & \geq & \small{o}(t ) .\end{array}\end{aligned}\ ] ] this contradiction proves that is an approximate subsolution of ( hji1 ) . to prove that is an approximate supersolution of ( hji1 ) , let be a local minimum of . without any loss of generality, we may assume that . if (\hat{x}) ] , then we are done .assume that (\hat{x}),n[v_-^{d^1,d^2}](\hat{x})) ] we get a such that for all ( y).\ ] ] [ s3i2 ] assume ( a1 ) and ( a2 ). then the following results are true . 1. any supersolution of ( hji1 ) satisfies ] for all .let be a supersolution of ( hji1 ) .if possible , let (x_0).\ ] ] by continuity , the above holds for all in an open ball around .as in lemma 1.8(d ) , p. 30 in , we can show that there exists and a smooth map such that has local minimum at . since is a supersolution of ( hji1 ) , this will lead to (y_0),\ ] ] a contradiction .this proves ( i ) .the proof of ( ii ) is similar. next we present the proof of the uniqueness theorem .[ s3i3 ] assume ( a1 ) and ( a2 ) .let and be viscosity solutions of ( hji ) ( or ( hji ) ) .then .we prove the uniqueness for ( hji ) .the result for ( hji ) is similar .let and be viscosity solutions of ( hji ) .we prove for all . in a similar fashionwe can prove that for all . for ,define by ,\ ] ] where is fixed , are parameters , and .note that is lipschitz continuous with lipschitz constant 1 .we first fix .let and be such that \ ] ] and here is the tataru s distance defined by \ ] ] ( see for more details ) .by lemma [ s3i2 ] , we have ( y_\epsilon),\\[.3pc]\label{s3e2 } w^{d_\epsilon^1 , d_\epsilon^2 } ( y_\epsilon ) \leq { m_-}^{d^1_\epsilon , d^2_\epsilon } [ w ] ( y_\epsilon),\\[.3pc]\label{s3e3 } v^{d_\epsilon^1 , d_\epsilon^2 } ( x_\epsilon ) \geq { m_+}^ { d^1_\epsilon , d^2_\epsilon } [ v ] ( x_\epsilon ) .\end{aligned}\ ] ] if we have strict inequality in all the above three inequalities ( that is , ( [ s3e1 ] ) , ( [ s3e2 ] ) and ( [ s3e3 ] ) ) , then by the definition of viscosity sub and super solutions we will have where -\phi_2(y ) & = & \dfrac{| x_{\epsilon } -y|^2}{2\epsilon } + \kappa\langle y \rangle^{\bar{m } } + \ , \epsilon d(y , y_{\epsilon } ) .\end{array}\ ] ] in this case we can proceed by the usual comparison principle method as in . 
therefore it is enough to show that for a proper auxiliary function strict inequality occurs in ( [ s3e1 ] ) , ( [ s3e2 ] ) and ( [ s3e3 ] ) at the maximizer .we achieve this in the following three steps .there are two cases to consider ; either there is no sequence such that strict inequality holds in ( [ s3e1 ] ) or there is some sequence for which strict inequality holds in ( [ s3e2 ] ) . if there is no sequence such that ( y_{\epsilon_n } ) \mbox { for all },\ ] ] then we have equality in ( [ s3e1 ] ) for all in some interval . by the definition of and the assumptions ( a2 ) , for each , there exists such that and ( y_\epsilon ) = w^{d_\epsilon^1 , d_\epsilon^2 } ( y_\epsilon + \xi_\epsilon ) + l(\xi_\epsilon).\ ] ] then , & \quad\ , - \kappa [ \langle x_\epsilon + \xi_\epsilon \rangle^ { \bar m } + \langle y_\epsilon + \xi_\epsilon \rangle^ { \bar m } ] \ ] ] & \quad\ , -\kappa [ \langle x_\epsilon \rangle^{\bar{m}}+ \langle y_\epsilon \rangle^{\bar{m } } ] - 2 \kappa | \xi_\epsilon|\\[.5pc ] & = \phi^{d^1_\epsilon , d^2_\epsilon } ( x_\epsilon , y_\epsilon ) - 2 \kappa | \xi_\epsilon | \\[.5pc ] & \geq \phi^{d^1_\epsilon , d^2_\epsilon } ( x_\epsilon , y_\epsilon ) - 2 \kappa m_0.\end{aligned}\ ] ] hence we have we will be using this difference to define the new auxiliary function .observe that by \\[.5pc ] & \quad\ , + 2 ( 2 \kappa + \epsilon ) m_0 \eta \left ( \frac{x - x_{\epsilon}-\xi_\epsilon}{\sigma } , \frac{y - y_{\epsilon}-\xi_\epsilon}{\sigma}\right),\end{aligned}\ ] ] where is the constant coming from lemma 3.1 and is a smooth function with the following properties : 1 . , 2 . , 3 . and if , 4. .now by the definition of , & \quad\ , -\epsilon[d(x_\epsilon+ \xi_\epsilon , x_\epsilon)+d(y_\epsilon+\xi_\epsilon , y_\epsilon)]\\[.4pc ] & \quad\ , + 2(2 \kappa+ \epsilon ) m_0\\[.4pc ] & \geq \phi^{d_\epsilon^1 , d_\epsilon^2 } ( x_{\epsilon } , y_{\epsilon } ) + 2\kappa m_0\end{aligned}\ ] ] and attains its maximum in the ball around at ( say ) the point by lemma ( [ s3i1 ] ) , we now know that , for all , ( \hat { y}_\epsilon ) .\ ] ] if there is a sequence such that ( [ s3e4a ] ) holds for all , then , along this sequence , we proceed to step 2 with and .thus we always have a sequence along which we have ( \hat{y}_{\epsilon_n}),\ ] ] with being a maximizer of .since is a finite set , without any loss of generality , we may assume that for all .in the next step we check what happens to the inequality ( [ s3e2 ] ) at the maximizer of the new auxiliary function . now for each fixed , we proceed as follows .there are two cases either ( \hat{y}_{\epsilon_n } ) ] .if ( \hat{y}_{\epsilon_n}),\ ] ] then by the definition of , there exists such that we know that hence , & \quad\ , - w^{d_0 ^ 1 , d_{n_1}^2 } ( \hat { y}_{\epsilon_n } ) - c^2 ( d^2_0 , d_{n_1}^2 ) \\ & \leq 0.\end{aligned}\ ] ] hence we get now if strict inequality holds in ( \hat{y}_{\epsilon_n})$ ] , then we are done ; else we repeat the above argument and get such that now & \geq w^{d_0 ^ 1 , d_{n_1}^2 } ( \hat{y}_{\epsilon_n } ) - c^2_{0 } \\[.3pc ] & \geq w^{d_0 ^ 1 , d_0 ^ 2 } ( \hat{y}_{\epsilon_n } ) - 2 c^2_{0}.\end{aligned}\ ] ] proceeding in similar fashion , after finitely many steps , boundedness of will be contradicted and hence for some , we must have ( \hat{y}_{\epsilon_n})\ ] ] and on the other hand , if ( \hat{y}_{\epsilon_n}),\ ] ] then we proceed by taking . 
for each fixed , we proceed as follows : if ( \hat{x}_{\epsilon_n}),\ ] ] then we proceed as in step 2 and obtain such that ( \hat{x}_{\epsilon_n})\ ] ] and if ( \hat{x}_{\epsilon_n}),\ ] ] then we proceed by taking .thus , for every , is a maximizer of and ( \hat{y}_{\epsilon_n } ) , \\[.4pc]\label{s3e9 } w^{d_n^1 , d_n^2 } ( \hat{y}_{\epsilon_n } ) < m_-^{d^1_n , d^2_n } [ w ] ( \hat{y}_{\epsilon_n}),\\[.4pc]\label{s3e10 } v^{d_n^1 , d_n^2 } ( \hat{x}_{\epsilon_n } ) > m_+^ { d^1_n , d^2_n } [ v ] ( \hat{x}_{\epsilon_n}).\end{aligned}\ ] ] also by using ( [ * ] ) and ( [ * * ] ) , we have that now we define test functions and as follows : \\[.3pc ] & \quad\ , -2 ( 2 \kappa + \epsilon_n ) m_0 \eta \left ( \frac { x- x_{\epsilon_n}-\xi_{\epsilon_n}}{\sigma } , \frac { \hat{y}_{\epsilon_n } - y_{\epsilon_n}-\xi_{\epsilon_n}}{\sigma } \right ) , \\[.3pc ] \phi_{2n } ( y ) & = v^{d_{n}^1 , d_{n}^2 } ( \hat { x}_{\epsilon_n } ) - \frac{| \hat{x}_{\epsilon_n}- { y } |^2 } { 2 \epsilon_n } - \kappa [ \langle \hat{x}_{\epsilon_n}\rangle ^ { \bar m } + \langle { y}\rangle ^ { \bar m } ] \\[.3pc ] & \quad\ , + 2 ( 2\kappa + \epsilon_n ) m_0 \eta \left ( \frac{\hat{x}_{\epsilon_n } - x_{\epsilon_n } - \xi_{\epsilon_n } } { \sigma } , \frac { { y } - y_{\epsilon_n } - \xi_{\epsilon_n } } { \sigma}\right).\end{aligned}\ ] ] observe that & \quad\ , - \frac{2(2 \kappa + \epsilon_n ) m_0 } { \sigma } \ , d_x \eta \left ( \frac{\hat{x}_{\epsilon_n } - x_{\epsilon_n}- \xi_{\epsilon_n } } { \sigma } , \frac { \hat{y}_{\epsilon_n } - y_{\epsilon_n}-\xi_{\epsilon_n}}{\sigma}\right),\\ d \phi_{2n } ( \hat{y}_{\epsilon_n } ) & = \frac{\hat{x}_{\epsilon_n } - \hat{y}_{\epsilon_n}}{\epsilon_n } - \kappa \bar{m } \langle \hat{y}_{\epsilon_n } \rangle ^ { m-2 } \hat{y } _ { \epsilon_n}\\[.4pc ] & \quad\ , + \frac { 2 ( 2\kappa + \epsilon_n ) m_0}{\sigma } \ , d_y \eta \left ( \frac{\hat{x}_{\epsilon_n } - x_{\epsilon_n}-\xi_{\epsilon_n}}{\sigma } , \frac { \hat{y}_{\epsilon_n } - y_{\epsilon_n}-\xi_{\epsilon_n } } { \sigma}\right).\end{aligned}\ ] ] note that attains its maximum at and attains its minimum at .hence , as , we have & \leq h_+^ { d^1_{n } , d^2_{n } } ( \hat{y}_{\epsilon_n } , d \phi_{2n } ( \hat{y}_{\epsilon_n}))\!-\ ! h_+^ { d^1_{n } , d^2_{n } } ( \hat{x}_{\epsilon_n } , d \phi_{1n } ( \hat{x}_{\epsilon_n } ) ) \!+\! 
\small{o}(1)\\[.5pc ] & \leq l | \hat{x}_{\epsilon_n } - \hat{y}_{\epsilon_n}| \left ( 1 + \left|\frac{\hat{x}_{\epsilon_n } - \hat{y}_{\epsilon_n}}{\epsilon_n}\right|\right ) + \small{o}(1 ) \\[.5pc ] & \quad\ , + \| f \|_\infty \left[\kappa \bar{m } ( \langle \hat{x}_{\epsilon_n } \rangle ^{\bar{m}-1 } + \langle \hat{y}_{\epsilon_n } \rangle ^{\bar{m}-1 } ) + \frac { 4 ( 2\kappa+\epsilon ) m_0 } { \sigma } \right].\end{aligned}\ ] ] note that we have used to get the above inequality .now as , it follows that & \quad\ , + \frac { 2 \| f \|_\infty \kappa \bar{m } } { \lambda } + \frac { 4 \| f \|_\infty \kappa m_0 } { \sigma \lambda}\\[.5pc ] & \leq \frac { 2 \| f \|_\infty \kappa \bar{m } } { \lambda } + \frac { 4 \| f \|_\infty \kappa m_0 } { \sigma \lambda } + o(1 ) ; \;\mbox{as}\ ; \epsilon_n \downarrow 0.\end{aligned}\ ] ] for any and & \leq \phi^{d^1_0 , d^2_0 } ( x_{\epsilon_n } , y_{\epsilon_n } ) + \epsilon_n [ d(x , x_{\epsilon_n})+d(x , y_{\epsilon_n } ) ] \\[.5pc ] & \leq \psi ^{d^1_0 , d^2_0 } ( x_{\epsilon_n } , y_{\epsilon_n } ) + o(1 ) ; \;\mbox{as}\ ; \epsilon_n \downarrow 0.\end{aligned}\ ] ] since attains its maximum at in the ball around , for an appropriate constant , we have & \leq \psi^ { d^1_{n } , d^2_{n } } ( \hat{x}_{\epsilon_n } , \hat{y}_{\epsilon_n } ) \;\ ; \text{using}\ ; ( \ref{***})\ ] ] & \leq \frac { 2 \| f \|_\infty \kappa \bar{m } } { \lambda } + \frac { 4 \| f\|_\infty \kappa m_0 } { \sigma \lambda } + c \kappa + o(1 ) ; \;\mbox{as}\ ; \epsilon_n \downarrow 0.\end{aligned}\ ] ] hence we get & \quad\ , \leq \frac { 2 \| f \|_\infty \kappa \bar{m } } { \lambda } + \frac { 4 \| f \|_\infty \kappa m_0 } { \sigma \lambda } + c \kappa + o(1 ) ; \;\mbox{as}\ ; \epsilon_n \downarrow 0.\end{aligned}\ ] ] now let and then , to obtain this completes the proof of uniqueness for ( hji ) . the above uniqueness result holds true if one is the viscosity solution and the other is an approximate viscosity solution .this is the content of the next theorem .[ s3i4 ] assume ( a1 ) and ( a2 ) .let and .let be a viscosity solution of ( hji ) ( resp .( hji ) ) and be an approximate viscosity solution of ( hji ) ( resp. ( hji ) ). then .the proof is similar to that of the previous theorem .the only change here is in ( [ h2 ] ) .since is an approximate viscosity solution , one gets & \quad\ , \geq - \epsilon \ , c_r,\end{aligned}\ ] ] where .note that , for fixed , and hence once we have this inequality ( instead of ( [ h2 ] ) ) , we mimic all other arguments in the proof of the previous theorem. now we can prove our main result stated in 1 , namely theorem [ s1i1 ] . under the isaacs min max condition , ( hji ) and ( hji )let us denote this equation by ( hji ) .as in , by perron s method , we can prove an existence of a viscosity solution for ( hji ) in , the class of bounded uniformly continuous functions .let be any such viscosity solution .now , by theorem [ s2i6 ] we know that lower and upper value functions , and are approximate viscosity solutions of ( hji ) .therefore , by theorem [ s3i4 ] , .this proves the main result. have studied two - person zero - sum differential games with hybrid controls in infinite dimension .the minimizing player uses continuous , switching , and impulse controls whereas the maximizing player uses continuous and switching controls .the dynamic programming principle for lower and upper value functions is proved and using this we have established the existence and uniqueness of the value under isaacs min max condition . 
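for readers to whom the isaacs min max condition referred to above is unfamiliar, it may help to display it schematically. in its standard form, for a game with dynamics f and running cost k, it asks that the lower and upper hamiltonians coincide; the hamiltonians actually used in this paper carry in addition the switching indices d^1, d^2 and the sign and discounting conventions of (hji), so the display below is only an indicative sketch and not the exact statement from the text.

```latex
% schematic form of the isaacs min--max condition (indicative only; sign
% conventions and the exact argument list follow the paper's definitions)
H_{-}^{d^1,d^2}(x,p)
  := \sup_{u^2}\,\inf_{u^1}
     \bigl\{\langle f(x,u^1,d^1,u^2,d^2),\,p\rangle + k(x,u^1,d^1,u^2,d^2)\bigr\}
   = \inf_{u^1}\,\sup_{u^2}
     \bigl\{\langle f(x,u^1,d^1,u^2,d^2),\,p\rangle + k(x,u^1,d^1,u^2,d^2)\bigr\}
  =: H_{+}^{d^1,d^2}(x,p),
  \qquad \text{for all } x,\ p \text{ and all switching states } d^1,\,d^2 .
```

when this identity holds, the lower and upper quasi-variational inequalities reduce to a single equation (hji), which is what the main theorem exploits.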
for finite dimensional problems , similar result has been obtained by yong under two additional assumptions : ( y1 ) _ cheaper switching cost condition _ ( y2 ) _ nonzero loop switching cost condition _for any loop , with the property that & \mbox { either } \;\ ; d^1_{i+1 } = d^1_i , \ ;\mbox { or } \;\;d^2_{i+1 } = d^2_i \ , \ , \ , \ , \ ; \ ; \forall \,1 \leq i\leq j.\end{aligned}\ ] ] it holds that thus our result not only extends the work of to infinite dimensions but also proves the uniqueness of the viscosity solutions of upper and lower sqvi without the above two conditions ( y1 ) and ( y2 ) .also we have shown that under isaacs min max condition , the game has a value .moreover , we have given explicit formulation of dynamic programming principle for hybrid differential games and have also proved it which is not done in .the authors wish to thank m k ghosh for suggesting the problem and for several useful discussions .they also thank m k ghosh and mythily ramaswamy for carefully reading the manuscript and for useful suggestions .financial support from nbhm is gratefully acknowledged .crandall m g and lionsp l , hamilton jacobi equations in infinite dimensions , part vi : nonlinear a and tataru s method refined , evolution equations , control theory and biomathematics , lecture notes in pure and applied mathematics , _ dekker _ * 155 * ( 1994 ) 5189
|
a two - person zero - sum infinite dimensional differential game of infinite duration with discounted payoff involving hybrid controls is studied . the minimizing player is allowed to take continuous , switching and impulse controls whereas the maximizing player is allowed to take continuous and switching controls . by taking strategies in the sense of elliott kalton , we prove the existence of value and characterize it as the unique viscosity solution of the associated system of quasi - variational inequalities .
|
an optical isolator ( optical diode ) is an optical component that allow the light to pass in one direction but block it in the opposite direction .these devices are commonly used in laser technology to prevent the unwanted backreflections which might be harmful to optical instrumentation .the standard optical isolator , as first proposed by rayleigh , is composed of two polarizers with their transmission axes rotated by with respect to each other and a faraday rotator .the faraday rotator is made of a magnetoactive medium which is placed inside a strong magnet .the magnetic field induces a circular anisotropy in the material ( faraday effect ) , which makes the left and right circular polarizations experience a different refraction index . as a result , the plane of linear polarization travelling through the device is rotated by an angle equal to where is the induction of the applied magnetic field , is a length of the magnetoactive medium and is the verdet material constant . because the verdet constant depends strongly on the wavelength ,so does the rotation angle of the rotator .the standard optical diode shown in works as follows .the light travelling in the forward direction is first linearly polarized in the horizontal direction by the input polarizer .afterwards , the faraday element rotates the polarization by and finally , the light is transmitted through the output polarizer .the light travelling backwards is first linearly polarized at , the faraday rotator then rotates the polarization by another , meaning the light is now polarized in the vertical direction . because the input polarizer transmits only horizontal polarization , the light is extinguished .the main drawback of standard isolators is that they work efficiently only for a very narrow range of wavelengths , because of the dispersion of the faraday rotation angle . by the faraday rotator andpasses through the output polarizer .the light travelling backwards is first polarized at , then it is rotated by another in the faraday element .the resulting horizontal polarization is extinguished by the input vertical polarizer . ]the usual approach seen in commercial broadband isolators is to add the additional reciprocal rotator ( e.g. quartz rotator ) next to the faraday rotator .the former element is used to compensate for the dispersion of the faraday rotator . in the forward direction ,the rotation of the two elements add up to a total rotation of , whereas in the opposite direction the rotations subtract and no rotation is experienced by the plane of linear polarization .recently we proposed a novel broadband isolator , which could be realized with the bulk optics elements .we exploited the analogy in the mathematical description of a quantum two - state system driven by a pulsed laser field and an electromagnetic wave propagating through an anisotropic medium .the technique of composite pulses known from nuclear magnetic resonance ( nmr ) and quantum optics was applied by us to find conditions for broadband operation of the isolator . 
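to make the narrowband behaviour of this standard arrangement concrete, one can chain the jones matrices of the two polarizers and the faraday rotator and compare forward and backward transmission when the rotation angle drifts away from 45 degrees, as it does when the wavelength moves away from the design value. the sketch below does this for a few rotation angles; the angles and the helper names are arbitrary illustrative stand-ins for the dispersion of the verdet constant, not values taken from the text.

```python
import numpy as np

def rot(theta):
    """Jones matrix of a rotation of the polarization plane by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def polarizer(alpha):
    """Jones matrix of an ideal linear polarizer with transmission axis at angle alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c * c, c * s], [c * s, s * s]])

P_in, P_out = polarizer(0.0), polarizer(np.pi / 4)      # polarizers rotated by 45 degrees

for theta in np.deg2rad([45.0, 40.0, 35.0]):            # illustrative Faraday rotation angles
    F = rot(theta)
    # forward pass: horizontal input, Faraday rotation, output polarizer at 45 degrees
    e_fwd = P_out @ F @ P_in @ np.array([1.0, 0.0])
    # backward pass: the Faraday rotation is non-reciprocal, so in the laboratory
    # frame it adds another +theta instead of undoing the first rotation
    e_bwd = P_in @ F @ P_out @ np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
    T_fwd, T_bwd = np.sum(e_fwd ** 2), np.sum(e_bwd ** 2)
    print(f"rotation {np.rad2deg(theta):4.1f} deg: forward transmission {T_fwd:.3f}, "
          f"isolation {-10 * np.log10(T_bwd + 1e-300):.1f} dB")
```

already a five or ten degree error in the faraday rotation, which a moderate change of wavelength easily produces, brings the isolation down from essentially complete extinction to the order of 15 - 20 db, which is exactly the behaviour that broadband designs are meant to avoid.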
in this article, we propose an alternative realization of the optical isolator which could be suitably implemented in fibre optics .our approach is based on the adiabatic evolution of the stokes vector which allows for a broadband performance of the presented isolator .as high - power fibre lasers are attracting an increasing attention , the development of integrated optical elements for the manipulation of the state of light becomes a crucial point .an all - fibre architecture has the advantage of allowing for an efficient transmission of light without reflections losses on the way ( except from an input to a fibre ) ; such losses are a serious problem in bulk optics . as the broadbandhigh - power sources like superluminescent diodes ( sld ) or ti : sapphire oscillators are broadly used in optical coherence tomography ( oct ) , characterization of optical components and optical measurements , the issue of the efficient broadband isolation in fibres is of increasing importance .the composition of the manuscript is the following . in section [ stokes ]we present the mathematical description underpinning our approach .section [ design ] discusses the design of our broadband optical isolator .then we consider the practical realization of the proposed design in section [ practical ] .section [ results ] presents the performance of the diode and in the last section [ conclusion ] we summarize the conclusions .consider the propagation of a plane electromagnetic wave through an anisotropic dielectric medium along the -axis .we assume that there are no polarization dependent losses .then the equation of motion is given by the torque equation macmaster1961 , schmieder1969 , kubo1980 , kubo1983 , kubo1985 ] is a birefringence vector of the medium .one can write down in matrix form as the matrix is given as \,.\]]we shall make use of the adiabatic evolution of the stokes vector . for this purpose ,we need the eigenvalues of , which read .the eigenvector that corresponds to the zero eigenvalue is extremely simple : will call this eigenvector polarization dark state " in analogy to the stimulated raman adiabatic passage ( stirap ) process in quantum optics gaubatz1990,bergmann1998,bergmann2001 . assuming that the evolution is adiabatic and that the stokes polarization vector is initially aligned with the polarization dark state " , then the stokes vector will follow this adiabatic state throughout the medium . the evolution of the polarization dark state " depends on the initial polarization and the spatial ordering of the components of birefringence vector .it will be discussed in detail in the next section . in analogy to the quantum - optical stirap condition for adiabatic evolution requires the integral of the length of birefringence vector over the propagation distance to be large , i.e. optical isolator we are going to present requires two crossed polarizers and two achromatic optical elements : a reciprocal ( standard ) quarter wave plate and a non - reciprocal quarter wave plate . below we will describe the design in the framework of formalism presented above .let us first analyze the operation of the achromatic reciprocal quarter wave - plate .this problem was recently studied in .the reciprocity of the wave plate comes from the reciprocity of the birefringence vector .this means that when the light travels through the wave plate in the reverse direction , the sign of the birefringence vector is also reversed . 
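the adiabatic following of the polarization dark state described above is easy to check numerically: integrate the torque equation for the stokes vector with a birefringence vector that turns slowly from the linear (s1) axis of the poincaré sphere to the circular (s3) axis. the magnitude and length used below are arbitrary illustrative values chosen only so that the adiabatic condition holds; they are not parameters from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

L = 1.0          # propagation length (arbitrary units, illustrative)
A = 200.0 / L    # birefringence magnitude; A * L >> 1 fulfils the adiabatic condition

def omega(z):
    """Birefringence vector turning smoothly from the S1 (linear) axis to the S3 (circular) axis."""
    phi = 0.5 * np.pi * z / L
    return A * np.array([np.cos(phi), 0.0, np.sin(phi)])

def torque(z, s):
    # torque equation for the Stokes vector: ds/dz = Omega(z) x s
    return np.cross(omega(z), s)

s0 = np.array([1.0, 0.0, 0.0])   # horizontal linear input, aligned with the dark state at z = 0
sol = solve_ivp(torque, (0.0, L), s0, rtol=1e-9, atol=1e-12)
print("output Stokes vector:", np.round(sol.y[:, -1], 3))   # close to (0, 0, 1): circular light
```

with the input aligned along the dark state, the stokes vector tracks the birefringence vector and leaves the element circularly polarized, which is the achromatic quarter-wave action needed here; the only residual wavelength dependence enters through the size of the adiabaticity integral. for the backward pass the sign of a reciprocal birefringence vector is reversed, and it is this reversal that the analysis of the dark state has to take into account.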
bearing this in mind, we analyze the evolution of the polarization dark state .let us assume that initially the light is linearly polarized in the horizontal direction , ] , provided that only is present at the end ( e.g. through a large spin rate of the fibre ) .the process is fully reversible , meaning that if we change the ordering of the birefringence vector components ( now precedes and ) , and the stokes vector is initially aligned along -axis , it adiabatically evolves into state ] is transformed into the right circular polarization state ] it evolves into the linear _ vertical _ polarization ] and $ ] refer to the input horizontal and output vertical polarizers , respectively .furthermore , is the intensity of light entering the isolator , whereas and are the intensities measured after the diode in the forward and backward directions .the isolation was calculated using the standard formula depicts the results of our calculations for the three fibre lengths . in a we presented the intensity of light in the forward direction and in b the isolation .one can notice the exceptionally high level of isolation over the whole range of wavelengths considered .what is interesting , the level of isolation is almost constant irrespective of the length of the isolator .the price we have to pay for broadband isolation is the transmission window decreasing with decreasing length of the isolator .as the length of the setup decreases the adiabatic condition is weakened and , thus , the transmission becomes worse .the transmission was calculated with an assumption that the fibre is lossless . in practice , the light travelling through a fibreis attenuated .however , because the length of our isolator is relatively short , the losses should be negligible .the isolation of the best commercial broadband fibre diodes is no greater than 32 db and the range of isolation is around 150 nm .as seen in b , the isolation of our diode remains greater than 70 db for a range as wide as 500 nm .because of the robustness of adiabatic techniques , the isolation would be also insensitive to variations in the temperature and the length of the fibre .in this manuscript we proposed a novel design of the fibre optical isolator , which operates over a broad range of wavelengths .the adiabatic evolution has been successfully applied to obtain a robust broadband performance of the optical diode under study .the isolator can be further enhanced by inducing birefringence of higher magnitude .this is possible with stress - induced birefringence , as in our simulations we used a moderate value of the former . to obtain higher circular birefringence with the faraday effectone can apply stronger magnetic field or use a different magnetoactive medium to assure higher value of the verdet constant .increasing the value of birefringence would also result in the decrease of the length of the device .this work is supported by the bulgarian nsf grant dmu-03/103 .
|
we propose a broadband optical diode , which is composed of one achromatic reciprocal quarter - wave plate and one non - reciprocal quarter - wave plate , both placed between two crossed polarizers . the presented design of achromatic wave plates relies on an adiabatic evolution of the stokes vector ; the scheme is therefore robust and efficient . a possible simple implementation using fibre optics is suggested . _ keywords _ : broadband optical isolator , fibre optics , adiabatic evolution .
|
an often - held opinion on intrinsic dimensionality of data sampled from submanifolds of the euclidean space is expressed in thus : `` ... the goal of estimating the dimension of a submanifold is a well - defined mathematical problem .indeed all the notions of dimensionality like e.g. topological , hausdorff , or correlation dimension agree for submanifolds in . ''we will argue that it may be useful to have at one s disposal a concept of intrinsic dimension of data which behaves in a different fashion from the more traditional concepts .our approach is shaped up by the following five goals .we want a high value of intrinsic dimension to be indicative of the presence of the curse of dimensionality .the concept should make no distinction between continuous and discrete objects , and the intrinsic dimension of a discrete sample should be close to that of the underlying manifold .the intrinsic dimension should agree with our geometric intuition and return standard values for familiar objects such as euclidean spheres or hamming cubes .we want the concept to be insensitive to high - dimensional random noise of moderate amplitude ( on the same order of magnitude as the size of the manifold ) .finally , in order to be useful , the intrinsic dimension should be computationally feasible .for the moment , we have managed to attain the goals ( 1),(2),(3 ) , while ( 4 ) and ( 5 ) are not met .however , it appears that in both cases the problem is the same , and we outline a promising way to address it . among the existing approaches to intrinsic dimension , that of comes closest to meeting the goals ( 2),(3),(5 ) and to some extent ( 1 ) , cf .a discussion in .( lemma 1 in seems to imply that ( 4 ) does not hold for moderate noise with , i.e. , . )we work in a setting of metric spaces with measure ( -spaces ) , i.e. , triples consisting of a set , , furnished with a distance , , satisfying axioms of a metric , and a probability measure .this concept is broad enough so as to include submanifolds of ( equipped with the induced , or minkowski , measure , or with some other probability distribution ) , as well as data samples themselves ( with their empirical , that is normalized counting , measure ) . in section [ s : conc ] , we describe this setting and discuss in some detail the phenomenon of concentration of measure on high dimensional structures , presenting it from a number of different viewpoints , including an approach of soft margin classification .the curse of dimensionality is understood as a geometric property of -spaces whereby features ( -lipschitz , or non - expanding , functions ) sharply concentrate near their means and become non - discriminating .this way , the curse of dimensionality is equated with the phenomenon of concentration of measure on high - dimensional structures , and can be dealt with an a precise mathematical fashion , adopting ( 1 ) as an axiom . the intrinsic dimension , , is defined for -spaces in an axiomatic way in section [ s : axiom ] , following . to deal with goal ( 2 ) , we resort to the notion of a distance , , between two -spaces , and , measuring their similarity .this forms the subject of section [ s : gromov ] .our second axiom says that if two -spaces are close to each other in the above distance , then their intrinsic dimension values are also close . 
in this article, we show that if a dataset is sampled with regard to a probability measure on a manifold , then , with high confidence , the distance between and is small , and so and are close to each other. the goal ( 3 ) can be made into an axiom in a more or less straightforward way .we give a new example of a dimension function satisfying our axioms .we show that the gromov distance between a low - dimensional manifold and its corruption by high - dimensional gaussian noise of moderate amplitude is close to in the gromov distance. however , this property does not carry over to the samples unless their size is exponential in the dimension of ( unrealistic assumption ) , and thus our approach suffers from high sensitivity to noise ( section [ s : noise ] . )another drawback is computational complexity : we show that computing the intrinsic dimension of a finite sample is an -complete problem ( sect .[ s : complexity ] . ) however , we believe that the underlying cause of both problems is the same : allowing _ arbitrary _ non - expanding functions as features is clearly too generous . restricting the class of features to that of low - complexity functionswhose capacity is manageable and rewriting the entire theory in this setting opens up a possibility to use statistical learning theory and offers a promising way to solve both problems , which we discuss in conclusion .as in , we model datasets within the framework of spaces with metric and measure ( -spaces ) .so is called a triple , consisting of a ( finite or infinite ) set , a metric on , and a probability measure defined on the family of all borel subsets is the smallest family of subsets of closed under countable unions and complements and containing every open ball , , . ] of the metric space .the setting of -spaces is natural for at least three reasons .first , a finite dataset sitting in a euclidean space forms an -space in a natural way , as it comes equipped with a distance and a probability measure ( the empirical measure , where denotes the number of elements in ) .second , if one wants to view datasets as random samples , then the domain , equipped with the sampling measure and a distance , also forms an -space . andfinally , theory of -spaces is an important and fast developing part of mathematics , the object of study of asymptotic geometric analysis , see and references therein ._ features _ of a dataset are functions on that in some sense respect the intrinsic structure of . in the presence of a metric , they are usually understood to be _ 1-lipschitz , _ or _non - expanding , _functions , that is , having the property we will denote the collection of all real - valued 1-lipschitz functions on by .the curse of dimensionality is a name given to the situation where all or some of the important features of a dataset sharply concentrate near their median ( or mean ) values and thus become non - discriminating . 
in such cases , is perceived as intrinsically high - dimensional .this set of circumstances covers a whole range of well - known high - dimensional phenomena such as for instance sparseness of points ( the distance to the nearest neighbour is comparable to the average distance between two points ) , etc .it has been argued in that a mathematical counterpart of the curse of dimensionality is the well - known _ concentration phenomenon _ , which can be expressed , for instance , using gromov s concept of the _ observable diameter _ .let be a metric space with measure , and let be a small fixed threshold value .the _ observable diameter _ of is the smallest real number , , with the following property : for every two points , randomly drawn from with regard to the measure , and for any given -lipschitz function ( a feature ) , the probability of the event that values of at and differ by more than is below the threshold : <\kappa.\ ] ] informally , the observable diameter is the size of a dataset as perceived by us through a series of randomized measurements using arbitrary features and continuing until the probability to improve on the previous observation gets too small .the observable diameter has little ( logarithmic ) sensitivity to .the _ characteristic size _ of as the median value of distances between two elements of .the concentration of measure phenomenon refers to the observation that `` natural '' families of geometric objects often satisfy a family of spaces with metric and measure having the above property is called a _lvy family_. here the parameter usually corresponds to dimension of an object defined in one or another sense . for the euclidean spheres of unit radius , equipped with the usual euclidean distance and the ( unique ) rotation - invariant probability measure , one has , asymptotically as , , while .[ fig : obs - diam ] shows observable diameters ( indicated by inner circles ) corresponding to the threshold value of spheres in dimensions , along with projections to the two - dimensional screen of randomly sampled 1000 points .[ 0.271 ] , .,title="fig : " ] [ 0.271 ] , .,title="fig : " ] [ 0.271 ] , .,title="fig : " ] [ 0.271 ] , .,title="fig : " ] some other important examples of lvy families include : + hamming cubes of two - bit -strings equipped with the normalized hamming distance and the counting measure .the law of large numbers is a particular consequence of this fact , hence the name geometric law of large numbers sometimes used in place of concentration phenomenon ; + groups of special unitary matrices , with the geodesic distance and haar measure ( unique invariant probability measure ) ; + spaces equipped with the guassian measure with standard deviation , + any family of expander graphs ( , p. 197 ) with the normalized counting measure on the set of vertices and the path metric .any dataset whose observable diameter is small relative to the characteristic size will be suffering from dimensionality curse .for some recent work on this link in the context of data engineering , cf . 
and references therein .one of many equivalent ways to reformulate the concentration phenomenon is this : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ for a typical `` high - dimensional '' structure , if is a subset containing at least half of all points , then the measure of the -neighbourhood of is overwhelmingly close to already for small values of . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ more formally , one can prove that a family of -spaces is a lvy family if and only if , whenever a borel subset is picked up in every in such a way that , one has for every .this reformulation allows to define the most often used quantitative measure of concentration phenomenon , the _ concentration function _ , , of an -space , cf .one sets and for all , where runs over borel subsets of .clearly , a family of -spaces is lvy if and only if the concentration functions converge to zero pointwise for all .another such quantitative measure is the _ separation distance _ .let .the value of -separation distance of the -space is the supremum of all for which there are borel sets at a distance from each other which are both sufficiently large : by setting in addition , one gets the _ separation function _ of , , which is a non - increasing function from the interval ] with the property that whenever is a borel subset , one has where is the lebesgue measure on ] is the inverse image of under .introduce the distance between measurable functions on ] with the lebesgue measure and the distance normalized by , the hamming cubes with the normalized hamming ( ) distance and normalized counting measure , etc .let be a function defined for every member of and assuming values in .we call an _ intrinsic dimension function _ if it satisfies the following axioms : 1 .( _ [ ax : conc]axiom of concentration _ ) a family of members of is a lvy family if and only if .( _ axiom of smooth dependence on datasets _ ) if and , then .( _ axiom of normalization _ ) if there exist constants and an with for all .one says that the functions and asymptotically have the same order of magnitude . 
] for some ( hence every ) family with the property one has .the first axiom formalizes a requirement that the intrinsic dimension is high if and only if a dataset suffers from the curse of dimensionality .the second axiom assures that a dataset well - approximated by a non - linear manifold has an intrinsic dimension close to that of .the role of the third axiom is just to calibrate the values of the intrinsic dimension . as explained in , the axioms lead to a paradoxical conclusion : every dimension function defined for all -spaces must assign to the trivial one - point space the value .this paradox is harmless and does not lead to any contradictions , furthermore one can avoid it is by restricting the class to -spaces of a given characteristic size ( i.e. , the median value of distances between two points ) , which does not lead to any real loss in generality .in we gave an example of a dimension function , the _ concentration dimension _ of : ^ 2}.\ ] ] here is another dimension function .the quantity ^ 2}\ ] ] defines an intrinsic dimension function on the class of all -spaces for which the above integral is proper ( including , in particular , all spaces of bounded diameter ) .we call it the _ separation dimension_. _ ( cf . fig .[ fig : sepdimham ] . ) _ of the hamming cube , equipped with the normalized hamming distance and normalized counting measure , , odd . ] by judiciously choosing a normalizing constant , one can no doubt make the separation dimension of fit the values of much closer .in fact , practically every concentration invariant from theory of -spaces leads to an example of an intrinsic dimension function , and the chapter 3 of is a particularly rich source of such invariants .most existing approaches to intrinsic dimension of a dataset have to confront the problem that , strictly speaking , the value of dimension of a finite dataset is zero , because it is a discrete object . on the contrary , as examplified by the hamming cube ( fig .[ fig : sepdimham ] ) , our dimension functions make no difference between discrete and continuous -spaces . moreover , the dimension of randomly sampled finite subsets approaches the dimension of the domain .the following is a consequence of theorem [ th : sampling ] and axiom 2 of dimension function .let be a dimension function , and let be a non - atomic -space .for every , there is a value such that , whenever is a set of cardinality randomly sampled from with regard to the measure , one has with confidence [ c : sampling ] jointly with theorems [ th : gamma ] and [ th : sampling ] , the above corollary implies the following result which we state in a qualitative version .let be a dimension function , and let be a non - atomic metric space with measure .let be a probability distribution on with the marginals equal to on and the bernoulli distribution on .then for every there are natural numbers and with the following property .assume .let training datapoints be sampled from according to the distribution an an i.i.d . fashion . then with confidence , for every -lipschitz function the empirical error satisfies where is the empirical measure supported on the sample . in other words , an intrinsically high - dimensionaldataset does not admit large margin classifiers .for the moment , we do nt have any example of a dimension function that would be computationally feasible other than for well - understood geometrical objects ( spheres , cubes ... ) . fix a value . 
determining the value of the separation function for finite metric spaces ( with the normalized counting measure ) is an -complete problem . to a given finite metric space associate a graph with as the vertex set and two vertices being adjacent if and only if . now the problem of determining is equivalent to solving the largest balanced complete bipartite subgraph problem which is known to be -complete , cf .gt24 in .another deficiency of our approach in its present form is its sensitivity to noise .we will consider an idealized situation where data is corrupted by high - dimensional gaussian noise , as follows .let be a probability measure on the euclidean space .assume that is supported on a compact submanifold of of lower dimension .if has density ( that is , is absolutely continuous with regard to the lebesgue measure ) , a dataset being sampled in the presence of gaussian noise means where is the density of the gaussian distribution .equivalently , is sampled with regard to the convolution of with the -dimensional gaussian measure : in which form the assumption of absolute continuity of becomes superfluous .one can think of the -space with the euclidean distance as a _ corruption _ of the original domain .we will further assume that the amplitude of the corrupting noise is on the same order of magnitude as the size of , that is , , or .here is a result in the positive direction .[ th : corruption ] let be a compact topological manifold supporting a probability measure . consider a family of embeddings of into the euclidean space , as a submanifold in such a way that the euclidean covering numbers , , grow as .let be corrupted by the gaussian noise of constant amplitude , that is , .then the gromov distance between the image of in and its corruption by tends to zero as . for an ,let be a finite -net for .denote by the orthogonal projection from to the linear subspace spanned by .let denote the push - forward of the measure to , that is , for every borel one has .the mass transportation distance between and is bounded by , and by proposition [ p : mass ] the gromov distance between and is bounded by .a similar argument gives the same upper bound for the gromov distance between the gaussian corruption of and that of .the -space can be parametrized by the identity mapping of itself ( because the measure is non - atomic and has full support ) , while the projection parametrizes the space by its very definition . if , then .conversely , let .the fibers , are -dimensional affine subspaces , and the measure induced on each fiber by the measure approaches the gaussian measure with regard to the mass transportation distance as .the function obtained from by integration over all fibers , belongs to , and since , the concentration of measure for gaussians ( p. 140 in ) implies that for some absolute constant , the functions and differ by less than on a set of -measure .in particular , if is large enough , the gromov distance between and its gaussian corruption will not exceed , whence the result follows since was arbitrary . under the assumptions of theorem [ th :corruption ] , the value of any dimension function for the corruption of converges to as .unfortunately , this result does not extend to finite samples , because the required size of a random sample of in the presence of noise is unrealistically high : the covering numbers of go to infinity exponentially fast ( in ) , and theorem [ th : sampling ] becomes useless . 
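the reduction above also explains why no shortcut is available in practice: even the most naive way of evaluating the separation distance of a finite sample, enumerating all pairs of sufficiently large subsets, is only feasible for toy spaces. the sketch below performs exactly this brute-force enumeration for a ten-point random sample in the plane; the sample, the value of kappa and the helper names are all made up for illustration.

```python
import itertools
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.random((10, 2))                    # a toy ten-point metric space in the plane
D = cdist(X, X)
n = len(X)
kappa = 0.3                                # each of the two sets must carry measure >= kappa
size = int(np.ceil(kappa * n))             # the supremum is attained on sets of minimal size

best = 0.0
for A in itertools.combinations(range(n), size):
    rest = [i for i in range(n) if i not in A]
    for B in itertools.combinations(rest, size):
        best = max(best, D[np.ix_(A, B)].min())   # distance between the two subsets
print(f"sep(X; {kappa}, {kappa}) = {best:.3f} (brute force over all {size}-point subsets)")
```

for n points this enumeration is exponential in n, which is the practical face of the np-completeness statement above. the sensitivity to noise is a separate difficulty, illustrated next on the simplest example possible.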
as an illustration , consider the simplest case possible .let be a singular one - point manifold , and let be sampled from in the presence of gaussian random noise of moderate amplitude , that is , where .assume the cardinality of the sample to be constant , .then the gromov distance between and tends to a positive constant ( ) as .it is a well - known manifestation of the curse of dimensionality that , as , the distances between pairs of points of strongly concentrate near the median value , which in this case will tend to .thus , a typical random sample will form , for all practical purposes , a discrete metric space of diameter .in particular , will contain numerous -lipschitz functions that are highly non - constant , and the gromov distance from to the one - point space is seen to tend to the value . for manageable sample sizes ( up to millions of points ) the above will already happen in moderate to high dimensions .for , a random sample as above of points will contain , with confidence , a -separated subset containing % of all points ( that is , every two points of are at a distance from each other ) .consequently , , and the separation dimension will not exceed .( at the same time , . )we conclude : the proposed intrinsic dimension of discrete datasets of realistic size is unstable under random high - dimensional noise of moderate amplitude .the following interesting version of intrinsic dimension was proposed by chvez _ who called it simply _ intrinsic dimensionality_. let be a space with metric and measure .denote by the mean of the distance function on the space with the product measure .assume .let be the standard deviation of the same function .the intrinsic dimensionality of is defined as the intrinsic dimensionality satisfies : * a weaker version of axiom 1 : if is a lvy family of spaces with bounded metrics , then , * a weaker version of axiom 2 : if and , then , * axiom 3 .[ th : chavez ] for a proof , as well as a more detailed discussion , see , where in particular it is shown on a number of examples that the dimension chvez _ et al ._ and our dimension can behave in quite different ways between themselves ( and of course from the topological dimension ) .the approaches to intrinsic dimension listed below are all quite different both from our approach and from that of chvez _ et al ._ , in that they are set to emulate various versions of _ topological _( i.e. essentially local ) dimension .in particular , all of them fail both our axioms 1 and 2 . _ correlation dimension , _ which is a computationally efficient version of the box - counting dimension , see . _ packing dimension _ , or rather its computable version as proposed and explored in . _ distance exponent _ , which is a version of the well - known minkowski dimension . an algorithm for estimating the intrinsic dimension based on the takens theorem from differential geometry . a non - local approach to intrinsic dimension estimation based on entropy - theoretic results is proposed in , however in case of manifolds the algorithm will still return the topological dimension , so the same conclusions apply .we have proposed a new concept of the intrinsic dimension of a dataset or , more generally , of a metric space equipped with a probability measure .dimension functions of the new type behave in a very different way from the more traditional approaches , and are closer in spirit to , though still different from , the notion put forward in ( cf . a comparative discussion in ) . 
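the quantity of chávez et al. recalled above is, in contrast, straightforward to estimate from a sample, and a quick computation also makes the noise sensitivity discussed in this section visible. the sketch below estimates the ratio of the squared mean pairwise distance to twice its variance for spheres of growing dimension and for a one-point space corrupted by moderate gaussian noise in a high-dimensional ambient space; the helper name, the sample sizes, the sphere dimensions and the noise amplitude are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def intrinsic_dimensionality(points):
    """Intrinsic dimensionality of Chavez et al.: (mean pairwise distance)^2 / (2 * variance)."""
    d = pdist(points)
    return d.mean() ** 2 / (2.0 * d.var())

for n in (2, 5, 20, 100):
    x = rng.standard_normal((1000, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)        # uniform sample of the sphere S^(n-1)
    print(f"sphere S^{n - 1}:  estimated dimensionality {intrinsic_dimensionality(x):6.1f}")

# a 'one-point manifold' corrupted by Gaussian noise of moderate amplitude in R^1000
noisy = rng.normal(scale=0.1, size=(500, 1000))
print(f"point + noise in R^1000: estimated dimensionality {intrinsic_dimensionality(noisy):6.1f}")
```

the values grow essentially linearly with the dimension of the sphere, and the corrupted one-point space looks, from the viewpoint of this statistic just as from that of the dimension functions studied here, like a space of enormous intrinsic dimension rather than like a point.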
in particular , high intrinsic dimension indicates the presence of the curse of dimensionality , while lower dimension expresses the existence of a small set of well - dissipating features and a possibility of dimension reduction of to a low - dimensional feature space .the intrinsic dimension of a random sample of a manifold is close to that of the manifold itself , and for standard geometric objects such as spheres or cubes the values returned by our dimension are `` correct ''. two main problems pinpointed in this article are prohibitively high computational complexity of the new concepts , as well as their instability under random high - dimensional noise .the root cause of both problems is essentially the same : the class of all -lipschitz functions is just too broad to serve as the set of admissible features .the richness of the spaces explains why computing concentration invariants of an -space is hard : roughly speaking , there are just too many feature functions on the space that are to be examined one by one .the abundance of lipschitz functions on a discrete metric space is exactly what makes the gromov distance from a random gaussian sample to a manifold large . at the same time, there is clearly no point in taking into account , as a potential feature , say , a typical polynomial function of degree on the ambient space , because such a function may contain up to monomials .since we can not store , let even compute , such a function , why should we care of it at all ? a way out , as we see it , consists in refining the approach and modelling a dataset as a pair , consisting of an -space _ together with a class of admissible features , _ , whose statistical learning capacity measures ( vc - dimension , covering numbers , rademacher averages , etc . ) are limited .this will accurately reflect the fact that in practice one only uses features that are computationally cheap , and will allow a systematic use of vapnik - chervonenkis theory .all the main concepts of asymptotic geometric analysis will have to be rewritten in the new framework , and this seems to be a potentially rewarding subject for further investigation .a theoretical challenge would to be obtain noise stability results under the general statistical assumptions of .finally , the gromov distance between two -spaces , and , is determined on the basis of comparing the features of and rather than the spaces themselves , which opens a possibility to try and construct an approximating principal manifold to by methods of unsupervised machine learning by optimizing over suitable sets of lipschitz functions , as in .the concept of dimension in mathematics admits a very rich spectrum of interpretations .we feel that the topological versions of dimension have been dominating applications in computing to the detriment of other approaches .we feel that the concept of dimension based on the viewpoint of asymptotic geometric analysis could be highly relevant to analysis of large sets of data , and we consider this article as a small step in the direction of developing this approach .the author is grateful to three anonymous referees of this paper for a number of suggested improvements .research was supported by nserc discovery grant and university of ottawa internal grants .blanchard , g. , kawanabe , m. , sugiyama , m. , spokoiny , v. , & mller , k .-( 2006 ) . in search of non - gaussian component of a high - dimensional distribution .journal of machine learning research , 7 , 247282 .hein , m. , & audibert , j .- y .( 2005 ) . 
intrinsic dimensionality estimation of submanifolds in . in : l. de raedt and s. wrobel ( eds . ) , proc .22nd intern .conf . on machine learning ( icml ) ( pp .289296 ) , amc press .hein , m. & maier , m. ( 2007 ) .manifold denoising as preprocessing for finding natural representations of data . in proc. twenty - second aaai conference on artificial intelligence ( vancouver , b.c . ) , pp .16461649 .traina , c. , jr . , traina , a.j.m . & faloutsos , c. ( 1999 ) .distance exponent : a new concept for selectivity estimation in metric trees .technical report cmu - cs-99 - 110 , computer science department , carnegie mellon university .
|
we perform a deeper analysis of an axiomatic approach to the concept of intrinsic dimension of a dataset proposed by us in the ijcnn07 paper . the main features of our approach are that a high intrinsic dimension of a dataset reflects the presence of the curse of dimensionality ( in a certain mathematically precise sense ) , and that the dimension of a discrete i.i.d . sample of a low - dimensional manifold is , with high probability , close to that of the manifold . at the same time , the intrinsic dimension of a sample is easily corrupted by moderate high - dimensional noise ( of the same amplitude as the size of the manifold ) and suffers from prohibitively high computational complexity ( computing it is an np - complete problem ) . we outline a possible way to overcome these difficulties . intrinsic dimension of datasets , concentration of measure , curse of dimensionality , space with metric and measure , features , gromov distance , random sample of a manifold , high - dimensional noise
|
the classical two - sample problem concerning i.i.d .observations has been extensively studied in the literature .we propose in this paper to extend this problem to the case of two contaminated samples when a noise is added to each sample .more precisely , we consider two samples , and , from the following two models [ convol ] x = y+z , & and & u= v+w , where and ( resp . and ) are two independent random variables .it is also assumed that and are independent .however this paper concerns independent as well as paired variables and since and can be dependent .we keep this hypothesis through the paper putting ( the more general case being easily obtained ) .we assume that all moments of and exist and are known .we are interested in testing the equality of the distribution of and .our aim is to construct an omnibus test for the general non parametric hypothesis [ hyp ] h_0 : l_y= l_v & against & h_1 : l_y_v , where and refers to the distribution of and . for that we extend the one - sample smooth test inspired of neyman ( 1937 )( see also rayner and best , 1989 , for a general introduction ) to the two - sample case under ( [ convol ] ) .for the one sample problem , the smooth test is an omnibus approach which consists in coming down to parametric hypotheses. then the smooth statistic is composed of different elements each able to detect a departure from the null hypothesis .this approach can be naturally extended to the two sample case , as in rayner and best ( 2001 ) ( see also chervoneva and iglewicz , 2005 ) .in addition , ledwina ( 1994 ) introduced a data driven procedure permitting to select automatically the number of elements of the statistic .the automatic selection is based on the schwarz ( 1978 ) criterion .janik - wrblezca and ledwina ( 2000 ) first used this technique combined with rank statistic for the two sample problem .recently ghattas et al .( 2011 ) obtained a data driven test for the two paired sample problem .various extensions of the data driven smooth test have been proposed , particularly in the context of survival data in krauss ( 2009 ) when samples are right censored , or in the context of detection of changes in antoch et al .( 2008 ) reducing the problem to a two sample subproblem . from ( [ convol ] )it is clear that the unknown moments of ( resp . ) can be expressed in terms of moments of and ( resp . and ) .the proposed smooth test is based on the difference between the first moments of and .the order determines the number of components of the test statistic .we then adapt the data driven approach permitting to select automatically this number .we first consider the case where varies between and , for a fixed integer .then we let tend to infinity more slowly than the sample size . for asymptotic results we make an assumption on the smallest eigenvalue of the sample covariance matrix .but in practice , the data driven procedure is effective in the first case with fixed large enough , as shown in our simulations .finally , we apply our method to the uefa champion s league data from meintanis ( 2007 ) . before describing our test procedurewe offer a few examples that illustrate the situation ( [ convol ] ) . _evaluation by experts_. 
during an assessment , such as sensory analysis , it is very common that experts are biased in their judgments .this bias is commonly observed and assessed during training and can be assumed to be known in distribution .typically , one can assume a normal distribution with mean and variance associated with each expert .in this case , if we want to compare the distribution of two products evaluated by two experts , we are reduced to the situation ( [ convol ] ) where and coincide with the two experts scoring , and being their errors ._ ruin theory ._ another situation that can be encountered in ruin theory is the random sum of claims , , where are i.i.d .random variables with known exponential distribution .the number of claims can be decomposed into a fixed known value , , and a random value , , representing an aggregation of different claims .thus if we observe two sums x = _ i=1^n_1+n_1e_i & and & u= _ i=1^n_2+n_2v_i , where and are i.i.d ., one problem is to compare the randomness structure and , that is to test the equality of the distributions of these two variables .this problem coincides with ( [ convol ] ) since it is equivalent to testing the equality of the distributions of and ._ mixture model_. the deconvolution problem is also related to a mixture problem since a particular case of ( [ convol ] ) is the location mixture situation of the form f_x(x)=f_y(x - m ) f_z(dm ) , & & f_u(x)=f_v(x - m ) f_w(dm ) , with the location parameter , , the unknown mixed densities and , the known mixing densities .this situation can be encountered when finite mixture distributions have known components , and when the purpose is to compare their associated sub - populations associated with these components .we can also reverse the roles of and and be interested in the comparison of two linear mixed models with gaussian noise and unknown random effects . _extreme values_. contaminated model can be also viewed as a model for extremal values considering the convolutions x = y+z , & & u = v+w , where and are bernoulli with small parameter representing the occurrence of an extreme event .often the non - extreme distributions of and are well observed and known and we can be interested in the comparison of the extreme distributions of and . assuming that one knows when these rare events occur , they are observed with a known noise as in ( [ convol ] ) . _ scale mixture_. finally , it is current to observe the product of two variables , say x = yz , & & u = vw .for instance , that is the case for zero inflated distributions , when and are bernoulli random variables and and are discrete random variables . without loss of generality , by translating all variables , we can use a log - transformation to recover ( [ convol ] ) .many other cases can be envisaged as and with and normally distributed .the paper is organized as follows . in section 2we introduce the method based on polynomial expansions for testing the equality of the two contaminated densities . in section 3 we propose a simple data driven procedure that we extend to the case where the number of components of the statistic tends to infinity , with additional assumptions . in section 4 , finite - sample properties of the proposed test statistics are examined through monte carlo simulations .the analysis of a the champion s league data set is provided in section 5 . 
section 6 contains a brief discussion .consider simultaneously two ( possibly paired ) samples and following ( [ convol ] ) and such that all moments exist and characterize the associated distributions .is is assumed that the moments of and are known . from ( [ convol ] )we have the following two expansions for all integer ( x^i ) = _j=0^i c_ij(y^j)z_i - j , & and & ( u^i ) = _j=0^i c_ije(v^j)w_i - j , [ basic ] with , , and .write and .the null hypothesis coincides with , , and our testing procedure reduces to the parametric testing problem : , when gets large. we shall let tend to infinity , with a speed depending of the sample size , and its choice will be done automatically by a data driven method .inverting ( [ basic ] ) we get a_i= ( p_i(x ) ) & and & b_i = ( q_i(u ) ) , [ basic2 ] where and are polynomials of degree .for instance the first three terms are p_1(x ) & = & x - z_1 , + p_2(x ) & = & x^2 - 2z_1p_1(x)-z_2 , + p_3(x ) & = & x^3 -3z_1p_2(x)-3z_2p_1(x)-z_3 . to construct the test statistics we consider the vector of differences v_s(k)&= & ( p_i(x_s ) -q_i(u_s))_1 i k , and we put j_n(k)&=&_s=1^nv_s(k ) . under , has mean zero and finite variance - covariance matrix where denotes the expectation under and is the transposition of .next , let us define the empirical version of under , that is the matrix in the following , we assume that is a positive - definite matrix so that the corresponding inverse matrix and its square root exist . note that this condition is satisfied a.s. for large enough since the estimator is consistent .we consider the test statistic where denotes the euclidian norm on .application of the central limit theorem shows that under , converges in distribution to a random variable with degrees of freedom as tends to infinity .the strategy is to select an appropriate degree ; that is , a correct number of components in the test statistics .in addition , observe that the null hypothesis can be rewritten as where .suppose that the maximum likelihood estimator of equals the empirical mean of the sample of the s , that is , as it is the case for instance when the distribution of belongs to an exponential family .then , is the score statistic and the schwarz criteria is well adapted to get an automatic selection of .in this section , the data - driven method introduced by ledwina ( 1994 ) ( see also inglot et al . 1997 ) is used to optimize the parameter in our test statistic .it is based on a modified version of schwarz s bayesian information rule .the optimal value of , denoted by , is such that where can be either fixed , equal to , or increasing such that .once is determined , the test statistic is applied with .more precisely , we use for our testing problem the statistic .hereafter , the asymptotic distribution of the test statistic is derived under the null hypothesis for cases where is fixed or unbounded .[ theo0 ] assume that is fixed . under ,when tends to infinity , converges in distribution to a random variable with 1 degree of freedom .the proof is fairly standard and follows ledwina ( 1994 ) .we will detail a more general proof in the case where is unbounded ( see theorem [ theo2 ] ) . in our simulations , we fixed large enough , in the sense that its value was neither reached by , either under the null ( for empirical level calculations ) or under alternatives ( for empirical power calculations ) .let us denote by and the probability and the expectation under . 
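before turning to the asymptotic regime in which the number of components grows with the sample size, it may help to see the whole construction at once. the sketch below implements the recursion defining the polynomials, the statistic and the schwarz-type selection rule, for the simple case of two independent samples with known gaussian noise moments; the synthetic poisson example, the function names and the choice of five components are illustrative and not taken from the paper.

```python
import numpy as np
from math import comb
from scipy.stats import chi2

def deconvolution_polynomials(x, z_mom, k):
    """P_1(x), ..., P_k(x) from the recursion P_i(x) = x^i - sum_{j<i} C(i, j) z_{i-j} P_j(x),
    with P_0 = 1 and z_mom[i] the known i-th raw moment of the noise (z_mom[0] = 1)."""
    p = [np.ones_like(x, dtype=float)]
    for i in range(1, k + 1):
        pi = x.astype(float) ** i
        for j in range(i):
            pi = pi - comb(i, j) * z_mom[i - j] * p[j]
        p.append(pi)
    return np.column_stack(p[1:])                      # an n x k matrix

def data_driven_smooth_test(x, u, z_mom, w_mom, K=5):
    """Test L(Y) = L(V) for X = Y + Z, U = V + W with known noise moments."""
    n = len(x)
    V = deconvolution_polynomials(x, z_mom, K) - deconvolution_polynomials(u, w_mom, K)
    T = []
    for k in range(1, K + 1):
        J = V[:, :k].sum(axis=0)                       # J_n(k)
        Sigma = V[:, :k].T @ V[:, :k] / n              # empirical covariance under H_0
        T.append(J @ np.linalg.solve(Sigma, J) / n)    # T_n(k), asymptotically chi^2(k)
    S = 1 + int(np.argmax([t - k * np.log(n) for k, t in enumerate(T, start=1)]))
    return S, T[S - 1], chi2.sf(T[S - 1], df=1)        # selected order, statistic, p-value

rng = np.random.default_rng(0)
n = 300
x = rng.poisson(2.0, n) + rng.normal(0.0, 1.0, n)      # X = Y + Z with Z ~ N(0, 1)
u = rng.poisson(2.5, n) + rng.normal(0.0, 1.0, n)      # U = V + W with W ~ N(0, 1)
gauss_mom = [1.0, 0.0, 1.0, 0.0, 3.0, 0.0]             # raw moments of N(0, 1) up to order 5
print(data_driven_smooth_test(x, u, gauss_mom, gauss_mom))
```

on data generated under the alternative (the two poisson means differ) the first component is typically selected and the null is rejected, while generating both samples with the same poisson mean gives p-values that are approximately uniform, in line with the empirical levels reported in section 4. with this picture in mind we return to the case where the number of components is allowed to grow with the sample size.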
write the smallest eigenvalue of .we now let tend to infinity under the following two conditions : ( a1 ) .( a2 ) there exists some positive constant such that for all , where . the condition ( a1 )can be compared to results obtained in the framework of random matrices .for instance , bai and yin ( 1993 ) ( see also silverstein , 1985 , for the particular gaussian case ) considered the case where the entries are independent and identically distributed with finite fourth moment ( this moment condition may be compared with ( a2 ) ) .they shown that almost surely when .then when the random series is bounded we get and can be chosen as .assumption ( a2 ) states that the fourth moment is bounded on average .it is similar to assumption 2 stated in ledoit and wolf ( 2004 ) .more precisely , ledoit and wolf used a condition on the eighth moment which is somewhat more restrictive .[ theo2 ] let assumptions ( a1 ) and ( a2 ) hold .thus , under , converges in distribution to a random variable with 1 degree of freedom . * proof * the proof is partly inspired by janic - wrblewska and ledwina ( 2000 ) .first note that the greatest eigenvalue of is the inverse of its smallest eigenvalue .then we have , where stands for the spectral norm . under ,it is clear that converges to a random variable with one degree of freedom .then we have to prove that tends to 1 as tends to infinity , or equivalently that tends to 0 .let us set .by definition of , we have using the standard norm s inequalities for matrices and vectors we get that we combine with markov inequality to obtain using the independence of the pairs , we get _ 0 ( j_n(k)^2)&=&_0 ( _ s=1^n_t=1^n v_s(k)v_t(k ) ) + & = & _ s=1^n _ 0(v_s(k)v_s(k ) ) + & = & _ 0(v_1(k)^2 ) .we now remark that _ 0(v_1(k)^2 ) & = & k ( _ i=1^k _ 0(z_i^2 ) ) + & & k ( _ i=1^k _ 0 ( z_i^2 ) ^2)^1/2 + & & k m^1/2 .finally , we have theorem [ theo2 ] obtains as soon as we have shown that is a decreasing sequence , which is clear since matrices are embedded by construction , that is , the submatrix obtained from the first lines and first columns of coincides ( in distribution ) with .finally , the test procedure is consistent against any alternative having the form h_1(q ) : q a_i = b_i , i=1 , , q-1 , and a_q b_q , where and are given by ( [ basic2 ] ) [ prop2 ] under , tends to infinity ( in probability ) as . *proof * first note that .we now prove that .for we have . by the law of large numbers, the variable converges in probability to a non - null vector .since is a positive definite matrix we have and the test statistics tends to in probability under . by similar arguments, tends to under .then for all .it follows that and then tends to as .it is well known that the sample covariance matrix performs poorly in the high dimensional setting . for applications in this context, we could change the sample covariance by a more suitable one . in ledoit and wolf ( 2004 ) a linear shrinkageis proposed , , where stands for the identity matrix and for the sample covariance ( like the one used in our paper ) . 
won et al .( 2009 ) proposed a non linear shrinkage for gaussian variance matrices .writing the sample covariance matrix , their estimator has the form , where the s are constrained estimated eigenvalues .another approach is the thresholding procedure proposed in cai and liu ( 2011 , see also el karoui , 2008).writing the sample covariance matrix , a universal thresholding estimator is with , with a proper choice of the threshold .cai and liu ( 2010 ) proposed the more adaptative thresholds , with tuning parameter and some fourth moment estimators s . in our problem should be the estimator of the variance of .[ [ models - and - alternatives ] ] models and alternatives + + + + + + + + + + + + + + + + + + + + + + + we present empirical powers of the test through several models. we will denote by the poisson distribution with mean , the normal distribution with mean and standard error , the binomial distribution on $ ] with probability of success , the chi - squared distribution with degree .we consider four models under and seven associated alternatives as follows : model mod1 : , , , . + alternative a11 : , .model mod2 : , , , .+ alternative a12 : , .model mod3 : , , , .+ alternative a13 : , .model mod4 : , , , .+ alternative a21 : and , + alternative a22 : and , + alternative a23 : and , + alternative a24 : and . for all models and alternativeswe consider i.i.d .data generated from two convolution models satisfying ( [ convol ] ) .it is assumed that and have known distribution . [[ empirical - levels ] ] empirical levels + + + + + + + + + + + + + + + + we compute the test statistic based on a sample size and for a theoretical level .the empirical level of the test is defined as the percentage of rejection of the null hypothesis over replications of the test statistic under the null hypothesis .we have fixed arbitrarily large enough since the selected order does not exceed in all our simulations .empirical levels are reported in table [ level ] for a fixed asymptotic level equal to 5% .it can be seen that all values are close to the asymptotic limit , also for small sample size ..empirical levels for mod1 , mod2 , mod 3 and mod4 with sample sizes [ cols="^,^,^,^,^",options="header " , ] these data have been explored assuming a marshall - olkin distribution in meinatis ( 2007 ) and with a bivariate generalized exponential distribution in kundu and gupta ( 2009 ) . in meintanis ( 2007 )the conclusion was that the champions - league data may well have arisen from a marshall - olkin distribution .in kundu and gupta ( 2009 ) the generalized exponential distribution can not be rejected for the marginals and the bivariate generalized exponential distribution can be used for these data .we consider here another model through contaminated poisson distributions .[ [ first - model ] ] first model + + + + + + + + + + + first we assume an additive noise [ model1 ] x = y+z & and & u = v+w , with and having poisson distributions and and being dependent random noise with .this model can be viewed as a mixed model with and as paired random effects .these effects can be considered as discrete or continuous as in meintanis ( 2007 ) or kundu and gupta ( 2009 ) .we assume that and have mean ( estimated ) and .the observed variances are larger than the means thereby believe there is a phenomenon of overdispersion . 
obviously under ( [ model1 ] )we have and .we apply our procedure to test the equality of the distributions of and .+ _ conclusion : _ the first statistic is retained and we obtain a p - value equal to . hence there is no evidence that the two additive paired random effects differ .[ [ second - model ] ] second model + + + + + + + + + + + + we also consider a multiplicative noise yielding to the following scale mixture [ model2 ] x = yz & and & u = vw , with and having poisson distributions and and being real positive dependent random scale factor with . again this model can be viewed as a mixed model with random paired effects .the observed values are discretized but we can assume that and are discrete or continuous .we assume that and have mean ( estimated ) and and there is still a phenomenon of overdispersion assuming it is a standard poisson model . under ( [ model2 ] )the variances satisfy and .our purpose is to test , or equivalently . forthat we consider the transformation of ( [ model2 ] ) ( z ) = ( y ) + ( z ) & and & ( u ) = ( v)+(w ) .using our method we obtain a p - value equal to .again we see that the multiplicative paired random effects seem to have the same distribution .this paper discusses the problem of comparing two distributions contaminated by different noises .the test is very simple and allows to compare two independent as well as two paired contaminated samples .simulation studies suggest that the proposed method works well with an empirical level close to that expected .it may be noted that the test statistic is decomposed into moments of and .then it is clear that only the knowledge of the moments of an are required instead of their distributions .hence the test could be adapted when these distributions are unknown , if their moments can be estimated from independent samples .eventually , the multivariate case could be envisaged by using the following characteristic property : if and are two random vectors taking values in then we have h_0 : y = ^d v & & u1 , uy = ^d uv , and clearly multidimensional observations can be transformed into unidimensional ones by applying a sequence of vectors on and . for a fixed value of the problem consists in an univariate test and the statistic can be used . denoting by this statisticthe process converges to a gaussian process and a new test statistic can be envisaged by estimating the covariance operator of the process to get a null distribution . in practice the sequences of vectors can be randomly chosen in , but it can also be done by a quasi monte carlo method ( see for instance lecuyer , 2006 ) . to conclude , the multisample case can also be envisaged as follows : assume that we have convolutions simultaneously x(i ) = y(i ) + z(i ) , & & i=1 , , d observed from samples .write and the common value under the null hypothesis . then under the vector with components is centered and normally distributed .an adaptation of the data driven smooth test seems then possible .
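as a rough illustration of the random-projection route to the multivariate case described above, the sketch below assumes a user-supplied routine proj_noise_moments that returns the raw moments of the projected noises for a unit direction, and it can reuse the univariate routine sketched earlier; the bonferroni aggregation over directions is a simplification introduced here, not the covariance-operator construction suggested in the text.

```python
import numpy as np

def random_projection_test(x, u_obs, proj_noise_moments, univariate_test,
                           n_dir=50, seed=0):
    """Heuristic multivariate extension: project the contaminated observations on
    random unit directions and apply a univariate test (e.g. the
    data_driven_moment_test sketched earlier) to each projection.
    proj_noise_moments(a) must return the raw moments of a'Z and a'W, which are
    determined by the known noise laws."""
    x, u_obs = np.asarray(x, float), np.asarray(u_obs, float)
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    pvals = []
    for _ in range(n_dir):
        a = rng.standard_normal(d)
        a /= np.linalg.norm(a)                      # uniform direction on the sphere
        z_mom, w_mom = proj_noise_moments(a)
        _, _, p = univariate_test(x @ a, u_obs @ a, z_mom, w_mom)
        pvals.append(p)
    return min(1.0, n_dir * min(pvals))             # Bonferroni-adjusted p-value
```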
|
in this paper we consider the problem of testing whether two samples of contaminated data , possibly paired , are from the same distribution . it is assumed that the contaminations are additive noises with known moments of all orders . the test statistic is based on the polynomial moments of the difference between observations and noises . a data-driven selection is proposed to choose automatically the number of involved polynomials . we present a simulation study in order to investigate the power of the proposed test within discrete and continuous cases . a real - data example is presented to demonstrate the method . * keywords * contaminated data ; data-driven ; two sample test
|
the objective of a classification problem is to classify a subject to one of several classes based on a -dimensional vector of characteristics observed from the subject . in most applications ,variability exists , and hence is random . if the distribution of is known , then we can construct an optimal classification rule that has the smallest possible misclassification rate . however , the distribution of is usually unknown , and a classification rule has to be constructed using a training sample .a statistical issue is how to use the training sample to construct a classification rule that has a misclassification rate close to that of the optimal rule . in traditional applications , the dimension of fixed while the training sample size is large . because of the advance in technologies , nowadays a much larger amount of information can be collected , and the resulting is of a high dimension .in many recent applications , is much larger than the training sample size , which is referred to as the large--small- problem or ultra - high dimension problem when for some .an example is a study with genetic or microarray data . in our examplepresented in section [ sec5 ] , for instance , a crucial step for a successful chemotherapy treatment is to classify human cancer into two classes of leukemia , acute myeloid leukemia and acute lymphoblastic leukemia , based on genes and a training sample of 72 patients .other examples include data from radiology , biomedical imaging , signal processing , climate and finance .although more information is better when the distribution of is known , a larger dimension produces more uncertainty when the distribution of is unknown and , hence , results in a greater challenge for data analysis since the training sample size can not increase as fast as .the well - known linear discriminant analysis ( lda ) works well for fixed--large- situations and is asymptotically optimal in the sense that , when increases to infinity , its misclassification rate over that of the optimal rule converges to one .in fact , we show in this paper that the lda is still asymptotically optimal when diverges to infinity at a rate slower than . on the other hand, showed that the lda is asymptotically as bad as random guessing when ; some similar results are also given in this paper .the main purpose of this paper is to construct a sparse lda and show it is asymptotically optimal under some sparsity conditions on unknown parameters and some condition on the divergence rate of ( e.g. , as ) . our proposed sparse lda is based on the thresholding methodology , which was developed in wavelet shrinkage for function estimation [ , donoho et al .( ) ] and covariance matrix estimation [ ] .there exist a few other sparse lda methods , for example , , and .the key differences between the existing methods and ours are the conditions on sparsity and the construction of sparse estimators of parameters .however , no asymptotic results were established in the existing papers . for high - dimensional in regression ,there exist some variable selection methods [ see a recent review by ] .for constructing a classification rule using variable selection , we must identify not only components of having mean effects for classification , but also components of having effects for classification through their correlations with other components [ see , e.g. , , ] .this may be a very difficult task when is much larger than , such as and in the leukemia example in section [ sec5 ] . 
ignoring the correlation , proposed the features annealed independence rule ( fair ) , which first selects components of having mean effects for classification and then applies the naive bayes rule ( obtained by assuming that components of are independent ) using the selected components of only . although no sparsity condition on the covariance matrix of is required , the fair is not asymptotically optimal because the correlation between components of is ignored .our approach is not a variable selection approach , that is , we do not try to identify a subset of components of with a size smaller than .we use thresholding estimators of the mean effects as well as bickel and levina s ( ) thresholding estimator of the covariance matrix of , but we allow the number of nonzero estimators ( for the mean differences or covariances ) to be much larger than to ensure the asymptotic optimality of the resulting classification rule . the rest of this paper is organized as follows . in section [ sec2 ] , after introducing some notation and terminology , we establish a sufficient condition on the divergence of under which the lda is still asymptotically close to the optimal rule .we also show that , when is large compared with ( ) , the performance of the lda is not good even if we know the covariance matrix of , which indicates the need of sparse estimators for both the mean difference and covariance matrix . our main result is given in section [ sec3 ] , along with some discussions about various sparsity conditions and divergence rates of for which the proposed sparse lda performs well asymptotically. extensions of the main result are discussed in section [ sec4 ] . in section [ sec5 ], the proposed sparse lda is illustrated in the example of classifying human cancer into two classes of leukemia , along with some simulation results for examining misclassification rates .all technical proofs are given in section [ sec6 ] .we focus on the classification problem with two classes .the general case with three or more classes is discussed in section [ sec4 ] .let be a -dimensional normal random vector belonging to class if , , where , and is positive definite .the misclassification rate of any classification rule is the average of the probabilities of making two types of misclassification : classifying to class 1 when and classifying to class 2 when .if , and are known , then the optimal classification rule , that is , the rule with the smallest misclassification rate , classifies to class 1 if and only if , where , , and denotes the transpose of the vector .this rule is also the bayes rule with equal prior probabilities for two classes .let denote the misclassification rate of the optimal rule . using the normal distribution, we can show that where is the standard normal distribution function .although , if as and if .since is the misclassification rate of random guessing , we assume the following regularity conditions : there is a constant ( not depending on ) such that and where is the component of . under ( [ conds])([condd ] ) , , and hence , and so that the rate of is the same as the rate of , where is the -norm of the vector . 
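the following python sketch illustrates the two rules discussed above: the oracle (fisher) rule with known parameters together with its misclassification rate, and a thresholded plug-in version in the spirit of the proposed sparse lda. the thresholding levels t_mean and t_cov, the pooled covariance with denominator n1+n2-2, and the use of a pseudo-inverse are illustrative assumptions; the paper specifies the precise constants and conditions.

```python
import numpy as np
from scipy.stats import norm

def lda_rule(delta, sigma_inv, mu_bar):
    """Fisher rule: classify x to class 1 iff delta' Sigma^{-1} (x - mu_bar) >= 0."""
    w = sigma_inv @ delta
    return lambda x: 1 if w @ (np.asarray(x, float) - mu_bar) >= 0 else 2

def optimal_rate(delta, sigma_inv):
    """Misclassification rate of the optimal rule: Phi(-sqrt(delta' Sigma^{-1} delta)/2)."""
    return norm.cdf(-0.5 * np.sqrt(delta @ sigma_inv @ delta))

def sparse_lda(x1, x2, t_mean, t_cov):
    """Thresholded plug-in LDA built from training samples x1 (n1 x p), x2 (n2 x p).
    t_mean and t_cov are hard-thresholding levels, typically of order
    sqrt(log p / n) up to constants; the precise choices are given in the paper."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, p = x1.shape
    n2 = x2.shape[0]
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    delta_hat = mu1 - mu2
    delta_hat[np.abs(delta_hat) < t_mean] = 0.0          # sparsify the mean difference
    S = ((x1 - mu1).T @ (x1 - mu1) + (x2 - mu2).T @ (x2 - mu2)) / (n1 + n2 - 2)
    small = np.abs(S) < t_cov
    np.fill_diagonal(small, False)                       # never threshold the diagonal
    S[small] = 0.0                                       # Bickel-Levina style thresholding
    return lda_rule(delta_hat, np.linalg.pinv(S), (mu1 + mu2) / 2.0)
```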
in practice , and are typically unknown , and we have a training sample , where is the sample size for class , , , all s are independent and is independent of to be classified .the limiting process considered in this paper is the one with .we assume that converges to a constant strictly between 0 and 1 ; is a function of , but the subscript is omitted for simplicity .when , may diverge to , and the limit of may be 0 , a positive constant , or . for a classification rule constructed using the training sample , its performance can be assessed by the conditional misclassification rate defined as the average of the conditional probabilities of making two types of misclassification , where the conditional probabilities are with respect to , given the training sample .the unconditional misclassification rate is ] .4 . consider the case where , and an ultra - high dimension , that is , for a . from the previous discussion ,condition ( [ cond1 ] ) holds if , and ( [ cond2 ] ) holds if . since , , which converges to 0 if . if is bounded , then is sufficient for condition ( [ bn ] ) .if , then the largest divergence rate of is and ( i.e. , the slda is asymptotically optimal ) when .when , this means .if the divergence rate of is smaller than then we can afford to have a larger than divergence rate for and .for example , if for a and for a and a positive constant , then diverges to at a rate slower than .we now study when condition ( [ cond1 ] ) holds .first , , which converges to 0 if .second , , which converges to 0 if is chosen so that ] .thus , condition ( [ cond1 ] ) holds if and < \alpha\leq(1-\gamma)/[2(1-g)] ] ( corresponds to a bounded ) .then , a similar analysis leads to the conclusion that condition ( [ cond2 ] ) holds if and < \alpha\leq[1-(1+\rho)\gamma]/[2(1-g)] ] and ^ 2 \leq\zetap e [ ( \hat\bfdelta- \bfdelta ) ' \bfsigma^{-1 } ( \hat\bfdelta- \bfdelta)] ] .since is bounded by a constant , the result follows from the fact that is bounded away from 0 when is bounded .when , , and , by the result in ( i ) , .if , then , by lemma [ lem1 ] and the condition , we conclude that .proof of lemma [ lem1 ] it follows from result ( [ normal ] ) that /2 } & \leq & \frac{\phi ( - \sqrt{\xi_n } ( 1-\tau_n ) ) } { \phi ( - \sqrt{\xi_n } ) } \\ & \leq & \frac{1 + \xi_n}{\xi_n ( 1 - \tau_n ) } e^{[\xi_n - \xi_n(1-\tau_n)^2]/2 } .\end{aligned}\ ] ] since and , the result follows from /2 = \xi_n \tau_n ( 1 - \tau_n/2 ) \rightarrow\gamma$ ] regardless of whether is 0 , positive , or .proof of theorem [ thm2 ] for simplicity , we prove the case of .the conditional misclassification rate of the lda in this case is given by ( [ rate ] ) with replaced by .note that , where is the identity matrix of order .let be the component of .then , and the component of is , and the component of is , , where , , , are independent standard normal random variables .consequently , \\ % & = & \sum_{j=1}^p \biggl ( - \frac{\zeta_j^2}{2 } + \frac{\varep_{1j}^2 - \varep_{2j}^2}{n } + \frac{\zeta_j \varep_{2j}}{\sqrt{n_1 } } \biggr)\\ & = & - \frac{\zetap}{2 } + \frac{1}{n } \sum_{j=1}^p ( \varep_{1j}^2 - \varep_{2j}^2 ) + \frac{1}{\sqrt{n_1 } } \sum_{j=1}^p \zeta_j \varep_{2j}\\ & = & - \frac{\zetap}{2 } + o_p \biggl ( \frac{\sqrt{p}}{n } \biggr ) + o_p \biggl ( \frac { \zetaps}{\sqrt{n } } \biggr)\end{aligned}\ ] ] and + o_p \biggl ( \frac { \zetaps}{\sqrt{n } } \biggr)\\ & = & \zetap+ \frac{4p}{n } [ 1 + o_p(1 ) ] , \end{aligned}\ ] ] where the last equality follows from under ( [ conds])([condd ] ) . 
combining these results , we obtain that } } + o_p(1).\ ] ] similarly , we can prove that ( [ r1 ] ) still holds if is replaced by . if , then the quantity in ( [ r1 ] ) converges to 0 in probability .hence , . since , .then , the quantity in ( [ r1 ] ) converges to in probability and , hence , , which is a constant between 0 and . since , and , hence , .when , it follows from ( [ r1 ] ) that the quantity on the left - hand side of ( [ r1 ] ) diverges to in probability .this proves that . to show , we need a more refined analysis .the quantity on the left - hand side of ( [ r1 ] ) is equal to } } = - \frac{\zetaps}{2 } ( 1-\tau_n ) , \ ] ] where } } \ ] ] and .note that } } \\ & = & \frac{({4p}/{n } ) [ 1 + o_p(1 ) ] } { \zetap+ ( { 4p}/{n } ) [ 1 + o_p(1 ) ] + \zetaps\sqrt{\zetap+ ( { 4p}/{n } ) [ 1 + o_p(1 ) ] } } \end{aligned}\ ] ] and } } = \frac { o_p ( \sqrt{p}/n)}{\zetap } + \frac{o_p ( 1/\sqrt{n } ) } { \zetaps}\\ & = & \frac{o_p ( \sqrt{p / n } ) } { \zetap}\end{aligned}\ ] ] under ( [ conds ] ) and ( [ condd ] ). then if is bounded , then for a constant and which diverges to in probability since . if , then for a constant and which diverges to in probability since .thus , in probability , and the result follows from lemma [ lem1 ] .proof of lemma [ lem2 ]( i ) it follows from ( [ normal ] ) that , for all , where and are positive constants .then , the probability in ( [ prob1 ] ) is because when , we conclude that , and thus ( [ prob1 ] ) holds .the proof of ( [ prob2 ] ) is similar since ( ii ) the result follows from results ( [ prob1 ] ) and ( [ prob2 ] ) .proof of theorem [ thm3 ] the conditional misclassification rate is given by from result ( [ bl ] ) , = \tilde\bfdelta { } ' \bfsigma^{-1 } \tilde\bfdelta[1+o_p(d_n ) ] .\ ] ] without loss of generality , we assume that , where is the -vector containing nonzero components of .let , where has dimension . from lemma [ lem2](ii ) , and , with probability tending to 1 , let .. this together with ( [ conds])([condd ] ) implies that and hence \\ & = & \zetap\bigl [ 1 + o_p \bigl(\sqrt{k_n}/\zetaps\bigr ) \bigr].\end{aligned}\ ] ] write where , , and are matrices with defined in lemma [ lem2](ii ) .then if and , where and have dimension , then since has dimension , and hence since , from result ( [ bl ] ) , .\ ] ] under condition ( [ conds ] ) , all eigenvalues of sub - matrices of and are bounded by . repeatedly using condition ( [ conds ] ) , we obtain that where and are given in ( [ cp ] ) .this proves that which also holds when is replaced by or .note that therefore , \\& = & - \frac{\zetaps}{2 } \biggl [ 1 + o_p \biggl ( \frac{\sqrt { c_{h , p}q_n } } { \zetaps\sqrt{n } } \biggr ) \\ & & \hphantom{- \frac{\zetaps}{2 } \biggl [ } { } + o_p \biggl(\frac{\sqrt{k_n}}{\zetaps } \biggr ) + o_p(d_n ) \biggr]\\ & = & - \frac{\zetaps}{2 } [ 1+o_p ( b_n ) ] .\end{aligned}\ ] ] this proves the result in ( i ) .the proofs of ( ii)(iv ) are the same as the proofs for theorem [ thm1](ii)(iv ) with replaced by .this completes the proof .the authors would like to thank two referees and an associate editor for their helpful comments and suggestions , and dr .weidong liu for his help in correcting an error in the proof of theorem 3 .
|
in many social , economic , biological and medical studies , one objective is to classify a subject into one of several classes based on a set of variables observed from the subject . because the probability distribution of the variables is usually unknown , the rule of classification is constructed using a training sample . the well - known linear discriminant analysis ( lda ) works well for the situation where the number of variables used for classification is much smaller than the training sample size . because of the advance in technologies , modern statistical studies often face classification problems with the number of variables much larger than the sample size , and the lda may perform poorly . we explore when and why the lda has poor performance and propose a sparse lda that is asymptotically optimal under some sparsity conditions on the unknown parameters . for illustration of application , we discuss an example of classifying human cancer into two classes of leukemia based on a set of 7,129 genes and a training sample of size 72 . a simulation is also conducted to check the performance of the proposed method .
|
consider the scenario where correlated sources are compressed by sensors in a distributed manner and then forwarded to a fusion center for joint reconstruction .this is exactly the classical distributed data compression problem , for which slepian - wolf coding is known to be rate - optimal .however , to attain this best compression efficiency , encoding at each sensor is performed under the assumption that all the other sensors are functioning properly ; as a consequence , inactivity of one or more sensors typically leads to a complete decoding failure at the fusion center .alternatively , each sensor can compress its observation using conventional point - to - point data compression methods without capitalizing the correlation among different sources so that the maximum system robustness can be achieved . in view of these two extreme cases, a natural question arises whether there exists a tradeoff between compression efficiency and system robustness in distributed data compression .one approach to realize this tradeoff is as follows .specifically , we decompose each into components , ordered according to their importance , and encode them in such a way that , given the outputs from any sensors , the fusion center can reconstruct the first components of each of the corresponding sources .the aforedescribed two extreme cases correspond to and , respectively .one can realize a flexible tradeoff between compression efficiency and system robustness by adjusting the amount of information allocated to different components .we shall refer to this problem as distributed multilevel diversity coding ( d - mldc ) since it reduces to the well - known ( symmetrical ) multilevel diversity coding ( mldc ) problem when almost surely for all .the concept of mldc was introduced by roche and more formally by yeung though research on diversity coding can be traced back to singleton s work on maximum distance separable codes .the symmetric version of this problem has received particular attention , and arguably the culminating achievement of this line of research is the complete characterization of the admissible rate region of symmetrical mldc by yeung and zhang .some recent developments related to mldc can be found in .the goal of the present paper to characterize the performance limits of d - mldc , which , we hope , may provide some useful insights into the tradeoff between compression efficiency and system robustness in distributed data compression .more fundamentally , we aim to examine the principle of superposition in the context of d - mldc .although superposition ( or more generally , layering ) is a common way to construct sophisticated schemes based on simple building blocks and often yields the best known achievability results , establishing the optimality of such constructions is rarely straightforward , especially when encoding is performed in a distributed manner .in fact , even for the centralized encoding setup studied in , the proof of the optimality of superposition is already highly non - trivial .this difficulty can be partly attributed to the fact that it is often a technically formidable task to extract layers from a generic scheme using information inequalities in a converse argument , even in cases where the use of layered constructions may appear rather natural . 
from this perspective, our work can be viewed as an initial step towards a better understanding of layered schemes for distributed compression of correlated sources .we shall propose a multilayer slepian - wolf coding scheme based on binning and superposition , and establish its optimality for d - mldc when .this scheme is also shown to be optimal for general under a certain symmetry condition , which generalizes the aforementioned result by yeung and zhang on symmetrical mldc .the main technical difficulty encountered in our proof is that it appears to be infeasible to characterize the admissible rate region of d - mldc by deriving inner and outer bounds separately and then making a direct comparison based on their explicit expressions . to circumvent this difficulty, we follow the approach in , where the analysis of the inner bound and that of the outer bound are conceptually intertwined .specifically , we analyze certain linear programs associated with the achievable rate region of the proposed scheme and leverage the induced lagrange multipliers to establish the entropy inequalities that are needed for a matching converse .since the problem considered here is more general than that in , the relevant linear programs and entropy inequalities are inevitably more sophisticated .it is worth mentioning that , in a broad sense , the strategy of determining an information - theoretic limit by connecting achievability and converse results to a common optimization problem ( not necessarily linear ) via duality has find applications far beyond mldc ( see , e.g. , ) .the rest of this paper is organized as follows .we state the basic definitions and the main results in section [ sec : formula ] .section [ sec : prove ] contains a high - level description of our general approach .the detailed proofs can be found in sections [ sec : main3 ] and [ sec : prove_sym ] .we conclude the paper in section [ sec : conclude ] ._ notation : _ random vectors and are sometimes abbreviated as and , respectively . for two integers , we define \triangleq\left\ { x\in \mathbb{z}:{{x}_{1}}\le x\le { { x}_{2 } } \right\} ] , let .we often do not distinguish between a singleton and its element .let } ] , be vector sources .we assume that ,\alpha} ] , are mutually independent whereas , for each , the components of ,\alpha} ] ) can be arbitrarily correlated .let ,\left[1:k\right]}(t)\right\}_{t=1}^\infty ] .an \right ) \right) ] ) maps the source sequence }^{n} ] , i.e. , , \\ u_{k,\left [ 1:k \right]}^{n } & \mapsto \quad { { s}_{k}},\end{aligned}\ ] ] 2 . decoders , where decoder ( ] , denoted by }^{n} ] is said to admissible if , for any , there exists an \right ) \right) ] , encoder ( ] with if \right)\in{{\mathcal{r}}_{k,\alpha } } ] ) generates its output by combining the bin indices associated with , ] is symmetrical entropy - wise if for all ] with .it is worth noting that the symmetrical mldc problem studied in corresponds to the special case where for all ] .[ th : main_sym ] if the distribution of ,\left [ 1:k \right]}} ] ) via the analysis of the corresponding linear programs ; 2 .establish a class of entropy inequalities based on the lagrange multipliers induced by the aforementioned linear programs ; 3 . derive a tight outer bound on by leveraging these entropy inequalities .each supporting hyperplane of is associated with a linear program , \\ \operatorname{s.t . 
}\quad & \sum\limits_{k\in v}{{{r}_{k,\alpha } } } \ge h\left ( { { u}_{v,\alpha } } |{{u}_{{v'},\alpha } } \right),\quad v\in { { \mathbb{v}}_{k,\alpha } } , v'\in { \mathbb{v}'_{k,\alpha } } \left [ v \right],\end{aligned}\ ] ] where \right)\in \mathbb{r}_{+}^{k} ] , are ordered . for this reason , we define moreover , to facilitate subsequent analysis , we introduce the following partition , we have , ] .therefore , indeed form a partition of .] of for each ] is an optimal lagrange multiplier of if and only if it is an optimal solution to the ( asymmetric ) dual problem of . ] \right) ] and optimal lagrange multiplier \right) ] is an optimal solution and is the unique optimal lagrange multiplier , where ,\\ & { { c}_{\{k\}|\emptyset,1}}\triangleq{{w}_{k}},\quad k\in \left [ 1:k \right].\end{aligned}\ ] ] [ lem : lpkk ] for linear program with , \right) ] is an optimal lagrange multiplier , where ,k } } \right),\quad k\in \left [ 1:k \right],\\ & { { c}_{\left [ 1:k \right]|\left[k+1:k\right],k } } \triangleq { { w}_{k}}-{{w}_{k+1}},\quad k\in \left [ 1:k \right ] , \\ & { { c}_{v|[1:k]\backslash v , k}}\triangleq 0,\quad \text{otherwise}.\end{aligned}\ ] ] the general case can be reduced to the case via suitable relabelling .in this step we aim to establish a class of entropy inequalities needed for a matching converse by exploiting the properties of optimal lagrange multipliers of , ] , ] and .the following lemma indicates that ( [ eq : star ] ) always holds when .[ lem : ineq2 ] let and \right) ] . according to ( [ eq : connection ] ) , }{{{c}_{v|{v}',\alpha } } } } , \quad k\in\left[1:k\right].\label{eq : identity}\end{aligned}\ ] ] it can be verified that }{{{c}_{v|{v}',\alpha } } } } h(x_k)\label{eq : invokeid}\\ & = \sum\limits_{v\in { { \mathbb{v}}_{k,\alpha } } } { \sum\limits_{{v}'\in { { \mathbb{v}}'_{k,\alpha } } \left [ v \right]}{{{c}_{v|{v}',\alpha } } } } \sum\limits_{k\in v}h(x_k)\nonumber\\ & \geq\sum\limits_{v\in { { \mathbb{v}}_{k,\alpha } } } { \sum\limits_{{v}'\in { { \mathbb{v}}'_{k,\alpha } } \left [ v \right]}{{{c}_{v|{v}',\alpha } } } } h(x_{v})\nonumber\\ & \geq\sum\limits_{v\in { { \mathbb{v}}_{k,\alpha } } } { \sum\limits_{{v}'\in { { \mathbb{v}}'_{k,\alpha } } \left [ v \right]}{{{c}_{v|{v}',\alpha } } } } h(x_{v}|x_{v'}),\nonumber\end{aligned}\ ] ] where ( [ eq : invokeid ] ) is due to ( [ eq : identity ] ) .this completes the proof of lemma [ lem : ineq2 ] .as shown by the following lemma , the existence of entropy inequalities ( [ eq : star ] ) implies that is an outer bound of .[ lem : outer ] if for any , there exist optimal lagrange multipliers \right) ] , such that ( [ eq : star ] ) holds , then let \right) ] .note that ,1}\right)-n\delta_{\epsilon},\quad v\in { { \mathbb{v}}_{k,1}},\label{eq : sub1}\end{aligned}\ ] ] where ( [ eq : fano ] ) follows by ( [ eq : constraint2 ] ) and fano s inequality . 
substituting ( [ eq : sub1 ] ) into ( [ eq : invokelemma1 ] ) and invoking ( [ eq : opt ] ) proves ( [ eq : induction ] ) for .now assume that ( [ eq : induction ] ) holds for .in view of ( [ eq : star ] ) , we have }{{{c}_{v|{v'},b-1}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)}}\nonumber\\ & \ge \sum\limits_{v\in { { \mathbb{v}}_{k , b}}}{\sum\limits_{{v'}\in { \mathbb{v}'_{k , b}}\left [ v \right]}{{{c}_{v|{v'},b}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)}}.\label{eq : firstcont}\end{aligned}\ ] ] it can be verified that ,\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)\nonumber\\ & = h\left ( u^n_{v , b},{{s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)-h\left ( u^n_{v , b}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v}},{{s}_{v ' } } \right)\nonumber\\ & \geq h\left ( u^n_{v , b},{{s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)-n\delta_{\epsilon},\quad v\in { { \mathbb{v}}_{k , b } } , { v'}\in { \mathbb{v}'_{k , b } } , \label{eq : fanofano}\end{aligned}\ ] ] where ( [ eq : fanofano ] ) follows by ( [ eq : constraint2 ] ) and fano s inequality .moreover , ,\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)&\geq h\left ( u^n_{v , b}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v ' } } \right)+h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n},{{s}_{v ' } } \right)\nonumber\\ & \geq h\left ( u^n_{v , b}|u^n_{v',b } \right)+h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n},{{s}_{v ' } } \right)\label{eq : markov}\\ & = h\left ( u_{v , b}|u_{v',b } \right)+h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n},{{s}_{v ' } } \right),\label{eq : fanothird}\end{aligned}\ ] ] where ( [ eq : markov ] ) is due to the fact that ,\left [ 1:b \right]}^{n},{{s}_{v'}}) ] in table [ tab : lp32lag ] for . ] is an optimal lagrange multiplier .the general case can be reduced to the case via suitable relabelling .the next result shows that ( [ eq : star ] ) holds when , which , together with ( [ eq : innerbound ] ) and lemma [ lem : outer ] , completes the proof of theorem [ th : main3 ] .[ lem : ineq3 ] let , \right) ] . note that ( [ eq : leftinequality ] ) follows from lemma [ lem : ineq2 ] .the proof of ( [ eq : rightinequality ] ) is relegated to appendix [ app : rightinequality ] .the proof of theorem [ th : main_sym ] also largely follows the general approach outlined in section [ sec : prove ] .however , due to the symmetry assumption , some simplifications are possible .when the distribution of ,\left [ 1:k \right]}} ] only through ; for this reason , we shall denote it as and rewrite in the following simpler form , \\ \operatorname{s.t . }\quad & \sum\limits_{k\in v}{{{r}_{k,\alpha } } } \ge { { h}_{\left| v \right|,\alpha } } , \quad v\in { { \mathbb{v}}_{k,\alpha } } .\end{aligned}\ ] ] [ def : lagrangesymmetric ] we say is an optimal lagrange multiplier of with if ,\label{eq : lag_sym_w } \\ & { { c}_{v,\alpha } } \ge 0,\quad v\in { { \mathbb{v}}_{k,\alpha } } , \label{eq : lag_sym_p}\end{aligned}\ ] ] where denotes the optimal value of . for ] .see appendix [ app : opt_in_region ] . for ] , define :\left| v \right|=\alpha , \left [ 1:l \right]\subseteq v \right\}.\end{aligned}\ ] ] recall that form a partition of . for and ] such that ; it is easy to verify that moreover , for and ] is an optimal solution , and every is an optimal lagrange multiplier . 
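to see how the lagrange multipliers of these linear programs can be inspected numerically, here is a toy sketch that solves min sum_k w_k R_k subject to constraints of the form sum_{k in V} R_k >= h_V with scipy's HiGHS backend and reads off the dual variables; the subsets and entropy values in the example are placeholders supplied for illustration, not quantities taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def min_weighted_sum_rate(weights, constraints, K):
    """Solve  min sum_k w_k R_k  subject to  sum_{k in V} R_k >= h_V  for each
    (V, h_V) in `constraints` and R_k >= 0; return the optimal rates, the
    optimal value, and the dual variable attached to each constraint (the role
    played by the multipliers c_{V,alpha} in the analysis above)."""
    A_ub = np.zeros((len(constraints), K))
    b_ub = np.zeros(len(constraints))
    for i, (V, h) in enumerate(constraints):
        for k in V:
            A_ub[i, k] = -1.0                    # encode -sum_{k in V} R_k <= -h_V
        b_ub[i] = -h
    res = linprog(c=np.asarray(weights, float), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * K, method="highs")
    # With the HiGHS backend the duals of upper-bound constraints are
    # non-positive for a minimization, so their negatives give the usual
    # non-negative Lagrange multipliers.
    return res.x, res.fun, -res.ineqlin.marginals

# Toy example with K = 3 encoders and placeholder entropy values (0-indexed subsets).
subsets = [((0,), 1.0), ((1,), 1.0), ((2,), 1.0),
           ((0, 1), 1.8), ((0, 2), 1.8), ((1, 2), 1.8)]
rates, value, duals = min_weighted_sum_rate([3.0, 2.0, 1.0], subsets, K=3)
```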
in view of lemma [ lem : opt_in_region ] , we have \right)\in { { \mathcal{r}}_{k,\alpha } } ] , therefore , \right) ] .see appendix [ app : ineq_sym_core ] .[ lem : ineq_sym ] for any , there exist , ] and . by symmetry, it suffices to consider .we shall first assume , which implies , ] , such that for all } ] , with and with it is easy to verify that such , ] . ] .the following result , together with ( [ eq : innerbound ] ) and lemma [ lem : ineq_sym ] , completes the proof of theorem [ th : main_sym ] .[ lem : symmouter ] if any , there exist , ] is symmetrical entropy - wise .let \right) ] is symmetrical entropy - wise , for any d - mldc system satisfying ( [ eq : constraint1 ] ) and ( [ eq : constraint2 ] ) , ,\beta } } h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:\beta \right]}^{n},u_{\left [ k+1:\beta \right],k}^{n } \right ) } \nonumber \\ & \quad + \frac{1}{n}\sum\limits_{v\in \omega _ { k,\beta } ^{\left ( { { l}^\mathbf{w}_{\beta } } \right)}}{{{c}_{v,\beta } } h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:\beta \right]}^{n } \right)}-\beta{{\delta } _ { \epsilon } } \sum\limits_{k=1}^{k}{{{w}_{k } } } , \quad \beta \in \left [ 1:k \right ] , \label{eq : th_sym_induce}\end{aligned}\ ] ] where tends to zero as .one can deduce ( [ eq : supportyhypersym ] ) from ( [ eq : th_sym_induce ] ) by setting and sending .the proof of ( [ eq : th_sym_induce ] ) for is the same as that of ( [ eq : induction ] ) .now assume that ( [ eq : th_sym_induce ] ) holds for .in view of ( [ eq : symmetricineq ] ) , we have ,\left [ 1:b-1 \right]}^{n } \right ) } \nonumber \\ & \ge \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{\left ( { { c}_{\left [ 1:k \right],b}}-{{c}_{\left [ 1:k \right],b-1}}\mathcal{i}\left\ { k\le { { l}^\mathbf{w}_{b-1 } } \right\ } \right)h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right ) } \nonumber \\ & \quad + \sum\limits_{v\in \omega _ { k , b}^{\left ( { { l}^\mathbf{w}_{b } } \right)}}{{{c}_{v , b}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right ) } , \label{eq : th_sym_induce1}\end{aligned}\ ] ] where ,b}}-{{c}_{\left [ 1:k \right],b-1}}\mathcal{i}\left\ { k\le { { l}^\mathbf{w}_{b-1 } } \right\}\geq 0 ] , according to ( [ eq : lag_sym_p_reprise ] ) , ( [ eq : moved ] ) , and the fact that ,b-1}}={{c}_{\left [ 1:k \right],b}} ] .moreover , ,b}}-{{c}_{\left [ 1:k \right],b-1}}\mathcal{i}\left\ { k\le { { l}^\mathbf{w}_{b-1 } } \right\ } \right)h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right ) } \nonumber \\ & \ge \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{\left ( { { c}_{\left [ 1:k \right],b}}-{{c}_{\left [ 1:k \right],b-1}}\mathcal{i}\left\ { k\le { { l}^\mathbf{w}_{b-1 } } \right\ } \right)h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right ) } \nonumber \\ & = \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{{{c}_{\left [ 1:k \right],b}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right ) } \nonumber \\ & \quad - \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b-1}}}{{{c}_{\left [ 1:k \right],b-1}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right)}\nonumber \\ & \geq \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{{{c}_{\left [ 1:k \right],b}}h\left ( { 
{ s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right ) } \nonumber \\ & \quad - \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b-1}}}{{{c}_{\left [ 1:k \right],b-1}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b-1 \right],\left [ b-1:k \right]}^{n } \right)}. \label{eq : th_sym_induce2}\end{aligned}\ ] ] combining ( [ eq : th_sym_induce1 ] ) and ( [ eq : th_sym_induce2 ] ) gives ,b-1}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b-1 \right],\left [ b-1:k \right]}^{n } \right)}+\sum\limits_{v'\in \omega _ { k , b-1}^{\left ( { { l}^\mathbf{w}_{b-1 } } \right)}}{{{c}_{v',b-1}}h\left ( { { s}_{v'}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right ) } \nonumber \\ & \ge \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{{{c}_{\left [ 1:k \right],b}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right)}+\sum\limits_{v\in \omega _ { k , b}^{\left ( { { l}^\mathbf{w}_{b } } \right)}}{{{c}_{v , b}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right)}.\label{eq : tbconti } \ ] ] note that }}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right)\nonumber\\ & = h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)\nonumber\\ & = h\left ( { { s}_{\left [ 1:b \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)\nonumber\\ & = h\left(u_{\left [ 1:b \right],\left [ 1:b \right]}^{n } , { { s}_{\left [ 1:b \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)\nonumber\\ & \quad - h\left(u_{\left [ 1:b \right],\left [ 1:b \right]}^{n}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ 1:b \right ] } } \right)\nonumber\\ & \ge h\left(u_{\left [ 1:b \right],\left [ 1:b \right]}^{n } , { { s}_{\left [ 1:b \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)-n{{\delta } _ { \epsilon } } \label{eq : invokefanoagain}\\ & = h\left(u_{\left [ 1:b \right],\left [ 1:b \right]}^{n}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)\nonumber\\ & \quad+h\left({{s}_{\left [ 1:b \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ 1:b \right],\left [ 1:b \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } , { { s}_{\left [ k+1:b \right ] } } \right)-n{{\delta } _ { \epsilon } } \nonumber\\ & = h\left(u_{\left [ 1:b \right],\left [ 1:b \right]}^{n}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n}\right)\nonumber\\ & \quad+h\left({{s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ 1:b \right],\left [ 1:b \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n}\right)-n{{\delta } _ { \epsilon } } \nonumber\\ & \geq nh_{k , 
b}+h\left({{s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n } , u_{\left [ k+1:b \right],\left [ b : k \right]}^{n}\right)-n{{\delta } _ { \epsilon } } , \quad k\in\left[1:{{l}^\mathbf{w}_{b}}\right],\label{eq : tbsub1}\end{aligned}\ ] ] where ( [ eq : invokefanoagain ] ) follows by ( [ eq : constraint2 ] ) and fano s inequality .similarly , we have ,\left [ 1:b-1 \right]}^{n } \right)\nonumber\\ & = h\left ( u_{v,\left [ 1:b \right]}^{n},{{s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right)-h\left ( u_{v,\left [ 1:\alpha \right]}^{n}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},{{s}_{v } } \right)\nonumber\\ & \ge h\left ( u_{v,\left [ 1:b \right]}^{n},{{s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right)-n{{\delta } _ { \epsilon } } \nonumber\\ & = h\left ( u_{v,\left [ 1:b \right]}^{n}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right)+h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{v,\left [ 1:b \right]}^{n } \right)-n{{\delta } _ { \epsilon } } \nonumber\\ & \ge nh_{\left|v\right|,b}+h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n}\right)-n{{\delta } _ { \epsilon } } , \quad v\in \omega _ { k , b}^{\left ( { { l}^\mathbf{w}_{b } } \right)}.\label{eq : tbsub2}\end{aligned}\ ] ] continuing from ( [ eq : tbconti ] ) , ,b-1}}h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n},u_{\left [ k+1:b-1 \right],\left [ b-1:k \right]}^{n } \right)}+\sum\limits_{v'\in \omega _ { k , b-1}^{\left ( { { l}^\mathbf{w}_{b-1 } } \right)}}{{{c}_{v',b-1}}h\left ( { { s}_{v'}}|u_{\left [ 1:k \right],\left [ 1:b-1 \right]}^{n } \right ) } \nonumber \\ & \ge n\sum\limits_{v\in { { \mathbb{v}}_{k , b}}}{{{c}_{v , b}}}h_{\left|v\right|,b } + \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{{{c}_{\left [ 1:k \right],b}}\cdot h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right ) } \nonumber \\ & \quad+\sum\limits_{v\in \omega _ { k , b}^{\left ( { { l}^\mathbf{w}_{b } } \right)}}{{{c}_{v , b}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n } \right ) } -n{{\delta } _ { \epsilon } } \sum\limits_{v\in { { \mathbb{v}}_{k , b}}}{{{c}_{v , b}}}\label{eq : invoketwoexpan}\\ & \geq n{{\overline{{f}}^{\mathbf{w}}_{{{b}}}}}+ \sum\limits_{k=1}^{{{l}^\mathbf{w}_{b}}}{{{c}_{\left [ 1:k \right],b}}\cdot h\left ( { { s}_{\left [ 1:k \right]}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n},u_{\left [ k+1:b \right],\left [ b : k \right]}^{n } \right)}\nonumber\\ & \quad+\sum\limits_{v\in \omega _ { k , b}^{\left ( { { l}^\mathbf{w}_{b } } \right)}}{{{c}_{v , b}}h\left ( { { s}_{v}}|u_{\left [ 1:k \right],\left [ 1:b \right]}^{n } \right ) } -n{{\delta } _ { \epsilon } } \sum\limits_{k=1}^{k}{{{w}_{k } } } , \label{eq : th_sym_induce_core2}\end{aligned}\ ] ] where ( [ eq : invoketwoexpan ] ) is due to ( [ eq : tbsub1 ] ) and ( [ eq : tbsub2 ] ) , and ( [ eq : th_sym_induce_core2 ] ) is due to ( [ eq : lag_sym_f ] ) , ( [ eq : full_sum ] ) as well as lemma [ lem : lp_sym ] . combining ( [ eq : th_sym_induce_core2 ] ) and the induction hypothesis proves ( [ eq : th_sym_induce ] ) for .= = = = [ a lemma may be used ] = = = = = [ lem : sym_outer ] if for any , there exist a suit of optimal lagrange multipliers , ] , then holds for symmetric d - mldc . 
= = = = = = = = = = = = = = = = =we have characterized the admissible rate region of d - mldc for the case and the case where the source distribution is symmetrical entropy - wise . in view of the intimate connection between mldc and its lossy counterpart known as multiple description coding , it is expected that the results in the present work may shed new light on the robust distributed source coding problem ( which is the lossy counterpart of d - mldc ) studied in .it follows by the definition of that when .when , we must have and consequently note that implies .it then follows by the definition of that in view of the fact that , , and , we must have when .note that where ( [ eq : duei ] ) is due to ( [ eq : valuei ] ) .it follows by symmetry that .invoking the definition of and completes the proof of lemma [ lem : psi ] .it suffices to consider .case 1a ( ) : where ( [ eq : duehan ] ) is due to han s inequality .case 1b ( and ) : case 1c ( ) : case 1d ( ) : cases 2a and 2b ( ) : cases 2c and 5b : cases 3a and 3b ( ) : cases 3d and 5d ( ) : cases 4a and 4c : case 4d ( ) : cases 5a and 5c : following result is needed for the proof of lemma [ lem : opt_in_region ] .[ lem : sym_div_ineq ] assume that for all ] such that .it suffices to consider the case .note that }}|{{x}_{\left [ { { i}_{2}}+1:j \right ] } } \right ) & = \sum\limits_{k=1}^{{{i}_{2}}}{h\left ( { { x}_{\left [ 1:{{i}_{2 } } \right]}}|{{x}_{\left [ { { i}_{2}}+1:j \right ] } } \right ) } \nonumber \\ & = \sum\limits_{k=1}^{{{i}_{2}}}{h\left ( { { x}_{k}}|{{x}_{\left [ { { i}_{2}}+1:j \right ] } } \right)}+\sum\limits_{k=1}^{{{i}_{2}}}{h\left ( { { x}_{\left [ 1:{{i}_{2 } } \right]\backslash \left\ { k \right\}}}|{{x}_{\left\ { k \right\}\bigcup \left[ { { i}_{2}}+1:j \right ] } } \right ) } \nonumber \\ & \ge h\left ( { { x}_{\left [ 1:{{i}_{2 } } \right]}}|{{x}_{\left [ { { i}_{2}}+1:j \right ] } } \right)+{{i}_{2}}h\left ( { { x}_{\left [ 1:{{i}_{2}}-1 \right]}}|{{x}_{\left [ { { i}_{2}}:j \right ] } } \right).\nonumber\end{aligned}\ ] ] therefore , } } |{{x}_{\left [ { { i}_{2}}:j \right ] } } \right)}{{{i}_{2}}-1}\le \frac{h\left ( { { x}_{\left [ 1:{{i}_{2 } } \right ] } } |{{x}_{\left [ { { i}_{2}}+1:j \right ] } } \right)}{{{i}_{2}}}.\end{aligned}\ ] ] one can readily complete the proof via induction .now we are ready to prove lemma [ lem : opt_in_region ] .consider an arbitrary .let ] and that the source distribution is symmetrical entropy - wise .moreover , we have ,\alpha } } \right)}{\alpha -l}&\ge \left| { { v}_{2 } } \right|\frac{h\left ( { { u}_{\left [ 1:\alpha -l \right],\alpha}}|{{u}_{\left [ \alpha -l+1:\alpha -\left| { { v}_{1 } } \right| \right],\alpha } } \right)}{\alpha -l } \nonumber \\ & \ge h\left ( { { u}_{\left [ 1:\left| { { v}_{2 } } \right| \right],\alpha}}|{{u}_{\left [ \left| { { v}_{2 } } \right|+1:\alpha -\left| { { v}_{1 } } \right| \right],\alpha } } \right ) \label{eq : invokelemma}\\ & = h\left ( { { u}_{\left [ \left| { { v}_{1 } } \right|+1:\left| v \right| \right],\alpha}}|{{u}_{\left [ \left| v \right|+1:\alpha \right],\alpha } } \right),\label{eqn : feasible_v2}\end{aligned}\ ] ] where ( [ eq : invokelemma ] ) is due to lemma [ lem : sym_div_ineq ] and the fact that . combining ( [ eqn : feasible_v1 ] ) and ( [ eqn : feasible_v2 ] ) proves ( [ eq : sufficeregion ] ) for the case , . 
note that ( [ eq : sufficeregion ] ) degenerates to ( [ eqn : feasible_v1 ] ) when and degenerates to ( [ eqn : feasible_v2 ] ) when .this completes the proof of lemma [ lem : opt_in_region ] .note that }{1 } } \\ & = \left ( \alpha -{{l}^\mathbf{w}_{\alpha } } \right)\sum\limits_{v\in { { \omega } ^{\left ( { { l}^\mathbf{w}_{\alpha } } \right)}_{k,\alpha } } } { { { c}_{v,\alpha } } } , \end{aligned}\ ] ] from which the desired result follows immediately .consider the following two cases . _( case 1 ) _ ] : it suffices to verify that when .indeed , this is a simple consequence of the fact that .consider the following three cases . _( case 1 ) _ : note that ,\alpha } } } = \left ( \alpha-1 -l_{\alpha -1}^{\mathbf{w } } \right){{w}_{l_{\alpha -1}^{\mathbf{w}}+1}}-\sum\limits_{k = l_{\alpha -1}^{\mathbf{w}}+1}^{l_{\alpha } ^{\mathbf{w}}}{{{w}_{k}}}-\left ( \alpha-1 -l_{\alpha } ^{\mathbf{w } } \right)\lambda _ { \alpha } ^{\mathbf{w}}\label{eq : tbslater1}\end{aligned}\ ] ] and we have ,\alpha } } } \right)\frac{1}{\alpha-1 -l_{\alpha -1}^{\mathbf{w}}}\nonumber \\ & = \left ( \lambda _ { \alpha } ^{\mathbf{w}}- \left ( \alpha-1 -l_{\alpha -1}^{\mathbf{w } } \right){{w}_{l_{\alpha -1}^{\mathbf{w}}+1}}+\sum\limits_{k = l_{\alpha -1}^{\mathbf{w}}+1}^{l_{\alpha } ^{\mathbf{w}}}{{{w}_{k}}}+\left ( \alpha-1 -l_{\alpha } ^{\mathbf{w } } \right)\lambda _ { \alpha } ^{\mathbf{w } } \right)\frac{1}{\alpha-1 -l_{\alpha -1}^{\mathbf{w}}}\label{eq : invoketb1 } \\ & = \left ( \left ( \alpha -l_{\alpha } ^{\mathbf{w } } \right)\lambda _ { \alpha } ^{\mathbf{w}}+\sum\limits_{k = l_{\alpha -1}^{\mathbf{w}}+1}^{l_{\alpha } ^{\mathbf{w}}}{{{w}_{k}}}-\left ( \alpha-1 -l_{\alpha -1}^{\mathbf{w } } \right){{w}_{l_{\alpha -1}^{\mathbf{w}}+1 } } \right)\frac{1}{\alpha-1 -l_{\alpha -1}^{\mathbf{w } } } \nonumber\\ & = \left ( \left ( \alpha-1 -l_{\alpha -1}^{\mathbf{w } } \right)\lambda _ { \alpha -1}^{\mathbf{w}}-\left ( \alpha-1 -l_{\alpha -1}^{\mathbf{w } } \right){{w}_{l_{\alpha -1}^{\mathbf{w}}+1 } } \right)\frac{1}{\alpha-1 -l_{\alpha -1}^{\mathbf{w}}}\label{eq : invoketb2 } \\ & = \lambda _ { \alpha -1}^{\mathbf{w}}-{{w}_{l_{\alpha -1}^{\mathbf{w}}+1}}\label{eq : almost1 } \\ & = { { c}_{\left [ 1:l_{\alpha -1}^{\mathbf{w } } \right],\alpha } } -{{c}_{\left [ 1:l_{\alpha -1}^{\mathbf{w } } \right],\alpha -1}},\nonumber\end{aligned}\ ] ] where ( [ eq : invoketb1 ] ) and ( [ eq : invoketb2 ] ) are due to ( [ eq : tbslater1 ] ) and ( [ eq : tbslater2 ] ) , respectively .it can be verified that where ( [ eq : use_ww_def ] ) follows from the fact that .combining ( [ eq : almost1 ] ) and ( [ eq : neverending ] ) proves . _( case 2 ) _ : note that we have ,\alpha } } } \right)\frac{1}{\alpha-1 -l_{\alpha -1}^{\mathbf{w}}}\nonumber \\ & = \left(\lambda _ { \alpha } ^{\mathbf{w}}-{\left ( \alpha-1 -l_{\alpha}^{\mathbf{w } } \right)\left({{w}_{l_{\alpha}^{\mathbf{w}}}}-\lambda _ { \alpha}^{\mathbf{w}}\right ) } \right)\frac{1}{\alpha -l_{\alpha } ^{\mathbf{w}}}\nonumber \\ & = \lambda _ { \alpha -1}^{\mathbf{w}}-{{w}_{l_{\alpha}^{\mathbf{w}}}}\label{eq : duetoonemoretb}\\ & = { { c}_{\left [ 1:l_{\alpha -1}^{\mathbf{w } } \right],\alpha } } -{{c}_{\left [ 1:l_{\alpha -1}^{\mathbf{w } } \right],\alpha -1}},\nonumber\end{aligned}\ ] ] where ( [ eq : duetoonemoretb ] ) is due to ( [ eq : onemoretb ] ) .the fact that follows by ( [ eq : neverending ] ) and ( [ eq : duetoonemoretb ] ) . 
_( case 3 ) _ : note that moreover , we have ,\alpha } } -{{c}_{\left [ 1:l_{\alpha -1}^{\mathbf{w } } \right],\alpha -1}}.\end{aligned}\ ] ] consider the following two cases . _( case 1 ) _ : where ( [ eq : invokepart1 ] ) is due to ( [ eq : partial_sum ] ) . _( case 2 ) _ : ,\alpha } } } + { { c}_{\left [ 1:{l^\mathbf{w}_{\alpha } } \right],\alpha } } + \sum\limits_{v\in \omega _{ k,\alpha } ^{\left ( l^\mathbf{w}_{\alpha } \right)}}{{{c}_{v,\alpha } } } \nonumber\\ & = \sum\limits_{k=1}^{l^\mathbf{w}_{\alpha}-1}{\left ( { { w}_{k}}-{{w}_{k+1 } } \right)}+\left ( { { w}_{l^\mathbf{w}_{\alpha}}}-\lambda^\mathbf{w}_{\alpha } \right)+\sum\limits_{v\in \omega _ { k,\alpha } ^{\left ( l^\mathbf{w}_{\alpha } \right)}}{{{c}_{v,\alpha } } } \nonumber\\ & = \sum\limits_{k=1}^{l^\mathbf{w}_{\alpha}-1}{\left ( { { w}_{k}}-{{w}_{k+1 } } \right)}+\left ( { { w}_{l^\mathbf{w}_{\alpha}}}-\lambda^\mathbf{w}_{\alpha } \right)+\lambda^\mathbf{w}_{\alpha}\label{eq : invokepart2}\\ & = { { w}_{1}},\nonumber\end{aligned}\ ] ] where ( [ eq : invokepart2 ] ) is due to ( [ eq : partial_sum ] ) .note that ( [ eq : def_mathbb_c1 ] ) , ( [ eq : need_verify_1 ] ) , and ( [ eq : otherwise ] ) obviously hold .moreover , ( [ eq : need_verify_2 ] ) is implied by ( [ eq : moved ] ) . therefore , it suffices to verify ( [ eq : need_verify_3 ] ) .consider an arbitrary integer ] : we have and } } \right\}=\mathcal{i}\left\ { k\in \left [ 1:i \right ] \right\},\quad v\in \omega _ { k,\alpha } ^{\left ( { { l}^\mathbf{w}_{\alpha } } \right ) } , i\in \left [ { { l}^\mathbf{w}_{\alpha -1}}+1:{{l}^\mathbf{w}_{\alpha } } \right],\label{eq : changeset}\end{aligned}\ ] ] where ( [ eq : invokesomelem ] ) is due to ( [ eq : partial_sum ] ) . continuing from ( [ eq : before_cases ] ) , ,\alpha } } } \mathcal{i}\left\ { k\in \left [ 1:i \right ] \right\}\label{eq : duetotwo}\\ & = \sum\limits_{v\in \omega _ { k,\alpha } ^{\left ( { { l}^\mathbf{w}_{\alpha } } \right)}:k\in v}{{{c}_{v,\alpha } } } + \frac{1}{{\lambda^\mathbf{w}_{\alpha } } } \sum\limits_{i = k}^{{{l}^\mathbf{w}_{\alpha } } } { { { c}_{\left [ 1:i \right],\alpha } } } \sum\limits_{v\in \omega _ { k,\alpha } ^{\left ( { { l}^\mathbf{w}_{\alpha } } \right)}:k\in v}{{{c}_{v,\alpha } } } \nonumber\\ & = { \lambda^\mathbf{w}_{\alpha } } + \sum\limits_{i = k}^{{{l}^\mathbf{w}_{\alpha } } } { { { c}_{\left [ 1:i \right],\alpha } } } \label{eq : duetofirst } \\ & = { { w}_{k}},\nonumber\end{aligned}\ ] ] where ( [ eq : duetotwo ] ) and ( [ eq : duetofirst ] ) are due to ( [ eq : changeset ] ) and ( [ eq : invokesomelem ] ) , respectively . _( case 2 ) _ ] when $ ] .c. tian , s. mohajer , and s. diggavi , `` approximating the gaussian multiple description rate region under symmetric distortion constraints , '' _ ieee trans .inf . theory _ ,55 , no . 8 , pp .38693891 , aug . 2009 .
|
in distributed multilevel diversity coding , correlated sources ( each with components ) are encoded in a distributed manner such that , given the outputs from any encoders , the decoder can reconstruct the first components of each of the corresponding sources . for this problem , the optimality of a multilayer slepian - wolf coding scheme based on binning and superposition is established when . the same conclusion is shown to hold for general under a certain symmetry condition , which generalizes a celebrated result by yeung and zhang . * keywords * data compression , diversity coding , entropy inequality , lagrange multiplier , linear programming , rate region , slepian - wolf , superposition .
|
entanglement plays a very important role in quantum information processes ( see also references therein ) .even if different parts of the quantum system ( quantum register ) are initially disentangled , entanglement naturally appears in the process of quantum protocols .this `` constructive entanglement '' must be preserved during the time of quantum information processing . on the other hand ,the system generally becomes entangled with the environment .this `` destructive entanglement '' must be minimized in order to achieve a needed fidelity of quantum algorithms .the importance of these effects calls for the development of rigorous mathematical tools for analyzing the dynamics of entanglement and for controlling the processes of constructive and destructive entanglement .another problem which is closely related to quantum information is quantum measurement .usually , for a qubit ( quantum two - level system ) , quantum measurements operate under the condition , where is the temperature , is the transition frequency , is the planck constant , and is the boltzmann constant .this condition is widely used in superconducting quantum computation , when and . in this case, one can use josephson junctions ( jj ) and superconducting quantum interference devices ( squids ) , both as qubits and as spectrometers measuring a spectrum of noise and other important effects induced by the interaction with the environment .understanding the dynamical characteristics of entanglement through the environment on a large time interval will help to develop new technologies for measurements not only of spectral properties , but also of quantum correlations induced by the environment . in this paper, we develop a consistent perturbation theory of quantum dynamics of entanglement which is valid for arbitrary times .this is important in many real situations because ( i ) the characteristic times which usually appear in quantum systems with two and more qubits involve different time - scales , ranging from a relatively fast decay of entanglement and different reduced density matrix elements ( decoherence ) to possibly quite large relaxation times , and ( ii ) for not exactly solvable quantum hamiltonians ( describing the energy exchange between the system and the environment ) one can only use a perturbative approach in order to estimate the characteristic dynamical parameters of the system .note , that generally not only are the time - scales for decoherence and entanglement different , but so are their functional time - dependences .indeed , usually the off - diagonal reduced density matrix elements in the basis of the quantum register do not decay to zero for large times , but remain at the level of , where is a characteristic constant of interaction between a qubit and an environment .on the other hand , entanglement has a different functional time dependence , and in many cases decays to zero in finite time .another problem which we analyze in this paper is a well - known cut - off procedure which one must introduce for high frequencies of the environment in order to have finite expressions for the interaction hamiltonian between the quantum register and the environment .generally , this artificial cut - off frequency enters all expressions in the theory for physical parameters , including decay rates and dynamics of observables . 
at the same time, one does not have this cut - off problem in real experimental situations .so , it would be very desirable to develop a regular theoretical approach to derive physical expressions which do not include the cut - off parameter .we show that our approach allows us to derive these cut - off independent expressions as the main terms of the perturbation theory , which is of .the cut - off terms are included in the corrections of . at the same time , the low - frequency divergencies still remain in the theory , and need additional conditions for their removal .we describe the characteristic dynamical properties of the simplest quantum register which consists of two not directly interacting qubits ( effective spins ) , which interact with local and collective environments .we introduce a classification of the decoherence times based on a partition of the reduced density matrix elements in the energy basis into clusters .this classification , valid for general -level systems coupled to reservoirs , is rather important for dealing with quantum algorithms with large registers . indeed , in this case different orders of quantumness " decay on different time - scales .the classification of decoherence time - scales which we suggest will help to separate environment - induced effects which are important from the unimportant ones for performing a specific quantum algorithm .we point out that all the populations ( diagonal of density matrix ) always belong to the same cluster to which is associated the relaxation time .we present analytical and numerical results for decay and creation of entanglement for both solvable ( integrable , energy conserving ) and unsolvable ( non - integrable , energy - exchange ) models , and explain the relations between them .this paper is devoted to a physical and numerical discussion of the dynamical resonance theory , and its application to the evolution of entanglement .a detailed exposition of the resonance method can be found in .as the mathematical details leading to certain expressions used in the discussion presented in this paper are rather lengthy , we report them separately in .we consider two qubits and , each one coupled to a local reservoir , and both together coupled to a collective reservoir .the hamiltonian of the two qubits is where are effective magnetic fields , is the transition frequency , and is the pauli spin operator of qubit .the eigenvalues of are with corresponding eigenstates where .each of the three reservoirs consists of free thermal bosons at temperature , with hamiltonian the index labels the collective reservoir .the creation and annihilation operators satisfy = \delta_{j , j'}\delta_{k , k'} ] , c.f .( [ 35 ] ) .thus the dynamics of the vector having as components the density matrix elements , has the semi - group property in the time variable , with generator , {m_1 n_1}\\ \vdots\\ { } [ \rho_t]_{m_k n_k } \end{array } \right ] = { { \rm e}}^{tg_{\cal c } } \left [ \begin{array}{c } { } [ \rho_{0}]_{m_1 n_1}\\ \vdots\\ { } [ \rho_{0}]_{m_k n_k } \end{array } \right ] .\label{clustermarkov1}\ ] ] this is the meaning of the markov property of the resonance dynamics . 
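The cluster-wise Markov property stated above — the vector of density-matrix elements belonging to one cluster is propagated by the semigroup e^{tG_C} — is easy to mimic numerically with a matrix exponential. The 2x2 generator below is invented purely for illustration; in the paper G_C is built from level-shift operators, which are not reproduced here.

```python
# Illustrative sketch of the cluster evolution [rho_t]_C = exp(t*G_C) [rho_0]_C.
# The generator G is a made-up stand-in with decaying, oscillating eigenvalues.
import numpy as np
from scipy.linalg import expm

G = np.array([[-0.05 + 1.0j, 0.02],
              [0.02,         -0.08 + 0.7j]])      # toy generator for a two-element cluster

rho0_cluster = np.array([0.5 + 0.0j, 0.3 - 0.1j])  # initial matrix elements of this cluster

for t in (0.0, 5.0, 20.0, 80.0):
    rho_t = expm(t * G) @ rho0_cluster
    print(f"t = {t:5.1f}   |elements| = {np.abs(rho_t)}")
```

Each cluster evolves with its own rates, which is exactly why different clusters can depopulate on very different time-scales, as discussed next.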
while the fact that our resonance approximation is not in the form of the weak coupling limit ( lindblad ) may represent disadvantages in certain applications , it may also allow for a description of effects possibly not visible in a markovian master equation approach .based on results , one may believe that revival of entanglement is a non - markovian effect , in the sense that it is not detectable under the markovian master equation dynamics ( however , we are not aware of any demonstration of this result ) .nevertheless , as we show in our numerical analysis below , the resonance approximation captures this effect ( see figure [ f1 ] ) .we may attempt to explain this as follows .each cluster is a ( indpendent ) markov process with its own decay rate , and while some clusters may depopulate very quickly , the ones responsible for creating revival of entanglement may stay alive for much longer times , hence enabling that process .clearly , on time - scales larger than the biggest decoherence time of all clusters , the matrix is ( approximately ) diagonal , and typically no revival of entanglement is possible any more .we consider the hamiltonian , ( [ n2 ] ) , with parameters s.t . . this assumption is a non - degeneracy condition which is not essential for the applicability of our method ( but lightens the exposition ) .the eigenvalues of are given by ( [ 46 ] ) and the spectrum of is , with non - negative eigenvalues having multiplicities , , , respectively . according to ( [ 45 ] ) , the grouping of jointly evolving elements of the density matrix above and on the diagonal is given by there are five clusters of jointly evolving elements ( on and above the diagonal ) .one cluster is the diagonal , represented by . for and define ( spherical coordinates ) and for we set furthermore , let ^{1/2}\big| , \label{69}\\ y_3 & = & \big| \im \left [ 4\kappa_1 ^ 2\kappa_2 ^ 2 r^2 -{{\rm i}}(\lambda_1 ^ 2+\mu_1 ^ 2)^2 \sigma^2_g(b_1 )-4{{\rm i}}\kappa_1\kappa_2 \( \lambda_1 ^ 2+\mu_1 ^ 2 ) \r r_1 ' \right]^{1/2}\big|,\label{70}\end{aligned}\ ] ] ( principal value square root with branch cut on negative real axis ) where the following results are obtained by an explicit calculation of level shift operators .details are presented in .* discussion . *1 . the thermalization rate depends on energy - exchange parameters , only .this is natural since an energy - conserving dynamics leaves the populations constant .if the interaction is purely energy - exchanging ( ) , then all the rates depend _ symmetrically _ on the local and collective interactions , through . however , for purely energy - conserving interactions ( ) the rates are not symmetrical in the local and collective terms .( e.g. depends only on local interaction if . )the terms , are complicated nonlinear combinations of exchange and conserving terms .this shows that effect of the energy exchange and conserving interactions are correlated .we see from ( [ 57 ] ) , ( [ 58 ] ) that the leading orders of the rates ( [ 52])-([56 ] ) do not depend on an ultraviolet features of the form factors .( however , depends on the infrared behaviour . ) the coupling constants , e.g. in ( [ 52 ] ) multiply , i.e. 
, the rates involve quantities like ( see ( [ 57 ] ) ) the one - dimensional dirac delta function appears due to energy conservation of processes of order , and is ( one of ) the bohr frequencies of a qubit .thus energy conservation chooses the evaluation of the form factors at finite momenta and thus an ultraviolet cutoff is not visible in these terms .nevertheless , we do not know how to control the error terms in ( [ 52])-([56 ] ) homogeneously in the cutoff .the case of a single qubit interacting with a thermal bose gas has been extensively studied , and decoherence and thermalization rates for the spin - boson system have been found using different techniques , .we recover the spin - boson model by setting all our couplings in ( [ n3])-([n7 ] ) to zero , except for , and setting . in this case, the spectral density of the reservoir is linked to our quantity ( [ 57 ] ) by the relaxation rate is where is the transition frequency of qubit ( in units where ) , see ( [ n1 ] ) .the decoherence rate is given by where is the limit as of .these rates obtained with our resonance method agree with those obtained in by the standard bloch - redfield approximation . * remark on the limitations of the resonance approximation .* as mentioned in section [ sectevol ] , the dynamics ( [ 42 ] ) can only resolve the evolution of quantities larger than .for instance , assume that in an initial state of the two qubits , all off - diagonal density matrix elements are of the order of unity ( relative to ) .as time increases , the off - diagonal matrix elements decrease , and for times satisfying , the off - diagonal cluster is of the same size as the error in ( [ 42 ] ) .hence the evolution of this cluster can be followed accurately by the resonance approximation for times , where is the temperature . here , ( and other parameters ) are dimensionless .to describe the cluster in question for larger times , one has to push the perturbation theory to higher order in .it is now clear that if a cluster is initially not populated , the resonance approximation does not give any information about the evolution of this cluster , other than saying that its elements will be for all times .below we investigate analytically decay of entanglement ( section [ disentsect ] ) and numerically creation of entanglement ( section [ numres ] ) . for the same reasons as just outlined ,an analytical study of entanglement decay is possible if the initial entanglement is large compared to .however , the study of creation of entanglement is more subtle from this point of view , since one must detect the emergence of entanglement , presumably of order only , starting from zero entanglement .we show in our numerical analysis that entanglement of size 0.3 is created _ independently _ of the value of ( ranging from 0.01 to 1 ) .we are thus sure that the resonance approximation does detect creation of entanglement , _ even _ if it may be of the same order of magnitude as the couplings .whether this is correct for other quantities than entanglement is not clear , and so far , only numerical investigations seem to be able to give an answer . as an example where things can go wrong with the resonance approximation we mention that for small times , the approximate density matrix has _negative eigenvalues_. 
this makes the notion of concurrence of the approximate density matrix ill - defined for small times .we consider the system with hamiltonian ( [ n3])-([n8 ] ) and , , and .this energy - conserving model can be solved explicitly and has the * exact solution * {mn } = [ \rho_0]_{mn}\ { { \rm e}}^{-{{\rm i}}t(e_m - e_n ) } \ { { \rm e}}^{{{\rm i}}\kappa^2 a_{mn } s(t)}\ { { \rm e}}^ { -[\kappa^2 b_{mn } + \nu^2 c_{mn}]\gamma(t ) } \label{exact}\ ] ] where , ( b_{mn } ) = \left [ \begin{array}{cccc } 0 & 4 & 4 & 16\\ 4 & 0 & 0 & 4\\ 4 & 0 & 0 & 4\\ 16 & 4 & 4 & 0 \end{array } \right ] , ( c_{mn } ) = \left [ \begin{array}{cccc } 0 & 4 & 4 & 8\\ 4 & 0 & 8 & 4\\ 4 & 8 & 0 & 4\\ 8 & 4 & 4 & 0 \end{array } \right]\ ] ] and on the other hand , the _ main _ contribution ( the sum ) in ( [ 42 ] ) yields the * resonance approximation * to the true dynamics , given by {mm } & \doteq & [ \rho_0]_{mm } \mbox{\qquad }\label{l1}\\ { } [ \rho_t]_{1n } & \doteq & { { \rm e}}^{-{{\rm i}}t(e_1-e_n)}\ { { \rm e}}^{-2{{\rm i}}t\kappa^2 r } { { \rm e}}^{-t(\kappa^2+\nu^2)\sigma_f(0)}[\rho_0]_{1n } \qquad n=2,3\label{l2}\\ { } [ \rho_t]_{14 } & \doteq & { { \rm e}}^{-{{\rm i}}t(e_1-e_4)}\ { { \rm e}}^{-t(4\kappa^2 + 2\nu^2)\sigma_f(0)}[\rho_0]_{14}\label{l3}\\ { } [ \rho_t]_{23 } & \doteq & { { \rm e}}^{-{{\rm i}}t(e_2-e_3)}\ { { \rm e}}^{-2 t \kappa^2 \sigma_f(0)}[\rho_0]_{23}\label{l4}\\ { } [ \rho_t]_{m4 } & \doteq & { { \rm e}}^{-{{\rm i}}t(e_m - e_4)}\ { { \rm e}}^{2{{\rm i}}t\kappa^2 r}\ { { \rm e}}^{-t(\kappa^2+\nu^2)\sigma_f(0)}[\rho_0]_{m4 } \qquad m=2,3 \label{l5}\end{aligned}\ ] ]the dotted equality sign signifies that the left side equals the right side modulo an error term , homogeneously in .)-([l5 ] ) one calculates the in ( [ 42 ] ) explicitly , to second order in and .the details are given in .] clearly the decoherence function and the phase are nonlinear in and depend on the ultraviolet behaviour of .on the other hand , our resonance theory approach yields a representation of the dynamics in terms of a superposition of exponentially decaying factors . 
from ( [ exact ] ) and ( [ l1])-([l5 ] ) we see that the resonance approximation is obtained from the exact solution by making the replacements we emphasize again that , according to ( [ 42 ] ) , the difference between the exact solution and the one given by the resonance approximation is of the order , homogeneously in time , and where depends on the ultraviolet behaviour of the couplings .this shows in particular that up to errors of , the dynamics of density matrix elements is simply given by a phase change and a possibly decaying exponential factor , both linear in time and entirely determined by and .of course , the advantage of the resonance approximation is that even for not exactly solvable models , we can approximate the true ( unknown ) dynamics by an explicitly calculable superposition of exponentials with exponents linear in time , according to ( [ 42 ] ) .let us finally mention that one easily sees that so ( [ mm30 ] ) and ( [ mm31 ] ) may indicate that the resonance approximation is closer to the true dynamics for large times but nevertheless , our analysis proves that the two are close together ( ) _ homogeneously _ in .in this section we apply the resonance method to obtain estimates on survival and death of entanglement under the full dynamics ( [ n3])-([n7 ] ) and for an initial state of the form , where has nonzero entanglement and the reservoir initial states are thermal , at fixed temperature .let be the density matrix of two qubits .the _ concurrence _ is defined by \ } , \label{60}\ ] ] where are the eigenvalues of the matrix here , is obtained from by representing the latter in the energy basis and then taking the elementwise complex conjugate , and is the pauli matrix ] . if then the state is separable , meaning that can be written as a mixture of pure product states . if we call maximally entangled .let be an initial state of .the smallest number s.t . for all is called the _ disentanglement time _( also ` entanglement sudden death time ' , ) .if for all then we set .the disentanglement time depends on the initial state .consider the family of pure initial states of given by where are arbitrary ( not both zero ) .the initial concurrence is which covers all values between zero ( e.g. ) to one ( e.g. ) . according to ( [ 42 ] ) , the density matrix of at time is given by + o(\varkappa^2 ) , \label{m3}\ ] ] with remainder uniform in , and where and are given by the main term on the r.h.s .of ( [ 42 ] ) .the initial conditions are , , , and .we set \label{p}\ ] ] and note that and . in terms of ,the initial concurrence is .let us set \sigma_f(0 ) .\label{deltas}\ ] ] an analysis of the concurrence of ( [ m3 ] ) , where the and evolve according to ( [ 42 ] ) yields the following bounds on disentanglement time .bounds ( [ m17 ] ) and ( [ m18 ] ) are obtained by a detailed analysis of ( [ 60 ] ) , with replaced by , ( [ m3 ] ) .this analysis is quite straightforward but rather lengthy .details are presented in .* discussion . *1 . the result gives disentanglement bounds for the true dynamics of the qubits for interactions which are not integrable .the disentanglement time is _finite_. 
this follows from ( which in turn implies that the total system approaches equilibrium as ) .if the system does not thermalize then it can happen that entanglement stays nonzero for all times ( it may decay or even stay constant ) .the rates are of order .both and increase with decreasing coupling strength .bounds ( [ m17 ] ) and ( [ m18 ] ) are not optimal .the disentanglement time bound ( [ m17 ] ) depends on both kinds of couplings .the contribution of each interaction decreases ( the bigger the noise the quicker entanglement dies ) .the bound on entanglement survival time ( [ m18 ] ) does not depend on the energy - conserving couplings .consider an initial condition , where is the initial state of the two qubits , and where the reservoir initial states are thermal , at fixed temperature .suppose that the qubits are not coupled to the collective reservoir , but only to the local ones , via energy conserving and exchange interactions ( local dynamics ) .it is not difficult to see that then , if has zero concurrence , its concurrence will remain zero for all times .this is so since the dynamics factorizes into parts for and , and acting upon an unentangled initial state does not change entanglement . in contrast , for certain _ entangled _ initial states , one observes death and revival of entanglement : the initial concurrence of the qubits decreases to zero and may stay zero for a certain while , but it then grows again to a maximum ( lower than the initial concurrence ) and decreasing to zero again , and so on .the interpretation is that concurrence is shifted from the qubits into the ( initially unentangled ) reservoirs , and if the latter are not markovian , concurrence is shifted back to the qubits ( with some loss ) .suppose now that the two qubits are coupled only to the collective reservoir , and not to the local ones .braun has considered the explicitly solvable model ( energy - conserving interaction ) , as presented in section [ sectcomp ] with , . and which is a prouct does not change the concurrence . ] using the exact solution ( [ exact ] ) , braun calculates the smallest eigenvalue of the partial transpose of the density matrix of the two qubits , with and considered as non - negative parameters . for the initial product statewhere qubits 1 and 2 are in the states and respectively , i.e. , , \label{instate}\ ] ] it is shown that for small values of ( less than 2 , roughly ) , the negativity of the smallest eigenvalue of the partial transpose oscillates between zero and -0.5 for increasing from zero .as takes values larger than about 3 , the smallest eigenvalue is zero ( regardless of the value of ) . according to the peres - horodecki criterion ,the qubits are entangled exactly when the smallest eigenvalue is strictly below zero .therefore , taking into account ( [ s ] ) and ( [ gamma ] ) , braun s work shows that for small times ( small ) the collective environment ( with energy - conserving interaction ) induces first creation , then death and revival of entanglement in the initially unentangled state ( [ instate ] ) , and that for large times ( large ) , entanglement disappears . the main term of the r.h.s . of ([ 42 ] ) can be calculated explicitly , and we give in appendix a the concrete expressions . 
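Two diagnostics recur throughout this section: the concurrence defined in (60) and the smallest eigenvalue of the partial transpose used, via the Peres-Horodecki criterion, in the discussion of Braun's model. The sketch below computes both for a family of pure two-qubit states of the form cos(θ)|00> + sin(θ)|11>; these states are chosen only for illustration and are not the initial states considered in the paper.

```python
# Two-qubit entanglement diagnostics: Wootters concurrence and the minimal eigenvalue
# of the partial transpose (negative iff entangled, by Peres-Horodecki).
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sigma_y, sigma_y)

def concurrence(rho):
    """max(0, l1 - l2 - l3 - l4), where the l_i are the square roots of the eigenvalues
    of rho * (YY rho^* YY), sorted in decreasing order."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sqrt(np.maximum(np.linalg.eigvals(rho @ rho_tilde).real, 0.0))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def min_partial_transpose_eig(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)                      # indices (a, b, a', b')
    rho_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'
    return float(np.min(np.linalg.eigvalsh(rho_pt)))

# Pure states cos(th)|00> + sin(th)|11>: concurrence = |sin(2 th)|,
# smallest partial-transpose eigenvalue = -cos(th)*sin(th).
for th in (0.0, 0.3, np.pi / 4):
    psi = np.array([np.cos(th), 0.0, 0.0, np.sin(th)])
    rho = np.outer(psi, psi.conj())
    print(round(th, 3), round(concurrence(rho), 4), round(min_partial_transpose_eig(rho), 4))
```

For these pure states both quantities vanish exactly at the product state θ = 0, which matches the criterion quoted above: entanglement is present precisely when the smallest partial-transpose eigenvalue is strictly negative.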
how does concurrence evolve under this approximate evolution of the density matrix ?\(1 ) _ purely energy - exchange coupling._in this situation we have .the explicit expressions ( appendix a ) show that the density matrix elements {mn}$ ] in the resonance approximation depend on ( collective ) and ( local ) through the symmetric combination only .it follows that the dominant dynamics ( [ 42 ] ) ( the true dynamics modulo an error term homogeneously in ) is _ the same _ if we take purely collective dynamics ( or purely local dynamics ( ) . _in particular , creation of entanglement under purely collective and purely local energy - exchange dynamics is _ the same _ , modulo . for instance , for the initial state ( [ instate ] ) , collective energy - exchange couplings can create entanglement of at most , since local energy - exchange couplings do not create any entanglement in this initial state .\(2 ) _ purely energy - conserving coupling . _ in this situation we have . the evolution of the density matrix elements is not symmetric as a function of the coupling constants ( collective ) and ( local ) .one may be tempted to conjecture that concurrence is independent of the local coupling parameter , since it is so in absence of collective coupling ( ) . however , for , concurrence _ depends _ on ( see numerical results below ). we can understand this as follows .even if the initial state is unentangled , the collective coupling creates quickly a little bit of entanglement and therefore the local environment does not see a product state any more , and starts processes of creation , death and revival of entanglement .\(3 ) _ full coupling . _ in this caseall of do not vanish .matrix elements evolve as complicated functions of these parameters , showing that the effects of different interactions are correlated .in the following , we ask whether the resonance approximation is sufficient to detect creation of entanglement . to this end , we take the initial condition ( [ instate ] ) ( zero concurrence ) and study numerically its evolution under the approximate resonance evolution ( appendices a , b ) , and calculate concurrence as a function of time .let us first consider the case of purely energy conserving collective interaction , namely and only .our simulations ( figure [ f1]a ) show that , a concurrence of value approximately 0.3 is created , independently of the value of ( ranging from 0.01 to 1 ) .it is clear from the graphs that the effect of varying consists only in a time shift .this shift of time is particularly accurate , as can be seen in fig .[ f1]b , where the three curves drawn in a ) collapse to a single curve under the time rescaling . in particular , the maximum concurrence is taken at times .we also point out that the revived concurrence has very small amplitude ( approximately 15 times smaller than the maximum concurrence ) and takes its maximum at . 
even though the amplitude of the revived concurrence is small as compared to ,the graphs show that it is _ independent _ of , and hence our resonance dynamics does reveal concurrence revival .when switching on the local energy conserving coupling , , we see in fig .[ f2]a , that the maximum of concurrence decreases with increasing .therefore , the effect of a local coupling is to reduce the entanglement .it is also interesting to study the dependence of the maximal value of the concurrence , , as a function of the energy - conserving interaction parameters .this is done in fig .[ f2]b , where is plotted as a function of the local interaction , for different fixed collective couplings .the graphs show that as the local coupling is increased to the value of the collective coupling , becomes zero .this means that if the local coupling exceeds the collective one , then there is no creation of concurrence .we may interpret this as a competition between the concurrence - reducing tendency of the local coupling ( apart from very small revival effects ) and the concurrence - creating tendency of the collective coupling ( for not too long times ) .if the local coupling exceeds the collective one , then concurrence is prevented from building up .looking at fig .[ f2 ] , it is clear that the effect of the local coupling is not only to decrease concurrence but also to induce a shift of time , similarly to the effect of the collective coupling . indeed , taking as a variable the rescaled concurrence , one can see that the approximate scaling is at work , see fig . [ f3 ] ._ we conclude that both local and collective energy conserving interactions produce a cooperative time shift of the entanglement creation , but only the local interaction can destroy entanglement creation .there is no entanglement creation for ._ let us now consider an additional energy exchange coupling . since these parameters appear in the resonance dynamics only in the combination , see appendix a , we set without loosing generality .we plot in fig .[ f4 ] the time evolution of the concurrence , at fixed energy - conserving couplings and , for different values of the energy exchange coupling . in this casewe have chosen which corresponds to , where is a transition frequency of the first qubit .we also used the conditions : , which lead to the renormalization of the interaction constants .the relations between and , and and are discussed in appendix b. figure [ f4 ] shows that the effect of the energy exchanging coupling is to shift slightly the time where concurrence is maximal and , at the same time , to decrease the amplitude of concurrence for each fixed time .this feature is analogous to the effect of local energy - conserving interactions , as discussed above .unfortunately , it is quite difficult in this case to extract the threshold values of at which the creation of concurrence is prevented for all times .the difficulty comes from the fact that for larger values of , the concurrence is very small and the negative eigenvalues on order do not allow a reliable calculation .this picture does not change much if a local energy - conserving interaction is added . 
in fig .[ f5 ] , we show respectively , the time shift of the maximal concurrence as a function of the energy - exchanging coupling ( a ) and the behavior of the maximal concurrence as a function of the same parameter for two different values of the local coupling .is appears evident that the role played by the energy - exchange coupling is very similar to that played by the local energy - conserving one .let us comment about concurrence revival .the effect of a collective energy - conserving coupling consists of creating entanglement , destroying it and creating it again but with a smaller amplitude .generally speaking , an energy - exchanging coupling , if extremely small , does not change this picture .nevertheless , it is important to stress that the damping effect the energy - exchange coupling has on the concurrence amplitude is stronger on the revived concurrence than on the initially created one .this is shown in fig .[ f6 ] , where the renormalized concurrence is plotted for different values . for these parameter values , only a very small coupling will allow revival of concurrence . in the calculation of concurrence ,the square roots of the eigenvalues of the matrix ( [ nn60 ] ) should be taken . as explained before , the non positivity , to order of the density matrix on the non positivity of the eigenvalues of the matrix .when this happens ( ) we simply put in the numerical calculations .this produces an approximate ( order ) concurrence which produces spurious effects , especially for small time , when concurrence is small .these effects are particularly evident in fig .[ f6 ] , for small time , where artificial oscillations occur , instead of an expected smooth behavior .in contrast to this behaviour , the revival of entanglement as revealed in figure 6 varies smoothly in , indicating that this effect is not created due to the approximation .we consider a system of two qubits interacting with local and collective thermal quantum reservoirs .each qubit is coupled to its local reservoir by two channels , an energy - conserving and an energy - exchange one .the qubits are collectively coupled to a third reservoir , again through two channels .this is thus a versatile model , describing local and collective , energy - conserving and energy - exchange processes .we present an approximate dynamics which describes the evolution of the reduced density matrix for all times , modulo an error term , where is the typical coupling strength between a single qubit and a single reservoir .the error term is controlled rigorously and for all times .the approximate dynamics is markovian and shows that different parts of the reduced density matrix evolve together , but independently from other parts .this partitioning of the density matrix into _ clusters _ induces a classification of decoherence times the time - scales during which a given cluster stays populated .we obtain explicitly the decoherence and relaxation times and show that their leading expressions ( lowest nontrivial order in ) is independent of the ultraviolet behaviour of the system , and in particular , independent of any ultraviolet cutoff , artificially needed to make the models mathematically well defined .we obtain analytical estimates on entanglement death and entanglement survival times for a class of initially entangled qubit states , evolving under the full , not explicitly solvable dynamics .we investigate numerically the phenomenon of entanglement creation and show that the approximate dynamics , even though it is 
markovian , _ does _ reveal creation , sudden death and revival of entanglement .we encounter in the numerical study a disadvantage of the approximation , namely that it is not positivity preserving , meaning that for small times , the approximate density matrix has slightly negative eigenvalues .the above - mentioned cluster - partitioning of the density matrix is valid for general -level systems coupled to reservoirs .we think this clustering will play a useful and important role in the analysis of quantum algorithms .indeed , it allows one to separate significant " from insignificant " quantum effects , especially when dealing with large quantum registers for performing quantum algorithms . depending on the algorithm , fast decay of some blocks of the reduced density matrix elementscan still be tolerable for performing the algorithm with high fidelity .we point out a further possible application of our method to novel quantum measuring technologies based on superconducting qubits .using two superconducting qubits as measuring devices together with the scheme considered in this paper will allow one to extract not only the special density of noise , but also possible quantum correlations imposed by the environment .modern methods of quantum state tomography will allow to resolve these issues .we take , , and .these conditions guarantee that the resonances do not overlap , see also . in the sequel , means equality modulo an error term which is homogeneous in .the main contribution of the dynamics in ( [ 42 ] ) is given as follows .{11 } & \doteq & \frac{1}{z}\frac{1}{\sqrt{e_1e_2}}\big\ { ( 1+{{\rm e}}^{-t\delta_2}e_2 + { { \rm e}}^{-t\delta_3}e_1 + { { \rm e}}^{-t\delta_4}e_1e_2)[\rho_0]_{11}\nonumber\\ & & \qquad+ ( 1-{{\rm e}}^{-t\delta_2 } + { { \rm e}}^{-t\delta_3}e_1 -{{\rm e}}^{-t\delta_4}e_1)[\rho_0]_{22}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2 -{{\rm e}}^{-t\delta_3}e_1 -{{\rm e}}^{-t\delta_4}e_2)[\rho_0]_{33}\nonumber\\ & & \qquad+ ( 1-{{\rm e}}^{-t\delta_2 } -{{\rm e}}^{-t\delta_3 } -{{\rm e}}^{-t\delta_4})[\rho_0]_{44}\big\ } \label{el11}\\ { } [ \rho_t]_{22 } & \doteq & \frac{1}{z}\sqrt{\frac{e_2}{e_1}}\big\ { ( 1-{{\rm e}}^{-t\delta_2}+{{\rm e}}^{-t\delta_3}e_1 -{{\rm e}}^{-t\delta_4}e_1)[\rho_0]_{11}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2^{-1 } + { { \rm e}}^{-t\delta_3}e_1 + { { \rm e}}^{-t\delta_4}e_1e_2^{-1})[\rho_0]_{22}\nonumber\\ & & \qquad+ ( 1-{{\rm e}}^{-t\delta_2 } -{{\rm e}}^{-t\delta_3 } + { { \rm e}}^{-t\delta_4})[\rho_0]_{33}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2^{-1 } -{{\rm e}}^{-t\delta_3 } -{{\rm e}}^{-t\delta_4}e_2^{-1})[\rho_0]_{44}\big\ } \label{el22}\\ { } [ \rho_t]_{33 } & \doteq & \frac{1}{z}\sqrt{\frac{e_1}{e_2}}\big\ { ( 1+{{\rm e}}^{-t\delta_2}e_2-{{\rm e}}^{-t\delta_3 } -{{\rm e}}^{-t\delta_4}e_2)[\rho_0]_{11}\nonumber\\ & & \qquad+ ( 1-{{\rm e}}^{-t\delta_2}-{{\rm e}}^{-t\delta_3}+{{\rm e}}^{-t\delta_4})[\rho_0]_{22}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2 + { { \rm e}}^{-t\delta_3}e_1^{-1 } -{{\rm e}}^{-t\delta_4}e_2e^{-1}_1)[\rho_0]_{33}\nonumber\\ & & \qquad+ ( 1-{{\rm e}}^{-t\delta_2 } + { { \rm e}}^{-t\delta_3}e_1^{-1 } -{{\rm e}}^{-t\delta_4}e_1^{-1})[\rho_0]_{44}\big\ } \label{el33}\\ { } [ \rho_t]_{44 } & \doteq & \frac{1}{z}\sqrt{e_1 e_2}\big\ { ( 1-{{\rm e}}^{-t\delta_2}-{{\rm e}}^{-t\delta_3 } + { { \rm e}}^{-t\delta_4})[\rho_0]_{11}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2^{-1}-{{\rm e}}^{-t\delta_3}-{{\rm e}}^{-t\delta_4}e_2^{-1})[\rho_0]_{22}\nonumber\\ & & \qquad+ ( 
1-{{\rm e}}^{-t\delta_2 } + { { \rm e}}^{-t\delta_3}e_1^{-1 } -{{\rm e}}^{-t\delta_4}e^{-1}_1)[\rho_0]_{33}\nonumber\\ & & \qquad+ ( 1+{{\rm e}}^{-t\delta_2}e_2^{-1 } + { { \rm e}}^{-t\delta_3}e_1^{-1 } + { { \rm e}}^{-t\delta_4}e_1^{-1}e_2^{-1})[\rho_0]_{44}\big\}. \label{el44}\end{aligned}\ ] ] here , of course , the populations do not depend on any energy - conserving parameter .the cluster of matrix elements evolves as {42 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_1}^{(1 ) } } \frac{e_2y_+}{1+e_2(y_+)^2 } \big\ { [ \rho_0]_{31 } + y_+[\rho_0]_{42}\big\}\nonumber\\ & & + { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_1}^{(2 ) } } \frac{e_2 y_-}{1+e_2(y_-)^2 } \big\ { [ \rho_0]_{31 } + y_-[\rho_0]_{42}\big\ } \label{el42},\\ { } [ \rho_t]_{31 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_1}^{(1 ) } } \frac{1}{1+e_2(y_+)^2 } \big\ { [ \rho_0]_{31 } + y_+[\rho_0]_{42}\big\}\nonumber\\ & & + { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_1}^{(2 ) } } \frac{1}{1+e_2(y_-)^2 } \big\ { [ \rho_0]_{31 } + y_-{}[\rho_0]_{42}\big\}. \label{el31}\end{aligned}\ ] ] here , ^{1/2 } , \label{100}\ ] ] where and the cluster of matrix elements evolves as {21 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_2}^{(1 ) } } \frac{1}{1+e_1(y'_+)^2 } \big\ { [ \rho_0]_{21 } + y'_+[\rho_0]_{43}\big\}\nonumber\\ & & + { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_2}^{(2 ) } } \frac{1}{1+e_1(y'_-)^2 } \big\ { [ \rho_0]_{21 } + y_-[\rho_0]_{43}\big\ } \label{el21},\\ { } [ \rho_t]_{43 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_2}^{(1 ) } } \frac{e_1 y'_+}{1+e_1(y'_+)^2 } \big\ { [ \rho_0]_{21 } + y'_+[\rho_0]_{43}\big\}\nonumber\\ & & + { { \rm e}}^{{{\rm i}}t\varepsilon_{2b_2}^{(2 ) } } \frac{e_1 y'_-}{1+e_1(y'_-)^2 } \big\ { [ \rho_0]_{21 } + y'_-{}[\rho_0]_{43}\big\}. \label{el43}\end{aligned}\ ] ] here , is the same as , but with all indexes labeling qubits 1 and 2 interchanged ( , in all coefficients involved in above ) .also , is obtained from by the same switch of labels .finally , {32 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2(b_1-b_2 ) } } [ \rho_0]_{32 } \label{el32}\\ { } [ \rho_t]_{41 } & \doteq & { { \rm e}}^{{{\rm i}}t\varepsilon_{2(b_1+b_2 ) } } [ \rho_0]_{41 } \label{el41}\end{aligned}\ ] ] with + 2{{\rm i}}\nu^2\sigma_f(0)\nonumber\\ & & + ( \lambda^2+\mu^2 ) [ r_g(b_1)-r_g(b_2)]\\ \varepsilon_{2(b_1+b_2 ) } & = & { { \rm i}}(\lambda^2 + \mu^2 ) [ \sigma_g(b_1)+\sigma_g(b_2 ) ] + 4{{\rm i}}\kappa^2 \sigma_f(0 ) + 2{{\rm i}}\nu^2\sigma_f(0)\nonumber\\ & & -(\lambda^2+\mu^2 ) [ r_g(b_1)+r_g(b_2)].\end{aligned}\ ] ]the equations above contain four independent coupling constants describing the energy - conserving and the energy exchanging ( local and collective ) interaction , and eight different functions of the form factors and : , , , , , ( [ unk ] ) .these functions are not independent .first of all it is easy to see that the following relation holds : moreover , choosing for instance a form factor one has : integrals in in eq .( [ unk ] ) converge only when adding a cut - off .it is easy to show that , when one has : and we can assume .so , we end up with four independent divergent integrals , in terms of which we can write explicitly the decay rates : and the lamb shifts , suppose now that both lamb shifts , and decay constants are _ experimentally measurable _ quantities , and also assume ( due to symmetry ) that .interaction constants can be renormalized in order to give directly decay constants and lamb shifts : t. yu and j. h. 
Eberly, Qubit disentanglement and decoherence via dephasing, _Phys. Rev. B_ *68*, 165322 (2003); Finite-time disentanglement via spontaneous emission, _Phys. Rev. Lett._ *93*, 140404 (2004); Sudden death of entanglement, _Science_ *323*, 598-601 (2009); Sudden death of entanglement: classical noise effects, _Optics Communications_ *264*, 393-397 (2005).
M. Steffen, M. Ansmann, R. McDermott, N. Katz, R. C. Bialczak, E. Lucero, M. Neeley, E. M. Weig, A. N. Cleland, and J. M. Martinis, State tomography of capacitively shunted phase qubits with high fidelity, _Phys. Rev. Lett._ *97*, 050502 (2006).
N. Katz, M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, E. Lucero, A. O'Connell, Reversal of the weak measurement of a quantum state in a superconducting phase qubit, _Phys. Rev. Lett._ *101*, 200401 (2008).
M. Merkli, I. M. Sigal, G. P. Berman, Decoherence and thermalization, _Phys. Rev. Lett._ *98*, 130401 (2007); Resonance theory of decoherence and thermalization, _Ann. Phys._ *323*, 373-412 (2008); Dynamics of collective decoherence and thermalization, _Ann. Phys._ *323*, 3091-3112 (2008).
|
We analyze rigorously the dynamics of the entanglement between two qubits which interact only through collective and local environments. Our approach is based on the resonance perturbation theory, which assumes a small interaction between the qubits and the environments. The main advantage of our approach is that the expressions for (i) characteristic time-scales, such as decoherence, disentanglement, and relaxation, and (ii) observables are not limited by finite times. We introduce a new classification of decoherence times based on clustering of the reduced density matrix elements. The characteristic dynamical properties such as creation and decay of entanglement are examined. We also discuss possible applications of our results for superconducting quantum computation and quantum measurement technologies.
|
the extremes of stationary processes , especially of gaussian processes , have attracted significant interest for a long time .many results are described in the books and , with shorter versions in and . roughly speaking, these results can be categorized as follows : the exact distributions of the suprema have been calculated for several particular processes ; bounds on the supremum distribution have been obtained for a large number of processes ; the asymptotic behavior of the level crossing probability has been studied for a larger number of processes . almost without exception ,however , these results deal with the value of the supremum , while very little is known about the random location of the supremum .the present work arises from an obvious attempt to understand the effect of stationarity of the process on the distribution of the location of the supremum . therefore in this paper , we look at stationary stochastic processes in continuous , one - dimensional time and we will consider the location of its global supremum over a compact interval .it turns out that answering even this , apparently simple question leads to unexpected insights .we now discuss our setup more formally .let be a stationary process .if the sample paths of the process are upper semi - continuous , then the process is bounded from above on any compact interval ] .it is , of course , entirely possible that the supremum of the process in the interval ] , we will denote by }= \min \bigl\ { t\in[a ,b]\dvtx x(t ) = \sup_{a\leq s\leq b}x(s ) \bigr\}.\ ] ] that is , } ] is achieved .it is elementary to check that )} ] the law of } ] .if , we have the corresponding single variable notation .the following statements are obvious .[ llocationvar ] for any , }(\cdot ) = f_{{\mathbf{x}},t}(\cdot-\delta ) .\vspace*{-9pt}\ ] ] for any intervals \subseteq[a , b] b \subset[c , d] ] in the sequel applies equally well to the rightmost supremum location , for instance , by considering the time - reversed stationary process . in some cases we will find it convenient to assume that the supremum is achieved at a unique location .formally , for we denote by the largest value of the process in the interval } \bigr\}.\]]=0 it is easy to see that is a measurable set .the following assumption says that , on a set of probability 1 , the supremum over interval ] is the unique point at which the supremum over the interval ] of the leftmost location of the supremum in that interval for any upper semi - continuous stationary process , as well as conditions this density has to satisfy .only one of the statements of the theorem requires assumption [ assu ] , in which case the statement applies to the unique location of the supremum .see remark [ rkzerodensity ] in the sequel .[ tdensityrcll ] let be a stationary sample upper semi - continuous process .then the restriction of the law to the interior of the interval is absolutely continuous .the density , denoted by , can be taken to be equal to the right derivative of the cdf , which exists at every point in the interval . in this casethe density is right continuous , has left limits , and has the following properties : the limits exist .the density has a universal upper bound given by assume that the process satisfies assumption [ assu ]. 
then the density is bounded away from zero , the density has a bounded variation away from the endpoints of the interval .furthermore , for every , where is the total variation of on the interval , and the supremum is taken over all choices of .the density has a bounded positive variation at the left endpoint and a bounded negative variation at the right endpoint .furthermore , for every , and where for any interval , is the positive ( negative ) variation of on the interval , and the supremum is taken over all choices of .the limit if and only if for some ( equivalently , any ) , in which case similarly , if and only if for some ( equivalently , any ) , in which case choose .we claim that for every , for every and every , this statement , once proved , will imply absolute continuity of on the interval and , since can be taken to be arbitrarily small , also on .further , ( [ eabsctybound ] ) will imply that the version of the density given by satisfies bound ( [ edensitybound ] ) .we proceed to prove ( [ eabsctybound ] ) .suppose that , to the contrary , ( [ eabsctybound ] ) fails for some and .choose and such that for , by stationarity , we have }\leq s+{\varepsilon } ) > { \varepsilon}(1+\rho)\max \biggl ( \frac1t,\frac{1}{t - t } \biggr ) .\ ] ] further , let .we check next that }\leq s_j+ { \varepsilon } , j=1,2 \}=\varnothing.\ ] ] indeed , let be the event in ( [ edisjoint ] ) .note that the intervals and are disjoint and , by the choice of the parameters and , each of these two intervals is a subinterval of both ] .therefore , on the event we can not have } ) < x ( \tau_{{\mathbf{x}},[s_2-t , s_2-t+t ] } ) \ ] ] for otherwise } ] . for the same reason , on the event we can not have } ) > x ( \tau_{{\mathbf{x}},[s_2-t , s_2-t+t ] } ) .\ ] ] finally , on the event we can not have } ) = x ( \tau_{{\mathbf{x}},[s_2-t , s_2-t+t ] } ) \ ] ] for otherwise } ] .this establishes ( [ edisjoint ] ) .we now apply ( [ eeachs ] ) and ( [ edisjoint ] ) to the points .we have }\leq s_i+{\varepsilon}\ } \biggr ) \\ & = & \sum_{i=0}^{\lceil(b - a)/{\varepsilon}\rceil-1 } p ( s_i < \tau_{{\mathbf{x}},[s_i - t , s_i - t+t]}\leq s_i+{\varepsilon } ) \\ & > & \frac{b - a}{{\varepsilon } } { \varepsilon}(1+\rho ) \max \biggl(\frac1t,\frac { 1}{t - t } \biggr ) \\& > & \bigl ( \min ( t , t - t ) - \theta \bigr ) ( 1+\rho)\max \biggl(\frac1 t , \frac{1}{t - t } \biggr ) \\ & > & \biggl ( 1- \frac{\delta}{\min ( t , t - t ) } \frac{\rho}{1+\rho } \biggr ) ( 1+\rho ) \geq \biggl ( 1- \frac{\rho}{1+\rho } \biggr ) ( 1+\rho ) = 1\end{aligned}\ ] ] by the choice of .this contradiction proves ( [ eabsctybound ] ) . before proceeding with the proof of theorem [ tdensityrcll ], we pause to prove the following important lemma .[ lkey ] let . then for every , almost everywhere in . furthermore ,for every such and every , such that , \\[-8pt ] \nonumber & & \qquad\leq \int_{{\varepsilon}_1}^{{\varepsilon}_1+\delta } f_{{\mathbf{x}},t}(t ) \,dt + \int_{t-\delta-{\varepsilon}_2+\delta}^{t-{\varepsilon}_2 } f_{{\mathbf{x}},t}(t ) \,dt .\end{aligned}\ ] ] we simply use lemma [ llocationvar ] . for any borel set have }\in b ) \\ & = & \int_b f_{{\mathbf{x}},[-\delta , t-\delta]}(t ) \,dt = \int _ b f_{{\mathbf{x}},t}(t+\delta ) \,dt , \end{aligned}\ ] ] which shows that almost everywhere in . 
for ( [ edensityvarint ] ) , notice that by lemma [ llocationvar ] , \bigr ) \\ & & \qquad\quad{}- p \bigl ( \tau_{{\mathbf{x}},t-\delta}\in[0,{\varepsilon}_1 ) \bigr ) - p ( \tau_{{\mathbf{x}},t-\delta}\in(t-\delta-{\varepsilon}_2 , t-\delta ] \bigr ) \\ & & \qquad= p \bigl ( \tau_{{\mathbf{x}},t}\in({\varepsilon}_1,{\varepsilon}_1 + \delta ) \bigr ) + \bigl ( p \bigl ( \tau_{{\mathbf{x}},t}\in[0,{\varepsilon}_1 ) \bigr)- p \bigl ( \tau_{{\mathbf{x}},t-\delta}\in[0,{\varepsilon}_1 ) \bigr ) \bigr ) \\ & & \qquad\quad{}+ p \bigl ( \tau_{{\mathbf{x}},t}\in(t-\delta-{\varepsilon}_2+\delta , t- { \varepsilon}_2 ) \bigr ) \\ & & \qquad\quad{}+ \bigl ( p\bigl ( \tau_{{\mathbf{x}},t}\in(t-{\varepsilon}_2,t ] \bigr ) - p \bigl ( \tau_{{\mathbf{x}},[\delta , t]}\in(t-{\varepsilon}_2,t ] \bigr ) \bigr ) \\ & & \qquad\leq p \bigl ( \tau_{{\mathbf{x}},t}\in({\varepsilon}_1,{\varepsilon}_1 + \delta ) \bigr ) + p \bigl ( \tau_{{\mathbf{x}},t}\in(t-\delta-{\varepsilon}_2 + \delta , t-{\varepsilon}_2 ) \bigr ) \\ & & \qquad= \int_{{\varepsilon}_1}^{{\varepsilon}_1+\delta } f_{{\mathbf{x}},t}(t ) \,dt + \int _ { t-\delta-{\varepsilon}_2+\delta}^{t-{\varepsilon}_2 } f_{{\mathbf{x}},t}(t ) \,dt\end{aligned}\ ] ] as required .we return now to the proof of theorem [ tdensityrcll ] .our next goal is to prove that the cdf is right differentiable at every point in the interval . since we already know that is absolutely continuous on , the set has lebesgue measure zero .define next we claim that the set is at most countable . to see this ,we define for our claim about set will follow once we check that for any and , the set is finite . in fact , we will show that the cardinality of can not be larger than . if not , let and find points .choose so small that and let now and choose a sequence , such that .consider so large that , and let be an integer .we have and for each as in the sum .\ ] ] therefore , by lemma [ llocationvar ] as . letting , we conclude that similarly , for choose a sequence , such that . for large and have for each as in the sum above , .\ ] ] therefore , by lemma [ llocationvar ] , letting , once again , first and then , we conclude that now we use the estimate in lemma [ lkey ] as follows . by the definition of the point and the smallness of , using the fact that and that , by lemma [ lkey ] , the integrand above is a.e .nonnegative , we have by the estimate in that lemma that the integral above does not exceed applying the already proved ( [ edensitybound ] ) , we conclude that and this contradicts the assumption that we can choose .this proves that the set in ( [ eb ] ) is at most countable .we notice , further , that \\[-8pt ] \nonumber & = & \lim_{s\downarrow t } \frac{1}{s - t } \int_t^s f_{{\mathbf{x}},t}(w ) \,dw = \lim_{w\downarrow t ,w\in a^c\setminus b}f_{{\mathbf{x}},t}(w)\end{aligned}\ ] ] for every [ recall the set is defined in ( [ ea ] ) ]. now we are ready to prove that the right derivative of the cdf exists at every point in the interval .suppose , to the contrary , that this is not so .then there is and real numbers such that this implies that there is a sequence with for each such that we can and will choose so close to that .notice that by ( [ enotab ] ) , for every there is such that for let now , and consider so small that both and . observe that and for every point , each one of the intervals , contains at least one of the points in the finite sequence . 
by construction , apart from a set of points of measure zero , those points of the kind that fall in the odd - numbered intervals satisfy , and those points that fall in the even - numbered intervals satisfy .we conclude that a.e . in .therefore , for all small enough , and , since can be taken arbitrarily large , we conclude that we will see that this is , however , impossible , and the resulting contradiction will prove that the right derivative of the cdf exists at every point in the interval . indeed , recall that by lemma [ lkey ] , for all small enough , therefore , for such , since , by another application of lemma [ lkey ] , the integrand is a.e .nonnegative over the range of integration . applying ( [ edensityvarint ] ), we see that however , we already know that the density is bounded on any subinterval of that is bounded away from both endpoints . therefore , the upper bound obtained above shows that ( [ eaveragelarge ] ) is impossible. hence the existence of the right derivative everywhere , which then coincides with the version of the density chosen above .next we check that this version of the density is right continuous . to this endwe recall that we already know that the set in ( [ ea ] ) is empty .next , we rule out existence of a point such the limit of as over does not exist .suppose that , to the contrary , that such exists .this means that there are real numbers and a sequence with for each such that however , we have already established that such a sequence can not exist . as in ( [ enotab ] ) , we see that for every and since the set is at most countable , the restriction to in the above limit statement can be removed .this proves right continuity of the version of the density given by the right derivative of .the proof of existence of left limits is similar .next , we address the variation of the version of the density we are working with away from the endpoints of the interval .let .we start with a preliminary calculation .let .introduce the notation so that to estimate the two terms we will once again use lemma [ lkey ] . since for large enough , for such , we have the upper bound we now once again use ( [ edensityvarint ] ) to conclude that for all large , we have so that similarly , by lemma [ lkey ] , for large enough , and we obtain , for such , using ( [ edensityvarint ] ) this can , in turn , be bounded from above both by and by therefore , overall , we have proved that \\[-8pt ] \nonumber & & \qquad\leq\min \bigl ( f_{{\mathbf{x}},t}(t_1 ) , f_{{\mathbf{x}},t}(t_1- ) \bigr ) + f_{{\mathbf{x}},t}(t_2 ) .\end{aligned}\ ] ] to relate ( [ evarboundint ] ) to the total variation of the density over the interval , we notice first that by the right continuity of the density , it is enough to consider the regularly spaced points , where for some write and observe that uniformly in .therefore , by ( [ evarboundint ] ) now bound ( [ etvaway ] ) follows from the obvious fact that furthermore , the proof of ( [ etvleft ] ) and ( [ etvright ] ) is the same as the proof of ( [ etvaway ] ) , with each one using one side of the two - sided calculation performed above for ( [ etvaway ] ) .next , the boundedness of the positive variation of the density at zero , clearly , implies that the limit exists , while the boundedness of the negative variation of the density at implies that the limit exists as well . if for some , then , trivially , . 
on the other hand ,if , then the same argument as we used in proving ( [ etvaway ] ) , shows that for any , which , together with ( [ etvleft ] ) , both shows that and proves ( [ etvleft1 ] ). one can prove the statement of part ( f ) of the theorem concerning the behavior of the density at the right endpoint of the interval in the same way .it only remains to prove part ( c ) of the theorem , namely the fact that the version of the density given by the right derivative of the cdf is bounded away from zero .recall that assumption [ assu ] is in effect here .suppose , to the contrary , that ( [ edensitylowerbound ] ) fails and introduce the notation clearly , .we claim that we start with the case . notice that , in this case , by ( [ etvaway ] ) the density is constant on the interval .if , then by the right continuity of the density , the constant must be equal to zero , so ( [ edensityvanish ] ) is immediate . if , then given , choose such that . by ( [ etvaway ] )we know that , which implies that on , hence also on .letting proves ( [ edensityvanish ] ) .if either and/or , then ( [ edensityvanish ] ) can be proved using a similar argument , and the continuity of the density at 0 and at shown in part ( a ) of the theorem .furthermore , we also have with the obvious conventions in the case coincide with one of the endpoints of the interval .it follows from ( [ edensityvanish ] ) , ( [ edensityvanish1 ] ) and lemma [ lkey ] that for any , furthermore , we know by lemma [ llocationvar ] that \bigr)\leq f_{{\mathbf{x}},t}\bigl([0,t_1]\bigr)\ ] ] and \bigr)\leq f_{{\mathbf{x}},t}\bigl([t_2,t]\bigr ) .\ ] ] note that for all the quantities in the above equations refer to the leftmost location of the supremum , which is no longer assumed to be unique .since the distributions and have equal total masses ( equal to one ) , it follows from ( [ edensvanishbig ] ) , ( [ eleftcomp ] ) and ( [ erightcomp ] ) that the latter two inequalities must hold as equalities for all relevant sets .we concentrate on the resulting equation \bigr)= f_{{\mathbf{x}},t}\bigl([t_2,t]\bigr ) .\ ] ] since we are working with the leftmost supremum location on a larger interval , we can write for \bigr)&= & p \bigl ( \tau_{{\mathbf{x}},[-\delta , t ] } \in[t_2,t ] \bigr ) \\ & & { } + p \bigl(\tau_{{\mathbf{x}},t}\in[t_2,t ] , \tau_{{\mathbf{x}},[-\delta , t ] } \in [ - \delta,0 ) \bigr).\end{aligned}\ ] ] using lemma [ llocationvar ] and ( [ eequality ] ) , we see that , \tau_{{\mathbf{x}},[-\delta , t ] } \in [ - \delta,0 ) \bigr ) = 0 , \ ] ] which implies that if , then , \sup_{-\delta\leq t\leq -\delta+t - t_2}x(t)\geq\sup_{t_2\leq t\leq t}x(t ) \bigr)=0 .\ ] ] pick . using ( [ enotlargerleft ] ) with see that \bigr\} n=1,2,\ldots, ] .therefore , with , we have for .next we describe what extra restrictions on the distribution of the location of the supremum , in addition to the statements of theorem [ tdensityrcll ] , assumption [ assl ] of section [ secassumptions ] imposes . again, one of the statements of the theorem requires assumption [ assu ] . 
see remark [ rknomass ] for a discussion .[ tdensityl ]let be a stationary sample upper semi - continuous process , satisfying assumption [ assl ] .then the version of the density of the leftmost location of the supremum in the interval ] with probability 1 .we first prove that } ) \not= x ( \tau_{{\mathbf{x}},t } ) \bigr)=0 .\ ] ] by symmetry , it is enough to prove the one - sided claim } ) < x ( \tau_{{\mathbf{x}},t } ) \bigr)=0 .\ ] ] indeed , suppose , to the contrary , that the probability in ( [ esmallnoteq1 ] ) is positive . under assumption [ assu ] we can use the continuity from below of measures to see that there is such that } ) + { \varepsilon } , x ( \tau_{{\mathbf{x}},t } ) > \max_{t\in l_t , t\not= \tau_{{\mathbf{x}},t}}x(t)+{\varepsilon}\bigr)>0 .\ ] ] here is the ( a.s .finite ) set of the local maxima of in the interval .next , by the uniform continuity of the process on ] . by stationarity , this contradicts the assumption .this contradiction proves ( [ esmallnoteq1 ] ) and , hence , also ( [ esmallnoteq ] ) .next , we check that } ) = x ( \tau_{{\mathbf{x}},t } ) , \tau_{{\mathbf{x}},[t,2t]}-\tau_{{\mathbf{x}},t}<t \bigr)=0 .\ ] ] indeed , suppose that , to the contrary , the probability above is positive . by the continuity from below of measures , there is such that } ) = x ( \tau_{{\mathbf{x}},t } ) , \tau_{{\mathbf{x}},[t,2t]}-\tau_{{\mathbf{x}},t}<t-{\varepsilon}\bigr)>0 .\ ] ] take . by the law of total probability there are such that , where } ) = x ( \tau_{{\mathbf{x}},t } ) , \tau_{{\mathbf{x}},[t,2t]}-\tau_{{\mathbf{x}},t}<t-{\varepsilon } , \\ & & \hspace*{5pt}{}(i_1 - 1)t / n<\tau_{{\mathbf{x}},t}<i_1t / n,\\ & & \hspace*{5pt}{}t+(i_2 - 1)t / n<\tau_{{\mathbf{x}},[t,2t]}<t+i_2t / n \bigr\}.\end{aligned}\ ] ] by the choice of , , so that , on the event , the process has at least two points , and } ] is achieved . by stationarity , this contradicts assumption [ assu ] .this contradiction proves ( [ esmalleq ] ) .finally , we check that } ) = x ( \tau_{{\mathbf{x}},t } ) , \tau_{{\mathbf{x}},[t,2t]}-\tau_{{\mathbf{x}},t}>t \bigr)=0 .\ ] ] the proof is similar to the proof of ( [ esmallnoteq1 ] ) , so we only sketch the argument .suppose that , to the contrary , the probability in ( [ elargeeq ] ) is positive .use the continuity of measures to see that the probability remains positive if we require that }-\tau_{{\mathbf{x}},t}>t+{\varepsilon} ] . by stationarity , this contradicts the assumption . 
combining ( [ esmallnoteq ] ) , ( [ esmalleq ] ) and ( [ elargeeq ] ), we see that the assumption implies that } ) = x ( \tau_{{\mathbf{x}},t } ) , \tau_{{\mathbf{x}},[t,2t]}-\tau_{{\mathbf{x}},t}=t \bigr)=1.\ ] ] let .we have by stationarity , } \in(a , b ) \bigr ) \\ & = & p \bigl ( \tau_{{\mathbf{x}},[a , a+t]}\in(a , b ) , \tau_{{\mathbf{x}},t}\in(0,a ) \bigr ) \\ & & { } + p \bigl ( \tau_{{\mathbf{x}},[a , a+t]}\in(a , b ) , \tau_{{\mathbf{x}},t}\in(a , t ) \bigr ) .\end{aligned}\ ] ] by ( [ eallequal ] ) , if , then }\in(t , t+a) ] .therefore , the first term in the right - hand side above vanishes .similarly , by ( [ eallequal ] ) , if , then }\in(t+a,2t) ] .therefore , for any , which proves the uniformity of the distribution of .[ rknomass ] a simple special case of the process in remark [ rkzerodensity ] shows that the statement of part ( b ) of theorem [ tdensityl ] may fail without assumption [ assu ] .we take , for clarity , a specific function .let for and extend to a periodic function with period .then for any , the leftmost location of the supremum in the interval ] , so that both statements of theorem [ tdensityl ] fail for this process .the upper bounds in part ( b ) of theorem [ tdensityrcll ] turn out to be the best possible pointwise , as is shown in the following result .[ pruppergen ] for each and any number smaller than the upper bound given in ( [ edensitybound ] ) , there is a sample continuous stationary process satisfying assumptions [ assu ] and [ assl ] for which the right continuous version of the density of the supremum location at time exceeds that number . to this end , let and let be an integer .we define a periodic function with period by defining its values on the interval ] , and is located in the interval .this contributes to the value of the density at each point of the interval . in particular , since , since we can take arbitrarily large , the value of the density can be arbitrarily close to , and since can be taken arbitrarily close to , the value of the density can be arbitrarily close to .suppose now that the stationary process is time reversible , that is , if .that would , obviously , be the case for stationary gaussian processes .if the process satisfies also assumption [ assu ] , then the distribution of the unique supremum location is symmetric in the interval ] , it turns out that the bounded variation property in part ( d ) of theorem [ tdensityrcll ] provides a better bound in this symmetric case .this bound and its optimality , even within the class of stationary gaussian processes , is presented in the following result .[ csymmdensitybound ] let be a time reversible stationary sample upper semi - continuous process satisfying assumption [ assu ] .then the density of the unique location of the supremum in the interval ] such that , and there is a continuity point of the density ] gives us letting and recalling that and , we obtain since , this implies that if , then the largest value of the right - hand side of ( [ etargetfunction ] ) under the constraint ( [ eboundary ] ) requires taking as large as possible and as small as possible .taking and in ( [ etargetfunction ] ) results in the upper bound given in ( [ esymmdensitybound ] ) in this range .if , then the largest value of the right - hand side of ( [ etargetfunction ] ) under the constraint ( [ eboundary ] ) requires taking as small as possible and as large as possible . 
by ( [ eboundary1 ] ) , we have to take , in ( [ etargetfunction ] ) , which results in the upper bound given in ( [ esymmdensitybound ] ) in this case .it remains to prove the optimality part of the statement of the corollary . by symmetryit is enough to consider .fix such .let be a small number and be a large number , rationally independent of .consider a stationary gaussian process given by where are i.i.d .standard normal random variables .the process is , clearly , sample continuous , and it satisfies assumption [ assl ] .furthermore , rational independence of and implies that , on a set of probability 1 , the process has different values at all of its local maxima , hence assumption [ assu ] is satisfied for any .note that we can write where and have the density on , and and are uniformly distributed between 0 and , with all 4 random variables being independent .clearly , the leftmost location of the supremum of the process is at which is uniformly distributed between 0 and . on the event the process is decreasing on ] .if the supremum of the sum remained at , the density of that unique supremum would be at least at each point of the interval . since as , the value of the density at would exceed any value smaller than after taking large and small .the location of the supremum of the sum does not remain at but , instead , moves to defined by for large , is nearly identical to , and straightforward but somewhat tedious calculus based on the implicit function theorem shows that the above statement remains true for : the contribution of the event to the density of the unique supremum of the process would exceed any value smaller than at any point of the interval after taking large and small .we omit the details .we have shown the optimality of the upper bound given in ( [ esymmdensitybound ] ) in the case .it remains to consider the case .we will use again a two - wave stationary gaussian process , but with a slightly different twist .let be a small number , a large number and a fixed number that is rationally independent of .consider a stationary gaussian process given by where and are as above .as above , is a sample continuous gaussian process satisfying assumptions [ assl ] and [ assu ] . nowthe leftmost location of the supremum of the process is at which is uniformly distributed between 0 and .further , if , then is the unique supremum of in the interval ] satisfying for large , is nearly identical to , and as above , using the implicit value theorem allows us to conclude that , for any value smaller than , the value of the density of in the interval exceeds that value after taking small and large .this proves the optimality of the upper bound given in ( [ esymmdensitybound ] ) in all cases .
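The non-uniformity established above is easy to see numerically. The following Monte Carlo sketch (ours, not taken from the paper) estimates the distribution of the leftmost supremum location for a stationary Gaussian process built from two rationally independent frequencies, in the spirit of the two-wave processes used in the optimality arguments; the frequencies, interval length, grid resolution and sample size are illustrative assumptions, and ties of the discretised maximiser are broken to the left.

```python
import math
import random

# Two-wave stationary Gaussian process (an illustrative choice, not the paper's exact example):
#   X(t) = Z1*cos(t) + Z2*sin(t) + Z3*cos(LAM*t) + Z4*sin(LAM*t),
# with Z1,...,Z4 i.i.d. standard normal and LAM rationally independent of 1,
# so that ties of the supremum are not a concern in practice.
LAM = math.sqrt(2.0)     # irrational frequency ratio
T = 1.0                  # observation interval [0, T]
GRID = 2000              # discretisation used to approximate the argmax
SAMPLES = 20000
BINS = 20

def leftmost_argmax():
    z = [random.gauss(0.0, 1.0) for _ in range(4)]
    best_t, best_x = 0.0, -float("inf")
    for i in range(GRID + 1):
        t = T * i / GRID
        x = z[0]*math.cos(t) + z[1]*math.sin(t) + z[2]*math.cos(LAM*t) + z[3]*math.sin(LAM*t)
        if x > best_x:   # strict inequality keeps the *leftmost* maximiser
            best_t, best_x = t, x
    return best_t

hist = [0] * BINS
for _ in range(SAMPLES):
    b = min(int(leftmost_argmax() / T * BINS), BINS - 1)
    hist[b] += 1

# Empirical density per bin; a uniform law would give the constant 1/T = 1.0.
# The two extreme bins also pick up any atoms at the endpoints; the interior
# bins show the absolutely continuous part discussed above.
for b, count in enumerate(hist):
    print("[%.2f, %.2f): %.2f" % (b*T/BINS, (b+1)*T/BINS, count / SAMPLES * BINS / T))
```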
|
It is, perhaps, surprising that the location of the unique supremum of a stationary process on an interval can fail to be uniformly distributed over that interval. We show that this distribution is absolutely continuous in the interior of the interval and describe the very specific conditions the density has to satisfy. We establish universal upper bounds on the density and demonstrate their optimality.
|
variational integrators have mainly been developed and used for a wide variety of mechanical systems . however ,real - life systems are generally not of purely mechanical character .in fact , more and more systems become multidisciplinary in the sense , that not only mechanical parts , but also electric and software subsystems are involved , resulting in mechatronic systems .since the integration of these systems with a unified simulation tool is desirable , the aim of this work is to extend the applicability of variational integrators to mechatronic systems . in particular , as the first step towards a unified simulation , we develop a variational integrator for the simulation of electric circuits . [ [ overview ] ] overview + + + + + + + + variational integrators are based on a discrete variational formulation of the underlying system , e.g. based on a discrete version of hamilton s principle for conservative mechanical systems .the resulting integrators given by the discrete euler - lagrange equations are symplectic and momentum - preserving and have an excellent long - time energy behavior . choosing different variational formulations ( e.g. hamilton , lagrange - dalembert , hamilton - pontryagin , etc . ) , variational integrators have been developed for classical conservative mechanical systems ( for an overview see ) , forced and controlled systems , constrained systems ( holonomic and nonholonomic systems ) , nonsmooth systems , stochastic systems , and multiscale systems .most of these systems share the assumption , that they are non - degenerate , i.e. the legendre transformation of the corresponding lagrangian is a diffeomorphism . applying hamilton s principle to a regular lagrangian system , the resulting euler - lagrange equations are ordinary differential equations of second order and equivalent to hamilton s equations .the lagrangian formulation for lc circuits is based on the electric and magnetic energies in the circuit and the interconnection constraints expressed in the kirchhoff laws .there exist a large variety of different approaches for a lagrangian or hamiltonian formulation of electric circuits ( see e.g. and references therein ) .all of theses authors treat the question of which choice of the lagrangian coordinates and derivatives is the most appropriate one .several settings have been proposed and analyzed , e.g. a variational formulation based on capacitor charges and currents , on inductor fluxes and voltages , and a combination of both settings , as well as formulations based on linear combinations of the charges and flux linkages . typically , one wants to find a set of generalized coordinates , such that the resulting lagrangian is non - degenerate .however , within such a formulation , the variables are not easily interpretable in terms of original terms of a circuit .a recently - considered alternative formulation is based on a redundant set of coordinates resulting in a lagrangian system for which the lagrangian is degenerate . for a degenerate lagrangian system ,i.e. the legendre transform is not invertible , the euler - lagrange equations involve additional hidden algebraic constraints .then , the equations do not have a unique solution , and additional constraints are required for unique solvability of the system . for the circuit case ,these are provided by the kirchhoff current law ( kcl ) . from a geometric point of view, the kcl provides a constraint distribution that induces a _ dirac structure _ for the degenerate system . 
the associated system is then denoted by an _ implicit lagrangian system_. in and , it was shown that nonholonomic mechanical systems and lc circuits as degenerate lagrangian systems can be formulated in the context of induced dirac structures and associated implicit lagrangian systems .the variational structure of an implicit lagrange system is given in the context of the hamiltonian - pontryagin - dalembert principle , as shown in .the resulting euler - lagrange equations are called the _ implicit euler - lagrange equations _ , which are semi - explicit differential - algebraic equations that consist of a system of first order differential equations and an additional algebraic equation that constrains the image of the legendre transformation ( called the _ set of primary constraints _ ) .thus , the modeling of electric circuits involves both primary constraints as well as constraints coming from kirchhoff s laws . in , an extension towards the interconnection of implicit lagrange systems for electric circuitsis demonstrated . for completeness, we have to mention that the corresponding notion of implicit hamiltonian systems and implicit hamiltonian equations was developed earlier by . an intrinsic hamiltonian formulation of dynamics of lc circuits as well as interconnections of dirac structures have been developed , e.g. in and , respectively .there are only a few works dealing with the variational simulation of degenerate systems , e.g. in , variational integrators with application to point vertices as a special case of degenerate lagrangian system are developed .although there exists a variety of different variational formulations for electric circuits , variational integrators for their simulation have not been concretely investigated and applied thus far . in ,a framework for the description of the discrete analogues of implicit lagrangian and hamiltonian systems is proposed .this framework is the foundation for the development of an integration scheme .however , no concrete simulation scenarios have yet been performed .furthermore , the discrete formulation of the variational principle is slightly different from the approach presented in this work , thus resulting in a different scheme . [ [ contribution ] ]contribution + + + + + + + + + + + + in this work , we present a unified variational framework for the modeling and simulation of electric circuits .the focus of our analysis is on the case of ideal linear circuit elements , consisting of inductors , capacitors , resistors and voltage sources .however , this is not a restriction of this approach , and the variational integrators can also be developed for nonlinear circuits , which is left for future work .a geometric formulation of the different possible state spaces for a circuit model is introduced .this geometric view point forms the basis for a variational formulation .rather than dealing with dirac structures , we work directly with the corresponding variational principle , where we follow the approach introduced in . when considering the dynamics of an electric circuit , one is faced with three specific situations that lead to a special treatment within the variational formulation and thus the construction of appropriate variational integrators : 1 .the system involves external ( control ) forcing through external ( controlled ) voltage sources .the system is constrained via the kirchhoff current ( kcl ) and voltage laws ( kvl ) .the lagrangian is degenerate leading to primary constraints . 
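Before turning to how each of these three situations is handled, the third point can be made concrete with a small symbolic check. The sketch below is ours (it is not code from the paper): for a single LC loop described by the branch charges q_L and q_C, the Hessian of the circuit Lagrangian with respect to the branch currents is singular, the momentum conjugate to the capacitor charge vanishes identically (a primary constraint), and it is the KCL constraint of the loop that supplies the missing equation.

```python
import sympy as sp

# Branch charges/currents of one inductor and one capacitor forming a single loop.
qL, qC, dqL, dqC = sp.symbols('q_L q_C qdot_L qdot_C', real=True)
L_ind, C_cap = sp.symbols('L C', positive=True)

# Circuit Lagrangian: magnetic energy of the inductor minus electric energy of the capacitor.
Lagr = sp.Rational(1, 2) * L_ind * dqL**2 - qC**2 / (2 * C_cap)

# Hessian with respect to the branch currents: the degenerate "kinetic metric".
hess = sp.Matrix([[sp.diff(Lagr, a, b) for b in (dqL, dqC)] for a in (dqL, dqC)])
print(hess)        # Matrix([[L, 0], [0, 0]])
print(hess.det())  # 0  -> the Legendre transform is not invertible

# Conjugate momenta: p_L is the inductor flux linkage, while p_C = 0 is a primary constraint.
print(sp.diff(Lagr, dqL), ',', sp.diff(Lagr, dqC))   # L*qdot_L , 0

# The KCL constraint of the loop (with a consistent orientation), qdot_L - qdot_C = 0,
# is what closes the equations of motion for the degenerate system.
print(sp.Eq(dqL - dqC, 0))
```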
for the treatment of forced systems ,the lagrange - dalembert principle is the principle of choice .involving constraints , one has to consider constrained variations resulting in a constrained principle .the degeneracy requires the use of the pontryagin version ; thus , the principle of choice is the constrained _ lagrange - dalembert - pontryagin principle _ .two variational formulations are considered : first , a constrained variational formulation is introduced for which the kcl constraints are explicitly given as algebraic constraints , whereas the kvl are given by the resulting euler - lagrange equations .second , an equivalent reduced constrained variational principle is developed , for which the kcl constraints are eliminated due to a representation of the lagrangian on a reduced space . in thissetting , the charges and flux linkages are the differential variables , whereas the currents play the role of algebraic variables .the number of inductors in the circuit and the circuit topology determine the degree of degeneracy of the system .for the reduced version , we show for which cases the degeneracy of the system is canceled via the kcl constraints . based on the variational formulation, a variational integrator for electric circuits can be constructed . for the case of a degenerate system ,the applicability of the variational integrator is dependent on the choice of discretization .based on the type and order of the discretization , the degeneracy of the continuous system is canceled for the resulting discrete scheme .three different integrators and their applicability to different electric circuits are investigated .the generality of a unified geometric ( and discrete ) variational formulation is advantageous for the analysis for very complex circuits in particular .using the geometric approach , the main structure - preserving properties of the ( discrete ) lagrangian system can be derived . in particular ,good energy behavior and preservation of the spectrum of high frequencies of the solutions can be observed .furthermore , preserved momentum maps due to symmetries of the lagrangian system can be derived .going one step further , we extend the approach to a stochastic and multiscale setting . due to the variational framework, the resulting stochastic integrator will well capture the statistics of the solution ( see for instance ) , and the resulting multiscale integrator will still be variational . [ [ outline ] ] outline + + + + + + + in section [ sec : elecirc ] , we first review the basic notation for electric circuits followed by a graph representation to describe the circuit topology . in addition , we introduce a geometric formulation that gives an interpretation of the different state spaces of a circuit model . based on the geometric view point , the two ( reduced and unreduced ) variational formulations are derived in section [ sec : varcirc ] . the equivalence of both formulations as well as conditions for obtaining a non - degenerate reduced system are proven . in section[ sec : disvar ] , the construction of different variational integrators for electric circuits is described and conditions for their applicability are derived .the main structure - preserving properties of the lagrangian system and the variational integrator are summarized in section [ sec : structure ] . in section [ sec : noise ] , the approach is extended for the treatment of noisy circuits . 
in section [ sex : example ] , the efficiency of the developed variational integrators is demonstrated by means of numerical examples . a comparison with standard circuit modeling and circuit integrators is given .in particular , the applicability of the multiscale method flavor is demonstrated for a circuit with different time scales .considering an electric circuit , we introduce the following notations ( following ) : a _ node _ is a point in the circuit where two or more elements meet .a _ path _ is a trace of adjacent elements , with no elements included more than once .a _ branch _ is a path that connects two nodes .a _ loop _ is a path that begins and ends at the same node .a _ mesh _ ( also called _ fundamental loop _ ) is a loop that does not enclose any other loops .a _ planar circuit _ is a circuit that can be drawn on a plane without crossing branches .let be the time - dependent charges , the currents and voltages of the circuit elements with $ ] , where are the corresponding quantities through the inductors , the capacitors , the resistors , and the voltage sources .in addition , we give each of those devices an assumed current flow direction . in table [ tab : chars ] , the characteristic equations for basic elements are listed ..characteristic equations for basic circuit elements . [ cols="<,<,<",options="header " , ]in this contribution , we presented a unified framework for the modeling and simulation of electric circuits .starting with a geometric setting , we formulate a unified variational formulation for the modeling of electric circuits .analogous to the formulation of mechanical systems , we define a degenerate lagrangian on the space of branches consisting of electric and magnetic energy , dissipative and external forces that describe the influence of resistors and voltage sources as well as ( non-)holonomic constraints given by the kcl of the circuit .the lagrange - dalembert - pontryagin principle is used to derive in a variational way the implicit euler - lagrange equations being differential - algebraic equations to describe the system s dynamics .a reduced version on the space of meshes is presented that is shown to be equivalent to the original system and for which under some topology assumptions the degeneracy of the lagrangian is canceled .based on the reduced version , a discrete variational approach is presented that provides different variational integrators for the simulation of circuits . in particular , the generated integrators are symplectic , preserve momentum maps in presence of symmetries and have good long - time energy bahvior .furthermore , we observe that the spectrum of high frequencies is especially better preserved compared to simulations using runge - kutta or bdf methods . having the variational framework for the model and the simulation , extensions of the approach using already - existing types of different variational integratorscan be easily accomplished . as an example, we presented the extension for the simulation of noisy circuits using stochastic variational integrator approaches as well as multiscale methods ( in particular flavors ) for an efficient treatment of circuits with multiple time scales . 
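To make the energy statement above concrete, here is a small self-contained comparison (our own sketch with illustrative parameter values, not the scheme or the examples of the paper): a midpoint discrete Lagrangian for a single undamped LC loop yields, via the discrete Euler-Lagrange equations, an explicit two-step variational scheme, whereas implicit (backward) Euler, the one-step member of the BDF family, damps the energy artificially. For this linear example the midpoint energy is in fact conserved up to round-off; for nonlinear circuits one expects small bounded oscillations rather than drift.

```python
import math

# Single LC loop with mesh charge q:  Lagrangian  L(q, qdot) = 0.5*L_ind*qdot**2 - q**2/(2*C).
# Midpoint discrete Lagrangian
#   L_d(q0, q1) = h*( 0.5*L_ind*((q1 - q0)/h)**2 - ((q0 + q1)/2)**2/(2*C) )
# and the discrete Euler-Lagrange equations  D2 L_d(q0, q1) + D1 L_d(q1, q2) = 0
# give the explicit two-step recurrence used below.

L_ind, C = 1e-3, 1e-6                       # 1 mH, 1 uF (illustrative values)
omega = 1.0 / math.sqrt(L_ind * C)
h = 0.05 / omega                            # time step: a small fraction of one period
steps = 20000
q_init = 1e-6                               # initially charged capacitor, zero current

def energy(q, i):
    return 0.5 * L_ind * i * i + q * q / (2.0 * C)

E0 = energy(q_init, 0.0)

# (a) variational (midpoint) integrator
a, b = L_ind, h * h / (4.0 * C)
q_prev, q_curr = q_init, q_init             # equal charges <=> zero initial current
worst = 0.0
for _ in range(steps):
    q_prev, q_curr = q_curr, (2.0 * (a - b) * q_curr - (a + b) * q_prev) / (a + b)
    e = energy(0.5 * (q_prev + q_curr), (q_curr - q_prev) / h)
    worst = max(worst, abs(e - E0) / E0)
print("variational scheme: max relative energy error = %.2e (no drift)" % worst)

# (b) implicit (backward) Euler = BDF1 on the first-order form  q' = i,  i' = -q/(L*C)
w2 = 1.0 / (L_ind * C)
q, i = q_init, 0.0
for _ in range(steps):
    i = (i - h * w2 * q) / (1.0 + h * h * w2)
    q = q + h * i
print("implicit Euler:     E/E0 = %.2e after %d steps (artificial damping)" % (energy(q, i) / E0, steps))
```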
in the future, we will extend the approach to the simulation and analysis of more complicated nonlinear and magnetic circuits that might include nonlinear inductors , capacitors , resistors and transistors .since a variational formulation in terms of energies , forces and constraints is still valid for the nonlinear case , the presented integrators will be derived and applied in straight forward way .furthermore , the inclusion of controlled sources allows for the consideration of optimal control problems for circuits for which techniques also based on a variational formulation can be easily applied ( see e.g. ) .the variational simulation of combined mechanical and electric ( electro - mechanical ) systems is the natural next step towards the development of a unified variational modeling and simulation method for mechatronic systems .furthermore , at nanoscales , thermal noise and electromagnetic interactions become an essential component of the dynamic of electric circuits . we plan to investigate the coupling of variational integrators for circuits with multi - symplectic variational integrators for em fields and continuum mechanics ( see e.g. ) to produce a robust structure - preserving numerical integrator for microelectromechanical and nanoelectromechanical systems .recently , mike giles has developed a multilevel monte carlo method for differential equations with stochastic forcings that shows huge computation accelerations , 100 times in some cases .the extension of the current method to multilevel stochastic variational integrators is straight forward and may further accelerate computation dramatically , especially for multiscale problems , while preserving certain properties of the circuit network .this contribution was partly developed and published in the course of the collaborative research centre 614 self - optimizing concepts and structures in mechanical engineering " funded by the german research foundation ( dfg ) under grant number sfb 614 .the authors acknowledge partial support from nsf grant cmmi-092600 .the authors gratefully acknowledge henry jacobs , melvin leok , and hiroaki yoshimura for delightful discussions about variational mechanics for degenerate systems .furthermore , the authors thank stefan klus , sujit nair , olivier verdier , and hua wang for helpful discussions regarding circuit theory . finally , we thank sydney garstang for proofreading the document .a. m. bloch and p. r. crouch .representations of dirac structures on vector spaces and nonlinear lc - circuits . in _ differential geometry and control ( boulder ,co , 1997 ) _ , volume 64 of _ proceedings of symposia in pure mathematics _ , pages 103117 , providence , ri , 1997 .american mathematical society .h. jacobs , y. yoshimura , and j. e. marsden .interconnection of lagrange - dirac dynamical systems for electric circuits . in _8th international conference of numerical analysis and applied mathematics _ , volume 1281 , pages 566569 , 2010 .doi:10.1063/1.3498539 .m. leok and t. ohsawa .discrete dirac structures and implicit discrete lagrangian and hamiltonian systems . in _ geometry and physics :xviii international fall workshop on geometry and physics _ ,volume 1260 , pages 91102 .aip conference proceedings , 2010 .doi:10.1063/1.3479325 .a. lew , j. e. marsden , m. ortiz , and m. west .an overview of variational integrators . in l.p. franca , t. e. tezduyar , and a. masud , editors , _ finite element methods : 1970 s and beyond _ , pages 98115 .cimne , 2004 . c. w. rowley and j. e. 
marsden .variational integrators for degenerate lagrangians , with application to point vortices . in _41st ieee conference on decision and control _ , volume 2 , pages 15211527 , 2002 .m. tao , h. owhadi , and j. e. marsden .nonintrusive and structure preserving multiscale integration of stiff odes , sdes , and hamiltonian systems with hidden slow dynamics via flow averaging ., 8(4):12691324 , 2010 . h. yoshimura and j. e. marsden .dirac structures and implicit lagrangian systems in electric networks . in _17th international symposium on mathematical theory of networks and systems _ , pages 14441449 , 2006 .
|
In this contribution, we develop a variational integrator for the simulation of (stochastic and multiscale) electric circuits. When considering the dynamics of an electrical circuit, one is faced with three special situations: 1. the system involves external (control) forcing through external (controlled) voltage sources and resistors; 2. the system is constrained via the Kirchhoff current (KCL) and voltage (KVL) laws; 3. the Lagrangian is degenerate. Based on a geometric setting, an appropriate variational formulation is presented to model the circuit, from which the equations of motion are derived. A time-discrete variational formulation provides an iteration scheme for the simulation of the electric circuit. Depending on the discretization, the intrinsic degeneracy of the system can be canceled in the discrete variational scheme. In this way, a variational integrator is constructed that offers several advantages over standard integration tools for circuits; in particular, a comparison with BDF methods (usually the method of choice for the simulation of electric circuits) shows that, even for simple LCR circuits, better energy behavior and better preservation of the frequency spectrum can be observed with the developed variational integrator. Keywords: structure-preserving integration, variational integrators, degenerate systems, electric circuits, noisy systems, multiscale integration.
|
when modelling spreading in systems , it is often assumed that an entity , e.g. a virus or information , performs jumps on a static underlying structure represented by a network . in its simple form , the network is assumed to be undirected and unweighted , and it can be made more elaborate by incorporating additional information about the direction of the edges and their weight . in this framework , spreading is described by linear differential equations , where the density on nodes evolves according to the flux induced by neighbouring nodes , and whose solutions is determined by the spectral properties of a matrix encoding the structure of the network , often the laplacian matrix . in a large number of systems , however , this modelling approach is not justified because events taking place on the nodes exhibit complex temporal patterns , and the underlying structure has to be considered as a temporal network . for instance , in systems as diverse as mobile phone communication , email checking and brain activity , nodes and links are not permanently active but are punctually activated over time , and their dynamics tends to deviate strongly from that of a poisson process . an important research question associated to the temporality of the network is to understand its impact on the speed and reach of spreading .for instance , the presence of correlations between activations of neighbouring edges may either slow - down or accelerate spreading by inducing non - random pathways , depending on the type of correlations .another important mechanism is associated to the burstiness of the temporal processes , as two consecutive activations of a link or a node tend to present a broad distribution of inter - contact times .the latter mechanism is the subject of this paper .our main contribution is to consider a model where spreading is not always successful on an edge when it appears , so that it takes place only after a random number of trials .this process allows to introduce , and to tune , different time scales in the system , one associated to network evolution , through its inter - contact times , and the other to spreading . as we show analytically and numerically , imperfect spreading leads to a generalisation of the so - called bus paradox , and has interesting implications on the properties of epidemic spreading on networks and , in particular , on how to optimise the spreading of a spreading process .are randomly taken from the distribution .the infection of one end of the edge takes place at a random time , illustrated by a star . ]the spreading model is defined as follows .let us consider a temporal network where the activation of edges is governed by a renewal process , with inter - contact distribution .it is further assumed that edges have a vanishing duration , and that the random processes associated to the different edges are independent .spreading takes place from node to node .the diffusing entity , called the virus from now on for the sake of simplicity , sits on a node until an edge appears .the virus then invades the neighbouring node with a probability . in order to describe the random process , it is crucial to estimate the waiting time before a certain edge is invaded , which we call the inter - success time .so far , this model is essentially equivalent to an si model , which we assume to take place on a tree in order to avoid non - linear effects . 
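As a quick numerical companion to this definition, the sketch below (ours, with illustrative parameters) estimates the mean inter-success time by direct simulation of a renewal edge with Gamma-distributed inter-contact times and per-contact success probability p. The closed form quoted in the comments anticipates the renewal argument developed in the next paragraphs; since the displayed expressions of the original are garbled here, it is our reconstruction: the mean residual contact time plus, on average, (1-p)/p further full inter-contact times.

```python
import random

# Inter-contact times ~ Gamma(shape=k, scale=theta); shape k < 1 gives a bursty edge.
# Each contact transmits independently with probability p.  The reconstruction of the
# mean inter-success time used for comparison (ours) is
#     E[T_success] = m2/(2*m1) + m1*(1 - p)/p,
# where m1, m2 are the first two moments of the inter-contact distribution:
# the first attempt happens after the residual (forward recurrence) time, of mean
# m2/(2*m1), and each of the geometrically many failed attempts costs one further
# inter-contact time of mean m1.

def mean_inter_success(k, theta, p, runs=20000, contacts=200):
    total, kept = 0.0, 0
    for _ in range(runs):
        t, times = 0.0, []
        for _ in range(contacts):
            t += random.gammavariate(k, theta)
            times.append(t)
        arrival = random.uniform(0.0, times[contacts // 2])  # approximately stationary arrival
        for tc in times:
            if tc >= arrival and random.random() < p:
                total += tc - arrival
                kept += 1
                break
    return total / kept

k, theta, p = 0.25, 4.0, 0.3          # bursty: mean gap m1 = k*theta = 1, large variance
m1 = k * theta
m2 = k * (k + 1) * theta * theta      # second moment of Gamma(k, theta)
print("simulated mean inter-success time :", mean_inter_success(k, theta, p))
print("reconstructed closed form         :", m2 / (2 * m1) + m1 * (1 - p) / p)
```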
in situationswhen , the relation between the inter - contact distribution and the inter - success distribution is well known to lead to the so - called bus or inspection paradox , as the mean inter - success time is given by , where and are respectively the first and second moments of the inter - contact time distribution . in situationswhen the inter - contact distribution presents a broad tail , the average success time can therefore become arbitrarily large as compared to the average contact time .this result , which seems paradoxical at first glance , originates from the fact that the probability for the virus to arrive during a inter - contact time interval is proportional to its duration .because the activations of different edges are independent , it can indeed be assumed that a virus arrives at a random time within an inter - contact interval ( see fig .1 ) . based on these arguments, one finds that the probability of a success in a first attempt at time is let us now turn to the case of an arbitrary value of . from the previous expression , the inter - success probability after an arbitrary number of attempts is given by where is the probability of a success after trials , where stands for the convolution product and is the inter - success probability in trials .this expression takes a simple form in the associated laplace space where the upper tilde stands for the laplace transform . using the properties of the small expansion of each distribution , one directly find a relations between their moments with , in particular for the first moment , and we recover the expressions of the standard bus paradox when setting .decreasing systematically increases the average inter - success time , as expected because more and more trials are required for the spreading to actually take place . in order to account for this trivial effect and to properly identify how the shape of the inter - contact distribution affects as a function of , we focus on the normalised average success time , a standard measure for the burstiness of a process , defined as where is the average success time in the case of a poisson process . in the latter case ,the inter - contact time is exponential and .one directly finds an expression taking the value in the case of a poisson processes , as expected , and depending linearly on the so - called burstiness coefficient which takes a positive value in fat - tailed distributions , as the ones observed in a majority of empirical systems .this result clearly shows that burstiness tends to slow down dynamics , but that its impact becomes less and less important as the probability of success decreases and a higher number of attempts is necessary for the virus to spread . . indicates the epidemic threshold .the parameters of the model are , , and the inter - contact distribution is a gamma distribution , with parameters and . ] the dynamics described so far allow to estimate the spread of infection in situations when nodes do not recover from the disease . in a majority of practical situations , however , nodes remain infected only during a finite time before recovering . in that case , the propensity of the virus to invade the system is mainly governed by the connectivity of the network , e.g. 
number of contacts per infected user , and its transmissibility , also called infectivity , defined as the probability that the virus spreads to an available neighbour before the infected node recovers and the virus becomes de - activated .transmissibility is defined as where , as before , is the probability of a successful infection at time , and is the probability that the infected node recovers at an ulterior time .this expression clearly shows the importance of the competition between two temporal processes . in the case of tree - like networks , where all nodes have the same transmissibility , it is straightforward to show that the basic reproduction number , defined as the average number of additional people that a person infects before recovering , is given by where is the expected number of susceptible neighbours of an infected node .the epidemic threshold is defined by the condition separating between growing and decreasing spreading .the epidemic threshold is thus reduced either by reducing the transmissibility or .this result is valid only when the network has a tree - like structure , which is valid for a majority of random network models below the epidemic threshold . in order to illustrate these results and to perform numerical tests in the following ,we consider an inter - contact distribution given by a gamma distribution with scale parameter and shape parameter .the family of distributions has the advantage of including the exponential distribution , for , and to produce distributions with a tuneable variance by changing .one finds fixing , one thus finds that the variance increases by decreasing .let us also note that transmissibility takes a particularly simple expression when is an exponential distribution , so that when the recovery time is a constant , and when the recovery distribution is an exponential with rate . as a first check , one shows the accuracy of eq . [ tree ] to determine the epidemic threshold on a tree by numerical simulations in fig .[ fig : r0 ] .vs probability of success for fixed values for fixed values of , with . in the case of poisson processes , with , the system does not exhibit a dependency on . when the system is bursty , in contrast ( ) , increasing decreases the transmissibility of the process . ]let us now consider the impact of the shape of the gamma distribution , calibrated by , on transmissibility , by using the exponential case as a baseline .numerical solutions show in fig .[ fig : eff ] the dependency of transmissibility on and , for three values of ( the value of is thus determined because of eq .[ klkl ] ) .simulations are all performed with but similar results are obtained for exponential distributions for the recovery time .one observes that transmissibility depends on the ratio when , but that the system exhibits deviations to this linear relationship when the dynamics deviates from a poisson process . 
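These observations can be reproduced with a short simulation. The sketch below is ours (illustrative parameters, fixed recovery time): it estimates the transmissibility over a single edge for Gamma inter-contact times of equal mean but different shape, so that the k = 1 case is the Poisson baseline, for which the closed form 1 - exp(-p*tau_r/m1) (our reconstruction of the expression elided above) is available. Multiplying the estimate by the expected number of susceptible neighbours gives the basic reproduction number R0 and hence the position of the epidemic threshold R0 = 1.

```python
import math
import random

def transmissibility(k, theta, p, tau_r, runs=20000, window=50.0):
    """Probability that an infected node transmits over one edge before a fixed
    recovery time tau_r, for Gamma(k, theta) inter-contact times and per-contact
    success probability p (Monte Carlo, approximately stationary arrival)."""
    hits = 0
    for _ in range(runs):
        t, times = 0.0, []
        while t < window + tau_r:
            t += random.gammavariate(k, theta)
            times.append(t)
        start = random.uniform(0.0, window)      # infection lands at a random time
        for tc in times:
            if tc < start:
                continue
            if tc - start > tau_r:               # the node recovered first
                break
            if random.random() < p:
                hits += 1
                break
    return hits / runs

p, tau_r, m1 = 0.3, 2.0, 1.0                     # mean inter-contact time m1 = k*theta
for k in (1.0, 0.5, 0.25):                       # smaller shape k = burstier contacts
    T = transmissibility(k, m1 / k, p, tau_r)
    print("shape k = %.2f : T ~ %.3f,  R0 ~ %.2f for n_s = 4 neighbours" % (k, T, 4 * T))
print("Poisson baseline (k = 1), closed form: %.3f" % (1.0 - math.exp(-p * tau_r / m1)))
# For these parameters the burstier edges (smaller k at equal mean rate) are less
# efficient transmitters, in line with the discussion of the figures above.
```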
in order to quantify this deviation ,we explore the dependency of on the success probability , for fixed values of the parameter .the latter is simply a naive estimation of the time of success , obtained by multiplying the average time between two contacts and the expected number of contacts before a success takes place .numerical results clearly show , in fig .[ fig : ratio_cst ] , that transmissibility becomes less and less efficient , as compared to the poisson case , for increasing values of , in agreement with the observation that the inter - success time is more and more affected by the bus paradox for increasing values of . the mechanism behind this effect is also apparent in fig .[ fig : cdf ] where , again for a fixed value of , transmissibility is clearly less efficient in the case of bursty dynamics , for larger values of .this result has important practical implications , for instance in online marketing , where user activity is known to be bursty .indeed , let us consider a marketing agency having to decide on a strategy in order to optimise its viral impact .the impact of its campaign , evaluated by its transmissibility , increases only sub - linearly with its quality , evaluated by its success probability .this observation therefore needs to be considered when devising a strategy to balance the gains and the cost of increasing the viral potential of an ad ., for and fixed values of .the dynamics is more and more affected by the shape of the distribution for larger values of . ]the main purpose of this paper was to study spreading on temporal networks , where the temporality of edge activations is modelled as a stochastic process and where the inter - contact time distribution takes an arbitrary expression . in practice ,we have mainly been interested in situations when the inter - contact time distribution presents a fat tail , as observed in a variety of social networks .in contrast with a majority of previous works , we have considered a model where the transmission of the diffusive entity is only successful with a certain probability at each contact .our main results are twofold .first , we have derived an analytical expression for the average inter - success time .we have shown that burstiness plays a more and more important role , associated to a slowing down of the dynamics by the so - called bus paradox , when the probability of success is increased . in the limit of a small probability of success ,the shape of the distribution ceases to play a role , and only its average determines the speed of spreading .second , we have turned to numerical computations in order to calculate the epidemic threshold for a dynamics where nodes remain infectious during a random period of time before recovering .our results confirm that the bus paradox hinders the spreading of the process when the probability of success is increased .as we discussed , the results have implications for the design of efficient marketing campaign .in particular , it suggests that an efficient strategy should aim at properly predicting the future activation times of a target user , in order to minimise the time between his infection and his first contact , and therefore the impact of the bus paradox . 
in this work ,we have considered locally tree - like networks , as often assumed when studying spreading processes .an interesting line of future research would be to incorporate more complex , arbitrary network structures , where cycles and communities play a role .we acknowledge support from iap dysco ( funded by the belgian science policy office ) and the arc ` mining and optimization of big data models ' ( funded by the communaut wallonie - bruxelles ) .computations were performed at the ` plate - forme technologique en calcul intensif ' ( ptci ) of the university of namur , belgium , with the financial support of the f.r.s.- fnrs .m.g . , j.c.d . and r.l .conceived the project , derived the results and wrote the manuscript .m.g . performed the numerical simulations .m. karsai , k. kimmo , a. l barabasi and j.kertesz , sci rep 2 , 397 ( 2014 ) r. lambiotte , l. tabourier and j - c .delvenne , eur .j. b. 86 ( 2013 ) j. kleinberg , d.m.k.d .7,373 ( 2003 ) a.l barabsi , nature 435 , 204 ( 2005 ) m. kivel , r.k .pan , k. kaski , j. kertsz , j. saramki and m. karsai , j. stat .mech 03 , 05 ( 2012 )
|
We study spreading on networks where the contact dynamics between nodes is governed by a random process and where the inter-contact time distribution may differ from the exponential. We consider a process of imperfect spreading, where transmission is successful with a given probability at each contact. We first derive an expression for the inter-success time distribution, which determines the speed of propagation, and then turn to a problem related to epidemic spreading by estimating the epidemic threshold in a system where nodes remain infectious during a finite, random period of time. Finally, we discuss the implications of our work for designing efficient strategies to enhance spreading on temporal networks.
|
as quoted by weber ( 1893 ) , leopold kronecker is known to have said : _ god made natural numbers ; all else is the work of man_. the proposed packages adresses aspects of combinatorial aspects of integer encodings and can be paraphrased as a slight modification of kronecker s quote : _god created the integral unit _`` '' ; _ all else is the result of computation_. the topic of integer encoding schemes , is one which generates interest both from the amateur and professional mathematician alike , since : * barriers to entry to the subject are virtually non existent , in light of the fact that the main ideas can easily be conveyed to elementary school students . *it s topics have ramifications and connections with other topics in mathematics such as algebra , cobinatorics and number theory . *most importantly , the topic offers a treasure trove of fascinating easy to state open questions .a number _ circuit _ encoding is a finite directed acyclic graph constructed as follows .nodes of in - degree zero are labeled by either of the constants or .all other nodes of the graph have in - degree two and are labeled either ( ) , ( ) or ( ) .the two edges going into a gate labeled by ( ) are labelled by _ left _ and _ right _ , in order to distinguish the base ( left input ) from the exponent ( right input ) .the nodes of out - degree zero correspond to output gates of the circuit .the _ size _ of is the number of nodes in .the _ depth _ of is the length of the longest path in .a number _ formula _ encoding is a special circuit with the additional restriction that every node has out - degree at most one .given an monotonically increasing function we seek to determine the number of formula encodings for some integer of size at most .in many cases the analysis is considerably simplified by considering _ monotone _formula encodings , namely formula encodings further restricted to have all in - degree zero nodes labeled with the constant .it is rather natural to consider formulas for which is never an input to a multiplication or an exponentiation gate .it was shown in there exists constants and such that some real number number such that the number of formula encodings of is asymptotically equal to in the more general setting where the label is allowed for in - zero nodes of the graphs the asymptotics for the number formula encodings for an integer of size not exceeding as tends to infinity is still unknown .the content of this paper is the following .in section 2 we provide a general overview of the computational model and our basic assumptions .the rest of the paper provides an annotated implementations of the various procedures for manipulating formulas encodings . 
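Since the efficiency of such encodings is a recurring concern in what follows, a small self-contained computation (ours, not part of the package described below) may help fix ideas: for monotone formulas over {+, *, ^} with all inputs equal to 1, it computes the minimal number of unit inputs needed to reach n. Because every gate has fan-in two, minimising the number of unit leaves is equivalent to minimising the total number of nodes, i.e. the size of the formula in the sense defined above.

```python
import math

def min_ones(limit):
    """c[n] = minimal number of 1-inputs in a monotone {+, *, ^} formula evaluating to n."""
    c = [0] * (limit + 1)
    c[1] = 1
    for n in range(2, limit + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))   # additive splits
        for a in range(2, math.isqrt(n) + 1):                       # multiplicative splits
            if n % a == 0:
                best = min(best, c[a] + c[n // a])
        b = 2                                                       # exponential splits a**b == n
        while 2 ** b <= n:
            a = round(n ** (1.0 / b))
            for cand in (a - 1, a, a + 1):
                if cand >= 2 and cand ** b == n:
                    best = min(best, c[cand] + c[b])
            b += 1
        c[n] = best
    return c

c = min_ones(100)
print([c[n] for n in range(1, 17)])
# For instance c[8] = 5, realised by (1+1)^(1+1+1), versus 6 unit inputs for 2*2*2
# and 8 for repeated addition.
```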
a seperate sage file which isolates the procedures acompanies the paper andcan be used for experimental set up with our proposed package .let denote the set of formula encodings constructed by combining finitely many fan - in two addition ( ) , multiplication ( ) and exponentiation ( ) gates with restricted to either constants or .for the sake of completeness we pin down our computational model by describing formula transformation rules which prescribe equivalences among distinct elements of .let , , and denote arbitrary elements of .the equivalence between distinct elements of is prescribed by the following transformation rules 1 .commutativity 2 .associativity 3 .unit element 4 .distributivity finally an important rule is that a formula is considered invalid if admits as a subformula any formula equivalent to via the transformation rules prescribed above . throughout the discussion, the efficiency of formula encodings will be a recurring theme and thus we ( often implicitly ) exclude from formulas which admit sub - formulas of the form we remark as is well known that any formulas from the set can be uniquely encoded as strings from the the alphabet using either the prefix or the postfix / polish notation .let denotes the number of formulas encoding in which evaluated to and of size not exceeding constructed using gates from the set and rooted at any of the gates in the set where . as pointed in , the non linear recurrence relations which determines the counts for the number of formulas encodings of and incidentally the number of vertices of the equivalence class graph associated with the integer given by and in order to analyze arithmetic algorithms , we introduce the graph whose vertices are elements which belong to the equivalence class of formulas of size at most which evaluate to some given number .we shall refer to as the arithmeticahedron of .edges are placed in between any two vertices of if either of the following conditions are true 1 .each formula vertex can be obtained from the other by the use of a single associativity transformation rules .each formula vertex can be obtained from the other by the use of a single commutativity transformation rule .each formula can be obtained from the other by the use of one of the distributivity transformation rules .arithmetical algorithm can thus be depicted as walks on some arithmeticahedron and incidentally the performance of algorithm can be measured in terms of the total length of walks on some arithmeticahedron .we present here the implementation details of our integer encoding packages . the package will be crucial for setting up various experiments which would suggest interesting conjecture and possibly proofs to some of these conjectures .we shall think of our formulas as rooted binary trees with leafs labeled with the integral unit ( ) and all other vertices labeled with either the addition ( ) , multiplication ( ) , or exponentiation ( ) operation .it shall be convenient to use the bracket notation to specify such trees to sage and note that the prefix notation is easily obtain from the bracket notation .+ def t2pre(expr ) : `` '' " converts formula written in the bracket tree encoding to the prefix string encoding notation examples : the implementation here tacitly assumes that the input is a valid binary bracket formula - tree expression .the usage of the function is illustrated bellow .: : sage : t2pre([+,1,1 ] ) +11 authors : - edinah k. 
gnang and doron zeilberger to do : - `` '' `` s = str(expr ) return ( ( ( ( s.replace(''[``,''``)).replace('']``,''``)).replace('',``,''``)).replace(''``,''``)).replace ( '' `` , '' " ) + as the code for the function t2pre suggest the binary - tree formula is very close to the prefix notation .the usage of the function is illustrated bellow \right)=\mbox{'+11'}\ ] ] a minor variation on the prefix notation called the postfix notation is implemented bellow + def t2p(expr ) : `` '' " the function converts binary formula - tree format to the more compacts postfix string notation .examples : the implementation here tacitly assumes that the input is a valid binary formula - tree expression .the usage of the function is illustrated bellow .: : sage : t2p([+,1,1 ] ) 11+ authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - `` '' `` s = str(expr ) return ( ( ( ( s.replace(''[``,''``)).replace('']``,''``)).replace('',``,''``)).replace(''``,''``)).replace ( '' `` , ' ' " ) [ : : -1 ] the usage of the function is illustrated bellow \right)=\mbox{'11+'}\ ] ] when using the wilf methodology , we will require a random number generator which amounts to rolling a loaded die .we implement here the function allowing us to roll a loaded die .+ def rollld(l ) : `` '' " the functions constructs a loaded die according to values specified by the input list of positive integers .the input list also specifies the desired bias for each one of the faces of the dice examples : the tacitly assume that the input list is indeed made up of positive integers as no check is perform to validate that assumption : : sage : rollld([1 , 2 , 3 ] ) 2 authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " # summing up all the n = sum(l ) r = randint(1,n ) for i in range(len(l ) ) : if sum(l[:i+1 ] ) > =r : return i+1 + given a list of positive integers the procedures operates in two steps .first it samples uniformly at random a positive integer less than the sum of all the positive integers in the input list . the last step consist in returning the largest index of the element in the input list such that the sum of the integers preceding that index is less or equal to the sampled integers .we provide here a straight forward implementation of procedures for listing formulas which only uses addition .+ def fat(n ) : `` '' " the procedure outputs the list of formula - binary trees constructed using fan - in two addition gates and having inputs restricted to the integral unit 1 and the resulting formulas each evaluate to the input integer n > 0 .examples : the procedure expects a positive integer otherwise it returns the empty list .: : sage : fat(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - `` '' " if n==1 : return [ 1 ] elif n > 1 and type(n ) = = integer : gu = [ ] for i in range(1,n ) : gu = gu + [ [ + , g1 , g2 ] for g1 in fat(i ) for g2 in fat(n - i ) ] return gu else : return [ ] + we illustrate bellow the output of the function call with the inputs 1 and 2 . the formulas returned by the fat procedure are in binary tree form . 
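The illustration announced in the previous sentence appears to have been lost in extraction; the following is a plausible reconstruction of the intended session. The outputs follow directly from the definition of fat above, the remark that the counts are the Catalan numbers (as expected for binary trees with n leaves) is ours, and the tiny eval_tree helper at the end is our own addition, used only as a sanity check rather than being part of the package.

```python
sage: fat(1)
[1]
sage: fat(2)
[[+, 1, 1]]
sage: [len(fat(n)) for n in range(1, 7)]
[1, 1, 2, 5, 14, 42]
sage: def eval_tree(e):
....:     return 1 if e == 1 else eval_tree(e[1]) + eval_tree(e[2])
sage: all(eval_tree(f) == n for n in range(1, 8) for f in fat(n))
True
```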
for convenience we may implement a function which output the expression in prefix notation , the function for formatting the encoding into prefixis provided bellow + def fapre(n ) : `` '' " the procedure outputs the list of formula in prefix notation constructed using fan - in two addition gates having inputs restricted to the integral unit 1 and the resulting formula evaluates to the input integer n > 0 .examples : the input n must be greater than 0 : : sage : fapre(3 ) [ +1 + 11 , ++111 ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return [ t2pre(g ) for g in fat(n ) ] + the postfix variant of the function implemented is immediate and provided bellow .+ def fap(n ) : `` '' " the set of formula only using addition gates which evaluates to the input integer n in prefix notation .examples : the input n must be greater than 0 : : sage : fap(3 ) [ 11 + 1+ , 111++ ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - nothing as this procedure is optimal `` '' " return [ t2p(g ) for g in fat(n ) ] + having implemented procedures which produces formulas using only addition , we now turn to the problem of enumerating such formulas. clearly we could enumerate the sets by first producing the formulas and then enumerating them , but this would lead to a very inefficient use of space and time resources .instead we compute recurrence formulas which determines the number of formulas encoding using only additions and with input restricted to the integral unit .+ def ca(n ) : `` '' " the procedure outputs the number of formula - binary trees constructed using fan - in two addition gates and having inputs restricted to the integral unit 1 and the each of the resulting formulas each evaluate to the input integer n > 0 .examples : the input n must be greater than 0 : : sage : ca(3 ) 2 authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return 1 else : return sum([ca(i)*ca(n - i ) for i in range(1,n ) ] ) + we illustrate the usage of the functions bellow furthermore we may note that which would suggest that for to avoid redundancy we may choose to only list formulas for which the second term of the tree is less or equal to the integer encoded in the left term of the tree .we provide bellow the implementation of the procedure .+ def lopfat(n ) : `` '' " outputs all the formula - binary trees only using addition such that the first term of the addition is > = the second term . examples : the input n must be greater than 0 : : sage : lopfat(3 ) [ [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 0 : return [ ] elif n = = 1 : return [ 1 ] else : gu = [ ] for i in range(1,1+floor(n/2 ) ) : gu = gu + [ [ + , g1 , g2 ] for g1 in lopfat(n - i ) for g2 in lopfat(i ) ] return gu + for outputting such formulas in prefix notation we use the function implemented bellow + def lopfapre(n ) : `` '' " outputs all the formula - binary tree which evaluate to the input integer n such that the first term of the addition is > = the second term in prefix notation .examples : the input n must be greater than 0 : : sage : lopfapre(2 ) `` + 11 '' authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return [ t2pre(f ) for f in lopfat(n ) ] + for outputting such formulas in postfix notation we use the function implemented bellow + def lopfap(n ) : `` '' " outputs all the formula - binary tree which evaluate to the input integer n such that the first term of the addition is > = the second term in postfix notation . examples : the input n must be greater than 0 : : sage : lopfap(2 ) `` 11 + '' authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return [ t2p(f ) for f in lopfat(n ) ] + similarly we provide an implementation for a distinct procedure for enumerating formulas trees for which the second term of the tree is less or equal to the integer encoded in the left term of the tree .+ def lopca(n ) : `` '' " outputs the number of formula - binary trees only using addition gates such that the first term of the addition is > = the second term .examples : the input n must be greater than 0 : : sage : lopca(3 ) 1 authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return 1 else : return sum([lopca(i)*lopca(n - i ) for i in range(1,1+floor(n/2 ) ) ] ) + in many situations , there will be way more formulas then it would be reasonable to output in a list , however for experimental purposes it is often sufficient to generate formulas of interest uniformly at random . incidentally following the wilf methodologywe implement a function for sampling uniformly at random formula which use only addition gates and have input restricted to the integer 1 .+ def rafat(n ) : `` '' " outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n > 0 .examples : the input n must be greater than 0 : : sage : rafat(3 ) [ + , [ + , 1 , 1 ] , 1 ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 0 : return [ ] if n = = 1 : return [ 1 ] else : # rolling the loaded die .j = rollld([ca(i)*ca(n - i ) for i in range(1,n+1 ) ] ) return [ + , rafat(j ) , rafat(n - j ) ] + quite straightforwardly we provide bellow the implementation of the procedure for sampling a random formulas but returning them respectively in prefix notation + def rafapre(n ) : `` '' `` outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n in prefix notation . examples : the input n must be greater than 0 : : sage : rafapre(3 ) ' ' + + 111 " authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return(t2pre(rafat(n ) ) ) + for outputting uniformly sampled random formula in postfix notation we implement the function bellow + def rafap(n ) : `` '' " outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n in postfix notation . examples : the input n must be greater than 0 : : sage : rafap(3 ) 111++ authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return(t2p(rafat(n ) ) ) similarly we implement a procedure for sampling uniformly at random a formula where the left term is greater or equal to the right term .+ def ralopfat(n ) : `` '' " outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n such that the first term of the addition is > = the second term . examples : the input n must be greater than 0 : : sage : ralopfat(3 ) [ + , [ + , 1 , 1 ] , 1 ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return [ 1 ] else : # rolling the loaded die .j = rollld([lopca(i)*lopca(n - i ) for i in range(1,1+floor(n/2 ) ) ] ) return [ + , ralopfat(n - j ) , ralopfat(j ) ] for outputting a uniformly sampled formulas in prefix having it s first term greater or equal to the second term in prefix notation we have + def ralopfapre(n ) : `` '' `` outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n such that the first term of the addition is > = the second term in prefix notation .examples : the input n must be greater than 0 : : sage : ralopfapre(3 ) ' ' + + 111 " authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return t2pre(ralopfat(n ) ) alternatively for outputting a uniformly sampled formula with the right term greater or equal to the left term expressed in postfix notation we use the function implemented bellow .+ def ralopfap(n ) : `` '' `` outputs a uniformly randomly chosen formula - binary tree which evaluate to the input integer n such that the first term of the addition is > = the second term in postfix notation . examples : the input n must be greater than 0 : : sage : ralopfap(3 ) ' ' 111++ " authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return t2p(ralopfat(n ) ) we discuss here in detail procedures for producing and enumerating formulas which result from a finite combination of fan - in two addition , multiplication gates and having inputs restricted to integer .the basic principles underlying most procedures consists in partitioning the set of formula into disjoint sets according to the root gate of the formulas considered . in this particular case we will consider the partition of formulas according to wether or not the root gate corresponds to an addition or a multiplication gate .+ def famta(n ) : `` '' " the set of formula - binary trees only using additions and multiplications gates with the root gate being an addition gate and most importantly evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : famta(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 0 : return [ ] elif n = = 1 : return [ 1 ] else : gu = [ ] for i in range(1,n ) : gu = gu + [ [ + , g1 , g2 ] for g1 in famt(i ) for g2 in famt(n - i ) ] return gu + the procedures which determines the formulas with root gate corresponding to a multiplication gate is provided bellow : + def famtm(n ) : `` '' " the set of formula - binary trees only using addition and multiplication gates with root gate corresponding to a multiplication gate which evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : famtm(4 ) [ [ * , [ + , 1 , 1 ] , [ + , 1 , 1 ] ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return [ ] else : gu = [ ] for i in range(2 , 1+floor(n/2 ) ) : if mod(n , i ) = = 0 : gu = gu + [ [ * , g1 , g2 ] for g1 in famt(i ) for g2 in famt(n / i ) ] return gu + we implement bellow the function which compute the union of the two partition of formulas , those rooted at an addition gate and the ones rooted at a multiplication gate .+ def famt(n ) : `` '' " the set of formula - binary trees only using addition and multiplication gates . examples : the input n must be greater than 0 : : sage : famt(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return ( famta(n ) + famtm(n ) ) + again following the wilf methodology we implement distinct procedures for enumerating formulas which result from a finite combination of fan - in two addition and multiplication gates .we start by implementing the function which enumerate formulas rooted at an addition gate + def cama(n ) : `` '' " output the size of the set of formulas produced by the procedure famta(n ) .examples : the input n must be greater than 0 : : sage : cama(4 ) 5 authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : return sum([cam(i)*cam(n - i ) for i in range(1,n ) ] ) + we then implement the function which enumerate formulas resulting from finite combination of addition , multiplication gates rooted at a multiplication gate .+ def camm(n ) : `` '' " output the size of the set of formulas produced by the procedure famtm(n ) .examples : the input n must be greater than 0 : : sage : camm(4 ) 1 authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : return sum([cam(i)*cam(n / i ) for i in range(2,1+floor(n/2 ) ) if mod(n , i)==0 ] ) + finally we implement the function which enumerates all formulas which result from a finite combination of addition , multiplication gates which evaluate to the input integer + def cam(n ) : `` '' " output the size of the set of formulas produced by the procedure famt(n ) .examples : the input n must be greater than 0 : : sage : cam(6 ) 52 authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return cama(n)+camm(n ) + as we have mentioned for formulas of large sizes we implement a function which samples uniformly at random formulas which evaluate to the input integer and result from a finite combination of addition and multiplication gates and rooted at an addition gate + def rafamta(n ) : `` '' " outputs a formula - binary tree formula sampled uniformly at random amoung all formulas which evaluates to the input integer n the formula results from a finite combination of addition and multiplication gates and is rooted at an addition gate .examples : the input n must be greater than 0 : : sage : rafamt(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : j = rollld([cam(i)*cam(n - i ) for i in range(1,n+1 ) ] ) return [ + , rafamt(j ) , rafamt(n - j ) ] + similarly we implement a function which samples a uniformly at random a formula which evaluate to the input integer , which results from a finite combination of addition , multiplication gates and is rooted at a multiplication gate + def rafamtm(n ) : `` '' " outputs a formula - binary tree sampled uniformly at random which evaluates to the input integer n using only addition and multiplication gates and rooted at a mulitplication .examples : the input n must be greater than 0 : : sage : rafamt(6 ) [ *,[+ , 1 , 1 ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : print 1 has no multiplicative split return i elif is_prime(n ) : print str(n)+ has no multiplicative split return i else : lu = [ ] l = [ ] for i in range(2,1+floor(n/2 ) ) : if mod(n , i)==0 : lu.append(i ) l.append(cam(i)*cam(n/i ) ) j = rollld(l ) return [ * , rafamt(lu[j-1 ] ) , rafamt(n / lu[j-1 ] ) ] + finally we can combine the two functions implemented above to obtain a functions which samples uniformly at random a formula which evaluates to the input integer and results from a finite combination of addition and multiplication gate + def rafamt(n ) : `` '' " outputs a formula - binary tree sampled uniformly at random which evaluates to the input integer n using only addition and multiplication gates .examples : the input n must be greater than 0 : : sage : rafamt(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : i = rollld[cama(n),camm(n ) ] if i==1 : return rafamta(n ) else : return rafamtm(n ) + for obtaining the list all formulas which combine addition and multiplication express using the postfix notation and evaluate to the input integer we have + def famp(n ) : `` '' " outputs the set of formula - binary tree written in postfix notation which evaluates to the input integer n using only addition and multiplication gates . examples : the input n must be greater than 0 : : sage : famp(2 ) 11+ authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return [ t2p(f ) for f in famt(n ) ] + similarly for obtaining the list all formulas which combine addition and multiplication gates and evaluate to the input integer express in the prefix notation we have + def fampre(n ) : `` '' " outputs the set of formula - binary tree written in prefix notation which evaluates to the input integer n using only addition and multiplication gates .examples : the input n must be greater than 0 : : sage : fampre(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return [ t2pre(f ) for f in famt(n ) ] + for obtaining the randomly sample integer which evaluates to the input integer and is uniformly sampled among all formulas which combine addition and multiplication express using the postfix notation we have + def rafamp(n ) : `` '' " outputs a uniformly randomly sample formula - binary tree written in postfix notation which evaluates to the input integer n using only addition and multiplication gates .examples : the input n must be greater than 0 : : sage : rafamp(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return t2p(rafamt(n ) ) + similarly obtaining the randomly sample integer which evaluates to the input integer and is uniformly sampled among all formulas which combine addition and multiplication express using the prefix notation we have + def rafampre(n ) : `` '' " outputs a uniformly randomly sample formula - binary tree written in prefix notation which evaluates to the input integer n using only addition and multiplication gates .examples : the input n must be greater than 0 : : sage : rafampre(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return t2pre(rafamt(n ) ) we discuss here procedures for producing and enumerating formulas using a combination of fan - in two addition , multiplication and exponentiation gates .the principles used are very much analogous to those used in the previous section .we start by formulas rooted at addition gates + def fameta(n ) : `` '' " the set of formula - binary trees only using addition , multiplication , and exponentiation gates .the root gate being an addition gate and and the formula evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : fameta(2 ) [ + , 1 , 1 ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return [ 1 ] else : gu = [ ] for i in range(1,n ) : gu = gu + [ [ + , g1 , g2 ] for g1 in famet(i ) for g2 in famet(n - i ) ] return gu + next we implement procedure for listing formulas rooted at a multiplication gate + def fametm(n ) : `` '' " the set of formula - binary trees only using addition .multiplication and exponentiation gates with the top gate being a multiplication gate which evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : fametm(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return [ ] else : gu = [ ] for i in range(2,1+floor(n/2 ) ) : if mod(n , i ) = = 0 : gu = gu + [ [ * , g1 , g2 ] for g1 in famet(i ) for g2 in famet(n / i ) ] return gu + and finally we list formulas rooted at an exponentiation gates + def famete(n ) : `` '' " the set of formula - binary trees only using addition .multiplication and exponentiation gates with the top gate being an exponetiation gate which evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : famete(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n = = 1 : return [ ] else : gu = [ ] for i in range(2,2+floor(log(n)/log(2 ) ) ) : if floor(n^(1/i ) ) = = ceil(n^(1/i ) ) : gu = gu + [ [ ^ , g1 , g2 ] for g1 in famet(i ) for g2 in famet(n^(1/i ) ) ] return gu + finally combining the three function implemented above we obtain the function which lists all formulas which combine addition , multiplication , and exponentiation gates which evaluate to the input integer .+ def famet(n ) : `` '' " the set of formula - binary trees only using addition . multiplication and exponentiation gates which evaluates to the input integer n. examples : the input n must be greater than 0 : : sage : famet(3 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return fameta(n ) + fametm(n ) + famete(n ) + for a more efficient enumeration of the formulas resulting from combination of addition , multiplication and exponentitation gates which evaluate to the input integer we consider here enumerating procedure for formulas rooted at the addition gate : + def camea(n ) : `` '' " output the size of the set of formulas produced by the procedure famta(n ) .examples : the input n must be greater than 0 : : sage : camea(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : return sum([came(i)*came(n - i ) for i in range(1,n ) ] ) + then rooted at a multiplication gate + def camem(n ) : `` '' " output the size of the set of formulas produced by the procedure famta(n ) .examples : the input n must be greater than 0 : : sage : camm(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : return sum([came(i)*came(n / i ) for i in range(2,1+floor(n/2 ) ) if mod(n , i)==0 ] ) + then rooted at an exponentiation gate + def camee(n ) : `` '' " output the size of the set of formulas produced by the procedure famta(n ) .examples : the input n must be greater than 0 : : sage : camee(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return 1 else : return sum([came(i)*came(n^(1/i ) ) for i in range(2,2+floor(log(n)/log(2)))if floor(n^(1/i ) ) = = ceil(n^(1/i ) ) ] ) the enumeration scheme can be described using non - linear recurrence formula expressed earlier and repeated here for the convenience of the reader and so that procedure which enumerate formulas evaluating to the input integer and resulting from finite combination of addition , multiplication and exponentitation gates is implemented bellow + def came(n ) : `` '' " output the size of the set of formulas produced by the procedure famta(n ) .examples : the input n must be greater than 0 : : sage : came(6 ) [ [ + , 1 , [ + , 1 , 1 ] ] , [ + , [ + , 1 , 1 ] , 1 ] ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " return camea(n)+camem(n)+camee(n ) + the code for computing the base of the exponent in the asymptotic formula , when exponentiation gates are not allowed + def constani(nb_terms , nb_itrs , prec ) : # expressing the truncated series f = sum([cam(n)*x^n for n in range(1,nb_terms ) ] ) g = sum([cam(d)*(f.subs(x=(x^d))-x^d ) for d in range(2,nb_terms ) ] ) g = 1/4-g xk = 1/4.077 for itr in range(nb_itrs ) : xkp1 = realfield(prec)(g.subs(x = xk ) ) xk = xkp1 return realfield(prec)(1/xk ) + the code for computing the base of the exponent in the asymptotic formula , when exponentiation gates are allowed + def constanii(nb_terms , nb_itrs , prec ) : # expressing the truncated series f = sum([came(n)*x^n for n in range(1,nb_terms ) ] ) g = sum([came(d)*(f.subs(x=(x^d))-x^d ) for d in range(2,nb_terms ) ] ) g = 1/4-g xk = 1/4.131 for itr in range(nb_itrs ) : xkp1 = realfield(prec)(g.subs(x = xk ) ) xk = xkp1 return realfield(prec)(1/xk ) + code for computing the constant factor multiple in the asymptotic formula + def constaniii(nb_terms , nb_itrs , prec ) : f = sum([cam(n)*x^n for n in range(1,100 ) ] ) g = sum([cam(d)*(f.subs(x=(x^d))-x^d ) for d in range(2,100 ) ] ) g1 = 1/4-g # iteration xk = 1/4.077 for itr in range(20 ) : xkp1 = realfield(100)(g1.subs(x = xk ) ) xk = xkp1 print realfield(100)(1/xk ) # setting the constant rhor = xk h = x + g g = expand((1 - 4*h)*sum([(x / r)^j for j in range(100 ) ] ) ) l = g.operands ( ) ls = [ ] for i in range(100 ) : ls.append(l[len(l)-i-1 ] ) g = sum(ls ) g1 = sqrt(g.subs(x = x*r ) ) c = -1/2/sqrt(pi ) print n(-g1.subs(x=1)*c/2 ) c = n(-g1.subs(x=1)*c/2 ) # computing the list of ratio for ploting .rt = [ cam(n)*sqrt(n^3)/(c*(1/r)^n ) for n in range(2,100 ) ] plt = line([(n , n(rt[n ] ) ) for n in range(len(rt ) ) ] ) return [ plt , rt ]finally we use dynamic programming to determine the shortest monotone formula which evaluates to input integers .+ def shortesttame(n ) : `` '' " outputs the length and an example of the smallest binary - tree formula using fan - in two addition , multiplication and exponentiation gates . examples : the input n must be greater than 0 : : sage : shortesttame(6 ) [ 9 , [ * , [ + , 1 , 1 ] , [ + , 1 , [ + , 1 , 1 ] ] ] ] authors : - edinah k. 
gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " if n==1 : return [ 1,1 ] else : aluf = [ ] si = 2*n for i in range(1,n ) : t1 = shortesttame(i ) t2 = shortesttame(n - i ) if ( t1[0]+t2[0]+1 ) < si : si = t1[0]+t2[0]+1 if eval(t1[1 ] ) < = eval(t2[1 ] ) : aluf = [ + , t1[1 ] , t2[1 ] ] else : aluf = [ + , t2[1 ] , t1[1 ] ] for i in range(2,floor(n/2 ) ) : if mod(n , i)==0 : t1 = shortesttame(i ) t2 = shortesttame(n / i ) if ( t1[0]+t2[0]+1 ) < si : si = t1[0]+t2[0]+1 if eval(t1[1 ] ) < = eval(t2[1 ] ) : aluf = [ * , t1[1 ] , t2[1 ] ] else : aluf = [ * , t2[1 ] , t1[1 ] ] for i in range(2,2+floor(log(n)/log(2 ) ) ) : if floor(n^(1/i ) ) = = ceil(n^(1/i ) ) : t1 = shortesttame(n^(1/i ) ) t2 = shortesttame(i ) if ( t1[0]+t2[0]+1 ) < si : si = t1[0]+t2[0]+1 aluf = [ ^ , t1[1 ] , t2[1 ] ] return [ si , aluf ] the recurrence formula scheme for determining the minimal formula encoding is given by the following tropicalization of the enumeration recurrence formula and the discussion the special formula occurs often enough to deserve an abbreviation , we shall use here the symbol , incidentally it is immediate that the our formula encoding can be viewed as functions and this fact will of some significance in subsequent discussion . but first as we have introduced our canonical encodings let us describe two natural algorithms for recovering formula encoding for relatively large set of integers . for computing goodstein canonical forms for relatively large set of integerswe consider the following set recurrence defined by note that for , we have the implementation of the recurrence is just as straight forward .+ def goodstein(number_of_iterations=1 ) : `` '' " produces the set of symbolic expressions associated with the the first canonical form . in all the expressions the symbolic variable x stands for a short hand notation for the formula ( 1 + 1 ) .: : sage : goodstein(1 ) [ 1 , x^x , x , x^x + 1 , x + 1 , x^x + x , x^x + x + 1 ] authors : - edinah k. gnang , maksym radziwill and doron zeilberger to do : - try to implement faster version of this procedure `` '' " # initial condition of initial set n0 = [ 1 , x ] # main loop performing the iteration for iteration in range(number_of_iterations ) : # implementation of the set recurrence n0 = [ 1 ] + [ x^n for n in n0 ] # initialization of a buffer list n1 # which will store updates to n0 n1 = [ ] for n in set(n0).subsets ( ) : if n.cardinality ( ) > 0 : n1.append(sum(n ) ) n0 = list(n1 ) return n0 + as illustration for the computation one of the major benefit of the goodstein encoding is the fact the additional transformation rule results in the classical algorithms for integer addition , multiplication and exponentiation . in other words the goodstein encoding unifies into a single algorithm the seemingly different decimal algorithms for addition , multiplication and exponentiation , the price we pay for such a convenience is a factor additional space for encoding the integers .let us illustrate the general principle by recovering the goodstein encoding for the number encoded by the formula the main steps of the sequence of transformations are thus sketch bellow : + canonical form ( scf ) encoding are derived from the zeta recursion . 
^{k}2,\:2^{\left(^{\left(k-1\right)}2 + 1\right)}\right]\cap\prod_{p\in\mathbb{p}_{k}}\left\ { 1\cup p^{\check{\mathbb{n}}_{k}\cap\left[1,\log_{p}\left\ { 2^{\left(^{\left(k-1\right)}2 + 1\right)}\right\ } \right]}\right\ } \right)\ ] ] and is deduced from via completion and hence more generally we have that ^{\left(^{k-1}2+t\right)},\:2^{\left(^{k-1}2+t+1\right)}\right]\cap\prod_{p\in\mathbb{p}_{k+1}^{(t)}}\left\ { \left\ { 1\right\ } \cup p^{\check{\mathbb{n}}_{k}\cap\left[1,\log_{p}\left\ { 2^{\left(^{k-1}2+t+1\right)}\right\ } \right]}\right\ } \right)\ ] ] quite similarly is deduced from via completion and hence finally the associated rational subset construction is specified by the implementation of the zeta recurrence is therefore given by + def scf(nbitr ) : # symbol associated with the prime 2 .x = var(x ) # pr corresponds to the initial list of primes pr = [ x ] # nu corresponds to the initial list of integer nuc = [ 1,x ] ; tnuc = [ 1,x ] # initializing the upper and lower bound upr_bnd = 2 ^ 2 ; lwr_bnd = 2 # computing the set recurrence for itr in range(nbitr ) : for jtr in range(log(upr_bnd,2)-log(lwr_bnd,2 ) ) : tpnu = [ 1 ] for p in pr : tpnu = tpnu+ # keeping the elements within the range of the upper and lower bound nu = [ f for f in tpnu if ( 2^(n(log(lwr_bnd,2))+jtr)<f.subs(x=2 ) and f.subs(x=2)<=2^(n(log(lwr_bnd,2))+jtr+1 ) ) ] print iteration will find +str(2^(n(log(lwr_bnd,2))+jtr+1)-2^(n(log(lwr_bnd,2))+jtr)-len(nu))+ new primes in [ +str(2^(n(log(lwr_bnd,2))+jtr))+ , +str(2^(n(log(lwr_bnd,2))+jtr+1))+] # obtaining the corresponding sorted integer list la = [ f.subs(x=2 ) for f in nu ] ; lb = copy(la ) ; lb.sort ( ) # obtaining the sorting permutation perm = [ ] for i1 in range(len(la ) ) : for i2 in range(len(lb ) ) : if lb[i1]==la[i2 ] : perm.append(i2 ) break # sorting the list using the obtained permutation nu = [ nu[perm[j ] ] for j in range(len(nu ) ) ] # computing the set completion tnuc = tnuc + nu l = len(tnuc ) i = 2^(log(lwr_bnd,2)+jtr)-1while i < l-1 : if(tnuc[i+1].subs(x=2)-tnuc[i].subs(x=2)==2 ) : pr.append(tnuc[i]+1 ) tnuc.insert(i+1,tnuc[i]+1 ) l = l+1 else : i = i+1 # updating the list of integers nuc = tnuc # updating the upper and lower bound lwr_bnd= upr_bnd ; upr_bnd = 2^upr_bnd return [ pr , nuc ] + we deduce from the similarly the code for obtaining scf encodings for rational numbers is provided bellow .+ def rationalset(pr , nuc ) : # initialization of the rational set quc = [ 1 ] # computing the set for p in pr : quc = quc+[m*pn for m in quc for pn in [ p^n for n in nuc]+ ] return quc + if our in main interest is however to sieve out only scf encodings of primes , we would consider the following slightly modified zeta recursion such that we consider the sets and hence furthermore we have \cap\mathbb{n}_{k+1}=\bigcup_{q\in\mathbb{p}_{k}}\mathbb{n}_{q , k+1}\ ] ] finally , the set completion of to is obtained by adjoining to the set formula integer encodings of the form , for all unordered pairs of distinct elements of that the implementation of the modified zeta recursion as discussed above is discussed bellow + def n_1_k_plus_1(nk , pk , k ) : l = [ ] for q in pk : for n in range(floor(ln(2^(k+1))/ln(q.subs(x=2 ) ) ) , floor(ln(2^(k+2))/ln(q.subs(x=2 ) ) ) ) : l.append(q^nk[n ] ) return l then we consider procedure bellow which generates a script for constructing composite tower with a given number of factors + def generate_factor_script(c ) : # creating the string corresponding to the file name filename = n_+str(c)+_kplus1.sage # opening 
the file f = open(filename,w ) f.write(def n_+str(c)+_k_plus_1(nk , pk , k): ) f.write( l = [ ] ) # variable storing the spaces sp = for i in range(c ) : if i<1 : sp = sp+ f.write(sp+for p+str(i)+ in pk: ) sp = sp+ f.write(sp+for n+str(i)+ in range(floor(ln(2^(k+2))/ln(p+str(i)+.subs(x=2)))): ) elif i==c-1 : sp = sp+ f.write(sp+for p+str(i)+ in pk[pk.index(p+str(i-1)+)+1:]: ) sp = sp+ dv = for d in range(i ) : # string keeping track of the divisors if d = = i-1 : dv = dv+(p+str(i-1)+^nk[n+str(i-1)+]).subs(x=2) else : dv = dv+(p+str(d)+^nk[n+str(d)+]).subs(x=2)* f.write(sp+if floor(ln(2^(k+1)/(+dv+))/ln(p+str(i)+.subs(x=2)))>=0: ) sp = sp+ f.write(sp+for n+str(i)+ in range(floor(ln(2^(k+1)/(+dv+))/ln(p+str(i)+.subs(x=2))),floor(ln(2^(k+2)/(+dv+))/ln(p+str(i)+.subs(x=2)))): ) sp = sp+ mt = for d in range(c ) : # string keeping track of the symbolic scf expression if d = = c-1 : mt = mt+p+str(c-1)+^nk[n+str(c-1)+] else : mt = mt+p+str(d)+^nk[n+str(d)+]* f.write(sp+l.append(+mt+)return l ) else : sp = sp+ f.write(sp+for p+str(i)+ in pk[pk.index(p+str(i-1)+)+1:]: ) sp = sp+ dv = for d in range(i ) : # string keeping track of the divisors if d==i-1 : dv = dv+(p+str(i-1)+^nk[n+str(i-1)+]).subs(x=2) else : dv = dv+(p+str(d)+^nk[n+str(d)+]).subs(x=2)* f.write(sp+for n+str(i)+ in range(floor(ln(2^(k+2)/(+dv+))/ln(p+str(i)+.subs(x=2)))): ) # closing the file f.close ( ) then the main procedure which uses the two procedure implemented above is implemented here + def zetarecursionii(nbitr ) : # defining the symbolic variables x which corresponds # to shorthand notation for ( 1 + 1 ) .var(x ) # initial conditions for the zeta recursion .# initial list of primes in scf encoding pi = [ x ] # initial list of expression associated with the scf # integer encoding .ni = [ 1 ] + pi if nbitr = = 0 : return [ ni , pi , i ] # the first iteration properly starts here i = 0 rb = [ ] rb.append(ni[len(ni)-1 ] ) rb = rb + n_1_k_plus_1(ni , pi , i ) # sorting the obtainted list tmp = [ ] for f in range(2^(i+1),2^(i+2)+1 ) : tmp.append ( [ ] ) for f in rb : tmp[-2^(i+1)+f.subs(x=2)].append(f ) # filling up rb in order rb = [ ] for f in range(len(tmp ) ) : if len(tmp[f ] ) = = 1 : rb.append(tmp[f][0 ] ) else : rb.append(tmp[f-1][0]+1 ) pi.append(tmp[f-1][0]+1 ) ni = list(ni+rb[1 : ] ) if nbitr = = 1 : return [ ni , pi , i ] for i in range(1 , nbitr+1 ) : print iteration number +str(i ) rb = [ ] rb.append(ni[len(ni)-1 ] ) rb = rb + n_1_k_plus_1(ni , pi , i ) # code for going beyound a single prime factors prm = 6 c = 2 while prm < 2^(i+2 ) : generate_factor_script(c ) load(n_+str(c)+_kplus1.sage ) rb = rb + eval("n _ # since ironically c indexes the next prime we have prm = prm*integer((pi[c-1]).subs(x=2 ) ) c = c+1 # sorting the obtainted list tmp = [ ] for f in range(2^(i+1),2^(i+2)+1 ) : tmp.append ( [ ] ) for f in rb : tmp[-2^(i+1)+f.subs(x=2)].append(f ) # filling up rb in order rb = [ ] for f in range(len(tmp ) ) : if len(tmp[f ] ) = = 1 : rb.append(tmp[f][0 ] ) else : rb.append(tmp[f-1][0]+1 ) pi.append(tmp[f-1][0]+1 ) ni = list(ni+rb[1 : ] ) return [ ni , pi , i ] then running the procedure yields the following set of primes + lp3 = zetarecursionii(3)[1 ] },\sage{lp3[1]},\sage{lp3[2]},\sage{lp3[3]},\sage{lp3[4]},\right.\ ] ] },\sage{lp3[6]},\sage{lp3[7]},\sage{lp3[8]},\sage{lp3[9]},\ ] ] },\sage{lp3[9]}\right]\ ] ] incidentally the number of composites less than with the prime in their tower connected to the root is given by so that we have encoding that we discuss appears to be just as natural as the goodstein 
encoding and offers the benefit of yield considerably smaller monotone formula encodings of integers .the recursive horner encoding also has the advantage that that it can be efficiently deduced from the goodstein endcoding , this is of course not true of the scf .+ def recursivehorner(nbitr=1 ) : x = var(x ) nk = [ 1 , x , 1+x , x^x ] # initialization of the lists lek = [ x^x ] lok = [ 1+x ] lpk = [ x , x^x ] # main loop computing the encoding for i in range(nbitr ) : # updating the list lekp1 = [ m*n for m in lpk for n in lok ] + [ x^m for m in lek+lok ] lokp1 = [ n+1 for n in lek ] lpkp1 = lpk + [ x^m for m in lek+lok ] # the new replaces the old nk = nk + lekp1+lokp1 lek = lekp1 lok = lokp1 lpk = lpkp1 return nkthis material is based upon work supported by the national science foundation under agreements princeton university prime award no .ccf-0832797 and sub - contract no . 00001583 .the author would like to thank the ias for providing excellent working conditions .the author is also grateful to maksym radziwill for providing the code for the computation of the constants in the asymptotic formula , to doron zeilberger who s s initial maple implementation inspired the current implementation and to carlo sanna for insightful comments and suggestions while preparing this package .
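for readers who want to reproduce the counts quoted in this section without a sage installation, the counting recurrences cam, cama and camm can be condensed into the short plain-python transcription below. this is our paraphrase rather than part of the package: memoisation is added for speed, and the base case cam(1) = 1 is made explicit so that the quoted value cam(6) = 52 is recovered.

from functools import lru_cache

@lru_cache(maxsize=None)
def cam(n):
    # number of formulas over the constant 1, built from fan-in two addition
    # and multiplication gates, that evaluate to n
    if n == 1:
        return 1
    return cama(n) + camm(n)

@lru_cache(maxsize=None)
def cama(n):
    # formulas rooted at an addition gate
    return sum(cam(i) * cam(n - i) for i in range(1, n))

@lru_cache(maxsize=None)
def camm(n):
    # formulas rooted at a multiplication gate
    return sum(cam(i) * cam(n // i) for i in range(2, n // 2 + 1) if n % i == 0)

assert cam(6) == 52   # value quoted in the text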
|
the following document accompanies the papers, available from gnang's websites. please report bugs to gnang at cs dot rutgers dot edu. the most current version of the document is available from gnang's website at http://www.cs.rutgers.edu/~gnang .
|
inversion of poisson s equation with a finite - size source in an unbound system is commonly encountered in physics , astronomy , quantum chemistry and fluid mechanics .examples include the electrostatic problems , magnetostatic problems , gravity problems , etc . with an arbitrary source distribution in an infinite space, one would like to know what the potential appears in space and how the resulting force may act back to the source .for incompressible flows in fluid mechanics , pressure also satisfies the poisson s equation that needs to be solved in order to evolve the flow . herethe source of the poisson s equation is related to the flow vorticity , which is often localized within the active region of interest in an unbound domain .another type of problems involves the monte carlo search , such as gravitational lens problems . hereone needs to search for various lens mass distributions to obtain images consistent with observations . for dynamical or monte carlo problems mentioned above, the poisson s equation must be solved at every time step to properly follow the evolution or solved for every trial to arrive at the global minimum .a fast and accurate poisson s solver is therefore desired . in the past couple of decades , a fast and accurate poisson s solver , the modified green s function method , has already been available for a finite - size source in an unbound space .the modified green s function method properly takes into account the image charges outside the computational boundary so that the solution satisfies the periodic boundary condition and can be correctly evaluated within the original domain .moreover , the fast fourier transform ( fft ) can be adopted in this method for speedy computation . however , the cost of this method is a large computational domain required to include the contributions from image charges .the larger volume is times the original volume , where is the spatial dimension . despite that one may reduce the computation by taking advantage of some symmetries in the green s function , the scaling is unavoidable . as a result , this method begins to lose its computational speed edge for high - dimensional problems .however , the modified green s function method actually contains non - negligible errors in regions where the sources are located , especially the source has a large gradient .the error is originated from the assignment of an ambiguous value for the modified green s function at the very grid where the point source is located .different values have been proposed for different situations , and there has been no universally agreed good choice . 
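as a concrete point of reference for the scheme just described, a minimal numpy sketch of the zero-padded fft convolution with the free-space green's function is given below. the function and parameter names are ours, the equation solved is the generic lap(phi) = rho (for gravity the source would be 4 pi g rho), and the value assigned to the green's function at its singular grid point is deliberately left as a parameter, since, as discussed above, there is no universally agreed choice for it.

import numpy as np

def isolated_poisson_fft(rho, dx, g0_scale=1.0):
    # solve lap(phi) = rho for an isolated source on an n^3 grid by circular
    # convolution with the free-space kernel -1/(4 pi r) on a zero-padded
    # (2n)^3 grid, so that periodic images never overlap the physical domain;
    # the 2^d growth of the domain is the cost discussed in the text.
    n = rho.shape[0]
    m = 2 * n
    rho_pad = np.zeros((m, m, m))
    rho_pad[:n, :n, :n] = rho

    idx = np.arange(m)
    d = np.minimum(idx, m - idx) * dx          # minimum-image separation per axis
    x, y, z = np.meshgrid(d, d, d, indexing="ij")
    r = np.sqrt(x * x + y * y + z * z)
    green = np.zeros_like(r)
    green[r > 0] = -1.0 / (4.0 * np.pi * r[r > 0])
    # ambiguous value at the singular grid point, written here relative to
    # -1/(4 pi dx); this is the tunable quantity referred to in the text
    green[0, 0, 0] = -g0_scale / (4.0 * np.pi * dx)

    phi_pad = np.real(np.fft.ifftn(np.fft.fftn(rho_pad) * np.fft.fftn(green))) * dx**3
    return phi_pad[:n, :n, :n]

in practice the fft of the padded kernel is computed once and reused, since the kernel does not change between solves.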
in this paper , we provide the motivation of the proposed method in section [ sec : motivations ] .we then give the step - by - step recipe of the new method in section [ sec : detailed correction procedures ] .a three - dimensional example and a two - dimensional example are provided to illustrate the performance of the new method in section [ sec : numerical examples ] .section [ sec : discussion ] discusses a possible extension and concludes this work .the numerical errors of the modified green s function method are mostly located around where high numerical accuracy is desired .for the same amount of errors , if they were to be located in regions where the sources are absent , the solution accuracy would have been greatly improved .this is the primary motivation behind our new method .the errors produced by the new method will be moved close to the boundary where the source is nearly absent .our secondary motivation is the computational speed .we seek a new method that computes the force within the original volume with fft , and in a three - dimensional calculation it can save the computation by a factor of at least two as compared with the modified green s function method .however , this advantage may not be valid in low dimensional problems , such as the two - dimensional gravitational lens problem , where the saving in computation can be limited .we now further substantiate our primary motivation .when we compare the forces computed from a isolated source and those computed by fft from the same source , two types of error sources are found , a long - range image monopole error extending over the whole domain and short - range image multipole errors near the domain boundary .take the gravitational problem as an example .the monopole error arises from the attraction of all image masses , which correctly produces a null force at the mass center , but the force error grows away from the mass center . to remove error produced by the monopole images, we can treat the distributed mass as a mass point located at the mass center , and it is conceptually straightforward to subtract off the erroneous forces given by all image mass points .there is no dipole contribution for gravity and so the next order error is from the quadruple moments of image masses , which produce far - field short range forces , proportional to from the image positions , and the largest errors are near the domain boundary .the far - field force errors produced by the image quadrupole moments can similarly be also subtracted off conceptually , if we know how to sum the image contributions .the correction procedure can continue to any arbitrary multipole order systematically .after the -th order corrections the remaining error force will be ever shorter range , , which decays fairly rapidly across the boundary into the domain .once the principle for error subtraction is clear , the remaining question is how to sum up the far - field multipole forces accurately from infinitely many images .we may take each conventional multipole moment expansion of the original density distribution as the source , use fft to invert the poisson s equation , and compute the force that includes all image contributions , which we call the multipole fft force .the desired error sum is obtained by subtracting the fft force from the exact multipole far - field force of the actual density distribution . 
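the periodic fft inversion referred to above, whose output automatically contains the contribution of every image, can be sketched as follows (our names; again the generic convention lap(phi) = rho is used):

import numpy as np

def periodic_poisson_fft(rho, dx):
    # solve lap(phi) = rho on a periodic n^3 grid; the returned potential is
    # that of the source plus all of its periodic images, which is exactly the
    # quantity the multipole correction has to repair near the boundary.
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(rho)
    phi_hat = np.zeros_like(rho_hat)
    mask = k2 > 0
    phi_hat[mask] = -rho_hat[mask] / k2[mask]
    # the k = 0 mode (the box average of phi) is arbitrary in a periodic box
    return np.real(np.fft.ifftn(phi_hat))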
however, this procedure will yield large numerical errors for the fft force near the mass center because the multipole forces are singular ( ) at the mass center . to circumvent this problem ,a different approach is adopted .we deliberately design a well - behaved template multipole moment density distribution , for which the multipole force is also well - behaved everywhere and has an exact analytical expression .this template multipole density distribution has the same far - field multipole moment as the original density distribution , and is then inverted to obtain its fft force .the desired image sum is obtained by subtracting the fft force from the exact multipole force , and this is the force error to be removed .such a procedure avoids singularities at the mass center and is able to be carried out order by order in multipole expansion .below , the procedure for image multipole correction for three - dimensional case is described : \(a ) the multipole moment of the source , defined as , is first computed up to the order of correction desired .\(b ) a template of multipole density with analytical expressions of force is given , and we let = so that the template has the same multipole moment as the original density .\(c ) the poisson s equation is then inverted via fft in the original domain with the template as the source to compute the multipole fft force , .the difference between the fft force and the known exact analytical force , , then yields the correction force needed for this particular multipole moment .in fact , once every is determined , all can be summed together before the fft inversion .after the overall fft force is obtained , it is then subtracted from the overall exact analytical force to determine the overall force correction .\(d ) the fft force obtained from the original density distribution is finally subtracted from the result of ( c ) .the corrected forces will have high accuracy in the control region of the computational domain with under - corrected errors confined near the domain boundary .for a two - dimensional problem , similar procedures can be straightforwardly followed . as a technical note , each multipole component of the template multipole density and force has a symmetry .when any of the seven mappings , , , , , and is performed , they will assume the same values but with different signs given by the symmetry .it then follows that the template multipole densities and forces need to be computed only in of the domain .when the computer memory space is allowed , each template multipole density and force should be computed only once and stored in the memory , as they will always be the same regardless of the changing source mass distribution .in appendix , we list the template density - potential pairs used in these examples .\(a ) three dimensional case gaussian mass spheres : we provide a 3d example with 6 gaussian spheres of the same central densities but various sizes to demonstrate the accuracy of the present method compared with the modified green s function method .each gaussian sphere has an exact known force and the composite forces from these gaussian spheres are also known exactly by superposition .the sizes of the gaussian spheres vary from grids to grids in a domain of grids . 
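steps (a)-(d) can be condensed into the following monopole-only sketch. all names are ours, poisson_solver stands for any periodic fft inversion of lap(phi) = rho (for instance the periodic sketch above), the template is assumed to be centred on the mass centre of the source, and its exact isolated force is assumed to be supplied (for instance from the profiles listed in the appendix). the full method simply repeats the same subtraction for every multipole order retained.

import numpy as np

def image_corrected_force(rho, dx, template_rho, template_force_exact, poisson_solver):
    # rho, template_rho       : densities on the same n^3 grid
    # template_force_exact    : (3, n, n, n) analytical isolated force of template_rho
    # poisson_solver(rho, dx) : periodic fft solve of lap(phi) = rho

    def fft_force(density):
        phi = poisson_solver(density, dx)
        gx, gy, gz = np.gradient(phi, dx)   # central differences; spectral derivatives work too
        return -np.stack([gx, gy, gz])

    # (a)/(b): match the monopole moment of the template to that of the source
    scale = rho.sum() / template_rho.sum()
    # (c): correction = exact isolated force of the scaled template minus its periodic fft force
    correction = scale * template_force_exact - fft_force(scale * template_rho)
    # (d): add the correction to the periodic fft force of the actual density
    return fft_force(rho) + correction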
for convenience of displaying the errors produced by the modified green s function method , which are concentrated at the sources , we deliberately place the centers of six sources on two orthogonal planes ; on each plane , 3 sources are placed randomly but confined within the inner half of the box .one plane contains 3 narrow spheres and the other 3 wide spheres .the metric of goodness for any given method is the ratio of the residue error force strength to the original force strength at the same location . to avoid the force error arising from numerical differentiation on the potential produced by the modified green s function method , the three force components arealso directly computed by analytically differentiating the modified green s function .we then examine the residue error configurations for both the modified green s function method and the image multipole method .the multipole image method contains multipoles up to .plotted in fig .( [ fig : err_green ] ) is the percentage error of force for the modified green s function method , and in fig .( [ fig : err_multipole ] ) that for the image multipole method .these errors are displayed on the two slices where the source centers lie .it is clear for the modified green s function method that the force errors are confined within the sources , with smaller spheres having larger force errors , and away from the sources the force error rapidly approaches zero .the error is of small scale and can lead to internal distortion of the mass objects , more so for smaller ones . on the other hand , the errors peak at the boundary for the image multipole method , with the error strongly depending on the distance from the boundary and being insensitive to the detailed spatial distribution of the sources .these errors are smooth and of large scale , and can lead to erroneous acceleration of the outlier objects , more so for those closer to the domain boundary .pixels in the left one , and the other 3 vary from pixels in the right one .the coordinates are in pixels .note that the error is larger at the locations where the sources are placed . ] ) , except for the use of the image multipole method . noted that the error is larger around the peripheral region . ]we however note a caveat for the modified green s function method .the accuracy of this method depends on the value of the radial derivative ( force ) immediately next to the central singularity of the modified green s function .this feature is similar to a -dependent softening length in the force , , where the profile of the softening length depends on the size of the mass object .we find that a particular size of mass object is optimized by one particular value of force around the central singularity , and this value is not optimal for objects of other sizes .normally the optimal force is larger for a more compact object . even after optimization, the most compact object still contains the biggest error . in the present case , we take the green s function force one pixel away from the singularity to be , to be compared with the force value given by the unmodified force law at the same location . this larger value of green s function force , representing force hardening , minimizes the force error in the most compact object . 
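for completeness, the exact reference force used in such comparisons follows from the enclosed mass of an isolated gaussian sphere; a short sketch in our notation, with density rho(r) = m exp(-r^2 / 2 sigma^2) / ((2 pi)^(3/2) sigma^3) and g = 1 by default:

import numpy as np
from scipy.special import erf

def gaussian_sphere_force(r, mass, sigma, G=1.0):
    # radial gravitational force magnitude of an isolated gaussian sphere,
    # obtained from the analytically known enclosed mass m(<r)
    r = np.asarray(r, dtype=float)
    m_enc = mass * (erf(r / (np.sqrt(2.0) * sigma))
                    - np.sqrt(2.0 / np.pi) * (r / sigma) * np.exp(-r**2 / (2.0 * sigma**2)))
    force = np.zeros_like(r)
    inside = r > 0
    force[inside] = G * m_enc[inside] / r[inside]**2
    return force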
for a comparison with fig .( [ fig : err_green ] ) , shown in fig .( [ fig : err_green_s ] ) is the much greater percentage error by having all pixels to follow the unmodified force law .it is surprising to find that one actually needs the force hardening to capture the correct gravity of a continuous source instead of the force softening used to mimic the softened gravity of collisionless particles .( [ fig : err_green_s ] ) also shows that the errors are isotropic compared with the octuple errors in fig .( [ fig : err_green ] ) using the optimal parameter .this result reflects the fact that minimization of force errors is often incompatible with isotropy of force errors .for some applications it is worth sacrificing some force accuracies for a more isotropic force . despite that this paper aims to address the fluid - based self - gravity , which has subtle differences from the particle - based self - gravity, we briefly address the particle - based gravity to shed light on its similarities . in astrophysics , particles are normally collisionless , and to avoid two - body relaxation a soften length is implemented , thereby smoothing the singular gravity of a particle .the degree of softening depends on particle density , with more softening in a system of less particles density .in fact , by carefully shaping the particle softening , one can achieve the force resolution at the grid scale and below .the continuity of force resolution extending below the grid scale permits the addition of sub - grid particle - particle interactions , which is the basis for the particle - particle - particle , mesh method aiming to solve high density gradient problems .the fluid - based self - gravity is the limiting case for infinitely many particles per cell , therefore requiring minimal force softening ; in fact , the optimal case turns out to require force hardening .also , compared with the modified green s method the image multipole method performs well for small - scale structures , as illustrated in figs .( [ fig : err_green ] ) and ( [ fig : err_multipole ] ) , thus capable of handling high source gradient problems .the absence of an optimal modified green s function for general mass distributions makes it impossible to evaluate the force errors accurately .nevertheless the force error generally scales as for a given choice of green s function central derivative , where is the size of the mass object and the grid size , as described below . on the other hand ,the force error for the image multipole method depends on the truncation degree , , of the multipole expansion for sources located at least half box size away , and hence the largest force error at the boundary for the image multipole method scales as . to test how each method can be improved, we increase the grid resolution by a factor of two for the green s function method and enlarge the domain by a factor of two to place the boundary twice further away for the image multipole method .both increase the pixel number by a factor of 8 .as expected , the latter improves by two orders of magnitude since the residue force error arises from the contribution of and beyond .but the former improves only by a factor about 8 .this comparison illustrates that the image multipole method has a systematic way for drastic improvements whereas the modified green s function method has only limited improvements . 
) , except for the use of the green s function that has no tuning of radial derivatives near the central singularity .note that the error is almost two orders of magnitude greater compared to fig .( [ fig : err_green ] ) . ] to better illustrate the problem of the modified green s function method , we now consider the geometric error , which must be exactly zero for the gravitational force . as a result of force errors ,the numerical gravitational force can contain a pseudo - vector .given the force errors of figs .( [ fig : err_green ] ) and ( [ fig : err_multipole ] ) , we now show yielded by both methods in fig .( [ fig : err_curl ] ) . for convenience , we show only one component of perpendicular the plane of slice in fig .( [ fig : err_curl ] ) .to gain an idea of the magnitude of the error , we normalize by , a component of the zeroth - order shear tensor , where is the potential .it is clear that the modified green s function method creates substantially large geometric errors interior to the domain , especially concentrated at locations where the mass objects are .it can produce non - negligible erroneous circulation flows within mass objects . of the three small spheres computed by both methods normalized to the corresponding component of the shear tensor , where is the potential .the color bar is also in the base 10 logarithm .again , the error distributions are similar to force errors in figs .( [ fig : err_green ] ) and ( [ fig : err_multipole ] ) . ]finally , let us turn to the computational speed for the gravitational force calculation , where the inversion of poisson s equation may be a bottleneck for 3d dynamical simulations .we find the force calculation of the modified green s function method takes about twice more time than that of the image multipole method .this ratio can be accounted for by the fact that the modified green s function method computes fft and inverse fft in a volume 8 times the original volume , but the image multipole method computes fft and inverse fft two times in the original volume , one for the actual mass distribution and the other for the template mass distribution .this gains a factor of 4 in the fft computations for the image multipole method .the remaining extra tasks to compute the moments and the template forces only take an extra small fraction of fft computing time to come up with a speedup factor 3.6 .on the other hand , the green s function method can take advantage of the symmetry of the spectral space to reduce the computation by roughly a half .thus , the image multipole method is a factor 1.8 times faster than the modified green s function method .\(b ) two dimensional case lens within lens : a practical two - dimensional example for a strong gravitational lens problem is considered here .we consider two cored isothermal lenses at the same redshift .the main lens is elliptical located at the center and the small lens , as a subhalo , is spherical located near the critical line of the main lens ; thus the critical line of the main lens gets seriously distorted .it is quite common in the lens observations that one of the multipole images is located near a small lens , and is therefore perfect for illustrating the strength of the image multipole method . shown in fig . 
([ fig : lens_ex ] ) is the critical line of radius about 1 , corresponding to a lens galaxy at and a source at .the critical lines produced by the two numerical methods and by the analytical solution are indistinguishable from each other at the resolution of fig .( [ fig : lens_ex ] ) .we let the source ( black dot ) be a quasar , for which the optical emission region is on the order of one micro - arcsecond , i.e. , practically a point source .moreover , the source is located near the corner of the caustics and produces a quad image , where one ( a ) of the three near - by images ( b , a and c ) is co - spatial with the small lens , as depicted in the upper - left inset .note that the forces of elliptical and spherical core isothermal lenses have analytical expressions , and hence we know the exact loci of caustics and critical curves . )view of the four source positions produced by the modified green s function(triangle and subscript g ) and image multipole(square and subscript m ) methods . ] in the lens mass modeling , one normally obtains four source positions calculated from the quad image positions for a given lens mass model . even for a fairly accurate mass model ,it is nearly impossible to yield four spatially coinciding sources . as a result ,one evaluates the accuracy of the lens mass model by finding how close the four sources are from the best source position , and how well the magnifications of the four images agree with the observed fluxes .the upper - right inset of fig .( [ fig : lens_ex ] ) shows the zoom - in view of the four source positions produced by each method . in this case , we have used pixels in the image plane for analysis , with a field 5 times bigger in linear size than that of the einstein ring of 1 radius .this corresponds to 0.01 per pixel .in addition , image multipoles are corrected up to . before addressing the error estimate of each method, we show in figs .( [ fig : err_compare]a ) and ( [ fig : err_compare]b ) the absolute values of errors in deflection angles( ) and in magnification $ ] . here, is the surface density normalized to the critical surface density and is the shear normalized to the critical surface density , where is the kronecker delta function .the performance differences in deflection angles and in magnifications are obvious . whether such errors are acceptable or not is gauged by comparing these errors with observational uncertainties , namely the positional uncertainty ( seeing blurring and telescope diffraction ) and the flux uncertainty ( photometry error ) , both of which are evaluated at the image plane. however , the positional errors are analyzed at the source plane in practice . 
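the deflection-angle, shear and magnification fields entering these error measures can all be obtained from the convergence map by a periodic fft inversion of the two-dimensional lens potential. a minimal sketch is given below, using the standard lensing relations lap(psi) = 2 kappa, alpha = grad psi and mu = 1/((1 - kappa)^2 - gamma1^2 - gamma2^2); the names are ours, and this periodic inversion is precisely the operation whose image contributions must be corrected.

import numpy as np

def lens_maps_from_kappa(kappa, dx):
    # periodic fft inversion of lap(psi) = 2*kappa, followed by the deflection,
    # shear and magnification fields on the same pixel grid
    n1, n2 = kappa.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n1, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(n2, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                              # avoid division by zero
    psi_hat = -2.0 * np.fft.fft2(kappa) / k2
    psi_hat[0, 0] = 0.0                         # drop the arbitrary mean mode

    def ifft(field_hat):
        return np.real(np.fft.ifft2(field_hat))

    alpha_x = ifft(1j * KX * psi_hat)
    alpha_y = ifft(1j * KY * psi_hat)
    gamma1 = ifft(-0.5 * (KX**2 - KY**2) * psi_hat)
    gamma2 = ifft(-KX * KY * psi_hat)
    mu = 1.0 / ((1.0 - kappa)**2 - gamma1**2 - gamma2**2)
    return alpha_x, alpha_y, gamma1, gamma2, mu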
to account for this difference, we divide the image positional uncertainties by the squared root of magnification of every individual image to translate the image positional uncertainties to the source positional uncertainties , for the reason that a blurred image after de - lensed should recovered a sharp source with a smaller uncertainty .also , as the four quasar images have different brightness , therefore different signal - to - noise ratios for determination of image positions , we hence optimally weigh each image position by the square root of magnification again in the definition of ( see below ) .having such an adjustment for the error measure , we now consider the hubble space telescope observations of the ultra - violet channel , for which the positional uncertainty and the photometry uncertainty magnitude .we let = , the sum of contributions from positional errors and magnification errors , where and with , and being the calculated source position and the exact source position , and and the image flux and the expected source flux and is the exact magnification of each image . herethe index runs from 1 to 4 for the quad images . the example configuration given in fig .( [ fig : lens_ex ] ) , quite common in strong lens observations , yields that and , and and for the green s function method and the image multipole method , respectively .not surprisingly , the magnification errors set it apart for the performance of the two methods ; the image multipole method produces far more accurate image magnifications . even for the positional errors , the image multipole method is still better by a factor 3 .there are a couple of caveats for the image multipole method . first ,when the mass distribution is extended near the computational boundary , the domain of computation must be substantially enlarged for this method to work properly , thereby increasing the computational load .this issue on the other hand creates lesser a problem for the green s function method , as it works even when the mass extends up to very close to the boundary .in such a case , the image multipole method will lose the edge .this difficulty can not be easily alleviated by increasing the order of truncation .the increase of multipole order from to requires more volume integrations for evaluating the multipoles , with a gain in the error reduction by a factor only of order , where and are the source size and the computation box size , respectively .if is comparable to , there is essentially no gain compared to the strategy of increasing the box size , for which the force error is reduced by , with and being the original and the enlarged computational box sizes .a related problem is when the source does not vanish at the computational boundary , such as an infinitely - extended plummer s sphere .the mass distribution of the plummer s sphere in an isolated cube contains not only the monopole moment but also moments beyond .hence the force is no longer radial .the image multipole method can not recover the ideal radial force of plummer s sphere but the force of mass distribution containing and even moments beyond .this problem is generic and also exists for the modified green s function method . 
to avoid this problem, the source must be truly isolated and vanishes at the computational boundary .second , when the mass distribution contains no substantial low - order multipole moments other than the monopole , the image multipole method must proceed to sufficiently high orders to exercise its corrective power .the situation occurs for very symmetrical mass configurations , for example , 6 identical mass clumps located at the center of the 6 faces of a cube , which , apart from the monopole , has multipole moments beginning at , and the corrections for and have null effects .this is probably the worst scenario for the image multipole method . in view of these problems for the image multipole method, it may be possible to adopt a hybrid method combining the strengths of both .near the domain boundary , one can use the modified green s function method to compute the isolated gravity , and in regions where mass clumps are present , one can employ the image multipole method . matching the gravities computed by both methods at locations where errors of both are small can be a non - trivial problem . but given the opposite trends clearly shown in this work , the hybrid method deserves serious attention when a highly accurate solution of poisson s equation is desired . in sum, we present a new method to compute the force given by a finite - sized source of the poisson s equation .this new method places the numerical errors close to the boundary , leaving the source region almost error free .the performance is compared with an existing method , the modified green s function method . with the image multipole correction up to for 3d and for 2d ,we show that the image multipole method can create smaller errors at the source region , and moreover in 3d calculation its computation load can be a factor of lighter than the modified green s function method . unlike the modified green s function method , where systematic improvements of the approximation are quite limited, the accuracy of this new method can be increased drastically by enlarging the computational domain , making this method generally a better choice when high numerical accuracy is desired .we acknowledge the support from the national science council of taiwan with the grant : nsc-100 - 2112-m-002 - 018 .we thank ui - han zhang for many useful discussions .99 blandford r. , narayan r. , 1986 , apj , 310 , 568 budiardja , r. d.,cardall , c. y. , 2011 , comput .182 , 2265 dehnen , w. 2001 , mnras , 324 , 273 eastwood j. w. , brownrigg d. r. k. , 1979 , j. comput .phys . , 32 , 24 fellhauer , m. , kroupa , p. , baumgardt , h. , bien , r. , boily , c. m. , spurzem , r. , wassmer , n. , 2000 , newa , 5 , 305 gunzburger , m. , d. , 1993 , incompressible computational fluid dynamics .cambridge university press keeton c. r. , 2001a , preprint ( astro - ph/0102341 ) keeton c. r. , 2001b , preprint ( astro - ph/0102340 ) kochanek , c. s. 1991 , apj , 373 , 354 koptelova , e. , chiueh , t. h. , chen , w. p. , chan , h. h. , 2014 , a&a , 566 , 10 kormann r. , schneider p. , bartelmann m. , 1994 , a&a , 284 , 285 vegetti , s. , czoske , o. , koopmans , l. v. e. 2010 , mnras , 407 , 225 richard j. , kneib j. , limousin m. , edge a. , jullo e. , 2010 , mnras , 402 , l44 schneider , p. , kochanek , c. , wambsganss , j. , 2006 , gravitational lensing - strong , weak and micro .spinger - verlag , berlin suyu , s. h. , halkola , a. , 2010 , a&a , 524 , a94 szabo , a. , ostlund , n. , s. 
, 1996 , modern quantum chemistrythe choice of density - potential pairs is not unique , provided the density is regular and nearly zero close to the domain boundary .here we show the template profile adopted in this paper as an example . for the 3d case ,our density - potential pairs for multipoles are defined as and where specifically , we choose the radial dependence of the monopole as and and the radial dependence of the higher order multipoles as and . \end{split } \label{equ : phi_3d_q}\ ] ] in this work , we choose and for the 3d template profile .the unit of and is in pixel .similarly for this radial profile for the 2d case , we define and where the radial dependence of the monopole is chosen to be and \over 4\pi\left(a^3e^{-{r^2\over2a^2}}+a r^2\right)^2},\ ] ] and that of the higher - order multipoles and }\times\\ & \left[{2 m r^2 e^{-{r^2\over2b^2}}\over b^2 } - \left(1-e^{-{r^2\over2b^2}}\right)\left(2m-2+{r^2\over2b^2}\right)\right ] . \end{split}\ ] ] in this work , we choose and for the 2d template profile .the gaussian weighting in density in 2d and 3d is to render the density to approach zero rapidly .the unit of and is in pixel .[ lastpage ]
|
gravitation -- gravitational lensing : strong -- methods : numerical
|
the perpetual motion of water carves the surface of the earth by entraining and carrying sediment from one location to another , leading to changes of morphology in the ocean and particularly along the coastline .scientists rely on fundamental understanding of sediment transport to explain and predict the dynamic evolution of the seabed and coastal bathymetry at various spatial and temporal scales ; engineers utilize the understanding of the sediment transport mechanisms to design better civil defense infrastructure , which mitigates the impact of coastal hazards such as storm surges and tsunamis on the coastal communities .however , the understanding and prediction of sediment transport are hindered by the complex dynamics and numerous regimes .traditional hydro- and morphodynamic models for sediment transport simulations heavily relied on phenomenological models and empirical correlations to describe sediment erosion and deposition fluxes , which lack universal applicability across different regimes and can lead to large discrepancies in predictions . with the rapid growth of available computational resources in the past decades ,many high - fidelity models have been proposed , including two - fluid models , particle - resolving models , and interface - resolving models .two - fluid models describe the particle phase as a continuum and thus need constitutive relations to account for the particle particle collisions and fluid particle interactions .particle - resolving models explicitly track the movements of all particles and their collisions , which are thus much more expensive than two - fluid models .empirical models are still used to compute the fluid particle interaction forces . in interface - resolving models , not only individual particles but also the detailed flows fields around particle surfaces are fully resolved .consequently , they are more expensive than particle - resolving models but require even less empirical modeling .particle - resolving models can accurately predict particle phase dynamics such as vertical and horizontal sorting due to densities , sizes , shapes , which are important phenomena in nearshore sediment transport . possibly constrained by computational resources at the time , early particle - resolving models used highly simplified assumptions for the fluid phase by modeling the fluid as two - dimensional layers .the number of particles was also limited to a few thousand particles , and thus the computational domain covers only a few centimeters or less for particle diameters typical for coastal sediments . as a result , these methods were limited to featureless bed under specific flow conditions ( e.g. , intense sheet flow conditions , where the layer fluid assumption is valid ) . in the past few years , researchers started to use modern , general - purpose particle - resolving solvers based on computational fluid dynamics discrete element method ( cfd dem ) to study sediment transport . in cfd dem , reynolds averaged navier stokes ( rans ) equations or large eddy simulations ( les ) are used to model the fluid flows , which are coupled with the discrete element method for the particles . the cfd dem has been used extensively in the past two decades in the chemical and pharmaceutical industry on a wide range of applications such as fluidized beds , cyclone separator , and pneumatic conveying . 
on the other hand, special - purpose codes have been used to study specific regimes of sediment transport , where solvers are developed based on and valid for only the sediment transport regime to be studied , e.g. , bedload transport under two - dimensional , laminar flow conditions .however , the use of modern , general - purpose cfd dem solvers as those used in chemical engineering applications to simulate sediment transport is only a recent development in the past few years . in his pioneering work , used an open - source cfd dem solver to study suspended sediment transport .the merits and significance of schmeecle s pioneering work are summarized as follows : ( 1 ) it is the first work done by using modern cfd dem solver in the simulation of sediment transport , especially in the suspended sediment transport regime ; ( 2 ) rich data sets are obtained by the cfd dem solver that are very difficult to obtain in the field or the laboratory ; ( 3 ) several questions of the mechanics of sediment transport are answered , including the mechanisms of saltation and entrainment ; ( 4 ) interesting and insightful phenomena are observed , including the increase of bed friction at the transition of suspension . however , a theoretical limitation of his work is that the influence of particle volume fraction on the fluid flow is not considered , since the volume fraction does not appear in the fluid continuity equation ( see eq .( 1 ) in ) .this choice was likely made to avoid the destabilizing effects of the volume fraction on the les equations .moreover , the fluid particle drag law adopted in his work does not explicitly account for the volume fraction .consequently , the drag law he used is not able to represent the varying shielding effects of particles under different particle loading conditions .this effect is important in particle - laden flows where the flow field has disparate distributions of particle loadings from very dilute to very dense , which is the consensus of the cfd dem community in simulating industrial particle - laden flows . finally , the study by focused on suspended sediment on featureless beds with comparison of sediment transport rates to empirical formulas in the literature .many other regimes of sediment transport such as bedload transport as well as more complex patterns such as the formation and evolution of bedforms are still yet to be studied . studied the transport of cuttings particles in a pipe with cfd dem , where a volume - filtered les approach is used to model the fluid flow .the emergence of small dunes and sinusoidal dunes from an initially flat particle bed under different flow velocity are observed , demonstrating the capability of cfd dem in predicting the stability characteristics of sediment beds .however , quantitative comparisons with experimental data are limited to a few integral quantities such as holding rate , and a more detailed validation with experimental or numerical benchmark data were not performed . in summary , while a few researchers have made attempts in using cfd dem to study sediment transport and have obtained qualitatively reasonable predictions , a rigorous , comprehensive study of sediment transport in a wide range of regimes with detailed quantitative comparisons with benchmark data is still lacking .this study aims to bridge this gap by tackling the unique challenges for the cfd dem posed by the physical characteristics of sediment transport problems , which are detailed below . 
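before turning to those challenges , the shielding effect mentioned above deserves a concrete illustration . the sketch below multiplies a schiller naumann single - sphere drag by a voidage function of the di felice ( 1994 ) type ; this is only one common parameterisation , the exact grouping of voidage factors differs between implementations , and it is not necessarily the drag correlation adopted in the paper .

```python
import numpy as np

def drag_force(u_rel, eps_f, d_p, rho_f=1000.0, nu=1e-6):
    """drag on one sphere, with a voidage correction for the shielding effect.

    u_rel : fluid velocity minus particle velocity (m/s)
    eps_f : local fluid volume fraction (eps_f = 1 recovers the isolated sphere)
    the schiller-naumann cd and the di felice (1994) exponent chi are common
    choices; other correlations group the voidage factors differently."""
    re = max(abs(u_rel) * d_p / nu, 1e-12)               # particle reynolds number
    cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)          # single-sphere drag coefficient
    f_single = 0.125 * rho_f * np.pi * d_p ** 2 * cd * abs(u_rel) * u_rel
    chi = 3.7 - 0.65 * np.exp(-0.5 * (1.5 - np.log10(re)) ** 2)
    return f_single * eps_f ** (-chi)

# in a dense near-bed region (eps_f ~ 0.5) the per-particle drag at fixed slip
# velocity is several times the isolated-sphere value; a drag law without the
# voidage term cannot represent this dependence on the local particle loading.
for eps_f in (1.0, 0.8, 0.6, 0.5):
    print(f"eps_f = {eps_f:.1f} -> drag = {drag_force(0.2, eps_f, 5e-4):.2e} N")
```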
given the decades of experience of using cfd dem in chemical engineering applications , one may expect that all these experiences should be straightforwardly transferable to simulations of sediment transport . unfortunately , this is not the case . first , most of the critical phenomena such as incipient motion , entrainment , suspension , and mixing of suspended sediments with water occur in a boundary layer near the interface of the fluid and the sediment bed . adequately resolving the flow features within the boundary layer such as the mean velocity gradient , shear stress , and turbulent coherent structures is essential for capturing the overall dynamics of fluid and particle flows . in contrast , in fluidized bed applications , the dynamics of the fluids and particles in the entire bed are of equal importance . accurately resolving the boundary layer features poses both theoretical and practical challenges for cfd dem . this is because the characteristic length scales of the flow can be comparable to or smaller than the particle diameters , but the cfd dem describes the fluid flows with _ locally averaged _ navier stokes equations , which are only valid at scales much larger than the particle size . moreover , since the carrier phase ( water ) and the dispersed phase ( particles ) have comparable densities in sediment transport , many effects that are negligible in gas - solid flows such as added mass effects and lubrication are important in sediment transport . in comparison , the density of the carrier phase ( air or other gases ) in gas - solid flows is about three orders of magnitude smaller than that of the particles . consequently , the fluid particle interactions are dominated by the drag forces , while the other forces mentioned above are of secondary importance and can be neglected . in this work , we demonstrate that cfd dem is able to capture the essential features of sediment transport in various regimes with a small fraction of the computational cost of interface - resolved models . at the same time , detailed features of the bed dynamics in turbulent flows are reproduced correctly , which is beyond the reach of lower - fidelity models such as two - fluid models or phenomenological - model - based morphodynamic simulations . furthermore , we demonstrate that improved results can be obtained by properly accounting for the effects of particle volume fraction on the fluid dynamics and the fluid - particle interaction forces .
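a back - of - the - envelope comparison makes the point about added mass : its weight relative to the particle inertia scales with the fluid - to - particle density ratio , so it matters for sand in water but is negligible for sand in air . the numbers below are illustrative .

```python
# weight of the added-mass term relative to particle inertia: the equation of
# motion contains m_p dv/dt on one side and, among other terms,
# c_m * rho_f * v_p * d(u - v)/dt on the other, so the ratio c_m*rho_f/rho_p
# indicates how important added mass is. values are illustrative.
rho_p = 2650.0          # quartz sand (kg/m^3)
c_m = 0.5               # added-mass coefficient of a sphere
for name, rho_f in [("water", 1000.0), ("air", 1.2)]:
    ratio = c_m * rho_f / rho_p
    print(f"{name:5s}: added-mass / particle inertia ~ {ratio:.1e}")
# water: ~1.9e-01 -> not negligible;  air: ~2.3e-04 -> safely neglected
```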
therefore , when properly used , cfd dem can be a powerful and practical tool to probe the fundamental dynamics of sediment transport across a wide range of regimes .the rest of the paper is organized as follows .section 2 presents the theoretical framework of cfd dem approach .the technique adopted to address the difficulty of comparable scales between the boundary layer and the particle sizes in sediment transport is introduced .section 3 summarizes the implementation of the cfd dem solver sedifoam and the numerical methods used in the simulations .the results are presented and discussed in section 4 and 5 , respectively .finally , section 6 concludes the paper .in cfd dem , the translational and rotational motion of each particle is calculated based on newton s second law as the following equations : [ eq : newton ] where is the velocity of the particle ; is time ; is particle mass ; represents the contact forces due to particle particle or particle wall collisions ; denotes fluid particle interaction forces ; denotes body force .similarly , and are angular moment of inertia and angular velocity of the particle ; and are the torques due to contact forces and fluid particle interactions , respectively .to compute the collision forces and torques , the particles are modeled as soft spheres with inter - particle contact represented by an elastic spring and a viscous dashpot .the fluid phase is described by the locally - averaged incompressible navier stokes equations .assuming constant fluid density , the governing equations for the fluid are : [ eq : ns ] where is the solid volume fraction ; is the fluid volume fraction ; is the fluid velocity .the terms on the right hand side of the momentum equation are : pressure gradient , divergence of the stress tensor ( including viscous and reynolds stresses ) , gravity , and fluid particle interactions forces , respectively . in the present study, we used large - eddy simulation to resolve the flow turbulence in the computational domain .we applied the one - equation eddy viscosity model proposed by as the sub - grid scale ( sgs ) model .the eulerian fields , , and in eq .( [ eq : ns ] ) are obtained by averaging the information of lagrangian particles .the fluid - particle interaction force consists of buoyancy , drag , lift force , and added mass force .although the lift force and the added mass force are usually ignored in cfd dem simulations , they are important in the simulation of sediment transport . the drag on an individual particle is formulated as : where and are the volume and the velocity of particle , respectively ; is the fluid velocity interpolated to the center of particle ; is the drag correlation coefficient which accounts for the presence of other particles . the drag force model proposed by applied to the present simulations .the lift force on a spherical particle is modeled as : where indicates the cross product of two vectors ; is the diameter of the particle ; is the lift coefficient .the added mass force is modeled as : where is the coefficient of added mass .the hybrid cfd dem solver _ sedifoam _ is developed based on two state - of - the - art open - source codes in their respective fields , i.e. 
, a cfd platform openfoam ( open field operation and manipulation ) developed by and a molecular dynamics simulator lammps ( large - scale atomic / molecular massively parallel simulator ) developed at the sandia national laboratories .the lammps openfoam interface is implemented for the communication of the two solvers .the solution algorithm of the fluid solver in _ sedifoam _ is partly based on the work of on bubbly two - phase flows .the code is publicly available at https://github.com/xiaoh/sedifoam under gpl license .detailed introduction of the implementations are discussed in . the fluid equations in ( [ eq : ns ] )are solved in openfoam with the finite volume method .the discretization is based on a collocated grid , i.e. , pressure and all velocity components are stored in cell centers .piso ( pressure implicit splitting operation ) algorithm is used to prevent velocity pressure decoupling .a second - order central scheme is used for the spatial discretization of convection terms and diffusion terms .time integrations are performed with a second - order implicit scheme .an averaging algorithm based on diffusion is implemented to obtain smooth , and fields from discrete sediment particles . in the averaging procedure ,the diffusion equations are solved on the cfd mesh .a second - order central scheme is used for the spatial discretization of the diffusion equation ; a second - order implicit scheme is used for the temporal integration .simulations are performed using cfd dem for three representative sediment transport problems : ` flat bed in motion ' , generation of dunes , and suspended sediment transport .the objective of the simulations is to show the capability of cfd dem for different sediment transport regimes .the first two simulations aim to demonstrate that cfd dem can capture the features of sediment patterns with a small fraction of the computational cost of interface - resolved method .therefore , the results obtained are validated with both the numerical benchmark data and experimental results .the purpose of the third simulation is to show the capability of cfd dem in ` suspended load ' regime at high reynolds number .the results obtained in ` suspended load ' regime are validated using experimental data .the numerical setup of the simulations is detailed in section [ sec : run - setup ] .the study of sediment transport in ` flat bed in motion ' regime is presented in section [ sec : run1-flat ] .the generation of ` small dune ' and ` vortex dune ' is discussed in section [ sec : run2-dune ] .section [ sec : run3-suspend ] details the study of sediment transport in ` suspended particle ' regime .the numerical tests are performed using a periodic channel .the shape of the computational domain and the coordinates system are shown in fig .[ fig : layout - st2 ] .the cartesian coordinates , , and are aligned with the streamwise , vertical , and lateral directions .the parameters used are detailed in table [ tab : param - sedi ] .the numbers of sediment particles range from 9,341 to 330,000 for sediment transport problems of different complexities .cfd dem is used to study the evolution of different dunes according to the regime map in fig .[ fig : dune - regime ] .this is to demonstrate the capability of cfd dem in the prediction of dune migration .it can be seen from the regime map that the dune height increases with galileo number , which is due to the increase of particle inertia .simulations at different galileo numbers are performed to show that cfd dem is able to 
predict the generation of both ` small dune ' and ` vortex dune ' .it can be also seen in fig .[ fig : dune - regime ] that the size of the dunes is growing from ` small dune ' to ` vortex dune ' then to ` sinusoidal dune ' with the increase of reynolds number .however , the influence of reynolds number to the dune generation is smaller than that of galileo number .the geometry of different numerical tests are shown in fig .[ fig : layout - st2 ] .the boundary conditions in both - and -directions are periodic in all cases . for the pressure field, zero - gradient boundary condition is applied in -direction .however , there are slight differences in the boundary condition for the velocity field . in case 1 and 2a ,the flow is bounded in the vertical direction by two solid walls and no - slip boundary condition is applied . on the other hand , in case 2b and 3 , the simulations are performed in open channels . in the open channel ,no - slip wall is applied at the bottom while free - slip condition is applied on the top .the cfd mesh is refined at the near - wall region and the particle - fluid interface in the vertical ( - ) direction to resolve the flow at the boundary layer . since the cfd mesh is refined and smaller than the size of sediment particle , a diffusion - based averaging algorithm proposed by the authors applied to average the quantities ( volume fraction , particle velocity , fluid - particle interaction force ) of lagrangian particles to eulerian mesh .the bandwidth used in the averaging procedure is in - and - directions and in -direction . to model the no - slip boundary condition of sediment particles, an artificial rough bottom is applied using three layers of fixed sediment particles .the fluid flow is driven by a pressure gradient to maintain a constant flow rate . to resolve the collision between the sediment particles , the contact force between sediment particles is computed with a linear spring - dashpot model . in this model ,the normal elastic contact force between two particles is linearly proportional to the overlapping distance .the stiffness , the restitution coefficient , and the friction coefficient are detailed in table [ tab : param - sedi ] .the time step to resolve the particle collision is 1/50 the contact time to avoid particle inter - penetration .the initialization of the numerical tests follows the numerical benchmark using direct numerical simulations ( dns ) .the initial positions of the particles are determined in a separated simulation of particle settling without considering the hydrodynamic forces . in the particle settling simulation ,particles fall from random positions under gravity with inter - particle collisions . to initialize the turbulent flow in case 2b and 3 , the simulations first run 20 flow - through times with all particles fixed at the bottom ..parameters used in different simulations of sediment transport . 
[ tab : dune - tab ] the sediment transport rates from the present simulations are compared with the experimental data in fig . [ fig : dune - rate ] . it can be seen that the sediment transport rates of both the ` small dune ' regime and the ` vortex dune ' regime are consistent with the experimental results . this agreement supports the conclusion that the dunes formed at the bottom do not significantly influence the sediment transport rate . it is noted that the sediment transport rate obtained in case 2b is significantly larger than the prediction of case 2a . this is because the flow regimes of the two test cases are different . in the ` small dune ' generation test , the particles are rolling and sliding on the sediment bed in laminar flow , and thus there is only bedload . in contrast , in the ` vortex dune ' case , the flow is turbulent and thus the suspended load contributes to the total sediment flux . in turbulent flow , the sediment particles move much faster than the particles rolling on the sediment bed . therefore , the sediment transport rate in turbulent flow is larger than that in laminar flow even at the same shields parameter . [ fig : dune - rate1 ] ( figure : sediment transport rates of case 2a and 2b , normalized , plotted as a function of the shields parameter ; the shields parameter of case 2a is defined using eq . ( [ eqn : shields ] ) , while that of case 2b is defined as . ) in summary , despite some discrepancies , the overall agreement between the results obtained by using _ sedifoam _ and those in the literature is good . compared with the computational costs of the interface - resolved method ( computational hours ) , the costs of case 1b are only computational hours , which is more than two orders of magnitude smaller . the comparison between cfd dem and the interface - resolved method suggests that cfd dem can predict dune generation and migration with satisfactory accuracy at much smaller computational cost . in turbulent flow , if the vertical component of the eddy velocity is larger than the terminal velocity of the sediment particle , the particles become suspended . simulations are performed to demonstrate the capability of cfd dem in the ` suspended load ' regime . in addition , the improvements of the present model over existing simulations using cfd dem are presented . the results obtained are validated only against experimental results . this is because simulations using the interface - resolved method are not available in the literature due to the computational costs for high reynolds number flows . the domain geometry , the mesh resolution , and the properties of fluid and particles are detailed in table [ tab : param - sedi ] . the flow velocities in the three numerical simulations range from m / s to m / s . the averaged properties of sediment particles are presented in fig . [ fig : sedi2-rate ] , including the sediment transport rate and the friction coefficient . it can be seen that the sediment transport rates agree favorably with the experimental data . it is worth mentioning that the prediction of sediment transport rate using _ sedifoam _ agrees better with the experimental data than the results obtained by . in the present simulations , the shielding effect of particles is considered by accounting for solid volume fraction , and thus the terminal velocity of sediment particles at the seabed is smaller .
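the effect of accounting for the solid volume fraction on the apparent settling velocity can be illustrated with the classical richardson zaki hindered - settling relation ; it is used here purely as an analogy , since in the solver the reduction comes from a voidage - dependent drag law rather than from an algebraic correction , and the numerical values are illustrative .

```python
# richardson-zaki hindered settling, w(eps_f) = w0 * eps_f**n, used here only
# as an analogy for why a denser near-bed region lowers the settling velocity.
w0 = 0.06                 # isolated-particle settling velocity (m/s), illustrative
n = 4.65                  # richardson-zaki exponent in the low-reynolds-number limit
for eps_f in (1.0, 0.9, 0.7, 0.5):
    print(f"eps_f = {eps_f:.1f} -> w = {w0 * eps_f**n * 1e3:5.1f} mm/s")
# at eps_f = 0.5 (a dense bed region) the settling velocity drops by roughly 96 %
```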
since the terminal velocity of the particles is smaller , the particles are more likely to move faster so that the predicted sediment transport rate is larger . the coefficient of friction of the surface is defined in eq . ( [ eqn : frictioncoef ] ) , which describes the hydraulic roughness . as shown in fig . [ fig : sedi2-rate](b ) , obtained in the present simulation and by are larger than the nikuradse value obtained by using an immobile seabed . the increase in is because the hydraulic roughness over a loose bed is larger in the presence of movable particles . note that the friction coefficient predicted by _ sedifoam _ is slightly smaller than the results predicted by . in the present simulation , the volume averaged fluid velocity is obtained by using . when the volume fraction term is considered , the fluid volume fraction at the bottom . since the mean flow velocity at the sediment bed is small , the volume averaged mean flow velocity is larger than that obtained without considering the volume fraction term . hence , underestimated the averaged fluid velocity , and thus the friction coefficient calculated by using eq . ( [ eqn : frictioncoef ] ) is slightly larger . the temporally and spatially averaged profiles of sediment volume fraction and normalized fluid velocity at m / s are shown in fig . [ fig : sus - alpha - vel](a ) and [ fig : sus - alpha - vel](b ) , respectively . it is noted that the solid volume fraction ( ) near the bottom obtained in the present simulation is about 0.6 , which agrees better with the experimental measurement than the results obtained by . in the present simulation , the diffusion - based averaging algorithm uses a no - flux boundary condition to obtain the solid volume fraction at the near - wall region . when using the no - flux boundary condition , mass conservation is guaranteed at the wall so that the prediction of the volume fraction is more accurate . it can be seen from fig . [ fig : sus - alpha - vel](b ) that the flow velocity obtained in the present simulation follows the law of the wall , as obtained in other cases by . the immobile particle boundary condition at the bottom provides more friction to the sediment particles so that the motion of the bottom particles is constrained . therefore , the velocity of the fluid flow is smaller due to the drag force provided by the particles . the components of the reynolds stress are shown in fig . [ fig : sus - reynolds - stress ] . the discrepancies between the reynolds stresses at the near - wall region are because the bottom particles are fixed , so that the flow fluctuations in the present simulation are much smaller . the other turbulent shear stress components and are very small and are thus omitted from the figure .
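for reference , the conventional definitions behind the friction coefficient and law - of - the - wall comparisons above are spelled out below ; the exact normalisation used in the paper may differ slightly , and the constants are the standard smooth - wall values .

```python
import numpy as np

def friction_coefficient(u_star, u_bulk):
    """c_f = tau_b / (0.5 rho u_b^2) = 2 (u*/u_b)^2, one common convention."""
    return 2.0 * (u_star / u_bulk) ** 2

def log_law(y_plus, kappa=0.41, b=5.2):
    """smooth-wall law of the wall, u+ = ln(y+)/kappa + b; over a rough, mobile
    bed the intercept b is reduced, which shows up as a larger c_f."""
    return np.log(y_plus) / kappa + b

u_star, u_bulk = 0.05, 1.0                         # illustrative values (m/s)
print("c_f =", friction_coefficient(u_star, u_bulk))       # 5.0e-03
print("u+ at y+ = 100:", log_law(100.0))                    # ~16.4
```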
a snapshot of the iso - surface using the q - criterion is shown in fig . [ fig : sus - q ] , which demonstrates the vortical structure in suspended sediment transport . it can be seen that turbulent eddies are observed at the fluid - particle interface , which is consistent with the results obtained by . compared with the vortical structures in case 2b , the vortices in the suspended load regime are independent of the patterns of the sediment bed . this is because no sediment dunes are generated to change the characteristics of the vortices . ( figure : the q - criterion iso - surface , i.e. the second invariant of the velocity gradient tensor , is plotted ; the unit of the particle velocity in the figure is m / s . ) the proof - of - concept study in section [ sec : simulations ] aims to demonstrate that cfd dem is able to reproduce the integral or macroscopic quantities of ripple formation and morphological evolution ( e.g. , wave length , dune height , evolution speed ) . this validation against experimental data is a prerequisite for performing detailed physical interpretation of the simulation results . without it , we could risk being misled by numerical artifacts of the current simulations . however , the investigation of the mechanics of sediment transport is the ultimate goal , and we take advantage of the present cfd dem model to investigate the physical insights of sediment transport in different regimes . the discussions on incipient motion in bedload , transition from bedload to suspended load , and coexistence of bedload and suspended load are detailed below . the critical shields parameter is defined to describe the criteria for the incipient motion of sediment particles . this value can be determined by employing visual observation or video imaging techniques and varies at different galileo numbers . however , the critical shields stress is not easy to define in terms of the sediment flux . this is because the sediment flux decreases gradually and does not totally vanish when the shields stress decreases , which is supported by both experimental measurement and numerical simulation . the relationship between the shields stress and the sediment flux in case 1 is shown in fig . [ fig : shields-1 ] as an example . the critical shields stress in poiseuille flow from experimental observation is ( aussillous et al . , 2013 ) . however , it can be seen in the figure that there is no sudden change in the sediment flux near the critical shields stress for both cfd dem and dns simulations . the results obtained in our simulations are consistent with previous findings that it is difficult to define a minimum flux for incipient sediment motion . the bagnold criterion for suspension denotes the threshold for the transition from bedload to suspended sediment transport . to study this transition , numerical simulations are performed based on the setup of case 3 by using flow velocities ranging from 0.3 m / s to 1.2 m / s . the sediment transport rate is plotted as a function of the friction velocity in fig . [ fig : regime - bag ] . it can be seen in the figure that the particles become suspended when the friction velocity at the sediment bed is larger than the fall velocity . this is consistent with the bagnold criterion for suspension . moreover , according to the observations from the present simulations , the transition from bedload transport to suspended load is not abrupt but gradual .
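the bagnold - type check used above compares the friction velocity with the fall velocity of the grains . a minimal sketch is given below , assuming a schiller naumann drag balance for the fall velocity and illustrative grain properties rather than the exact parameters of case 3 .

```python
import numpy as np

# settling velocity of a single grain from a drag/weight balance (schiller-naumann
# drag, fixed-point iteration), followed by the bagnold-type suspension check u* > w_s.
def settling_velocity(d, rho_p=2650.0, rho_f=1000.0, nu=1e-6, g=9.81):
    w = (rho_p / rho_f - 1.0) * g * d**2 / (18.0 * nu)   # stokes estimate as a start
    for _ in range(100):
        re = max(w * d / nu, 1e-12)
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)
        w = np.sqrt(4.0 * (rho_p / rho_f - 1.0) * g * d / (3.0 * cd))
    return w

w_s = settling_velocity(5e-4)                  # roughly 0.08 m/s for a 0.5 mm quartz-like grain
for u_star in (0.02, 0.05, 0.08, 0.12):
    regime = "suspension (bagnold criterion met)" if u_star > w_s else "bedload-dominated"
    print(f"u* = {u_star:.2f} m/s, w_s = {w_s:.3f} m/s -> {regime}")
```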
in the bedload regime , when the friction velocity is approaching the bagnold criterion , dunes are observed . if the friction velocity further increases , the dunes first grow , and then gradually disappear due to erosion by the flow . when the friction velocity is larger than the fall velocity , particles become suspended . calculations of sediment transport in engineering practice have assumed either bedload or suspended load depending on which mode is dominant . whether or to what extent the two transport modes can co - exist is an open question that is subject to debate . dem simulations have the potential to shed light on this issue . a typical snapshot taken from case 3 with is presented in fig . [ fig : bed - sus](a ) , with the bedload and suspended load separated based on two different criteria , one based on particle velocity ( panel b ) and one on particle concentration ( panel c ) , each with two threshold values . it can be seen in the figure that there is a layer of approximately of particles moving slowly as bedload at a high shields parameter . in our study , we use a threshold of particle velocity to separate bedload from suspended load according to the maximum particle velocity in bedload transport , where the particle terminal velocity is m / s . a threshold of the solid volume fraction is also used to capture bedload sediment transport , which is shown in fig . [ fig : bed - sus](c ) . indeed , the figure suggests that the specific fractions of bedload and suspended load depend on the criterion used to delineate them ( i.e. , based on particle velocity or particle volume fraction ) and on the threshold values ( e.g. , or 0.3 ) . however , it is clear that regardless of the criterion or threshold value adopted , bedload and suspended load co - exist in the snapshot analyzed , and both account for a significant portion of the total sediment flux . empirical formulas calibrated on experiments primarily consisting of bedload can be very inaccurate when used to predict flows with suspended load or a mixture of the transport modes , and vice versa for formulas developed for suspended load . this is illustrated in fig . [ fig : mpm ] , in which revisions of the formula of obtained by and are applied to predict the sediment transport rate in different regimes . the prediction of the sediment transport rate by the revised equation proposed by is based on the bedload and is significantly smaller than the prediction of . to investigate the differences between different revisions of the meyer - peter and müller formula , we separated bedload and suspended load using the threshold particle velocity and plotted them as a function of the shields parameter in fig . [ fig : mpm ] . it can be seen from fig . [ fig : mpm ] that the bedload agrees with the formula proposed by , and the suspended load agrees with the formula proposed by .
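for context , the sketch below evaluates meyer - peter and müller type relations with two widely quoted coefficient sets , the classical 1948 fit and the wong and parker ( 2006 ) recalibration ; these are shown only to illustrate how strongly the prefactor matters and are not necessarily the revisions compared in fig . [ fig : mpm ] .

```python
# meyer-peter & mueller type bedload relations, q* = a * (theta - theta_c)**b,
# where q* is the einstein-normalised transport rate. coefficient sets below are
# the classical 1948 fit and the wong & parker (2006) recalibration.
def q_star(theta, a, theta_c, b):
    return a * max(theta - theta_c, 0.0) ** b

for theta in (0.06, 0.1, 0.2, 0.4):
    classic = q_star(theta, 8.0, 0.047, 1.5)
    recalibrated = q_star(theta, 3.97, 0.0495, 1.5)
    print(f"theta = {theta:.2f}: mpm-1948 q* = {classic:.3f}, wong-parker q* = {recalibrated:.3f}")
```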
therefore , the deviation of the coefficient in different revisions of the meyer - peter and müller formula is due to the significant increase of the sediment transport rate from suspended load . ( figure : particles moving at are considered as suspended particles and colored red in the left panel ; particles moving at are considered as bedload and colored yellow in the left panel . panels ( a ) and ( b ) compare the vertical profiles of bedload flux and suspended load flux . the middle panel uses a threshold particle velocity at to capture the bedload ; the right panel uses threshold solid volume fraction values of 0.1 and 0.3 . the initial sediment bed is approximately 10 , which corresponds to . ) ( figure : particles moving at are considered as suspended particles , where m / s is the terminal velocity of the sediment particle . ) in this work , a comprehensive study of current - induced sediment transport in a wide range of regimes is performed by using the cfd dem solver _ sedifoam _ . detailed quantitative comparisons are performed between the results obtained in the present simulations and those in the literature . it is demonstrated from the comparison that the accuracy of cfd dem is satisfactory for the simulation of different sediment bed patterns . considering that the computational cost of cfd dem is much smaller than that of the interface - resolved method , cfd dem is promising for the simulation of sediment transport . this opens up the possibility to apply cfd dem to investigate realistic sediment transport problems , e.g. , the formation of ripples under waves . in addition , the improvement of the results over existing cfd dem simulations is demonstrated . we used a computational domain that is large enough to incorporate the bed form ( ripples ) , which is an important advance over the featureless bed in the studies by . the second improvement of the present model is the averaging algorithm , which enables mass conservation and resolves the boundary layer fluid flow simultaneously . third , we used a drag formulation that considers the influence of the volume fraction , which improves the prediction of the sediment flux in suspended load . finally , we considered the influence of additional forcing terms in our numerical simulations , including the added mass and lift forces . because of these improvements , the sediment transport rate in the suspended load regime agrees better with the experimental results when the solid volume fraction is considered . moreover , reasonable predictions of the friction coefficient and the fluid flow at the sediment bed are reported . the computational resources used for this project were provided by the advanced research computing ( arc ) of virginia tech , which is gratefully acknowledged . we thank dr . kidanemariam for the discussion that helped the numerical simulations in the present paper . we thank the anonymous reviewers for their comments , which helped improve the quality of the manuscript . the authors gratefully acknowledge partial funding of a graduate research assistantship from the institute for critical technology and applied science ( ictas , grant number 175258 ) . arolla , s. k. , desjardins , o. , 2015 . transport modeling of sedimenting particles in a turbulent pipe flow using euler lagrange large eddy simulation . international journal of multiphase flow 75 , 1 11 . aussillous , p. , chauchat , j. , pailha , m. , médale , m. , guazzelli , e.
, 2013 .investigation of the mobile granular layer in bedload transport by laminar shearing flows .journal of fluid mechanics 736 , 594615 .ball , r. c. , melrose , j. r. , 1997 . a simulation technique for many spheres in quasi - static motion under frame - invariant pair drag and brownian forces .physica a : statistical mechanics and its applications 247 ( 1 ) , 444472 .calantoni , j. , holland , k. t. , drake , t. g. , 2004 . modelling sheet - flow sediment transport in wave - bottom boundary layers using discrete - element modelling .philosophical transactions of royal society of london : series a 362 , 19872002 .hsu , t .- j . , jenkins , j. t. , liu , p.l .- f . , 2004 .on two - phase sediment transport : sheet flow of massive particles .proceedings of the royal society of london a : mathematical , physical and engineering sciences 460 ( 2048 ) , 22232250 .kempe , t. , vowinckel , b. , frhlich , j. , 2014 . on the relevance of collision modeling for interface - resolving simulations of sediment transport in open channel flow. international journal of multiphase flow 58 , 214235 .kidanemariam , a. g. , uhlmann , m. , 2014 .interface - resolved direct numerical simulation of the erosion of a sediment bed sheared by laminar channel flow .international journal of multiphase flow 67 , 174188 .kloss , c. , goniva , c. , hager , a. , amberger , s. , pirker , s. , 2012 .models , algorithms and validation for opensource dem and cfd dem .progress in computational fluid dynamics , an international journal 12 ( 2 ) , 140152 .sun , r. , xiao , h. , 2016 .sedifoam : a general - purpose , open - source cfd-dem solver for particle - laden flow with emphasis on sediment transport .computers & geosciences . accepted .doi : http://dx.doi.org/10.1016/j.cageo.2016.01.011 available at http://arxiv.org/abs/arxiv:1601.03801 .yoshizawa , a. , horiuti , k. , 1985 . a statistically - derived subgrid - scale kinetic energy model for the large - eddy simulation of turbulent flows. journal of the physical society of japan 54 ( 8) , 28342839 .
|
understanding the fundamental mechanisms of sediment transport , particularly those during the formation and evolution of bedforms , is of critical scientific importance and has engineering relevance . traditional approaches of sediment transport simulations heavily rely on empirical models , which are not able to capture the physics - rich , regime - dependent behaviors of the process . with the increase of available computational resources in the past decade , cfd dem ( computational fluid dynamics discrete element method ) has emerged as a viable high - fidelity method for the study of sediment transport . however , a comprehensive , quantitative study of the generation and migration of different sediment bed patterns using cfd dem is still lacking . in this work , current - induced sediment transport problems in a wide range of regimes are simulated , including ` flat bed in motion ' , ` small dune ' , ` vortex dune ' and suspended transport . simulations are performed by using _ sedifoam _ , an open - source , massively parallel cfd dem solver developed by the authors . this is a general - purpose solver for particle - laden flows tailored for particle transport problems . validation tests are performed to demonstrate the capability of cfd dem in the full range of sediment transport regimes . comparison of simulation results with experimental and numerical benchmark data demonstrates the merits of the cfd dem approach . in addition , the improvements of the present simulations over existing studies using cfd dem are presented . the present solver gives more accurate prediction of sediment transport rate by properly accounting for the influence of particle volume fraction on the fluid flow . in summary , this work demonstrates that cfd dem is a promising particle - resolving approach for probing the physics of current - induced sediment transport . cfd dem , sediment transport , multiphase flow , bedload transport , dune migration
|
the process of pairing and matching between members of two disjoint groups is ubiquitous in our society. the underlying mechanism can be purely random , but in general decisions on selections are guided by rational choices , such as the relationship between advisor and advisee , the employment between employer and employee and the marriage between heterosexual male and female individuals . in many of these cases , similarities between the two paired parties are widely observed , such as similar research interests between the advisor and advisee and matched market competitiveness between the executives and the company .the principle of homophily , the tendency of individuals to associate and bond with others who are similar to them , can be applied to explain such similarities . yet , in some cases different mechanisms may be at work in addition to simply seeking similarities .for example , it has been discovered that people end up in committed relationship in which partners are likely to be of similar attractiveness , as predicted by the matching hypothesis in the field of social psychology .however , if the closeness in attractiveness is the goal when searching for partners , one needs an objective self - estimation of it , which is rarely the case .furthermore , it is found in social experiments that people tend to pursue or accept highly desirable individuals regardless of their own attractiveness .these findings suggest that the observed similarities may not be solely caused by explicitly seeking similarities . in some previous works ,stochastic models are applied to simulate the process of human mate choice . by simply assuming that highly attractive individuals are more likely to be accepted , the system generates patterns supporting the matching hypothesis even when similarity is not directly considered in the partner selection process .nevertheless , most if not all of these works ( with a few recent exceptions ) concentrate on systems without topology , also known as fully - connected systems , in which one connects to all others in the other party and competes with all others in the same party .in reality , however , one knows only a limited number of others as characterized by the degree distribution of the social network .hence a simple but fundamental question arises : what is the outcome of the matching process when topology is present ? in this work , we aim to address this question by analyzing the impact of network structure on the specific example of the process of matching , namely , human mate choice . 
our motivation to address this questionis caused not only by the limited knowledge on this matter , but also by the fact that topology could fundamentally change properties of the system and further affect its dynamical process .we have witnessed evidence of such impact , accumulated in the last decades from the advances towards understanding complex networks : a few shortcuts on a regular lattice can drastically reduce the mean separation between nodes and give rise to the small - world phenomenon , the power - law degree distribution of scale - free networks can eliminate the epidemic threshold of epidemic spreading and synchronization can be reached faster in networks than in regular lattices .indeed , numerous discoveries have been made in different areas when considering topology in the analysis of many classical problems .hence it is fair to expect that the network topology would also bring new insights on the matching process that we are interested in .we start with a bipartite graph with nodes .the bipartite graph consists of two disjoint sets and of equal size , representing two parties , each with members . while our model can be more general , for simplicity, we consider the two parties as collections of heterosexual male and female individuals ( fig .[ fig : figure1].a ) .each node , representing one individual , has links drawn from the degree distribution , randomly connecting to nodes in the other set . on average ,a node has links , referred to the average degree of the network . to characterize the process of human mate choice , each nodeis assigned a random number as its attractiveness drawn uniformly from the range . combining features in some previous works with the network structure, we consider the process of human mate choice as a two - step stochastic process which generates the numerical model as follows ( fig .[ fig : figure1].b ) : \1 . 
at each discrete time step , randomly pick a link .let s denote the nodes connected by this link as node and node and their attractiveness as and , respectively .draw two random numbers independently and uniformly from the range , denoted by and .check the matching condition defined as and .if the matching condition is satisfied and nodes and are not in a relationship with each other , pair them into intermediate pairing and dissolve them from any previous intermediate pairing with other nodes , if there are any .if the matching condition is satisfied and nodes and are already in the intermediate pairing with each other , join them into the stable couple .make nodes and unavailable to others by removing them from the network together with all their links .repeat from step 1 until there is no link left .the matching condition in step 2 ensures that individuals mutually accept each other .the decision making is probabilistic : the probability that node accepts node is ( independent of its own attractiveness ) .a pairing is successfully established only when both individuals decide to accept each other .the intermediate pairing created in step 3 corresponds to the tendency of people not to fully commit to a relationship at the beginning and to form a stable couple only after such unstable intermediate stage .the removal of nodes and links in step 4 merely accelerates the simulation , as these links should not be considered by others and the corresponding nodes in the stable state are not available for matching .undoubtedly our model only captures a very small fraction of features in the matching process .the goal of this work is not to propose a sophisticated model that is able to regenerate all observations in reality .instead , we focus on attractiveness and popularity ( degree ) that are essential in this process , hence this model could be the simplest to study the interplay between these two factors , shedding light on the effect of topology on this process . to study the effects of topology , we focus on three most commonly used network structures with different degree distributions .1 ) random k - regular graph ( rrg ) whose degree distribution follows a delta function , where is the average degree of the network , corresponding to an extreme case that each person knows exactly the same number of others ; 2 ) erds - rnyi network ( er ) with a poisson degree distribution , representing the situation that most nodes have similar number of neighbors and nodes with very high or low degrees are rare ; 3 ) scale - free network ( sf ) generated via static model whose degree distribution has a fat - tail , featuring a large number of low degree nodes and few high degree hubs .the constructions of these networks are as follows . * constructing a random k - regular graph .* we start from two sets ( sets and ) of disconnected nodes indexed by integer number ( ) . for each node in the set ,connect it to nodes , , and in the set ( using periodic boundary condition such that node in the set connects to node , 1 , and in the set , and so on ) .then randomly pick two links , assuming that one link connects nodes in the set and in the set and the other connects nodes in the set and in the set . check if there is a connection between nodes and and nodes and .if not , remove original links and connect nodes and and nodes and .repeat this process sufficiently large number of times such that connections of the network are randomized . 
*constructing an erds - rnyi network .* we start from two sets ( sets and ) of disconnected nodes indexed by integer number ( ) .randomly select two nodes and respectively from sets and .connect nodes and if there is no connection between them .repeat the procedure until links are created . * constructing a scale free network . * the scale - free networks analyzed are generated via the static model .we start from two sets ( sets and ) of disconnected nodes indexed by integer number ( ) .the weight is assigned to each node , where is a real number in the range .randomly selected two nodes and respectively from sets and , with probability proportional to and .connect nodes and if there is no connection between them .repeat the procedure until links are created .the degree distribution under this construction is ^{1/ \alpha}}{\alpha } \frac{\gamma(k-1/ \alpha , { \langle}k { \rangle}(1-\alpha)/2)}{\gamma(k+1)}$ ] where the gamma function and the upper incomplete gamma function . in the large limit , the distribution becomes .* introducing correlations between the attractiveness and the degree .* we generate random numbers drawn between 0 and 1 and sort them in ascending order and index them by integer number ( ) .we sort nodes of networks in ascending order of their degrees and index them by integer number ( ) . for positive correlation between the degree and attractiveness , assign random number as the attractiveness of node . fornegative correlation between the degree and attractiveness , assign random number as the attractiveness of node .the matching hypothesis suggests similarities in attractiveness between the two coupled individuals . to test it ,we employ the pearson coefficient of correlation as a measure of similarity , that is defined as where and are the attractiveness of the individuals in sets and of the couple , and are the average attractiveness of the matched individuals in sets and and is the number of matched couples in the network .the pearson coefficient of correlation varies from -1 to 1 , where 1 corresponds to the strongest positive correlation when two quantities are perfectly linearly increasing with each other , whereas -1 is the strongest negative correlation when two quantities are perfectly linearly dependent and one decreases when the other increases .we first check the scenario studied in most of the previous works , when topology is not considered and each node is potentially able to match an arbitrary node in the other set .our model generates a high correlation of the couple s attractiveness with the average ( fig .[ fig : figure2].a ) .this value is similar to the result generated in the previously proposed model which accounts also for attractiveness decay even though this feature is not present in ours .it is noteworthy that similarity is not explicitly considered when establishing a matching in this model and each individual only seeks attractive partners .however , the mutual agreement between two individuals effectively depends on the joint attractiveness of both .hence individuals with high attractiveness will have the advantage in finding highly attractive partners , causing them to be removed from the dynamics soon , while less attractive individuals find their matches later .therefore , as time goes on , only less and less attractive individuals are available to form a couple , thus they are more likely to get a partner with similar attractiveness .the positive correlations in attractiveness are also observed in all three classes of networks 
studied .they are lower than the correlation observed in the fully - connected systems but increase monotonically with the average degree .furthermore , as the network degree distribution varies from a delta function to a poisson distribution and to a fat - tail distribution , the variance in the degree distribution increases .our results indicated that for a given , decreases with the increased degree diversity ( fig .[ fig : figure2].a ) . in other words ,the broader the degree distribution is , the lower the correlation in attractiveness between the two coupled individuals will be .the reason is that as the degree diversity increases , more and more links are connected to a few high degree nodes .the majority of nodes have lower degrees compared to the network with the same degree but smaller degree diversity .hence the majority of nodes have less opportunities in selecting partners and therefore smaller chance to find a partner with closely matched attractiveness . as the result the attractiveness correlation decreases .while the correlation in attractiveness is strongest when the system is fully - connected , we find that the difference in the correlations is caused mostly by the matched individuals with low attractiveness . indeed , the average attractiveness of those who are coupled with highly desired individuals does not depend much on the presence of the network structure ( fig .[ fig : figure2].b - d ) . in fully - connected systems ,less attractive individuals are bound to be coupled with partners of low attractiveness , which contributes significantly to the total correlation . in sparse networks , however , if they successfully find partners , their partners are likely to be more attractive than them . therefore , the limited choice in sparse networks reduces competitions among individuals , especially for those with low attractiveness , hence giving rise to lower attractiveness correlations between the two coupled individuals . in fully - connected systems all individuals are able to find their partners .but in networks one faces a chance of failing to be matched .how often it occurs depends on one s popularity ( degree ) and attractiveness . herewe consider defined as the probability of failing to be matched conditioned on degree and attractiveness within the range .we find that drops exponentially with both degree and attractiveness .this implies that getting more popular brings the similar benefit as being more attractive in terms of finding a partner ( fig .[ fig : figure3 ] ) .so far we have concentrated only on cases where there is no correlation between one s popularity ( degree ) and attractiveness . in realitythese two features are often correlated .on one hand , the positive correlation is somewhat expected as a highly attractive person can potentially be also very popular hence having a larger degree . on the other hand ,negative correlation could also occur when those with low attractiveness are more active in making friends to balance their disadvantage in attractiveness .we extend our analysis to two extreme cases when degree and attractiveness are correlated ( see method ) . 
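the model and the rank - based degree attractiveness coupling described in the methods are compact enough to be re - implemented in a few dozen lines . the sketch below runs the two - step process on a random bipartite graph of the erdos - renyi type ( the regular and scale - free constructions are not reproduced here ) , computes the pearson correlation of the couples ' attractiveness and compares the number of matched couples with the maximum matching obtained via the hopcroft karp algorithm in networkx , which is assumed to be available . it is an illustrative re - implementation , not the authors ' code , and details such as the order of the random draws and the default sizes are ours .

```python
import random
import networkx as nx

def mate_choice(n=200, k_mean=4, corr="none", seed=1):
    """two-step matching model on a bipartite erdos-renyi graph.

    corr : "none", "positive" or "negative" degree-attractiveness correlation,
           introduced by the rank-based assignment described in the methods.
    returns the pearson correlation of the stable couples' attractiveness and
    the ratio of matched couples to the maximum matching (hopcroft-karp)."""
    rng = random.Random(seed)
    a = [("a", i) for i in range(n)]
    b = [("b", i) for i in range(n)]
    g = nx.Graph()
    g.add_nodes_from(a, bipartite=0)
    g.add_nodes_from(b, bipartite=1)
    while g.number_of_edges() < k_mean * n:                 # random bipartite graph
        u, v = rng.choice(a), rng.choice(b)
        if not g.has_edge(u, v):
            g.add_edge(u, v)

    attract = {}
    for side in (a, b):                                     # rank-based degree-attractiveness coupling
        vals = sorted(rng.random() for _ in side)
        order = sorted(side, key=g.degree)
        if corr == "negative":
            vals = vals[::-1]
        elif corr == "none":
            rng.shuffle(vals)
        attract.update(zip(order, vals))

    max_match = len(nx.bipartite.hopcroft_karp_matching(g, top_nodes=a)) // 2

    edges, partner, couples, removed = list(g.edges()), {}, [], set()
    while edges:
        i = rng.randrange(len(edges))
        u, v = edges[i]
        if u in removed or v in removed:                    # stale edge of an already matched node
            edges[i] = edges[-1]; edges.pop(); continue
        if rng.random() < attract[v] and rng.random() < attract[u]:   # mutual acceptance
            if partner.get(u) == v:                         # second success -> stable couple
                couples.append((u, v)); removed.update((u, v))
                partner.pop(u, None); partner.pop(v, None)
            else:                                           # provisional pairing, dissolve older ones
                for old in (partner.pop(u, None), partner.pop(v, None)):
                    if old is not None:
                        partner.pop(old, None)
                partner[u], partner[v] = v, u

    xs = [attract[u] for u, _ in couples]
    ys = [attract[v] for _, v in couples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var, len(couples) / max_match

for corr in ("positive", "none", "negative"):
    rho, ratio = mate_choice(corr=corr)
    print(f"{corr:8s}: rho ~ {rho:+.2f}, matching ratio r ~ {ratio:.2f}")
```

the defaults are deliberately small so that the sketch runs in seconds ; the qualitative ordering of its outputs across the three correlation modes can then be compared with the trends reported below .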
for a given network topology ,the correlation of attractiveness ( ) is strongest when the degree and the attractiveness are positively correlated and weakest when they are negatively correlated .it is noteworthy that with negative degree - attractiveness correlation , can become negative in networks with low , suggesting that the matching hypothesis may not hold in such networks even though the underlying mechanism does not change ( fig .[ fig : figure4 ] ) .another quantity affected by topology and typically studied is the number of couples a system can eventually match .when the system is fully - connected , everyone can find a partner and the number of couples is . in sparse networks , typically there are fewer matched couples than and the highest number of matched couples is given by the maximum matching which disregards the attractiveness . to measure the performance of the system in terms of the matching ,we focus on the quantity defined as the _ ratio _ between the number of couples matched and the size of the maximum matching . while both the number of the couples matched and the size of the maximum matching increase monotonically as the network becomes denser ( figs .[ fig : figure5].a , b ) , their ratio changes non - monotonically with ( fig .[ fig : figure5].c ). the system s performance can be relatively good when the network is very sparse or very dense , but relatively poor for the intermediate range of density .this is mainly because when more links are added to the system , the number of couples matched increases slower than the size of the maximum matching ; only when this size becomes saturated to the ratio starts to increase with .correlation between the degree and attractiveness also plays a role in the value of achieved by a network .the maximum matching depends only on the topology of the network and does not depend on the attractiveness . a successful matching between two nodes in our model, however , depends on both their attractiveness and their degrees .therefore , depends on the degree - attractiveness correlation . in both cases when either positive or negative correlation between degree and attractiveness is present, varies non - monotonically with just like in the case when there is no degree - attractiveness correlation ( fig .[ fig : figure5].d ) . however , negative correlation between degree and attractiveness yields more while positive correlation yields fewer matched couples than that when degree and attractiveness are uncorrelated .considering the fact that the similarity between the two coupled individuals ( ) is largest in networks with positive degree - attractiveness correlation and smallest with negative degree - attractiveness correlation , such a dependence of on degree - attractiveness correlation implies that the system s performance in terms of the number of matched couples is better when it is less selective .in summary , we studied the effect of topology on the process of human mate choice .in general , our findings support the conclusion of the previous works that similarities in attractiveness between coupled individuals occur even though the similarity is not the primary consideration in searching for partners and each individual only seeks attractive partners , in agreement with the matching hypothesis .when topology is present , the extent of such similarity , measured by pearson coefficient of correlation , grows monotonically with the increased average degree and decreased degree diversity of the network . 
the correlation is weaker in sparse networks because in them the less attractive individuals who are successful in finding partners , are likely to be coupled with more attractive mates . in fully - connected systems , however , they are almost certain to be coupled with partners also less attractive , contributing significantly to the total attractiveness correlation .another effect of the topology is that one faces a chance of failing to find a partner .such the chance decays exponentially with one s attractiveness and degree , therefore being more popular can bring benefits in terms of finding a partner similar to being more attractive .the correlation of couple s attractiveness is also affected by the degree - attractiveness correlation , which is strongest in networks where attractiveness and popularity are positively correlated and weakest when they are negatively correlated . in networks with negative degree - attractiveness correlation , the attractiveness correlation between coupled individuals can be negative when the average degree is low , implying that matching hypothesis may not hold in such systems . finally , the number of couples matched also depends on the topology .the ratio between the number of matched couples and the maximum number of couples that can be matched , denoted as , changes non - monotonically with the average degree . is largest in networks with negative degree - attractiveness correlation and smallest when the attractiveness and the popularity are positively correlated .the non - monotonic behavior of the matching ratio is also interesting from a stochastic optimization viewpoint : the simple trial - and - error matching process , governed and constrained by individuals attractiveness , fares reasonably well everywhere ( against the maximum attainable matching on a given bipartite graph ) , except for a narrow intermediate sparse region ( fig . [ fig : figure5 ] ) .the worst - case " average degree depends strongly on network heterogeneity but _ not _ on degree - attractiveness correlations .our results revealed the role of topology in the process of human mate choice and can bring further insights into the investigations of different matching processes in different networks .indeed , in this work we focused only on the basic model of the mate seeking process in random networks .however , different variations can be considered .for example , there is no degree correlation between the two coupled individuals observed in our model , simply because the networks we studied are random with no assortativity . in reality, the connection may not be random and then assortativity can be considered .furthermore , the networks in our model are static and the degree of a node does not change with time . in reality ,a node may gain or lose friends and consequently its degree may change .likewise , stable matching between individuals does not have to last forever , it just needs to be an order of magnitude longer than unstable matching .it is possible to establish certain rates to stable matching dissolution and analyze the steady state behavior of so generalized system .finally , here we considered the attractiveness as a one dimensional attribute of individuals . 
in more realistic scenarios , attractiveness can be a multi - dimensional variable with different merits . investigations of such more complicated cases are left to future work . this work was supported in part by the army research laboratory under cooperative agreement number w911nf-09-2-0053 and by the office of naval research ( onr ) grant no . n00014-09-1-0607 . the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s . government .
|
the matching hypothesis in social psychology claims that people are more likely to form a committed relationship with someone equally attractive . previous works on stochastic models of the human mate choice process indicate that patterns supporting the matching hypothesis could occur even when similarity is not the primary consideration in seeking partners . yet , most if not all of these works concentrate on fully - connected systems . here we extend the analysis to networks . our results indicate that the correlation of the coupled individuals ' attractiveness grows monotonically with the increased average degree and decreased degree diversity of the network . this correlation is lower in sparse networks than in fully - connected systems , because in the former the less attractive individuals who find partners are likely to be coupled with ones who are more attractive than them . the chance of failing to be matched decreases exponentially with both the attractiveness and the degree . the matching hypothesis may not hold when degree and attractiveness are negatively correlated , since this can give rise to a negative attractiveness correlation between partners in sparse networks . finally , we find that the ratio between the number of matched couples and the size of the maximum matching varies non - monotonically with the average degree of the network . our results reveal the role of network topology in the process of human mate choice and bring insights into future investigations of different matching processes in networks .
|
the histogram , introduced by karl pearson in 1895 , is one of the most basic but still one of the most widely used tools to visualize data .however , the construction of the histogram is not uniquely defined , leaving the user considerable freedom to choose the number of bins and their widths , see .this arbitrariness allows for radically different visual representations of the data , and it appears that no satisfactory rule for the construction is known , as evidenced by the large number of rules proposed in the literature . in the case of equal bin widths , popular examples of rules for the number of bins are those given by , which is still the default rule in r , , , , and .most of these rules are derived by viewing the histogram as an estimator of a density and choosing the number of bins to minimize an asymptotic estimate of risk .this leads to questions about the performance for small samples as well as about smoothness assumptions that are not verifiable .instead of making all bins equally wide , it is also common to give equal area to all blocks . point out that the first approach typically leads to oversmoothing in regions of high density and is poor at identifying sharp peaks , whereas the second oversmooths in regions of low density and does not identify small outlying groups of data .they advocate for a compromise of these two approaches that is motivated by regarding the histogram as an exploratory tool to identify structure in the data such as gaps and spikes , rather than as an estimator of a density , and they argue that relying on asymptotic risk minimization may lead to inappropriate recommendations for choosing the number of bins .this is in line with recent findings for the regressogram , the regression ` counterpart ' for the histogram . herethe bin choice corresponds to finding _ locations _ of constant segments , which is a different target than conventional risk minimization , e.g. of norm , .this paper proposes a rule for constructing a histogram that is motivated by the two main goals of the histogram , see : 1 .the histogram provides estimates of probabilities via relative areas .2 . the histogram provides a display of the density of the data that is simple but informative , i.e. it aims to have few bins , but still shows the important features of the data , such as modes . the idea of the paper is to construct a confidence set of cumulative distribution functions ( cdfs ) such that each cdf in the confidence set satisfies 1 . in an ( asymptotically ) optimal way . to meet 2 . , we select the simplest cdf in the confidence set , i.e. the one with the fewest bins , as our histogram cdf .the resulting histogram is the simplest histogram that shows important features of the data , such as increases , modes , or troughs .we call this histogram the _ essential histogram_. 
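for reference, the classical equal-width rules cited above amount to one-line formulas; the sketch below collects the standard textbook forms of the sturges, scott and freedman-diaconis rules (the constants follow the usual statements of these rules and are not taken from this paper).

```python
import numpy as np

def classical_bin_counts(x):
    """Numbers of equal-width bins given by three classical rules."""
    x = np.asarray(x, dtype=float)
    n = x.size
    data_range = x.max() - x.min()
    sturges = int(np.ceil(np.log2(n))) + 1
    scott_width = 3.5 * x.std(ddof=1) * n ** (-1.0 / 3.0)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    fd_width = 2.0 * iqr * n ** (-1.0 / 3.0)
    return {
        "sturges": sturges,
        "scott": int(np.ceil(data_range / scott_width)),
        "freedman-diaconis": int(np.ceil(data_range / fd_width)),
    }

rng = np.random.default_rng(1)
print(classical_bin_counts(rng.normal(size=1000)))
```

that these rules already disagree on a simple gaussian sample illustrates the arbitrariness that the construction proposed here is meant to remove.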
our approach is motivated by the fact that simplicity is a key aspect of the histogram : not only is it implicit in its goal to serve as an exploratory tool , but also in its definition as a piecewise constant function .we show that in a large sample setting , each cdf in the confidence set estimates probabilities of intervals with a standardized simultaneous estimation error that is at most twice of what is achievable and which is typically much smaller than those obtained from histograms that are constructed via traditional rules .likewise , we show that the cdfs are asymptotically optimal for detecting important features , such as increases or modes of the distribution .therefore , we attain the above two goals of the histogram asymptotically , but we stress that one of the main benefits of our construction is that is provides finite sample guaranteed confidence statements about features of the data : large increases ( or decreases ) of any histogram in the confidence set ( and hence of the essential histogram ) indicate significant increases ( or decreases ) in the true density ( cf . theorem [ thmfeatureinfer ] ) . we illustrate this by an example in figure [ fig : empintro ] .it shows a computationally faster modification of the essential histogram ( see section [ computation ] ) , still within our 90% confidence set .the finite sample guarantee then says that the true density has an increase on the two pink intervals , and has a decrease on the two blue ones , respectively , with simultaneous confidence at least 90% .it readily indicates that the truth has 2 modes and 1 trough , as the plotted intervals are disjoint ( cf .these intervals are a selection of a much larger set of intervals of increase and decrease at all scales , the method offers ( see sections [ optimalfeatures ] and [ computation ] ) .thus , we can state with 90% guaranteed finite sample confidence that these modes or troughs are really there in the underlying population .we think that these confidence statements are quite valuable enhancements to the essential histogram as an exploratory tool .we also mention that any other histogram can be accompanied with our method to obtain such statements for it in order to justify ( or question ) modes it suggests ( see section [ ss : evatool ] ) .is given in the left panel .in the upper part of the right panel , the reconstruction by the essential histogram ( eh ) with significance level and the true density are shown ; in the lower part , intervals indicating regions which are inferred to contain a point of increase ( decrease ) are plotted in pink ( blue ) .[ fig : empintro],scaledwidth=100.0% ] the construction of the confidence set is based on the multiscale likelihood ratio test introduced by , which guarantees optimal detection of certain features in the data . use such a multiscale likelihood ratio test for inference on change - points in a regression setting and they employ the idea of selecting the function in the confidence set that has the fewest jumps . 
in the context of the histogram , it turns out that this approach includes jumps only at locations where the evidence in data requires the placement of jumps in order to show significant features and to provide good probability estimates .hence the methodology will not put any bins in regions where the density is close to flat .this built - in parsimony is what one would expect from an automatic method for constructing a histogram , see also the comments about open research problems in .the taut string method of can be interpreted as producing a histogram ( although not satisfying requirement 1 . from above ) that has the smallest number of modes subject to the constraint that it lies in a confidence ball given by the kolmogorov metric .it is known that the kolmogorov metric will not result in good probability estimates for intervals unless they have large probability content .this procedure does not aim at parsimony of bins and will typically produce many more bins than the essential histogram ( although often providing visually appealing solutions , and estimating the number of modes very well , see section [ examples ] ) , while the essential histogram automatically results in parsimony of bins and as a consequence also of modes as explained above .the rest of the paper is organized as follows . in section [ confset ] ,we propose a multiscale confidence set of distribution functions , and an estimator , the essential histogram , within the confidence set .the optimality of the confidence set is examined from the probability estimation perspective in section [ optimalprobs ] , and from the feature detection perspective in section [ optimalfeatures ] . in section[ computation ] , we present an accelerated dynamic programming algorithm for a slightly relaxed version of the essential histogram , and show that the theoretical properties of this relaxed estimator essentially remain valid . the performance of the ( relaxed ) essential histogram is demonstrated by simulation in section [ examples ] , where we also illustrate how the proposed confidence set works as an evaluation tool for any histogram estimator .a brief conclusion is given in section [ conclusion ] .some further optimality results and all the proofs are in sections [ optconfint ] and [ proofs ] in the appendix . an implementation of the proposed method is provided in r - package `` esshist '' , available from http://www.stochastik.math.uni-goettingen.de/esshist .for any cdf and any interval we define provides a measure of the ` average density ' over without requiring any smoothness assumptions on .if does have a density , then equals the average of the density over . for a partition of the real line into intervalswe define the corresponding _ histogram of _ as the density given by , where is the interval containing .we say that is a _ histogram cdf _ iff it is the cdf of a histogram , or equivalently , iff is a piecewise linear and continuous cdf .the histogram can be recovered from its cdf as the left - hand derivative of . given univariate i.i.d . 
with cdf , which from now on we assume to be _ continuous _, we consider the following -confidence region for based on the empirical cdf : here is the log - likelihood ratio statistic for testing , and is the -quantile of the distribution of with being a collection of intervals : : j , k \in \{1+i d_{\ell } , i=0,1,\ldots \}\ \mbox { and } m_{\ell}<k - j\leq 2m_{\ell}\br\},\\ & \;\;\;\;\mbox { where } m_{\ell}=n2^{-\ell},\ ; d_{\ell}= \bl\lceil \frac{m_{\ell}}{6 \sqrt{\ell}}\br\rceil . \end{aligned}\ ] ] this collection of intervals was introduced in to approximate the collection of all intervals on the line in a computationally efficient manner : show that the above multiscale likelihood ratio statistic can be computed in in steps while at the same time the collection is still rich enough to guarantee optimal detection of certain effects .note that in is distribution free .it is further known that is uniformly tight , and equivalently for every , see ( * ? ? ?* proposition 1 ) , and figure [ fig : symd ] for a visual illustration . for ( dotted line ) , ( dashed line ) , and ( solid line ) .[ fig : symd],scaledwidth=95.0% ] the _ essential histogram _ is defined as a histogram whose cdf lies in and whose number of bins minimizes the number of bins among such histograms .the essential histogram exists since clearly contains and hence also the histogram cdf obtained by linearly interpolating .we now show that all cdfs in possess certain optimality properties .in particular , those properties apply to the essential histogram and to the relaxed version ( see section [ computation ] ) which admits fast computation .first we investigate how well perform with regard to the first goal of the histogram , namely estimating probabilities for intervals .to this end , for probabilities of size , we introduce the simultaneous standardized estimation error of as note that .thus , it suffices to consider ] into equally sized bins .then there exists a continuous such that for and odd if one is willing to make higher order smoothness assumptions on , then it can be shown that the performance of these common histogram rules gets much closer to the benchmark .one key advantage of our proposed histogram is that it essentially attains the benchmark in every case by automatically adapting to the local smoothness : theorem [ thma](ii ) shows that with high probability all have a simultaneous estimation error equal to that of the benchmark up to a factor of at most 2 .this optimality property is due to the use of the likelihood ratio test in a multiscale fashion . at the same time, some in will have many fewer than the bins produced by : if the underlying density is locally close to flat , then the multiscale likelihood ratio test will not exclude a candidate that has no jumps in that local region .thus the with the fewest bins provide a histogram that gives a simple visualization of the data while still guaranteeing essentially optimal estimation of . [ thma ]let and .then * for * uniformly in . 
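as a rough illustration of the two ingredients entering the confidence region, the sketch below enumerates a sparse interval system of the type quoted above (grid spacing d_l = ceil(m_l/(6*sqrt(l))) with m_l = n*2**(-l) and interval lengths between m_l and 2*m_l) and evaluates a binomial log-likelihood ratio on a single interval. the exact range of scales l, the omission of the additive calibration term and of the quantile defining the confidence region, and the chosen normalisation of the statistic are simplifying assumptions of this sketch.

```python
from math import ceil, log, sqrt

def multiscale_intervals(n):
    """Sparse system of index intervals (j, k) over the order statistics.

    At scale l the grid spacing is d_l = ceil(m_l / (6*sqrt(l))) with
    m_l = n * 2**(-l), and only lengths m_l < k - j <= 2*m_l are kept.
    The exact range of scales l used in the paper is not reproduced here.
    """
    intervals = []
    l = 1
    while n * 2.0 ** (-l) >= 2.0:
        m_l = n * 2.0 ** (-l)
        d_l = ceil(m_l / (6.0 * sqrt(l)))
        for j in range(1, n + 1, d_l):
            k = j + (int(m_l // d_l) + 1) * d_l   # smallest grid point with k - j > m_l
            while k - j <= 2 * m_l and k <= n:
                intervals.append((j, k))
                k += d_l
        l += 1
    return intervals

def binomial_loglik_ratio(f_emp, f_cand, n):
    """Log-likelihood ratio comparing empirical and candidate probability of one interval."""
    eps = 1e-12
    f_emp = min(max(f_emp, eps), 1.0 - eps)
    f_cand = min(max(f_cand, eps), 1.0 - eps)
    return n * (f_emp * log(f_emp / f_cand)
                + (1.0 - f_emp) * log((1.0 - f_emp) / (1.0 - f_cand)))

print(len(multiscale_intervals(1000)))   # the interval system stays of moderate size
```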
can also be interpreted as a distance between and that focuses on intervals having probability content .we show in section [ optconfint ] that is an optimal confidence region for with respect to the distance for arbitrary .the optimality results for estimating provided by theorems [ thma ] and [ thmb ] carry over to estimating the average density by simply dividing the inequalities by .we note that the construction of via rather than , say , the standardized binomial statistic is crucial for these optimality results , see the discussion in section [ optconfint ] .arguably the most important features that one would like to detect with a histogram are the presence of increases / decreases in a density as well as ( anti)modes . since we are considering a general cdf and do not want to make any smoothness assumptions , we can formulate the problem as detecting increases / decreases in defined in .we denote by the set of cdfs that have a detectable increase in : where .[ thmd ] similarly as in theorem [ thmb ] it can be shown that in a large sample setting it is not possible to detect smaller values of as both and go to zero . by denote a slightly larger set than that contains some cdf having an undetectable increase : where [ thmlbmon ] let with density .let be any test with level under is non - increasing , in the sense that if , then this optimality result clearly also applies to the simultaneous detection of a finite number of increases / decreases and hence to the detection of ( anti)modes .moreover , it is possible to infer increases / decreases and modes / troughs of from .for instance , large increases ( or decreases ) of a histogram in imply increases ( or decreases ) of with confidence at least .[ thmfeatureinfer ] let , and .denote by the density of some . *if there are intervals such that then it holds that this assertion also holds if we replace `` '' by `` '' .* if there are intervals , for some , such that then it holds with confidence at least that recall from section [ confset ] that the essential histogram is defined as the histogram with the least number of bins within the confidence set .its computation requires the solution of a nonconvex combinatorial optimization problem .this makes it practically not feasible for most real world applications .however , it is possible to compute the exact solution of a slight relaxation ( still nonconvex ) of the original optimization problem in almost linear run time ( see section [ subalg ] ) .this is given by where is a subset of distributions whose density is a histogram , given by and the number of bins of the density of .that is , in the original confidence set , we consider only the intervals on which a candidate is constant . in general , solutions toare not unique . in this case, we will pick with density , that maximizes the following entropy ( up to a factor of ) note that is the log - likelihood if we assume the data are distributed according to , since in other words , we select the one that explains data best in terms of likelihood among all solutions of . from now on ,this particular solution will be referred to as the _ essential histogram _ for brevity . 
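before turning to the properties of this relaxed estimator, it may help to see the combinatorial structure of the search spelled out: finding a histogram with the fewest bins subject to local constraints is a shortest-path problem over candidate break points. the sketch below is schematic; `piece_is_admissible` stands in for the local multiscale test, and the plain breadth-first search shown here is not the accelerated dynamic program of section [ subalg ] or of the r-package mentioned above.

```python
from collections import deque

def fewest_bins(n, piece_is_admissible):
    """Fewest-bins histogram as a shortest path over break-point positions 0..n.

    An edge (i, j) exists when a single constant bin covering the order
    statistics with indices i+1,...,j passes the local test encoded in
    piece_is_admissible(i, j); the shortest 0 -> n path gives the minimal
    number of bins, and the visited nodes are the chosen break points.
    """
    prev = {0: None}
    frontier = deque([0])
    while frontier:
        i = frontier.popleft()
        for j in range(i + 1, n + 1):
            if j in prev or not piece_is_admissible(i, j):
                continue
            prev[j] = i
            if j == n:                            # reconstruct the break points
                path = [n]
                while prev[path[-1]] is not None:
                    path.append(prev[path[-1]])
                return path[::-1]
            frontier.append(j)
    return None                                   # no admissible histogram found

# toy usage: any piece of at most 3 observations is declared admissible,
# so four bins are needed for ten observations
print(fewest_bins(10, lambda i, j: j - i <= 3))
```

such a breadth-first search needs on the order of n**2 admissibility checks in the worst case; the graph construction and pruning described below are what make the actual implementation much faster.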
due to this relaxationthe solution will potentially contain less bins than the exact solution ( but never will have more ) and the significance level of the confidence set carries over to the features in the computed histogram .[ thmrelaxesshist ] * the assertions in theorem [ thma ] still hold if we replace by , and by , which is defined as * similar to theorem [ thmd ] , it holds that * let , and .assume that , and that the density of is constant on some intervals .if further then it holds that this assertion also holds if we replace `` '' by `` '' .* under the same notation as in ( iii ) , assume that on intervals the density is constant , for some , and that then it holds with confidence at least that furthermore , in case that the true density itself is a histogram ( i.e. piecewise constant ) , we have an explicit control on the number of modes .[ thmhistdensity ] let have a piecewise constant density . denote , , and .then for the essential histogram ( with cdf ) in it holds that * it controls overestimating the number of bins uniformly over all s * it controls underestimating the number of bins , for , with * it controls the number of modes and troughs , for , with .* remarks : * note first that for a fixed density and a fixed significance level , ( or ) is of order as .thus , the terms within the exponents in theorem [ thmhistdensity ] ( ii ) and ( iii ) are essentially and .moreover we point out that the assertions in theorem [ thmhistdensity ] even hold for sequences of with , , , , and .it is known that for some constant ( see * ? ? ?* ) , so is of order if tends to zero at a polynomial rate .thus , a sufficient condition for the consistent estimation of number of modes and troughs , in case of itself being a histogram , is where , no faster than a polynomial rate , and is some proper constant .the condition quantifies how the underlying difficulty in estimating the numbers of modes and troughs is determined by the minimal size of bins and the minimal difference between heights over neighboring bins of the unknown truth . by denote the order statistics of observations .we treat each as a node in a graph , and set the edge length between nodes and as the minimal number of blocks of a step function on ] , then there are no piecewise constant candidates with changes on ] in figure [ fig : cauchy ] ) of the truth .by contrast , the hw completely distorts the shape of the truth , although still identifies the correct number of modes with moderate frequency . 
[ table : simulation results with entries of the form `` percentage / ratio '' for several methods ( row labels lost in extraction except `` r - default '' ) over five sample sizes ]
[ table : simulation results with entries of the form `` mean ( standard deviation ) '' for the same methods and sample sizes ]
[ figure fig : eval_msconst : ... , ha ( ) and dk , respectively . each short vertical line on the horizontal line marks a removable change - point , with its intensity proportional to the number of merged segments whose interior contains this change - point . the sample size is . ]
the multiscale constraint in , an adjusted version of , can actually be beneficial to any histogram estimator . given a histogram estimator , we can always check , for every interval in , where is constant , whether the corresponding local constraint is fulfilled . the set of violation intervals defined here is indeed a subset of all violation intervals from , so the statistical justification of the original confidence set carries over to this subset as well , see corollary [ thmrelaxesshist ] . for instance , the given estimator departs from the truth uniformly over every violation interval with probability at least . there are mainly two reasons to consider the multiscale constraint in instead of for the violation intervals . one is to be compatible with eh , i.e. , the set of its violation intervals should be empty . the other is to improve the clarity in visualization and the interpretability of violation locations by avoiding long intervals across change - points . as argued above , the set of all intervals where the local constraints are violated provides crucial information for the performance of . we illustrate this by a numerical example in figure [ fig : eval_msconst ] . the set of all violation intervals , plotted in the lower part of each panel , nicely depicts the deviation from the true density , which clearly shows where an estimator gives faithful estimation , and where it fails . furthermore , a horizontal gray scale bar is shown with the darkness proportional to the number of violation intervals covering a given location . this , to some extent , indicates how seriously an estimator deviates from the truth .
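the check just described, scanning the pieces on which a given histogram is constant and flagging those where the local likelihood-ratio constraint fails, can be sketched in a few lines. for simplicity the sketch tests only whole constant pieces rather than all intervals of the multiscale system, and it treats the quantile as a single user-supplied threshold `q_alpha`; both are simplifications relative to the evaluation routine of the accompanying r-package described below.

```python
import bisect
from math import log

def violation_pieces(breaks, heights, x_sorted, q_alpha):
    """Constant pieces of a candidate histogram that fail a local likelihood test.

    breaks are the bin edges, heights the histogram values on the bins,
    x_sorted the ordered sample and q_alpha a single user-supplied threshold.
    """
    n = len(x_sorted)
    eps = 1e-12
    bad = []
    for a, b, h in zip(breaks[:-1], breaks[1:], heights):
        count = bisect.bisect_right(x_sorted, b) - bisect.bisect_right(x_sorted, a)
        f_emp = min(max(count / n, eps), 1.0 - eps)       # empirical mass of (a, b]
        f_hist = min(max(h * (b - a), eps), 1.0 - eps)    # mass the histogram assigns
        stat = n * (f_emp * log(f_emp / f_hist)
                    + (1.0 - f_emp) * log((1.0 - f_emp) / (1.0 - f_hist)))
        if stat > q_alpha:
            bad.append((a, b))
    return bad

# toy usage: a badly fitted two-bin histogram on a small uniform sample
data = sorted(0.05 * i for i in range(20))
print(violation_pieces([0.0, 0.5, 1.0], [1.9, 0.1], data, q_alpha=3.0))
```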
in this example , the set of violation intervals for dk is empty , since it is defined under a similar multiscale constraint , and also has lots of changes , which greatly reduces the number of local constraints .the multiscale constraint , as we have seen , can be used as an evaluation tool to examine whether and where a given histogram estimator misses significant features ( i.e. to detect false negatives ) . on the other hand, it can also be applied to find superfluous jumps of any histogram estimator ( i.e. to detect false positives ) .to this end , we consider , for each change - point of a histogram estimator , whether merging its two nearby segments still satisfies the multiscale constraint in .if it is the case , the change - point is said to be removable . in figure[ fig : eval_msconst ] , each vertical short line , plotted on the horizontal line , corresponds to a removable change - point .note that it by no means indicates that all the removable change - points are removable at the same time .one can , however , claim that any sub - collection of removable change - points , such that every two are not end points of a common segment , are simultaneously removable with probability at least . for instance, it suggests many jumps by dk are unnecessary . sometimes , for a removable change - point, it is possible to merge more than two nearby segments , which potentially strengthens the confidence on its removability .thus , we also encode this information as the intensity of vertical short lines , which scales with the number of possible ways of merging , see figure [ fig : eval_msconst ] .the evaluation in terms of violation intervals and removable change - points is implemented via function ` checkhistogram ` in our r - package , together with the visualization functionality ( as shown in figure [ fig : eval_msconst ] ) .the eh shows a great potential in meeting the two main goals of the histogram , namely , probability estimation , and feature detection .for instance , it is as competitive as the state - of - art methods that are tailored to mode detection , such as dk , in terms of identifying the number of modes for large sample sizes .attractively , the eh gives a histogram as simple as possible , as it minimizes the number of bins , which greatly eases its interpretation .besides , the eh methods with various choice of significance levels serve as a useful data exploration tool , providing a cascade of finite sample inferences with user - specified confidence levels for a given dataset .we are not aware of any histogram which meets this goal , or can provide similar guarantee on modes , troughs or number of bins . based on extensive simulation study , we recommend as the _ default _ choice of significance level if estimation if of primary intent , and ( or even smaller ) if significant conclusions have to be made on modes and troughs . *theorems [ thma](i ) and [ thmb ] show that is an optimal confidence region for with respect to the distance for arbitrary : theorem [ thma](i ) shows that with probability converging to one , will exclude with , where sufficiently slowly . in the case of small ,theorem [ thmb ] shows that if is replaced by , then no test can distinguish and with nontrivial power . in the case of larger ,i.e. when stays bounded away from zero , the condition of theorem [ thma](i ) becomes with . 
on the other hand ,a contiguity argument as in the proof of theorem 4.1(c ) in dmbgen and walther ( 2008 ) shows that for any test to have asymptotic power 1 against a sequence requires with .[ thmb ] let be any test with level under i.i.d . . if and such that , then * remarks : * 1 .the price for simultaneously considering all in part ( ii ) of theorem [ thma ] , as opposed to a fixed sequence in ( i ) , is a doubling of the distance : for a fixed sequence of intervals , the standardized distance between and becomes negligible compared to the radius of the confidence ball around .but if one needs to consider all intervals simultaneously , then for the worst - case interval the standardized distance between and is also about proof of theorem [ thma ] shows that ( i ) holds even for smaller intervals , namely for , provided that also . if , then ( [ res ] ) requires a different bound .for example , if , , then ( [ res ] ) requires [ smalli ] d_p_n ( f , h ) > ( 1+_n ) ( + ) and it is not clear whether this result can be improved .note that theorem [ thmb ] does not provide a lower bound for scales of order .the construction of via rather than , say , the standardized binomial statistic is crucial for these optimality results : while the tail of is close to subgaussian , it does vary with and becomes increasingly heavy as decreases to 0 , see ch . 11.1 in .it is thus not clear how to construct a penalty that is effective in combining the evidence on the various scales .for example , if for some fixed , then the penalty in the definition of would not be sufficiently large for the standardized binomial statistic and therefore the optimality result ( [ resi ] ) would not hold , at least in the case .we will make use of the following * proof of proposition [ thmc1 ] * : recall that is continuous . since the law of does not depend on , we may assume i.i.d . ] .if is odd , then the bin is .denote the height of the histogram ( i.e. the slope of ) on that bin by .set then and , hence * proof of theorem [ thma ] : * to avoid lengthy technical work we will prove the theorem using all intervals in in the definition of .the technical work in shows that the approximating set of intervals used in section [ confset ] is fine enough so that the optimality results continue to hold with that approximating set . to prove ( [ resi ] ) set . then the inequality ( [ res ] ) reads [ a0 ] >c_n + 1(f(i)<h(i ) ) we have by the assumption of the theorem , and we define the event .we will show that on the event , ( [ a0 ] ) implies [ a1 ] _n + 1 ( f_n(i)<h(i ) ) ev ., uniformly in and , where . hence lemma [ quadapprox ] ( b , c ) gives by chebychev s inequality , and the above conclusions are uniform in and .( [ resi ] ) follows since by proposition 1 in . since .as in the proof of ( i ) one finds eventually .moreover , on we have , hence and so ( [ astar ] ) is not smaller than completing the proof of ( ii ) . 
it is enough to prove that the second probability goes to 0 uniformly in ; the third one is analogous .the proof follows essentially the argument of theorem [ thma](i ) .the reason why we are not in the case ( ii ) , which would require doubling , is that we do not need to control the probability simultaneously for all intervals and hence we can guarantee that is sufficiently close to .hence the inequality in the second probability in ( [ 2mode ] ) implies ( [ a0 ] ) ( setting ) , and it was shown in the proof of theorem [ thma](i ) that on the event , ( [ a0 ] ) implies [ 1new ] _n + 1 ( f_n(i_1)<h(i_1 ) ) ev .where .but if is large enough so that , then ( [ 1new ] ) implies by lemma [ quadapprox](b , c ) .( as in the proof of theorem [ thma ] we assumed all real intervals and refer to for the technical work showing that the conclusion also obtains with the approximating set used in section 2 . )hence the second probability in ( [ 2mode ] ) is not larger than by chebychev s inequality . * proof of theorem [ thmlbmon ] : * w.l.o.g .we assume ] , and . for a fixed , we have \br ) & = \pr_f\bl(h \equiv c \ge \frac{c_{k-1}+c_k}{2 } \text { on } ( m_{k-1 } , m_k]\text { for some constant } c \br ) \\ & { } \qquad + \pr_f\bl(h \equiv c < \frac{c_{k-1}+c_k}{2 } \text { on } ( m_{k-1 } , m_k]\text { for some constant } c \br ) \\ & \le \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2},\ , h \text { is constant on } i \equiv i_{k-1}^+\br ) \\ & { } \qquad + \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2},\ , h \text { is constant on } i \equiv i_{k}^-\br ) .\end{aligned}\ ] ] by symmetry we only need to consider the first term in the r.h.s of the above equation , where . by the construction of in, it holds that for any with there is an interval and such that .conditioned on and , we have for which implies .thus , for \\ \le & \ , 4\exp\left(-\frac{1}{128}n\lambda^2\underline\theta^2\right ) + \pr_f\bl({\left| \bar{h}_j - \bar{f}_j \right| } \ge \frac{\delta}{2 } - \frac{12\delta_n}{\lambda\sqrt{n}}\br)\qquad\text{[by lemma~\ref{quadapprox } ( i ) ] } \\ \le & \ , 4\exp\left(-\frac{1}{128}n\lambda^2\underline\theta^2\right ) + 2 \exp\left(-\frac{1}{72}n\lambda^2\left(\frac{\delta}{2}-\frac{12\delta_n}{\lambda\sqrt{n}}\right)_+^2\right ) \qquad\text{[by~\eqref{eqempf}]}.\end{aligned}\ ] ] the same bound holds for due to symmetry .therefore , we have for \text { for some } k\br ) \\ & \le 4k\left(2\exp\left(-\frac{1}{128}n\lambda^2\underline\theta^2\right ) + \exp\left(-\frac{1}{72}n\lambda^2\left(\frac{\delta}{2}-\frac{12\delta_n}{\lambda\sqrt{n}}\right)_+^2\right)\right).\end{aligned}\ ] ] for ( iii ) , we further divide ( or ) into two subintervals , ( or , ) of equal lengths . 
for any fixed , it holds that \br)\\ \le \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2 } , \text { and } h \text { is constant on } i \equiv i^+_{k,1 } \br ) + \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2 } , \text { and } h \text { is constant on } i \equiv i^+_{k,2 } \br)\\ + \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2 } , \text { and } h \text { is constant on } i \equiv i^-_{k,1 } \br ) + \pr_f\bl({\left| \bar{h}_i - \bar{f}_i \right| } \ge \frac{\delta}{2 } , \text { and } h \text { is constant on } i \equiv i^-_{k,2 } \br ) .\end{gathered}\ ] ] each term above can be bounded in a similar way as in ( ii ) , which leads to \br)\\ \le 16\exp\left(-\frac{1}{512}n\lambda^2\underline\theta^2\right ) + 8 \exp\left(-\frac{1}{288}n\lambda^2\left(\frac{\delta}{2}-\frac{24\tilde{\delta}_n}{\lambda\sqrt{n}}\right)_+^2\right)\qquad\text { for } n \ge \frac{32\log n}{\lambda\underline\theta}.\end{gathered}\ ] ] it follows from ( i ) and ( ii ) that for for \br ) \ge 1-\alpha \\ - 4k\left(2\exp\left(-\frac{1}{128}n\lambda^2\underline\theta^2\right ) + \exp\left(-\frac{1}{72}n\lambda^2\left(\frac{\delta}{2}-\frac{12\delta_n}{\lambda\sqrt{n}}\right)_+^2\right)\right).\end{gathered}\ ] ] thus , for \br ) \\\ge & \ , 1 - \alpha - 12k \left(2\exp\left(-\frac{1}{512}n\lambda^2\underline\theta^2\right ) + \exp\left(-\frac{1}{288}n\lambda^2\left(\frac{\delta}{2}-\frac{24\tilde{\delta}_n}{\lambda\sqrt{n}}\right)_+^2\right)\right).\ \ \ \box\end{aligned}\ ] ] * proof of theorem [ thmb ] : * using the probability integral transformation we may assume ] for and , where and . then .further , implies , hence } |f_{nj}(x)-1| = c_n \leq \sqrt{2 \log ( e / p_n)/(np_n ) } \leq \sqrt{2 } ( \log e / p_n)^{-1/2 } \leq \sqrt{2 } ( \log m_n)^{-1/2}$ ] .further , hence by the assumption on .the claim now follows as in the proof of theorem 4.1(b ) in using their lemma 7.4 . that lemma uses the assumption that the sets are pairwise disjoint to establish that the likelihood ratio statistics are conditionally independent given the and to establish .this assumption is not met here , but the proof goes through by defining . since these sets are pairwise disjoint , conditional independence follows for the corresponding . finally , is not needed to establish , since this follows from jensen s inequality and . scott , d. w. ( 1992 ) . .wiley series in probability and mathematical statistics : applied probability and statistics .john wiley & sons , inc ., new york . theory , practice , and visualization , a wiley - interscience publication .
|
the histogram is widely used as a simple , exploratory display of data , but it is usually not clear how to choose the number and size of bins for this purpose . we construct a confidence set of distribution functions that optimally address the two main tasks of the histogram : estimating probabilities and detecting features such as increases and ( anti)modes in the distribution . we define the _ essential histogram _ as the histogram in the confidence set with the fewest bins . thus the essential histogram is the simplest visualization of the data that optimally achieves the main tasks of the histogram . we provide a fast algorithm for computing a slightly relaxed version of the essential histogram , which still possesses most of its beneficial theoretical properties , and we illustrate our methodology with examples . an r - package is available online . * keywords and phrases . * histogram , significant features , optimal estimation , multiscale testing , mode detection . * ams 2000 subject classification . * 62g10 , 62h30 am and hs acknowledge support of dfg for 916 , hl support of dfg rtg 2088 subproject b2 , gw acknowledges support of nsf grants dms-1220311 and dms-1501767 .
|
several notions of coherent states have been formulated that capture in great generality many aspects of this intrinsically interdisciplinary territory .perhaps the most classical one when dealing with lie group symmetries is that of gilmore - perelomov ( gpcs ) , that considers the orbits of some given fiducial vector under the action of irreducible unitary representations of the lie group of interest .the main issue arising when this notion is applied to the euclidean groups is that the irreducible representations are not square integrable , hence no resolution of unity is provided by linear superposition on of projectors , since this would give rise to divergence .this problem has been addressed in , with the use of reducible representations constructed as direct integrals over finite intervals of representation parameters , while in the authors provide square integrable representations on the related homogeneous spaces as an application of a powerful result wich makes use of the geometry of coadjoint orbits for semidirect product groups . moreover , in it is also pointed out that the construction of reproducing kernel hilbert spaces ( rkhs ) is always possible for gpcs , independently on the square integrability , by restricting to the target of the corresponding coherent state transform .we addressed the problem from a different perspective .our approach is strongly motivated by the studies concerning the geometrization of the structure of the primary visual cortex ( v1 ) in mammals , the first cortical region involved in the processing of visual stimuli captured by the retinal receptors .the functional architecture of v1 presents indeed the concurrency of two different symmetries , since the action of single neurons can be modeled as a linear filtering of retinal images with the ( canonical ) coherent states of the second heisenberg group , while the internal axonal connectivity can be described in terms of the lie algebra of the group of euclidean motions of the plane .these two symmetries are tied together by the so - called orientation preference maps ( opm ) , that are mappings from the euclidean plane to the real projective line defining at each point of v1 , represented as a flat surface , the orientation of the gabor filter corresponding to the relative cell .these maps contain then informations on how coherent state analysis of two dimensional images is performed by v1 neurons using a two dimensional set of parameters instead of the total four dimensional set , due to the layered structure of v1 , and have proven to be intimately related with the connectivity .the purpose of linking these two apparently unrelated symmetries has lead us to the gpcs notion of euclidean coherent states , with fiducial vector chosen as a minimizer of the uncertainty principle for the irreducible representation of .the corresponding rkhs can be related to the abstract construction explained in , but its concrete realization contains some extra structure which is relevant for the understanding of this specific problem .more precisely , the characterization of the space of surjectivity for the coherent state transform is similar to the one of the well known bargmann - fock space : an summability condition , but with respect to a singular measure , and a complex differentiability condition .this is not trivial , since the dimension of is odd , and hence it can carry only an almost complex structure : analyticity is then replaced by the weaker cr condition . 
then , while complex differentiability is enough to get surjectivity for , in this case the excess of redundancy in the coherent state transform needs also to be controlled by the measure .moreover , this rkhs coincides with the target space of the canonical bargmann transform when this last is restricted to irreducible hilbert spaces for the representations of .this result provides then a link between the two symmetries , and also motivates the cr condition , which naturally arises from the corresponding restriction of the ordinary cauchy - riemann equations .in we provided a model of the structure of opm grounded on neurophysiological findings , which is able to reproduce neural activities of v1 measured in the in - vivo experiments and is in agreement with the proposed notion of coherent states .we would like to emphasize that this concrete application provides not only a motivation for the entire work , but also an example of a biological system whose remarkable organization can be deeply inspiring and demanding .the paper is organized as follows . in section [ sec : coherent ] we first provide precise definitions of the irreducible hilbert spaces for the representations of , following the classical approach introduced in .then we define the natural coherent state transform acting on and with theorem [ teo : isosurj ] we characterize its hilbert space of surjectivity . after a comparative discussion on the group structures of and , with theorem [ teo ] and corollary [ cor : diagram ]we then explicit the functional relations between the two symmetries , showing that the newly defined transform acts as the projection of the classical bargmann transform on .in section [ sec : model ] we present , as the fundamental application , the model for activated regions of v1 introduced in : after a description of the functional architecture of v1 and of the experimental setting , we precisely state the model in terms of the introduced coherent states and compare the results with the measurements .the euclidean motion group is the noncommutative lie group obtained as semidirect product between translation and counterclockwise rotations of the euclidean plane with the usual composition law .the left invariant vector fields can be calculated as whose commutator is given by = \cos\theta\partial_{q_1 } + \sin\theta\partial_{q_2}\ .\ ] ] since the two vector fields ( [ generators ] ) together with their commutator span the tangent space at any point , by chow theorem every couple of points can be connected by curves that are piecewise lie group exponential mappings for some , i.e. 
the group is naturally endowed with a so - called sub - riemannian structure .the nonintegrable distribution of planes provided by and defines then a contact structure , associated to the contact form and can be seen as the double covering of the manifold of contact elements of the plane .this last one is not orientable and hence can not carry a global contact form , but it is useful to note that it arises naturally as the projectivization of the four dimensional phase space .if we denote by the n - th heisenberg group in its semidirect product form , defined by the group law we note that , by an argument analogous to the one expressed for , we can associate to it a contact structure in the darboux normal form in accordance with the notion of as a central extension of the commutative .darboux theorem tells then that locally the geometry of is that of , which indeed is its metric tangent cone .moreover , as we noted , the contact structure ( [ contactform ] ) is directly inherited by the symplectic structure of the phase space , whose central extension returns the group .this will be the point that will permit to relate , in subsection [ sec : se2h2 ] , the groups and in terms of the complex structures underlying their coherent states transforms .we consider the coherent states of obtained with the gilmore - perelomov ( gpcs ) definition , starting from a minimum of the uncertainty principle . in order to do that ,we make use of the irreducible representations of , and of the related algebra representation . the action of the irreducible unitary representation of with parameter on is given by and for any we obtain a representation that is unitarily equivalent to ( [ s1representation ] ) up to rotations , as the representation of the lie algebra of reads where and are the infinitesimal generators corresponding to the left invariant vector fields ( [ generators ] ) .minimal uncertainty states for in the irreducible representation can be obtained as eigenvectors of the properly constructed annihilation operator in terms of the operators provided by ( [ eq : operators ] ) .this specific uncertainty principle has been discussed e.g. in , and we will be interested only in the eigenvector with eigenvalue zero .the equation for minimal uncertainty states with zero average angular momentum reads and is solved by where is the normalization .for we will simply denote them as .gpcs can then be constructed starting from the fiducial vector .we consider the family of coherent states for the group this subsection is devoted to the analysis of the coherent state transform defined by the family ( [ eq : se2cs ] ) , which we will call -bargmann transform in analogy with the classical bargmann transform ( see also theorem [ teo ] ) .its hilbert space of surjectivity , denoted is the space of functions on that satisfy an summability condition and a complex differentiability condition , as for the ordinary bargmann space . 
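for orientation, it may help to recall the classical objects referred to here. in one common normalisation the bargmann transform of a function in the real line and the norm of the associated fock space of entire functions read as follows (the constants depend on the convention and are quoted only for comparison):

```latex
(Bf)(z) \;=\; \pi^{-1/4} \int_{\mathbb{R}} \exp\!\Big(-\tfrac{z^{2}+x^{2}}{2} + \sqrt{2}\,z x\Big)\, f(x)\, dx ,
\qquad
\|F\|^{2}_{\mathcal F} \;=\; \frac{1}{\pi}\int_{\mathbb{C}} |F(z)|^{2}\, e^{-|z|^{2}}\, dA(z) .
```

the euclidean analogue constructed in what follows replaces analyticity by the weaker cr condition and the gaussian weight by a measure that is singular in the fourier domain.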
following the classical strategy used in to characterize the irreducible hilbert spaces for the representations of , the summability with respect to the variablesis expressed in terms of a hilbert space which , roughly speaking , consists of functions whose fourier transform is concentrated on a circle , and hence can be treated as functions in .the complex differentiability relies instead on the almost complex structure that can be associated to as a contact manifold , and tells that is a space of cr functions .we will call -bargmann transform of a function we start now the construction of the hilbert space . inwhat follows we will use for the ( unitary ) fourier transform the convention [ def : fsemin ] let be a function in the schwartz class and call its fourier transform .we define the distributions where stands for the bessel function of order zero .+ we will also call the corresponding operators from to by we denote the seminorm on noting that , where , and by we denote the functional on the introduced operators and functionals are related by distributional fourier transform , as expressed by the following lemma . [ lem : keyreal ] the following hold * in distributional sense * for all the first claim reads equivalently . to this endwe only need to show since by standard arguments on tempered distributions to see ( [ dum1 ] ) , take the second claim is then a consequence of definition [ def : fsemin ] and the distributional parseval theorem , since .let be the equivalence relation induced by ( [ eq : fsemin ] ) and ^\omega ] its elements . by definition , then , we have that is a norm on . moreover , since this quotient keeps only informations on the behavior of functions on the circle of radius , then the elements of can be considered as functions of the polar angle .[ lem : fhils1 ] any element ^\omega \in { \stackrel{\circ}{{\mathcal h}}\!{}^\omega} ] .in particular , ^\omega(\varphi ) \frac{1}{\omega}\delta(|k| - \omega ) .\ ] ] moreover , we have that ^\omega\|_{l^2(s^1)} ] the related equivalence classes .we will call the closure of the quotient space with respect to the norm and denote by \omega ] of can be represented as an distribution by * we can unambiguously extend the notation ^\omega ] . to any distribution can indeed associate a function such that ^\omega(\varphi ) = \phi(\varphi) ] to mean ^\omega ] . the relations among the various hilbert spaces is now summarized .[ cor : diagram ] the following diagram is commutative where the map is intended in the sense of theorem [ teo ] and transforms with respect to the variable are considered as functions of the polar angle .the only relation that has not been inspected is .the regularity is due to ( [ cr ] ) , while \omega ] and indexed by , then the structures in fig.[pinwheels ] can be approximately reproduced as where the integral should properly be intended in it sense . the experiments that allow to obtain opm as in fig.[pinwheels ] , left rely on optical imaging techniques that quantify blood charge in the neural tissue using fmri .this setting is then used to measure the activity in v1 caused by cells responses to so - called gratings .gratings are images constituted by straight parallel black and white stripes shifting along the perpendicular direction , that can be easily provided by plane waves with phase shift. 
experiments with gratings have shown that activated regions depend only on the angle at which they are presented , or better the orientation , since the stimulus , and consequently the resulting activity , can not be distinguished at angles of due to the phase shift .an example of the images obtained is given in fig.[blobcomparison ] , left .the result of this experiment is then a family of real maps , that can be used to obtain the opm coded with colors as in fig.[pinwheels ] by performing a vector sum and then considering the resulting orientation ( see fig.[blobcomparison ] , right ) : we note that , starting from a given an orientation map , a way to obtain activated regions that are compatible with ( [ eq : colorcoding ] ) is by scalar product , i.e. in particular if we consider the construction ( [ eq : randomphases ] ) , then ( [ eq : scalarprcol ] ) reduces to we are going to see that model introduced in does indeed reproduce activities in the form ( [ eq : blobsempirical ] ) , and hence opm in the form ( [ eq : randomphases ] ) , but starting from a geometric model of the activities , so that opm arise as a consequence of the color coding ( [ eq : colorcoding ] ) .the model we have proposed in in order to reproduce the activity patterns resulting from the gratings experiment can be stated in terms of an -bargmann transform of a specific white noise process .this will be properly symmetrized due to the intrinsic characteristics of the patterns , and the presence of randomness will be motivated in terms of the retinal random waves previously described .the geometry of the different activities resulting from the exposure to gratings at various orientations is then motivated in terms of the uncertainty principle , providing a description of a family of functions indexed by orientations in terms of a single function on the group .[ [ statement - of - the - model ] ] statement of the model + + + + + + + + + + + + + + + + + + + + + + given a white noise with values in ] , and considering a prolongation to $ ] such that , we define the functions recalling that they are minimal uncertainty states in the sense of proposition [ prop : minuncmix ] .starting from them we define the activity functions as these functions are indeed periodic in , to represent orientations , and provide opposite response at orthogonal angles : , as it is the case for v1 cells . by direct computationwe can explicitly write ( [ modelconstr ] ) in the following form calling then ( [ modelconstr ] ) reads we note that , if we consider a real retinal image obtained as a superposition of plane random waves at a fixed wavelength as given by then by theorem [ teo ] the function can be obtained as the the resulting cell response in the form ( [ eq : csa ] ) . this in particular motivates the choice of random phases . [ [ comparison - with - the - experiments ] ] comparison with the experiments + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in fig.[blobcomparison ] we show a comparison with the experiments .the parameters were chosen as , providing an approximate equipartition of uncertainty .however the results are stable under small variations of .this is reasonable , since the functions ( [ effpot ] ) are such that up to a multiplicative constant .this ensures in particular that the expression ( [ eq : blobsempirical ] ) can be considered as an approximation to ( [ model ] ) . 
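the random-wave construction invoked above is easy to reproduce numerically: superposing plane waves with a common wavenumber, random propagation directions and random phases, and taking half the argument of the resulting complex field, yields pinwheel-like orientation maps. the sketch below is such a standard construction with illustrative parameter values; it is not the cortical model itself and none of the numbers are taken from the paper.

```python
import numpy as np

def orientation_preference_map(size=256, wavelength=20.0, n_waves=60, seed=0):
    """Pinwheel-like orientation map from random plane waves of fixed wavenumber."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi / wavelength
    y, x = np.mgrid[0:size, 0:size]
    z = np.zeros((size, size), dtype=complex)
    for _ in range(n_waves):
        alpha = rng.uniform(0, 2 * np.pi)            # wave direction
        phase = rng.uniform(0, 2 * np.pi)            # random phase
        z += np.exp(1j * (k * (x * np.cos(alpha) + y * np.sin(alpha)) + phase))
    # half the argument of the complex field gives an orientation in [0, pi)
    return 0.5 * np.angle(z) % np.pi

opm = orientation_preference_map()
print(opm.shape, float(opm.min()), float(opm.max()))
```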
produced by ( [ model ] ) .right , gray valued images visualize the maps by varying the angle .the color image has been constructed associating a color coding representation to preferred orientations , as in ( [ eq : colorcoding ] ) .( center and right figures extracted from ),title="fig : " ] produced by ( [ model ] ) .right , gray valued images visualize the maps by varying the angle .the color image has been constructed associating a color coding representation to preferred orientations , as in ( [ eq : colorcoding ] ) .( center and right figures extracted from ),title="fig : " ] produced by ( [ model ] ) .right , gray valued images visualize the maps by varying the angle .the color image has been constructed associating a color coding representation to preferred orientations , as in ( [ eq : colorcoding ] ) .( center and right figures extracted from ),title="fig : " ]the introduced notion of -bargmann space allows to pass the problem of nonintegrability of the representation using a measure that is singular in the fourier domain , hence reflecting the behavior of the irreducible representation .this method is constructive , and permits to perform integrations on the group in terms of equivalence classes .moreover , the choice of a fiducial vector as a minimum of the uncertainty principle provides a relation between two symmetries that appeared unrelated as and .this relation is given at the level of coherent states transforms , and relies on a compatibility of the complex structures associated to the coherent states of the two lie groups , that can then be considered nested one into the other from this perspective .moreover , it allows to complete the concrete construction of the space of surjectivity of the -bargmann transform .this approach unifies two main symmetries present in the maps of orientation preference of the primary visual cortex , and allows to produce a model that is able to reproduce neural maps of activity measured experimentally .the model is based on the uncertainty principle of , describing activated regions in terms of minimal uncertainty states .this uncertainty principle acts at the macroscopic level , hence does not rely on any microscopic physics assumption on the brain , but rather refers to functional features of the cortex .the modularity of the lie group approach allows to extend the models to other higher symmetries characterizing the functional architecture , as in , and in perspective to model high level functionality of vision .99 ali s.t . ,antoine j.p . and gazeau j.p . , coherent states , wavelets and their generalizations .springer 2000 .arnold v.i . , mathematical methods of classical mechanics .springer 1989 .barbieri d. , citti g. , sanguinetti g. and sarti a. , an uncertainty principle underlying the functional architecture of v1 .j. physiol .paris , to appear .baouendi m.s . ,ebenfelt p. and rothschild l.p ., real submanifolds in complex space and their mappings .princeton 1999 .bargmann v. , on a hilbert space of analytic functions and an associated integral transform .application to distribution theory .pure appl .1967 . , 20 : 1101 .blair d.e . ,riemannian geometry of contact and symplectic manifolds .birkhuser 2002 .blasdel g.g . ,orientation selectivity , preference , and continuity in monkey striate cortex .j. neurosci 1992 , 12:3139 - 3161 .bonhoeffer t. and grinvald a. , iso - orientation domains in cat visual cortex are arranged in pinwheel - like patterns .nature 1991 , 353(6343):429 - 431 .bosking w. h. , zhang y. , schofield b. 
and fitzpatrick d. , orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex .j. neurosci 1997 , 17(6):2112 - 2127 .breitenberger e. , uncertainty measures and uncertainty relations for angle observables . found .phys . 1985,15:353 - 364 .carruthers p. and nieto m.m . ,phase and angle variables in quantum mechanics .pys 1968 , 40:411 - 440 .citti g. and sarti a. , a cortical based model of perceptual completion in the roto - translation space . j. mathematical imaging and vision 2006 , 24(3):307 - 326 .daugman j.g . , uncertainty - relation for resolution in space , spatial frequency and orientation optimized by two dimensional visual cortical filters . j. opt .am . a 1985 , 2(7):1160 - 1169 .folland g.b ., harmonic analysis on phase space .princeton 1989 .gromov m. , carnot - carathodory spaces seen from within .sub - riemannian geometry , progr .1996 , 144:79323 , birkhuser .hubel d.h . andwiesel t.n . , ferrier lecture . functional architecture of macaque monkey visual cortex .r. soc . lond .b 1977 , 198:1 - 59 .huberman a.d ., feller m.b . and chapman b. , mechanisms underlying development of visual maps and receptive fields .2008 , 31:479 - 509 .isham c.j . and klauder j.r ., coherent states for n dimensional euclidean groups e(n ) and their application .1991 , 32(3):607 - 620 .kaschube m. , schnabel m. , lwel s. , coppola d.m ., white l.e . and wolf f. , universality in the evolution of orientation columns in the visual cortex .science 2010 , 330(6007):1113 - 1116 .montgomery r. , a tour of subriemannian geometries , their geodesics and applications .niebur e. and wrgtter f. , design principles of columnar organization in visual cortex .neural computation 1994 , 6(4):602:614 .ohki k. , chung s. , kara p. , hbener m. , bonhoeffer t. , reid c. , highly ordered arrangement of single neurons in orientation pinwheels .nature 2006 , 442(7105):925 - 928 .perelomov a. , generalized coherent states and their applications .birkhuser 1986 .petitot j. and tondut y. , vers une neurogomtrie .fibrations corticales , structures de contact et contours subjectifs modaux .mathematiques , informatique et sciences humaines 1999 , 145:5 - 101 .ringach d. , spatial structure and symmetry of simple - cell receptive fields in macaque primary visual cortex .j. neurophysiology 2002 , 88(1):455 - 63 .sarti a. , citti g. and petitot j. , the symplectic structure of the primary visual cortex .cybernetics 2008 , 98(1):33 - 48 .sugiura m. , unitary representations and harmonic analysis : an introduction .elsevier 1990 .vilenkin n.j ., special functions and the theory of group representations .wolf f. and geisel t. , spontaneous pinwheel annihilation during visual development .nature 1998 , 395(6697):73 - 78 .
|
The uncertainty principle considered in this work allows the construction of a coherent-states transform that is closely related to the Bargmann transform for the group. The corresponding target space is characterized constructively and related to the almost complex structure of the group viewed as a contact manifold. Such a coherent-states transform provides a model for neural activity maps in the primary visual cortex, which are then described in terms of minimal-uncertainty states. The results of the model are compared with experimental measurements.
|
monte carlo simulations have proven to be adequate tools for describing alpha- , beta- and gamma - particle transport , even in complex geometries .a great variety of computer codes have been developed for particle transport , dosimetry , particle physics and industrial applications .different levels of sophistication exist among the codes , but even the simplest ones , which take into account rutherford and compton scattering , photoelectric absorption and continuous slowing - down of charged particles , can provide acceptable results . in many cases , simulation is the only practical way to explore the physics behind observed phenomena .alpha - particle spectrometry is a widely - used analytical method , for example in surveys of environmental radioactivity .the low activity of the samples necessitates long counting times and a small sample - detector distance ( sdd ) .the drawback of a small sdd is the possibility of coincidence summing between the emitted alpha particle and subsequent emissions from the daughter nucleus .in addition , carefully designed sample preparation techniques are essential , since the alpha particles continuously lose their energy as they travel through matter .the energy loss leads to degradation of the spectrum quality via peak spreading , which increases with as the sdd is reduced .simulations can be used to investigate the influence of various phenomena on the spectrum quality .the most important factors can be singled out and the measurement setup can be optimised .moreover , unknown properties of the source , such as source density ( or thickness ) or source particle properties , can be determined .this is important , especially in the case of direct alpha spectrometry , when radiochemical sample treatment is omitted .the particle beam attenuation and interactions in basic research can also be examined .many monte carlo simulation packages , such as the trim package , the geant software suite , and mcnp code , are suitable for simulating the alpha particle behaviour in the medium . however , these packages are not necessarily optimal for alpha spectrometry simulations .more specific approaches to alpha - spectroscopic simulations include the backscattering study of ferrero et al . and the investigation of aerosol particles by pickering .roldn et al . examined the spectrum quality at a small sdd .the present monte carlo simulation code , known as aasi ( advanced alpha - spectrometric simulation ) , is designed to simulate alpha - particle energy spectra .it is intended to be a comprehensive simulation package where all the major processes influencing the energy spectrum are included .samples of various types ( aerosol particles , thick samples , non - uniform samples , etc . 
) are accommodated .coincidences between the emitted particles are calculated using nuclide - specific decay data that are stored in a library file prepared in extensible markup language , xml .although the code has so far been applied to the simulation of alpha particle energy spectra from environmental samples , it can also be used for other applications .the typical running time on a 1.6 ghz pentium pc varies from seconds to a couple of minutes depending on the complexity of the simulation problem .the code is written in fortran 95 .particle propagation through a material layer is determined by two physical processes : direction changes ( scattering ) and energy loss .the algorithm for particle propagation in a given material layer proceeds as follows : 1 .emit a particle from a randomly selected point .2 . calculate the distance , i.e. step length to the next scattering ( or photoabsorption ) event using the cross - section data .3 . if the particle is charged , adjust the step if a boundary of absorbing material is crossed .calculate the continuous energy loss during the step .4 . if the particle energy is below the cut - off value , stop tracking .if the particle crossed boundary of the material layer , proceed to the next layer if one is present .otherwise , stop tracking .determine the next direction vector , i.e. scattering angles .if the particle is a photon , determine the energy loss in the scattering or photoabsorption event .goto ( 2 ) . here ,characteristics of the source as well as the particle tracking method , i.e. determination of the scattering angles , are described .calculation of the energy loss of alpha particles , electrons and photons is presented in the following sections .particle emission can originate from a point or from a finite - sized object .these objects , e.g. aerosol particles , can be embedded in the source matrix .the composition of the source matrix and the objects that emit radiation need not to be the same .for example , alpha particles can be emitted from an aerosol particle located inside a glass - fibre filter . in the spectrum simulationsthe number of alpha particle emissions is given in the input .the thickness of the source can be subjected to random fluctuation that is assumed to follow a gaussian distribution with a user - given standard deviation .to prevent impossibly large thicknesses , the resulting source thickness is limited to where is the radial position inside the source , is a user - given parameter and is the nominal ( mean ) thickness .coordinates of the source particles are sampled as described by siiskonen and pllnen . 
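Purely as an illustration of the control flow of steps (2)-(7) above, the following self-contained sketch tracks a single particle through a one-dimensional stack of layers. It is written in Python and is not the Fortran 95 implementation of AASI; the cross section, the constant stopping power and the isotropic re-scattering are toy values chosen only to make the example runnable.

    import math
    import random

    def toy_track(energy_mev, thicknesses_cm, cutoff_mev=0.1,
                  sigma_per_cm=5.0, dedx_mev_per_cm=800.0):
        """1-D toy version of steps (2)-(7): exponential free paths, constant
        continuous energy loss and isotropic re-scattering inside a stack of
        layers.  Returns the stopping depth in cm, or None if the particle
        leaves the stack (backscattered or transmitted)."""
        edges = [0.0]
        for t in thicknesses_cm:
            edges.append(edges[-1] + t)              # cumulative layer boundaries
        depth, mu, i = 0.0, 1.0, 0                   # depth, direction cosine, layer index
        while energy_mev > cutoff_mev:               # (4) cut-off check
            step = -math.log(random.random()) / sigma_per_cm        # (2) free path
            # (3) shorten the step so that it does not cross the layer boundary
            to_edge = (edges[i + 1] - depth) / mu if mu > 0 else (depth - edges[i]) / -mu
            crossing = step >= to_edge
            if crossing:
                step = to_edge
            energy_mev -= dedx_mev_per_cm * step     # continuous energy loss
            depth += mu * step
            if crossing:                             # (5) change layer or leave the stack
                i += 1 if mu > 0 else -1
                if i < 0 or i >= len(thicknesses_cm):
                    return None
                continue
            mu = 2.0 * random.random() - 1.0 or 1e-12   # (6) isotropic toy re-scattering
            # (7) loop back to step (2)
        return depth

    random.seed(1)
    print(toy_track(5.5, thicknesses_cm=[0.001, 0.002]))   # e.g. a 5.5 MeV alpha in thin foils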
for sources with a random thickness and convex or concave sources ,the vertical coordinate is sampled by the rejection method .convex and concave source shapes are described with a paraboloid of revolution .the user of the code supplies the central and side thicknesses of the source .if the source thickness is zero , all source particles lie on a plane .source particles can have a spherical or elliptical shape .spherical source particles can have a log - normal size distribution .inactive source particles can be coated with a uniform layer of radioactive material .in addition , a spherical shell of inactive material can be placed around a spherical source particle .the distance of the source particles from the source surface can be exponentially distributed inside the source matrix .this is a useful feature for investigating air filters in which radioactive aerosol particles are accumulated .this option is only available for cylindrical sources without thickness fluctuations .the distance of a source particle from the source surface is obtained from where is a random number between 0 and 1 , and is the mean penetration depth given by the user .another user - given parameter , , determines the fraction of the particles to be distributed according to eq .( [ eq : expdistr ] ) .the rest , fraction , is distributed on the source surface ( ) .particles which have larger than the source thickness penetrate the source and are ignored .the total number of emissions from the source is reduced accordingly .an average solid angle subtended by the detector , the geometrical detection efficiency , is calculated .this is the number of hits received by the detector divided by the number of alpha particle emissions .the desired accuracy , the standard deviation of the efficiency , is given in the input .calculation of the geometrical detection efficiency is necessary , for example , in direct alpha spectrometry when radiochemical sample treatment is omitted .tracers can not then be used for quantitative activity determination .the measurement setup , consisting of the source , source backing , absorbing material layers and the detector , can be plotted in a file for visual inspection .library routines for plotting were written by kohler .electrons are tracked when they travel in the source , in the source backing and in the detector , including its dead layer .photons are only tracked inside the detector ( including its dead layer ) .alpha particles are tracked in the source backing for backscattering studies .otherwise , particles are assumed to travel in straight paths .the tracked particle is followed until it escapes the absorbing material or its energy falls below a cut - off value . when crossing a boundary between two adjacent absorbing layers ,the tracking step length is adjusted so that the step does not cross the layer boundary .particle tracking starts with the sampling of the initial emission coordinates .the initial emission direction is chosen from a uniform distribution .following the emission , the cosine of the polar angle ( see fig . 
[fig : coords ] ) of the tracked particle is determined by where is the polar angle after scattering , is the scattering polar angle and is the scattering azimuthal angle .the cosine and sine of the azimuthal angle are given by where and all angles are in laboratory coordinates .the scattering angle depends on the differential scattering cross section .after the initial emission , the particles undergo successive scatterings which are assumed to be statistically independent .alpha particle scattering is calculated in the centre - of - mass frame . before the determination of ,the scattering angle is transformed to laboratory coordinates via where is the scattering polar angle in centre - of - mass coordinates , and are the masses of alpha particle and target atom , respectively .alpha particle energy loss is calculated as described in ref . , using the stopping power parametrisation of ziegler as described in ref .the total stopping power is the sum of the stopping power due to electrons , , and the nuclear stopping power , . in the energy region of interest ( below 10 mev ) , is parametrised as where and values of parameters are tabulated in ref . . here, is the energy of a proton moving at the same velocity as the alpha particle in question . for composite materials , other parametrisations are also available .an arbitrary number of absorbing material layers can be added between the source and the detector .the user supplies the number of layers , their atomic and mass numbers , densities , thicknesses and standard deviations representing the thickness fluctuations .alpha particles are assumed to travel straight paths , except in the source backing where their path is tracked collision by collision .straggling of the alpha particle energy loss can be approximated by a gaussian energy distribution .although not strictly correct with thin absorbing layers , it gives a reasonable estimate in many cases .standard deviation of the gaussian distribution depends on the maximum energy transfer in one collision between the alpha particle and an atomic electron , , given approximately by .the parameter is defined through , where is the energy of the alpha particle .the deviation is given by where is the alpha particle velocity in units of .parameter is the average energy loss in the material layer in question , here , and are the atomic and mass number of the target , respectively , is the material density in g/ and is the distance travelled in micrometers .alpha particles can be tracked in the source backing plate .screened elastic rutherford scattering is used to determine the changes in the flight direction . 
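The explicit expressions for the post-scattering direction cosines quoted above did not survive in this copy of the text. As a stand-in, the sketch below implements the standard Monte Carlo rotation of a unit direction vector by a sampled polar scattering angle omega and azimuth phi; the function name and the near-pole special case are our own choices, not the actual AASI routine.

    import math
    import random

    def rotate_direction(u, v, w, omega, phi):
        """Rotate the unit direction (u, v, w) by polar scattering angle `omega`
        and azimuthal angle `phi` (standard Monte Carlo direction update)."""
        sin_o, cos_o = math.sin(omega), math.cos(omega)
        sin_p, cos_p = math.sin(phi), math.cos(phi)
        if abs(w) > 0.99999:                      # direction (anti)parallel to the z-axis
            return sin_o * cos_p, sin_o * sin_p, math.copysign(cos_o, w)
        s = math.sqrt(1.0 - w * w)
        u_new = u * cos_o + sin_o * (u * w * cos_p - v * sin_p) / s
        v_new = v * cos_o + sin_o * (v * w * cos_p + u * sin_p) / s
        w_new = w * cos_o - s * sin_o * cos_p
        return u_new, v_new, w_new

    # quick check: the rotated vector stays normalized up to rounding
    random.seed(0)
    u, v, w = 0.0, 0.0, 1.0
    for _ in range(1000):
        u, v, w = rotate_direction(u, v, w, random.uniform(0, math.pi),
                                   random.uniform(0, 2 * math.pi))
    print(u * u + v * v + w * w)   # prints a value equal to 1 up to rounding errors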
between the elastic collisions , alpha particles are assumed to lose their energy continuously .the mean free distance between the collisions is calculated from the potential where is the screening radius , is electron charge magnitude , is the permittivity of free space and is the radial distance .the resulting total cross section is where is the fine structure constant and screening parameter the screening radius is given by where is the bohr radius .the effective charge of the alpha particle , , is calculated as described in .the mean free distance ( step length ) between the collisions , , is sampled from where is the atomic density and is the avogadro constant .the mean atomic spacing is used as a step length if .moreover , if the energy loss between two successive collisions is more than five percent of the alpha particle energy , the step length for the energy loss calculation is reduced until the loss is less than five percent .angular deflection in the scattering event from a potential ( [ eq : rutherfordv ] ) is given by and ( the s are independent random numbers ) .alpha particle energy loss in the detector dead layer is treated as described in section [ sect : loss ] . when the alpha particle hits the active volume of the detector , all its remaining energy is assumed to be deposited . in other words , alpha particles are neither tracked nor is their energy loss calculated in the active volume of the detector . instead, a simplified solution is chosen which notably reduces the calculation time .the properties of the detector are read from the user - prepared file .the parameters are the atomic and mass numbers of the detector material , detector radius and thickness , dead layer thickness , detector full - width at half - maximum ( fwhm ) and the parameters of the exponential tailing function .measurements show that the detector response to monoenergetic alpha particles is not gaussian . to take this into account, a double - exponential tailing function can be added to the detector response .the resulting energy is sampled from a distribution where is the energy of the incoming alpha particle .the user supplies the parameters , and .they should be determined from the measurements of good quality ( i.e. thin ) sources at a large sdd .parameter is the ratio of the areas of the two exponential distributions .typical values for canberra pips with an area of 450 mm are kev , kev and .fwhm of the gaussian detector response is 14 kev .convolution with the gaussian detector response is done after the tailing .large - angle deflections of the electrons result from the screened elastic rutherford scattering , eqs .( [ eq : rutherfordv ] ) and ( [ eq : rutherfordcs ] ) , with replaced with , see , e.g. ref .the screening parameter can be chosen from three alternative models .nigam et al . suggested that where is the electron kinetic energy .adesida et al . fitted electron scattering data in aluminium and proposed that molire ( see also ) concluded that where and is the mass of the electron . between the elastic collisionsthe electrons continuously lose their energy .the energy loss is calculated by using bethe s formula ,\end{aligned}\ ] ] where ( ev ) is the average ionisation energy of the target atom .we neglect bremsstrahlung and the production of delta electrons and x - rays , since their influence on the detected electron energy spectrum is small . 
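Returning briefly to the detector response to alpha particles described earlier in this section: the explicit form of the peak-shape distribution and the typical tail parameters (apart from the quoted 14 keV FWHM) are not reproduced in this excerpt. The sketch below therefore assumes a common parameterization, namely a mixture of two left-sided exponential tails followed by Gaussian smearing; the values of eta1, eta2 and the mixing ratio are placeholders of our own choosing.

    import math
    import random

    def detector_response(e_in_kev, fwhm_kev=14.0, eta1_kev=4.0, eta2_kev=30.0, ratio=0.1):
        """Sample a detected energy for an alpha of energy `e_in_kev`.
        Assumed stand-in for the peak-shape equation: a low-energy tail drawn
        from a mixture of two left-sided exponentials (decay constants eta1,
        eta2, mixing weight `ratio`), then convolution with a Gaussian whose
        FWHM is the quoted 14 keV."""
        eta = eta2_kev if random.random() < ratio else eta1_kev
        tail = random.expovariate(1.0 / eta)               # energy shifted into the tail
        sigma = fwhm_kev / (2.0 * math.sqrt(2.0 * math.log(2.0)))
        return e_in_kev - tail + random.gauss(0.0, sigma)

    # histogramming many such samples for a 5485.6 keV alpha line would show a
    # Gaussian peak with an exponential tail on its low-energy side
    random.seed(2)
    print([round(detector_response(5485.6), 1) for _ in range(5)])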
The mean distance between the collisions and the angular deflection are calculated as for the alpha particles, except that the maximum energy loss per step is ten percent of the electron energy. When an electron hits the detector, its path is followed through the dead layer and into the active volume. In the dead layer, the user has the option of a partially depleted region, where the electron deposits part of its energy in the detected signal. The amount of energy deposited in the formation of the signal is then given in terms of the electron energy loss at a given depth and the dead-layer thickness. In the active volume of the detector, the deposited energy is equal to the detected signal. The number of backscattered and transmitted particles from the detector is calculated. A particle is counted as backscattered when it escapes from the front side of the detector, and as transmitted when it escapes from the other sides of the detector. For backscattering studies, a parallel electron beam hitting the detector surface perpendicularly can be used. Photons are assumed to interact via photoelectric absorption and Compton scattering. Pair production is ignored, since we are interested in low-energy phenomena. The mean free distance is calculated from the total cross section of the above-mentioned interactions, and a random number is used to decide which interaction occurs at the interaction point. The total photoelectric absorption cross section is read from a text file and interpolated; the data were obtained from the National Institute of Standards and Technology database. If data for the element in question do not exist, an analytical approximation for the cross section, expressed in terms of the photon energy in keV, is used. This approximation overestimates the total cross section for most elements, especially at low energies (less than 100 keV). For example, the overestimation is approximately a factor of 10 in Si when the photon energy is between 15 keV and 100 keV. As the overestimation is quite large, the user should supplement the photoelectric data library for the element in question, if possible. After photoelectric absorption, an electron is ejected in the direction of the incoming photon. X-rays produced in this process are ignored; their energy is often so small that they are absorbed in the detector anyway. Neglecting the binding energy and momentum of the atomic electron, the differential cross section for Compton scattering is
\[
\frac{d\sigma_{\mathrm{C}}}{d\Omega}=\frac{r_{\mathrm{e}}^{2}}{2}\left[\frac{m_{\mathrm{e}}c^{2}}{m_{\mathrm{e}}c^{2}+E_{\gamma}(1-\cos\omega)}\right]^{2}\left\lbrace\frac{E_{\gamma}^{2}(1-\cos\omega)^{2}}{m_{\mathrm{e}}c^{2}\left[m_{\mathrm{e}}c^{2}+E_{\gamma}(1-\cos\omega)\right]}+1+\cos^{2}\omega\right\rbrace ,
\]
where $E_{\gamma}$ is the photon energy, $\omega$ the photon scattering angle and $r_{\mathrm{e}}$ the classical electron radius. The photon scattering angle is calculated from the distribution ([eq:kn]) using the rejection method by Brusa et al. The total cross section for Compton scattering is
\[
\sigma_{\mathrm{C}}=\frac{\pi r_{\mathrm{e}}^{2}\,m_{\mathrm{e}}c^{2}}{E_{\gamma}}\left\lbrace\left[1-\frac{2(E_{\gamma}+m_{\mathrm{e}}c^{2})\,m_{\mathrm{e}}c^{2}}{E_{\gamma}^{2}}\right]\ln\frac{2E_{\gamma}+m_{\mathrm{e}}c^{2}}{m_{\mathrm{e}}c^{2}}+\frac{1}{2}+4\frac{m_{\mathrm{e}}c^{2}}{E_{\gamma}}-\frac{(m_{\mathrm{e}}c^{2})^{2}}{2(2E_{\gamma}+m_{\mathrm{e}}c^{2})^{2}}\right\rbrace .
\]
The total interaction cross section is the sum of the photoelectric absorption and Compton scattering cross sections.
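The following sketch shows how the photon transport just described can be sampled in practice: the scattering angle is drawn from the Klein-Nishina distribution by simple rejection (its maximum over the cosine of the scattering angle is attained at forward scattering), while the free path and the interaction type follow from the partial cross sections. The per-atom cross sections and the atomic density are left as user inputs; only the electron rest energy is hard-coded.

    import math
    import random

    ME_C2 = 511.0  # electron rest energy in keV

    def klein_nishina(cos_w, e_gamma):
        """Unnormalized Klein-Nishina angular distribution (binding neglected);
        overall constants cancel in the rejection sampling below."""
        k = ME_C2 / (ME_C2 + e_gamma * (1.0 - cos_w))
        return k * k * (e_gamma ** 2 * (1.0 - cos_w) ** 2
                        / (ME_C2 * (ME_C2 + e_gamma * (1.0 - cos_w)))
                        + 1.0 + cos_w ** 2)

    def sample_compton_angle(e_gamma):
        """Sample cos(omega) by rejection against the forward-scattering maximum."""
        f_max = klein_nishina(1.0, e_gamma)
        while True:
            cos_w = random.uniform(-1.0, 1.0)
            if random.random() * f_max <= klein_nishina(cos_w, e_gamma):
                return cos_w

    def next_photon_event(sigma_pe, sigma_c, atom_density):
        """Return (free path, interaction type): the path is sampled from the total
        cross section and the interaction is chosen in proportion to the partial
        cross sections (units of the inputs must be mutually consistent)."""
        sigma_tot = sigma_pe + sigma_c
        path = -math.log(random.random()) / (atom_density * sigma_tot)
        kind = "photoelectric" if random.random() < sigma_pe / sigma_tot else "compton"
        return path, kind

    random.seed(3)
    print(sample_compton_angle(60.0))   # e.g. a 60 keV gamma ray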
the mean free distance is then majority of alpha emitters have a significant decay branch to excited states of the daughter nucleus .the excited states decay by gamma - ray or conversion electron emission .since the lifetimes of the excited states are typically much shorter than the integration time of the data acquisition electronics , pulse summation between the alpha particle and particles emitted by the daughter nucleus may occur .the summation is more pronounced when the sdd is small .the summation may lead to distortion of the peak shape and , thus , may have an influence on nuclide identification .a good example is separation of and , which is difficult even in the case of a high - resolution detector and a sophisticated spectrum deconvolution code .another example is the coincidence summation of , whose main alpha decay branch leads to third excited state of the daughter , resulting in a clearly visible bump on the high - energy side of the main alpha peak .the probability of each alpha decay branch is given in the nuclide library file , consisting of a schema file and actual library in xml format .the fortran - xml interface is written by markus .the alpha decay branch for an individual decay is selected using a random number .the nuclide library file also contains decay routes of the excited states of the daughter nucleus .each decay route has a known probability of occurence ( yield ) , decay type ( gamma or conversion electron emission ) , initial and final state indices , and energy of the emitted particle .the emission of a conversion electron is associated with an x - ray , whose energy is given in the library .for each excited state of the decay route a random number is used to select the next decay channel ( i.e. , final state and emitted particle ) .the route is followed until the ground state is reached . for each emitted particle ,the emission direction is sampled .after the conversion electron emission , an x - ray is emitted before the cascade is followed further .we assume that each conversion electron is associated with x - ray emission .this is a simplification , since we overlook fluorescence yields and auger electrons .the approximation is good for heavy elements , whose k - shell fluorescence yields are close to 100% . if the particle deposits energy in the active volume of the detector , a coincidence is formed and deposited energy is added to the alpha particle energy .if cascade consists of subsequent decays , the alpha particle can be in coincidence with particles .deposited energies of those particles are then added to the alpha particle energy .the algorithm to calculate the coincidences proceeds as follows : 1 .check that decay routes exist , i.e. , transitions are available for the present state .if no route is found , exit the loop .2 . use a random number to select the decay route , i.e. , decay type , line energy and final state index .3 . if the emitted particle is an electron , follow it through the source and its backing .determine if the particle travels towards the detector .4 . if the particle hits the detector , simulate the deposited energy .5 . if particle deposits energy to the detector , a coincidence is formed .add the deposited energy to the alpha particle energy .if the particle is backscattered or transmitted , increase corresponding counters .if the emitted particle is an electron , emit the associated x - ray .go to ( 3 ) .go to ( 1 ) .the lifetimes of the excited states are available in the library file . 
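A compact sketch of the cascade-following loop, steps (1)-(7) above, is given below. The level scheme, branching ratios, x-ray energy and hit probability are invented toy numbers and do not describe any real nuclide; the transport of the emitted particles is reduced to a fixed hit probability with full energy deposition, and, as explained in the next sentence of the text, level lifetimes are ignored.

    import random

    # Toy level scheme: for each excited level, a list of
    # (probability, kind, particle_energy_keV, final_level); level 0 = ground state.
    TOY_LEVELS = {
        2: [(0.6, "gamma", 43.0, 1), (0.4, "ce", 21.0, 1)],
        1: [(0.7, "gamma", 60.0, 0), (0.3, "ce", 38.0, 0)],
    }
    XRAY_KEV = 18.0   # an x-ray is assumed to accompany every conversion electron
    P_HIT = 0.3       # toy probability that an emitted particle deposits energy in the detector

    def cascade_energy(initial_level):
        """Follow the de-excitation cascade of the daughter nucleus and return the
        summed energy (keV) deposited in coincidence with the alpha particle."""
        deposited = 0.0
        level = initial_level
        while level in TOY_LEVELS:                           # (1) do decay routes exist?
            r, cumulative = random.random(), 0.0
            for prob, kind, energy, final in TOY_LEVELS[level]:   # (2) select a route
                cumulative += prob
                if r <= cumulative:
                    break                                    # otherwise the last route is used
            if random.random() < P_HIT:                      # (3)-(5) hit and energy deposition
                deposited += energy
            if kind == "ce" and random.random() < P_HIT:     # (6) accompanying x-ray
                deposited += XRAY_KEV
            level = final                                    # (7) continue towards the ground state
        return deposited

    random.seed(4)
    alpha_kev = 5400.0                       # toy alpha energy feeding the second excited level
    print(alpha_kev + cascade_energy(2))     # alpha energy plus coincident depositions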
however ,when coincidences are calculated , they are not taken into account .the lifetimes are assumed to be short enough , compared to the integration time of data acquisition electronics , for a coincidence to be seen .geometrical detection efficiency and the alpha particle energy loss were investigated by siiskonen and pllnen .they found an excellent agreement with earlier results and measurements . to further confirm the homogeneity of the emission point distribution inside a source , we compared a simulated alpha particle energy spectrum from a thick sample with one obtained by numerical integration . in the comparison ,a parallel alpha particle beam was considered ( corresponding to a very large sdd , polar angle ) in order to keep the numerical integration tractable .the agreement between the simulated spectrum and the one from numerical integration is good ( fig .[ fig : int ] ) .this confirms the homogeneity of the emission point sampling , which is qualitatively also shown in ref .equally good agreement was obtained when the source was assumed to be spherical in shape ( results are not shown here ) .electron backscattering can be used to examine the quality of electron transport , since backscattering is sensitively depent on continuous energy loss and angular deflections in elastic rutherford scattering events .electron backscattering coefficients for various elements are compared in table [ table : bsc ] .the agreement between the experimental values and those of the present work is good .however , the results of the present work are higher than the simulated results of gueorguiev et al .the difference could be explained by different calculation of the continuous energy loss .gueorguiev et al .used different average ionisation energy and a three - point difference scheme .the convergence of the present backscattering calculations was ensured to two significant digits . to investigate the influence of coincidences , measured spectrum of compared to the simulation .the coincidence summing of alpha particles with photons and electrons from is clearly visible as a bump above 5490 kev ( fig .[ fig : am ] ) .this bump is absent at a large sdd . since experimental subshellconversion coefficients for were not available , the relative yields for conversion electrons were set as follows : the yield of the l line was assumed to be 20% and that of l was assumed to be 80% .figure [ fig : am ] illustrates that simulations are accurately able to explain the effect of coincidence summing . when the coincidences are ignored in the simulation , the resulting spectrum clearly disagrees with the measurement ( fig . [fig : amnocoinc ] ) .the present monte carlo simulation code , known as aasi , is designed for simulating energy spectra in alpha spectrometry .the code was originally developed for estimating the influence of source characteristics on the alpha particle spectra , for example in the case when the source quality is not optimal for high - resolution alpha spectrometry .the sources may be considerably thicker than those obtained from radiochemical sample treatment and their thickness may not be uniform .the source may even be an aerosol filter in which radioactive materials are deposited .this option is useful when the presence of alpha particle emitting materials in the filter must be identified rapidly , i.e. 
alpha particles are counted directly without prior radiochemical sample manipulation .this information may be of utmost importance should a nuclear incident or malicious dispersal of radioactive material into the environment occur .later development of the code is focused on the development of modelling of the detector and the effect of alpha - electron and alpha - photon coincidences on the measured spectra .the code can be easily used for various alpha detectors provided that the detector response can be treated as the convolution of a gaussian part of a peak and a double - exponential low - energy tail ( eq . [ eq : peakshape ] ) .the comparison of measured and simulated spectra , especially in the case of a thin source , highlights the importance of coincidence phenomena .when the source - detector distance is small ( less than approximately a few cm ) the coincidences must be taken into account in unfolding the spectra . quantitative separation of nuclides such as and in alpha spectrometryis questionable if coincidences are neglected .developments in the simulation of electrons and photons in the medium also facilitate the use of aasi in basic research .the detector response to various particle beams can be examined .future development of the code will be addressed in the construction of an appropriate library data file and a user - friendly interface .the validity of the results , i.e. the agreement of simulated and those obtained from other calculations or measurements , is verified here and in previous publications . however , validation is a continuous process not limited to this article only .biersack , l.g .haggmark , nucl .instr . and meth . 174 ( 1980 ) 257s. agostinelli , et al .instr . and meth .a 506 ( 2003 ) 250 .briesmeister , mcnp - a general monte carlo n - particle transport code , version 4c ( 2000 ) , los alamos national laboratory , los alamos , new mexico , usa . j.l .ferrero , c. roldn , m.l .acea , e. garca - torao , nucl .instr . and meth .a 286 ( 1990 ) 384 .s. pickering , j. aerosol sci . 15 ( 1984 )533 . c. roldn , j.l .ferrero , f. snchez , e. navarro , m.j .rodrguez , nucl .instr . and meth .a 338 ( 1994 ) 506 .t. siiskonen , r. pllnen , appl .isot . 60 ( 2004 ) 947 .kohler , psplot library home page http://www.nova.edu/ocean/psplot.html ( retrieved in november 2004 ) .firestone , table of isotopes cd - rom , in : s.y.f .chu , v.s .shirley ( eds . ) , eight ed . , version 1.0 , wiley - interscience , 1996 .j. lindhard , v. nielsen , m. scharff , math .36 , vol . 10 ( 1968 ) 2 .g. amsel , g. battistig , a. lhoir , nucl .instr . and meth .b 201 ( 2003 ) , 325 a. lhoir , nucl .instr . and meth . 223 ( 1984 ) 336g. bortels , p. collaers , appl .38 ( 1987 ) 831 .e. steinbauer , g. bortels , p. bauer , j.p .biersack , p. burger , i. ahmad , nucl . instr . and meth .a 339 ( 1994 ) 102 .t. kijima , y. nakase , radiation measurements 26 ( 1996 ) 159 .nigam , m.k .sundaresan , ta - you wu , phys .rev . 115 ( 1959 ) 491 .i. adesida , r. shimizu , t.e .everhart , j. appl .51 ( 1980 ) 5962 .molire , z. naturforschg . 2a ( 1947 ) 133 .krane , introductory nuclear physics , john wiley & sons , new york , 1988 .berger , j.h .hubbell , s.m .seltzer , j.s .coursey , d.s .zucker , available from http://physics.nist.gov/physrefdata/xcom/text/xcom.html ( retrieved in november 2004 ) .scadron , advanced quantum theory , second ed ., springer - verlag , new york , 1991 .d. brusa , g. stutz , j.a .riveros , j.m .fernndez - varea , f. salvat , nucl .instr . 
and meth . a 379 ( 1996 ) 167 . a. markus , an xml - fortran interface , see http://xml-fortran.sourceforge.net/ ( retrieved in october 2004 ) . r. pöllänen , t. siiskonen , p. vesterbacka , radiation measurements 39 ( 2005 ) 565 . gueorguiev , d.i . ivanov , g.m . mladenov , vacuum 47 ( 1996 ) 1227 . l. reimer , c. tollkamp , scanning 3 ( 1980 ) 35 as cited by d.c . joy , http://www.napchan.com/bse/index.htm ( retrieved in december 2004 ) . h.e . bishop , x - ray optics and microanalysis , in : r. castaing , p. deschamps , j. philibert ( eds . ) , hermann , paris , 1965 .

[Table bsc: comparison between experimental (exp.) and simulated (MC) electron backscattering coefficients for various elements at a fixed electron energy (in keV) and for normal incidence; in all simulations the screening model of Nigam, eq. ([nigam]), was used.]

[Figure coords: definition of the scattering and azimuthal angles and of the laboratory polar angles; the z-axis is parallel to the detector symmetry axis and points from the source towards the detector.]

[Figure int: spectrum simulated from a thick source (central thickness 2 µm, solid black line) compared with a spectrum obtained by numerical integration (dashed grey line) from the same source; alpha particles were assumed to travel in parallel tracks.]

[Figure amnocoinc: spectrum of 241Am when coincidences are taken into account (solid line, same as in fig. [fig:am]) and when coincidences are ignored (circles); see the caption of fig. [fig:am] for simulation parameters.]
|
A Monte Carlo code, known as AASI, is developed for simulating energy spectra in alpha spectrometry. The code documented here is a comprehensive package in which all the major processes affecting the spectrum are included. A unique feature of the code is its ability to take into account coincidences between the particles emitted from the source. Simulations and measurements highlight the importance of coincidences in high-resolution alpha spectrometry. To show the validity of the simulated results, comparisons with measurements and with other simulation codes are presented.

Keywords: Monte Carlo simulation; alpha spectroscopy; coincidences. PACS: 02.70.Uu, 29.40.Wk, 29.30.Ep
|
granular materials are omnipresent in nature and widely used in various industries ranging from food , pharmaceutical , agriculture and mining among others . in many granular systems interesting phenomena like dilatancy , anisotropy , shear - band localization , history - dependence , jamming and yield have attracted significant scientific interest over the past decade .the bulk behavior of these materials depends on the behavior of their constituents ( particles ) interacting through contact forces . to understand their deformation behavior , various laboratory element testscan be performed .element tests are ( ideally homogeneous ) macroscopic tests in which one can control the stress and/or strain path .such macroscopic experiments are important ingredients in developing and calibrating constitutive relations and they complement numerical investigations of the behavior of granular materials , e.g. with the discrete element method .different element test experiments on packings of bulk solids have been realized experimentally in the biaxial box while other deformations modes , namely uniaxial and volume conserving shear have also been reported .additionally , element tests with more complex , non - commercial testers have been reported in literature , even though their applications are restricted for example to the testing of geophysically relevant materials at relatively higher consolidating stresses .the testing and characterization of dry , non - sticky powders is well established .for example , rotating drum experiments to determine the dynamic angle of repose have been studied extensively as a means to characterize non - cohesive powders , even though these tests are not well defined with respect to the powder stress and strain conditions .the main challenge comes when the powders are sticky , cohesive and less flowable like those relevant in the food industry . for these powders ,dynamic tests are difficult to perform due to contact adhesion and clump formation .one possibility to overcome this challenge is to perform confined quasi - static tests at higher consolidation stresses .one element test which can easily be realized ( experimentally and numerically ) is the uniaxial ( or oedometric ) compression ( in a cylindrical or box geometry ) involving deformation of a bulk sample in one direction , while the lateral boundaries of the system are fixed .this test is particularly suited for determining the poroelastic properties of granular materials .while most uniaxial tests on dry bulk solids have been devoted to studying the relationship between pressure and density and the bulk long time consolidation behavior , the dynamics of the time - dependent phenomena has been less studied in experimental and practical applications . for example , in standard shear testers like the jenike shear tester and the schulze ring shear tester , during yield stress measurements , the focus is usually not on the relaxation behavior . considerable stress - relaxation of bulk materials can even disturb yield stress measurements .additionally , most cohesive contact models used in discrete element simulation of granular materials do not account for the time dependent relaxation behavior , similar to those observed in viscoelastic materials such as polymers , gels , in dielectric relaxation and in the attenuation of seismic waves . 
for the improvement of both discrete element contact models and constitutive macro models relating to cohesive powders , it is necessary to have an experimental and theoretical understanding of the stress response of cohesive materials under different loading conditions . for viscoelastic materials ,the relaxation has been reported to imply a memory effect and can be described using convolution integrals transformed to their fractional form and containing a relaxation modulus that describes the response of the system to stress . for these materials , phenomenological models involving the combination of springs and dashpots , such as the maxwell , zener , anti - zener , kelvin - voigt , andthe poynting - thomson models have been developed ( see refs . and references therein ) .even though stress relaxation has also been observed in granular media , not much work has been done in providing a theoretical description of this phenomenon for granular materials . in the present study , using two simple testers , we perform oedometric compression tests with the main goal of investigating the relaxation behavior of industrial powders at different stress levels under constant strain ( volume ) .another goal is to provide a quantitative comparison between the relaxation behavior as observed in two testers , namely the lambdameter and the ft4 powder rheometer , in order to confirm that this behavior occurs irrespective of the device used .the lambdameter has the peculiar advantage that both vertical and horizontal stress can be obtained simultaneously unlike the ft4 powder rheometer and other simpler oedometric test setups .finally , we will propose a simple model for stress relaxation that captures the relaxation of cohesive powders at different compaction levels .the work is structured as follows : in section [ sec : characters ] , we provide a characterization of the material sample , and in section [ sec : expsetup ] the description of the experimental devices and the test protocols . in section[ sec : creeptheory ] , we present the theoretical model for stress relaxation . section [ sec : resultsdiscs ] is devoted to the discussion of experimental and theoretical results , while the conclusions and outlook are presented in section [ sec : creepconclusn ] .in this section , we provide a brief description of the experimental samples along with their material properties . in order to investigate the relaxation behavior ,two cohesive reference samples were chosen , namely cocoa powder and eskal 500 limestone .the choice is based on several selection factors , among which are the suitability for different industrial applications , ability to withstand repeated loading without changes in the property of the sample and long term availability / storage of the samples .the eskal limestone has been used extensively as reference cohesive powder , and is made available in convenient amounts in a collaborative european project , c.f .www.pardem.eu .scanning electron microscope ( sem ) images obtained using a hitachi tm 1000 instrument ( hitachi ltd , japan ) for both powders are displayed in fig .[ morphology ] .the particle size distributions are measured using the helos testing instrument ( sympatec gmbh , germany ) .while limestone powder is dispersed with air pressure , we use the wet mode to disperse cocoa powder since it forms agglomerates . 
For the wet mode, cocoa powder is dispersed in dodecane, an oily liquid, in order to retain the fat layer, while ultrasound (vibration) is applied to stress the dispersion and break up the agglomerates. The particle density is measured by helium pycnometry (AccuPyc, Micromeritics, USA), while the water content is given as the ratio of the difference between the original and the dried mass (after 24 hours in an oven at 100 °C) to the original sample mass. The bulk cohesion is the limiting value of the shear stress for which the normal stress is equal to zero and is determined from shear experiments with a Schulze ring shear tester (RST-01.pc by Dietmar Schulze Schüttgutmesstechnik, Germany). A more specific description of the experimental samples is provided in the following section. One cohesive sample used in this work is cocoa powder with 12% fat content, which is representative of the material used as the basic ingredient in the production of chocolate and related beverages. The material properties, including size distribution, particle density and water content, are shown in table [tablematerialppt], along with a scanning electron microscope visualization of its morphology in fig. [cocoa12]. We note that even though the powder is relatively hygroscopic, its humidity does not change significantly during the experiments. Additionally, the experiments are performed over a relatively short period under ambient conditions, and samples are sealed in air-tight bags when not in use to minimize effects that could arise from changes in the product humidity.

[Table tablematerialppt: material parameters of the experimental samples.]

[Table parametertablec]

In summary, we find that faster loading rates leave insufficient time for relaxation, with differences most visible at lower stress levels; the effect of the loading rate diminishes at higher stress levels. We have performed oedometric experiments to study the slow relaxation of two cohesive powders under different consolidation stresses. One goal was to study the slow relaxation behavior in two experimental devices, namely the custom-built lambdameter and the commercially available FT4 powder rheometer. Additionally, a comparison of the relaxation behavior of two industrially relevant cohesive powders, namely cocoa powder with 12% fat content and Eskal 500 limestone powder, was carried out. The relaxation behavior, i.e., the decrease in stress occurring at constant volume, is qualitatively reproduced in the two testing devices. Regarding the dependence on aspect ratio, a larger strain is required in the setup with the higher aspect ratio (1.0) to reach the same intermediate stress in comparison to the setup with the lower aspect ratio (0.4). The relaxation model, cf. ([eq:relaxmodel]), captures the decrease in stress during relaxation at different stress levels very well for both aspect ratios, with the response time fluctuating and the dimensionless material parameter identical for both aspect ratios and systematically decreasing from low to high stress levels.
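Equation ([eq:relaxmodel]) itself is not reproduced in this excerpt; as a stand-in consistent with the reported power-law relaxation, the sketch below fits a generic two-parameter decay, sigma(t) = sigma0 * (1 + t/t0)^(-beta), to synthetic relaxation-stage data. The functional form, the parameter names and all numerical values are illustrative assumptions, not the model of this paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def relaxation(t, sigma0, t0, beta):
        """Illustrative power-law stress relaxation at constant volume:
        sigma(t) = sigma0 * (1 + t/t0)**(-beta).  A generic stand-in for the
        paper's relaxation model, which is not reproduced in this excerpt."""
        return sigma0 * (1.0 + t / t0) ** (-beta)

    # synthetic "relaxation stage" data: 300 s at nominally constant strain
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 300.0, 150)
    sigma = relaxation(t, 20.0, 10.0, 0.1) + rng.normal(0.0, 0.02, t.size)  # kPa

    popt, _ = curve_fit(relaxation, t, sigma, p0=(sigma[0], 5.0, 0.2), maxfev=10000)
    sigma0_fit, t0_fit, beta_fit = popt
    print(f"sigma0 = {sigma0_fit:.2f} kPa, t0 = {t0_fit:.2f} s, beta = {beta_fit:.3f}")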
at the same stress level ,cocoa powder is found to relax more slowly but with a larger relative amplitude than eskal . in terms of the parameters of the model , the response timescale for eskal , ,is several orders of magnitude smaller than that of cocoa . on the other hand, the dimensionless parameter shows a decreasing trend for both materials and is only about a factor two higher for cocoa than for limestone . in terms of the relaxation duration, we find that longer previous relaxation leads to observable differences in relative stress reduction , reducing the present relaxation .faster loading rates allow for insufficient time for relaxation with differences in the dimensionless parameter most visible for relaxation at low stresses .the effects of loading rate are attenuated as stress is increased .further studies will focus on the comparison between the two testing devices for identical aspect ratios and the solution of the model for finite compression rate .the effects of system walls of the experimental devices also needs to be given further attention .the validity of the proposed model for relaxation at constant stress ( or strain creep ) will be investigated .finally , the incorporation of the features of the present findings into discrete element contact models for cohesive powders will be explored .a comparison of testers is necessary for several reasons . apart from the fact that several literatures have reported on comparative studies between different testers used in the characterization of cohesive powders ,most differences observed have been attributable to human errors , differences in the filling procedure and the measurement conditions . for our experiments , it is important to confirm that the relaxation feature can be reproduced in different testers and is not due to drift or bias in our testing equipments .the material used for this comparison is cocoa powder . in order to compare the response of the two testing equipments to vertical ( axial ) stress , we perform uniaxial compression test on cocoa powder with a carriage speed of 0.05mm / s ( protocol 1 in table [ protocoltable ] ) .five intermediate relaxation stages ( r1r5 ) , in which the top piston / punch is held in position for 300 seconds at specific intervals of 5 kpa during the compression tests are included . in fig .[ comp_cocoa_time ] we plot the vertical stress as function of time . during loading ,the axial stress builds up with time until the first target stress of 5 kpa ( at r1 ) is reached .we observe a slower increase of the axial stress in the lambdameter in comparison to the ft4 powder rheometer even though the respective pistons were moved with the same speed .consequently , the vertical compaction and thus strain at constant vertical stress is higher in the lambdameter than in the ft4 powder rheometer .this is possibly due to the difference in aspect ratio of the experimental moulds and filling procedures ( e.g. conditioning of the sample by a rotating blade in case of the ft4 powder rheometer ) resulting in different masses for both equipments , leading to different initial densities of the same sample , and consequently producing different response to compressive stress . 
with the initiation of the first relaxation r1 at 5 kpa, we observe for both equipments a time - dependent stress relaxation during the rest - time of 300 seconds .moreover , according to the equation of janssen , due to the larger diameter of the lambdameter , the stress away from the powder surface is larger , resulting in higher mean vertical stresses .this observation , along with other observations reported in literature for other granular materials , confirms that the stress relaxation is not due to a drift in the measuring equipments but it is a material feature as discussed in section [ sec : compare_equip_aspectratio ] . from 5 kpa, we observe an approximate 45 percent relative decrease in stress for the lambdameter compared to 22 percent in the ft4 powder rheometer . due to the non - porous lid and the larger diameter of the lambdameter , at similar height, the escape of the air trapped and compressed in the powder takes more time . with the activation of axial compression after the relaxation ,we observe a sharp increase in the axial stress until the next intermediate stress state is reached . the evolution of stress and strain in both testing equipments is shown in fig .[ comp_cocoa_strain ] , where the vertical stress is plotted against volumetric strain .we observe that the lambdameter initially produces a softer response to the applied stress , as evidenced by the slower increase in the vertical stress during loading . at higher intermediate strain ,similar stress increase with strain is observed in the lambdameter as compared to the ft4 powder rheometer .the comparison of the response of both testing equipments for identical aspect ratio is a subject for future work and will be presented elsewhere .helpful discussions with n. kumar and m. wojtkowski are gratefully appreciated .this work is financially supported by the european union funded marie curie initial training network , fp7 ( itn-238577 ) , see http://www.pardem.eu/ for more information .baumann , g. , scheffler , t. , jnosi , i. m. , wolf , d. e. , 1996 .angle of repose in a two - dimensional rotating drum model . in : wolf ,d. e. , schreckenberg , m. , bachem , a. ( eds . ) , traffic and granular flow .world scientific , singapore , pp .347351 .freeman , r. , 2007 . measuring the flow properties of consolidated , conditioned and aerated powders a comparative study using a powder rheometer and a rotational shear cell. powder technology 174 ( 1 - 2 ) , 2533 .imole , o. i. , kumar , n. , magnanimo , v. , luding , s. , 2013 .hydrostatic and shear behavior of frictionless granular assemblies under different deformation conditions .kona powder and particle journal 30 , 84108 .imole , o. i. , wojtkowski , m. , magnanimo , v. , luding , s. , 2013 .force correlations , anisotropy , and friction mobilization in granular assemblies under uniaxial deformation . in : yu , a. ,luding , s. ( eds . ) , powders and grains , aip conf .proc . vol .601604 .kruijer , m. p. , warnet , l. l. , akkerman , r. , feb .modelling of the viscoelastic behaviour of steel reinforced thermoplastic pipes .composites part a : applied science and manufacturing 37 ( 2 ) , 356367 .kwade , a. , schulze , d. , schwedes , j. , 1994 .design of silos : direct measurement of stress ratio [ auslegung von silos .unmittelbare messung des horizontallastverhaeltnisses ] .beton- und stahlbetonbau 89 ( 3 ) , 5863 . morgeneyer , m. , brendel , l. , farkas , z. , kadau , d. , wolf , d. e. , schwedes , j. 
, 2003 .can one make a powder forget its history ?proceedings of the 4th international conference for conveying and handling of particulate solids , budapest , 12118 .schiessel , h. , metzler , r. , blumen , a. , nonnenmacher , t. f. , 1995 .generalized viscoelastic models : their fractional equations with solutions .journal of physics a : mathematical and general 28 ( 23 ) , 65676584 .thakur , s. c. , imole , o. i. , wojtkowski , m. b. , magnanimo , v. , montes , e. c. , ramaioli , m. , ahmadian , h. , ooi , j. y. , 2013 .characterization of cohesive powders for bulk handling and dem modelling . in : bischoff , m. ,oate , e. , owen , d. r. j. , ramm , e. , wriggers , p. ( eds . ) , iii international conference on particle - based methods - fundamentals and applications .icnme , pp .112 .walton , o. r. , 1995 .force models for particle - dynamics simulations of granular materials . in : guazzelli ,e. , oger , l. ( eds . ) , mobile particulate systems .vol . 287 .kluwer academic publishers , dordrecht , pp .
|
We present findings from uniaxial (oedometric) compression tests on two cohesive, industrially relevant granular materials (cocoa and limestone powder). Experimental results are presented for the compressibility, tested with two devices: the FT4 powder rheometer and the custom-made lambdameter. We focus on the stress response and the slow relaxation behavior of the cohesive samples tested. After compression ends, at constant volume, the ongoing stress relaxation is found to follow a power law consistently for both cohesive powders and for the different testing devices. A simple (incremental algebraic evolution) model is proposed for the stress relaxation in cohesive powders, which includes a response timescale along with a second, dimensionless relaxation parameter. The reported observations are useful for the improvement of both discrete element simulations and constitutive macroscopic models for cohesive granular materials.

Keywords: stress relaxation, cohesive powders, uniaxial compression, equipment comparison, aspect ratio, relaxation theory
|
experimental information about cosmic particles at very high energies is obtained through the study of atmospheric showers induced by these particles and is hence indirect. a necessary ingredient of these studies is therefore a good understanding of a shower initiated by a primary particle with given parameters .since the shower development is a complicated random process , the monte - carlo simulations are often used to model atmospheric showers .physical parameters are then reconstructed from the simulations and compared to real data . at very high energies , however , the number of particles in a shower is so large that the simulations start to require unrealistic computer resources . among several ways to simplify the problem and to reduce the computational time , the thinning approximation is currently the most popular one .its key idea is to track only a representative set of particles ; while very efficient in calculations and providing correct values of observables on average , this method introduces artificial fluctuations because the number of tracked particles is reduced by several orders of magnitude . these artificial fluctuations mix with natural ones and therefore reduce the precision of determination of physical parameters . the standard approachto account for natural fluctuations in the air - shower simulations is to fix all shower parameters and to simulate sufficient number of artificial showers . technically , these showers differ by initial random seed numbers .all interactions in a simulated shower are fixed by these numbers for a given thinning level .random variations of these numbers result in a plethora of possible interaction patterns which end up in a distribution of an observable quantity of interest calculated for the showers with exactly the same initial physical parameters .this distribution thus intends to represent intrinsic fluctuations in the shower development .both the central value and the width of this distribution are important for physical applications . in practice, however , the width of the distribution arises from two sources : physical fluctuations and artificial fluctuations introduced by thinning . to obtain the physical width alone, one should in principle perform simulations without thinning .this is hardly possible for the highest energies at the current level of computational techniques since one often needs to simulate thousands of events for a typical study .the aim of the present work is to estimate the relative size of these artificial fluctuations ( for the first time it is done by direct comparison of showers simulated with and without thinning ) and to develop an efficient resource - saving method to suppress them in realistic calculations . in sec .[ sec : thinning ] , we start with a description ( sec . [sec : standard - thin ] ) of the standard thinning algorithm and explain why its use introduces additional fluctuations .then , we briefly recall , in sec . [ sec : suppress - fluct ] , conventional approaches to avoid or suppress these fluctuations . 
sec .[ sec : library ] describes the library of showers simulated without thinning for this study .this library is publicly available .[ sec : fluct - size ] is devoted to a quantitative study of the artificial fluctuations .a new method , multisampling , which allows one to suppress efficiently these unphysical fluctuations without invoking extensive computer resources , is suggested and discussed in sec .[ sec : msampl ] .[ sec : concl ] contains the discussion of the method and our conclusions .the number of particles in an extended air shower ( eas ) , and hence the cpu time and disk space required for its full simulation , scales with the energy of the primary particle . at energies in excess of ev ,the number of particles of kinetic energy above 100 mev at the ground level exceeds and the time required to simulate such a shower at a computer with a few - ghz cpu is of order of several days . a typical vertical shower induced by a hadron of ev requires about 100 gb of disk space and a month of cpu time .modelling individual showers with incident energies of about ev is at the limit of realistic capabilities of modern computers ; meanwhile one needs thousands of simulated showers for comparison with experimental data . as a result of a full simulation of a shower ,one obtains the list of all particles at the ground level .this information is redundant for many practical purposes .real ground - based experiments detect only a small fraction of these particles , so for calculating average particle densities one does not need to know precise coordinates and energies of all particles . in the thinning approximation , groups of particles are replaced by effective representative particles with weights .let us briefly recall how the thinning approximation works ( see e.g.ref . for a detailed discussion ) . denote the primary energy by and introduce a parameter called the thinning level .for each subsequent interaction , consider the energies of the secondary particles created in this interaction .if the condition is satisfied , then the method prescribes to keep one of the secondary particles and to discard the others . the probability to keep the particle is proportional to its energy , to the selected particle , the weight is assigned , where is the weight of the initial particle of this interaction ( for the particle which initiated the shower ) .if the condition ( [ thin0 ] ) is not satisfied , then the so - called statistical thinning operates : among the secondary particles , a subsample of ones with energies is considered and ( one or more ) effective particles are selected with probabilities these particles , to which the weights are assigned , are kept for further simulations together with original particles which have had energies . for useful values of ,the number of particles tracked is reduced by a factor of . for a random process , this change in the number of particles ( and consequently , in the number of interactions ) results in the increase of fluctuations compared to the fully simulated process .this means that a part of fluctuations in the development of a shower simulated with thinning is artificial , that is it is present neither in the full shower simulated with nor in a real eas . for a number of applications , these fluctuations are undesirable and should be suppressed or at least brought under control . 
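The inequalities defining the two thinning regimes did not survive in the text above; the sketch below fills them in with the standard Hillas prescription (keep a single secondary when the summed secondary energy falls below the threshold, otherwise thin only the secondaries below the threshold). It is an illustrative Python version, not the CORSIKA implementation, and the energies in the usage example are toy values.

    import random

    def thin_interaction(energies, parent_weight, e_primary, eps_th):
        """Thin one interaction of a shower with primary energy `e_primary` at
        thinning level `eps_th`.  Returns the list of (energy, weight) pairs of
        the particles kept.  The explicit inequalities follow the standard
        Hillas prescription, filled in here because the corresponding formulas
        were not preserved in the text."""
        e_thr = eps_th * e_primary
        total = sum(energies)
        if total < e_thr:
            # keep exactly one secondary, chosen with probability proportional to
            # its energy; its weight compensates for the discarded particles
            r, cumulative = random.random() * total, 0.0
            for e in energies:
                cumulative += e
                if r <= cumulative:
                    return [(e, parent_weight * total / e)]
            return [(energies[-1], parent_weight * total / energies[-1])]
        kept = []
        for e in energies:
            if e >= e_thr:
                kept.append((e, parent_weight))      # energetic secondaries are never thinned
            else:
                p = e / e_thr                        # "statistical thinning" survival probability
                if random.random() < p:
                    kept.append((e, parent_weight / p))
        return kept

    random.seed(5)
    print(thin_interaction([2e5, 8e4, 5e4, 3e4], parent_weight=1.0,
                           e_primary=1e9, eps_th=1e-4))   # toy energies, e.g. in GeV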
in the framework of the thinning method ,the fluctuations are effectively suppressed by introducing the upper limit on the weight factor .the number of `` real '' particles tracked is thus enlarged .maximal weights for hadrons and for electromagnetic particles may be assigned in different ways . for a given problem , the optimal values of the maximal weightsmay be found in order to minimise the ratio of the size of artificial fluctuations to the computational time . in what follows , when we refer to the thinning with weights limitation , we will use the maximal weights optimised in ref . for the calculation of the particle density .the optimal values of parameters of thinning procedure may depend on the interaction models adopted in simulations for a given problem . in principle , the weights should be optimized for each combination of the models ( which are updated every few years ) and for each particular task ( different observables , primaries , energies , etc . ) .however , this optimization requires a dedicated time consuming study in each case .we suggest another approach to the problem in sec . [sec : msampl ] .we have performed simulations of air showers without thinning by making use of the corsika simulation code . for different showers ,we have used qgsjet 01c and qgsjet ii-03 as high - energy and gheisha 2002d as low - energy hadronic interaction models .currently , the library contains about 40 showers induced by primary protons , gamma - rays and iron nuclei with energies between ev and ev and zenith angles between and .the showers have been simulated for the observational conditions ( atmospheric depth and geomagnetic field ) of either agasa or the telescope array experiments .the shower library is publicly available at http://livni.inr.ac.ru .detailed information about input parameters used for the simulation of each shower is available from the library website together with full output files .the access to the data files is provided freely upon request . for users not familiar with the corsika output format , a `` datafile reading programming manual ''is given , containing a working example in c++ .free access to the computational resources of the server is provided to avoid lengthy copying of the output files ( some of which exceed 100 gb in size ) .an access request form along with conditions of the usage of the library are available from the library website . given the amount of computing resources required for simulation , each shower simulated without thinning is valuable .we hope that the open library would be useful in studies of various physical problems , notably facing the improved precision of modern experiments which often exceeds the precision of simulations .the library is being continuously extended ; we plan to supplement it with showers of higher energies in the future .with a library of showers simulated without thinning , the comparison of the observables reconstructed from showers with and without thinning is possible .this allows one to estimate the effect of the approximation . 
to do that , for each shower without thinning ( )we have simulated a number of showers with different thinning levels ( ) .all initial parameters ( including the random seed numbers ) were kept the same as in the simulation , which enabled us to reproduce exactly the same first interaction in the entire set of showers .three important observables the signal density at 600 m from the shower axis , the muon density at 1000 m from the axis , and the depth of the maximal shower development were reconstructed for each of the showers following the data - processing operation adopted by the agasa experiment .the detector response was calculated with the help of geant simulations in ref . . and were obtained by fitting the corresponding density at the ground level with empirical formulae . for fitting purposethe density was binned into 50m - width rings centered at the shower axis . was obtained by fitting the longitudinal shower profile with the empirical gaisser - hillas curve ( incorporated into corsika ) .this procedure was repeated for all showers in the livni library with the results similar to those shown in figs .[ fig : distrib][fig : fluct1a ] .figure [ fig : distrib ] shows the distribution of the reconstructed for showers with thinning simulated with the same initial random seed ( and thus the same first interaction ) as three representative livni showers .though quite wide for thinning , the distributions of are well centered at unity . the distribution of the mean values of for the ensembles of the thinned showers is presented in fig .[ fig : distrib20 ] for a uniform sample of twenty different showers . for each of them, 500 showers with were simulated with the same first interaction as the corresponding shower .the values of the observable averaged over 500 thinned showers approximate the `` exact '' with the accuracy of about , which is consistent with the level of statistical fluctuations , .we have found the same distributions for other observables considered , and .the important conclusion is that for the first time , _ the usual assumption that thinning does not introduce systematic errors in the reconstructed observables has been checked by explicit comparison of shower and averaged showers _ , at least for energies up to ev , observables , and , and proton , photon and iron primaries .the spread of observables reconstructed from thinned showers depends on the thinning level .it is not the width of the distribution but the average deviation of the observables from those of an shower which is the most interesting for practical purposes .this quantity is plotted in fig .[ fig : fluct1a ] for a typical shower from the livni library .we note in passing that , technically , to study the spread at a given with corsika , one has to simulate showers with slightly different thinning levels ( otherwise they all would be absolutely identical , given a fixed random seed ) .for instance , to obtain the points corresponding to in fig .[ fig : fluct1a ] , we have simulated 500 showers with different thinning levels in the interval . in most casesone is not interested in details of a particular realization of a shower ; it is the ensemble of simulated showers with fixed initial parameters but varied random seeds which is compared to the real data .the study of sec .[ sec : fluct - size - single ] does not help seriously to estimate the effect of thinning on these distributions of parameters because the size of fluctuations seen , e.g. 
, in fig .[ fig : fluct1a ] is determined by a combination of artificial fluctuations and a part of real ones : the random seed together with initial conditions fixes the first interaction , but different thinning levels introduce variations in other interactions and effectively change the simulation of the entire shower development . to estimate the effect of thinning on the distribution of observables, we have simulated samples of showers with fixed initial conditions but different random seeds for various thinning levels , including . we have considered samples of ev vertical proton - induced showers consisting of 20 showers with , 100 showers with , 100 showers with and weight limitation , 100 showers with and 100 showers with and weight limitation .the simulations have been performed using qgsjet ii and gheisha as hadronic interaction models , for the observational conditions of the telescope array experiment .the distributions of , and have been reconstructed with statistical fluctuations ( originated from the limited number of showers in the samples ) of about , that is , about 23% for showers and about 10% for the other samples .figure [ fig : ta1e17s ] illustrates the widths of the distributions obtained at different .artificial fluctuations in and caused by thinning are clearly seen by comparing case with others ( for the artificial fluctuations are quite small ) .we note that , for a given , the fluctuations should be larger at high energy since the multiplicity of hadronic interactions grows with energy and thinning starts to operate earlier in the shower affecting the first few interactions which determine the fluctuations . for many practical purposes ,these artificial fluctuations should be efficiently suppressed .from the results of the previous section , we conclude that the use of thinning is well motivated when one is interested in the reconstruction of the central values of fluctuating observables ( the most important application is e.g. to establish a relation between , say , and energy for a given experimental setup ) .on the other hand , thinning may limit the precision of composition studies , where the observed value of some quantity is compared to the simulated distributions of the same quantity for different primaries , and the width of these distributions is of crucial importance ( see e.g. the proton iron comparison in examples of ref . ) .as it has been pointed out above , the effect of physical fluctuations on the distribution of an observable quantity should be , in principle , estimated by simulating a set of showers with the same physical parameters , with different random seeds and without thinning . to obtain a good approximation to this distribution ,we make use of the results of sec .[ sec : fluct - size - single ] ( see , in particular , figs . 
[fig : distrib ] and [ fig : distrib20 ] ) .the average of an observable over a sample of thinned showers with a fixed initial random seed approximates the value of the same observable for an shower with the same random seed with a good accuracy .the distribution of observables for showers with different random seeds is then approximated by a distribution of these approximated observables calculated for samples with random seeds varying from one sample to another but fixed inside a sample .a practical way to do this is as follows : * instead of a single shower with , simulate showers with some and fixed random seed ; * reconstruct the observable for each of showers , average over these realizations and keep this average value which approximates the result for a single shower without thinning ; * repeat the procedure times for different random seeds to mimic a simulation of showers without thinning and to obtain the required distribution of the observable .we will refer to this procedure as to _ multisampling _ . even for relatively large ,averaging over a sufficiently large number of showers ( ) gives a good approximation to an value of the observable ; the larger , the better the approximation . the required value of may be estimated as follows .consider the distribution of an observable reconstructed from showers simulated with the thinning level close to for a given initial random seed .assume that the distribution is gaussian with the width ( though the qualitative conclusions do not depend on the exact form of the distribution , we note that , in practice , it is indeed very close to gaussian ) ; then one needs measurements to know the mean value with the precision .numerical results for the livni showers demonstrate that multisampling for and results in the precision of in the reconstruction of , and of the original showers .the distributions of parameters reconstructed from showers without thinning are consistent ( within statistical errors ) with those extracted by making use of multisampling .the distributions of are presented in fig .[ fig : s600distrib2 ] . in fig .[ fig : ta5e19s ] we present the widths of the distributions obtained with the usual thinning and with multisampling for ev vertical proton - induced showers ; the limited statistics ( we used showers ) implies the statistical uncertainty of about . the gain in precisionis clearly seen ; for the case of ev the multisampled distribution ( which is expected to mimic the distribution with a good accuracy ) allows us to estimate the size of purely artificial fluctuations caused by thinning .for instance , for with weights limitations , these fluctuations remain at the level of for and of for .let us note in passing that , for this particular simulation ( ev vertical protons at the telescope array location ) and for our choice of hadronic models ( qgsjet ii and gheisha ) , the choice of maximal weights suggested in ref . 
may not be optimal .let us compare now the computer resources needed for calculations with the standard thinning ( with and without weights limitations ) and with multisampling .the disk space scales as the number of simulated particles ; fig .[ fig : disk - versus - t ] illustrates this fact .we see that the multisampling ( ) saves the disk space compared to with weights limitation , giving at the same time gain in the precision of simulations .the cpu time is very sensitive to the choice of the hadronic interaction model : since thinning starts to work when the number of particles is large enough , the first few interactions are simulated in full even for relatively large .if the high - energy model is slow , then the effect of multisampling on the computational time is not so pronounced . by variations of the hadronic interaction models ,we have estimated the average time consumed by qgsjet ii , sybill , fluka and gheisha for simulations of showers at energies ev and ev . for vertical proton showers , ( ) multisampling is about 5 times faster than thinning with weights limitation for sybill while for ( very slow ) qgsjet ii , both take roughly the same time . a way to change the multisampling procedure in order to gain in the cpu time for any hadronic modelis discussed below in sec .[ sec : concl ] .a library of atmospheric showers has been simulated without the thinning approximation .the showers have been used for a quantitative direct study of the effect of thinning on the reconstruction of signal ( ) and muon ) densities at the ground level as well as on the depth of the maximal shower development .we demonstrate that thinning _ does not introduce systematic shifts _ into these observables , as was conjectured but never explicitly checked .we estimate the size of artificial fluctuations which appear due to the reduction of the number of particles in the framework of the thinning approximation ; these unphysical fluctuations may affect the precision , e.g. 
, of the composition studies .for instance , at the energies of ev for vertical proton primaries , artificial fluctuations are about 10% in the signal density at 600 m and about 12% in the muon density at 1000 m for thinning with weight limitations .an effective method to suppress these artificial fluctuations , multisampling , is suggested and studied .the method does not invoke any changes in simulation codes ; only parameters of , say , the corsika input are affected .compared to the thinning with weights limitations , it gives a similar precision but allows one to gain an order - of - magnitude decrease in the required disk space .gain in the cpu time depends on the speed of the high - energy interaction model : it is of order for fast ones ( sybill ) and of order 1 for slow ones ( qgsjet ii ) .a way to change the multisampling procedure in order to further improve the gain in the cpu time is to simulate the high - energy part of a shower once for each initial random seed while having the low - energy part multisampled .the multisampling procedure described above is a particular case of such improved procedure with a high - energy part restricted to the first interaction only .we would expect the modification to make it possible to conserve the physical fluctuations in the second and several following interactions and will allow for an order - of - magnitude improvement in the computational time for any hadronic model .however , it would require ( simple ) changes in the simulation codes thus loosing an important advantage of the multisampling discussed above : to implement the latter , one operates with the standard simulation code ( e.g. , corsika ) without any modifications .this minimal change is to add the option to start simulations from a _ predefined set _ of the primary particles .we are indebted to t.i .rashba and v.a .rubakov for helpful discussions .this work was supported in part by the intas grant 03 - 51 - 5112 , by the russian foundation of basic research grants 07 - 02 - 00820 , 05 - 02 - 17363 ( dg and gr ) , by the grants of the president of the russian federation ns-7293.2006.2 ( government contract 02.445.11.7370 ; dg , gr and st ) and mk-2974.2006.2 ( dg ) and by the russian science support foundation ( st ) .numerical part of the work was performed at the computer cluster of the theory division of inr ras .our library of showers without thinning is publicly available at http://livni.inr.ac.ru .l. g. dedenko , can . j. phys .* 46 * , 178 ( 1968 ) .a. m. hillas , nucl .suppl . *52b * , 29 ( 1997 ) . m. nagano , d. heck , k. shinozaki , n. inoue and j. knapp , astropart .* 13 * , 277 ( 2000 ) .m. kobal , astropart .* 15 * , 259 ( 2001 ) .d. heck , g. schatz , t. thouw , j. knapp and j. n. capdevielle , `` corsika : a monte carlo code to simulate extensive air showers , '' fzka-6019 .n. n. kalmykov , s. s. ostapchenko and a. i. pavlov , nucl .* 52b * , 17 ( 1997 ) . s. ostapchenko , nucl . phys . proc .suppl . * 151 * , 143 ( 2006 ) .h. fesefeldt , `` the simulation of hadronic showers : physics and applications , '' cern - dd - ee-80 - 2 .n. chiba _ et al ._ , nucl .instrum .a * 311 * , 338 ( 1992 ) .f. kakimoto _ et al ._ [ ta collaboration ] , proc .28th icrc , tsukuba , 2003 .g. i. rubtsov , talk given at the 3rd international workshop on the highest energy cosmic rays and their sources : forty years of the gzk problem , http://www.inr.ac.ru/~st/gzk-40_program.html n. sakaki _ et al ._ , proc .27th icrc , hamburg , 2001 , * 1 * , 333 .s. yoshida _ et al ._ , j. 
phys .g * 20 * , 651 ( 1994 ) .n. hayashida _ et al ._ [ agasa collaboration ] , j. phys .g * 21 * , 1101 ( 1995 ) .t. k. gaisser and a. m. hillas , proc .15th icrc , plovdiv * 8 * , 353 ( 1977 ). d. s. gorbunov , g. i. rubtsov and s. v. troitsky , arxiv : astro - ph/0606442 . s. v. troitsky , talk given at the 3rd international workshop on the highest energy cosmic rays and their sources : forty years of the gzk problem , http://www.inr.ac.ru/~st/gzk-40_program.html
|
the most common way to simplify extensive monte - carlo simulations of air showers is to use the thinning approximation . we study its effect on the physical parameters reconstructed from simulated showers . to this end , we have created a library of showers simulated without thinning with energies from ev to ev , various zenith angles and primaries . this library is publicly available . physically interesting applications of the showers simulated without thinning are discussed . observables reconstructed from these showers are compared to those obtained with the thinning approximation . the amount of artificial fluctuations introduced by thinning is estimated . a simple method , multisampling , is suggested which results in a controllable suppression of artificial fluctuations and at the same time requires less demanding computational resources as compared to the usual thinning .
|
from its early days quantum computing was perceived as a means to efficiently simulate physics problem , and a host of results have been derived along these lines for quantum wiesner:96,meyer:97,boghosian:97a , abrams:97,zalka:98,lidar:98rc , ortiz:00,terhal:00,freedman:00,wubyrdlidar:01,aspuru - guzik:05,cirac:02,cirac:08 , and classical systems lidar : pre97a , yepez:01,meyer:02,georgeot:01a , georgeot:01b , terraneo:03,lidar:04,dorit - tutte , joe , bravyi:07,aspuru - guzik1,aspuru - guzik2 .a natural problem relating quantum computation and statistical mechanics is to understand for which instances quantum computers provide a speedup over their classical counterparts for the evaluation of partition functions .for the _ potts model _ , results obtained in provide insight into this problem when the evaluation is an additive approximation .we provided a class of examples for which there is a quantum speedup when one seeks an exact evaluation of the potts partition function . in this workwe address the connection between quantum computing and classical statistical mechanics from the opposite perspective .namely , we seek to find restrictions on the power of quantum computing , by employing known results about efficiently simulatable problems in statistical mechanics . specifically , we restrict our attention to the _ ising model _ partition function , and use a mapping between graph instances of the ising model and quantum circuits introduced in , to identify a certain class of quantum circuits which have an efficient classical simulation. restricted classes of quantum circuits which can be efficiently simulated classically have been known since the gottesman - knill theorem nielsen : book .this theorem states that a quantum circuit using only the following elements can be simulated efficiently on a classical computer:(1 ) preparation of qubits in computational basis states , ( 2 ) quantum gates from the clifford group ( hadamard , controlled - not gates , and pauli gates ) , and ( 3 ) measurements in the computational basis . such stabilizer circuits on qubits can be be simulated in time using the graph state formalism .other early results include ref . , where the notion of matchgates was introduced and the problem of efficiently simulating a certain class of quantum circuits was reduced to the problem of evaluating the pfaffian .this was subsequently shown to correspond to a physical model of noninteracting fermions in one dimension , and extended to noninteracting fermions with arbitrary pairwise interactions ( see further generalizations in refs . ) , and lie - algebraic generalized mean - field hamiltonians .criteria for efficient classical simulation of quantum computation can also be given in terms of upper bounds on the amount of entanglement generated in the course of the quantum evolution .a result that is more directly related to the one we shall present in this work is given in ref . 
, but within the measurement - based quantum computation ( mqc ) paradigm .mqc relies on the preparation of a multi - qubit entangled resource state known as the cluster state .it is known that mqc with access to cluster states is universal for quantum computation .reference considers _ planar code states _ which are closely related to cluster states in that a sequence of pauli - measurements applied to the two - dimensional cluster state can result in a planar code state .mqc with planar code states consists of a sequence of measurements where the are one - qubit measurements and is a final measurement done on the remaining qubits in some basis which depends on the results of the .reference demonstrates that planar code states are not a sufficient resource for universal quantum computation ( and can be classically simulated ) .this fact is attributed to the exact solvability of the ising partition function on planar graphs .our results complement the work in , as they are provided in terms of the circuit model , and generalize to ising model instances that correspond to graphs which are not necessarily subgraphs of a two - dimensional grid .other conceptually related work uses the connection between graphs and quantum circuits and the formalism of tensor network contractions , to show that any polynomial - sized quantum circuit of - and - qubit gates , which has log depth and in which the -qubit gates are restricted to act at bounded range , may be classically efficiently simulated markov : tensor , jozsa-2008,yoran:170503 .a tensor network is a product of tensors associated with vertices of some graph such that every edge of represents a summation ( contraction ) over a matching pair of indexes .we also use a relationship between quantum circuits and graphs but whose construction is quite different . also , connects matchgates and tensor network contractions to notions of efficient simulation .finally , other closely related work was recently reported in ( see also ) , which addresses the classical simulatability of quantum circuits .their results use a connection to the partition function of spin models , as do we , and they too provide a mapping between classical spin models and quantum circuits . specifically pertinent to our workis the fact that they give criteria for the simulatability of quantum circuits , using the 2d ising model .that is , circuits consisting of single qubit gates of the form and nearest - neighbor gates of the form are classically efficiently simulable .we shall discuss how the nearest - neighbor restrictions can be lifted while retaining efficient classical simulatability .the structure of this paper is as follows .we begin with a brief review of the ising model in section [ sec : ising ] , where we define the ising partition function . in section [ sec2 ]we review quadratically signed weight enumerators ( qwgt s ) and their relationship to quantum circuits , and review the relationship between qwgt s and . in section [ sec : mapping ] we introduce an ansatz that allows one to associate graph instances of the ising model with circuit instances of the quantum circuit model . in this section we derive a key result : an explicit connection between the partition function for the ising model on a graph , and a matrix element of the unitary representing a quantum circuit which is related to this graph via the graph s incidence matrix [ eq . 
_ _ ( [ eq : element ] ) ] .we then present our main result in section [ sec : proof ] : a theorem on efficiently simulatable quantum circuits .the proof depends on the fact that there are algorithms for the efficient evaluation of for planar instances of the ising model .we also discuss the relation to previous work . in section nextstepwe present a discussion and some suggestions for future work , including the possibility of a quantum algorithm for the additive approximation of .we conclude in section [ sec : conc ] .the appendix gives a review of pertinent concepts from graph theory , and additional details , including some proofs .we briefly introduce the ising spin model accompanied by some notation and definitions .let be a finite , arbitrary undirected graph with edges and vertices . in the ising modeleach vertex is occupied by a classical spin , and each edge represents a bond ( interaction energy between spins and ) .[ def : ising]an _ instance _ of the ising problem is the data , i.e. , represents a weighted graph .the hamiltonian of the spin system is spin configuration is a particular assignment of spin values for all spins .a bond with is called ferromagnetic , and a bond with is called antiferromagnetic .the probability of the spin configuration in thermal equilibrium for a system in contact with a heat reservoir at temperature , is given by the gibbs distribution : , where the boltzmann weight is ] thus we have does the matrix change as we go from by this edge deletion ?if the edge is a _ dangling edge _, i.e. , not part of a cycle , then we lose a column ( column ) but if the edge deletion causes the breaking of cycles , then will lose rows ( in addition to column ) , as the rows encode the cycle structure of the graph . in this case , the dimension ( or length ) of will be less than the dimension of and the will also be shorter by entries .we call these shorter , .further , and most importantly , will vanish , as mentioned . after taking this into considerationwe now can conclude that .$ ] thus , =\mathrm{rank}% [ a^{(g^{\prime } ) } ] .\ ] ] the proof for edge contractions is similar .the main difference is that an edge contraction does not cause the loss of a cycle except when the edge in question belongs to a cycle of length three .thus in general , the contraction case is simpler except when dealing with cycles of length three . in this casethe proof caries over in the same way .thus the set of graphs is downwardly closed .* lemma [ finite - obs ] * _ the obstruction set for is finite . _ the set of graphs is _ downwardly closed _ by the above lemma .one may then apply the robertson - seymour theorem ( theorem [ th : rs ] ) and immediately conclude that the number of forbidden minors of is finite .* lemma [ quad - form ] * _ a quadratic form _ _ over gf(2 ) is linear in __ ( equal to _ _ ) _ iff_ __ is symmetric . __ let be an -dimensional column vector and an matrix , both over gf(2 ) . consider the quadratic form and assume that is symmetric : .then in the second line we used [ true over gf(2 ) ] , exchanged and in the third summand and used .the first and third summands are equal and hence add up to zero over gf(2 ) .we are left with the , second , linear term , i.e. , , where denotes a vector comprising the diagonal of .next , assume that is not symmetric .then there exists a pair of indices such that , i.e. , . 
as above, we have : .consider the index pair in this sum : the quadratic form contains at least one non - linear ( quadratic ) term .note that all calculations are done modulo 2 .* input : * a graph for which we wish to determine if there exists some satisfying edge interaction that satisfies eq .( [ eq : w ] ) .specifically , we considered , , ( with one edge deleted ) and . 1 . from the incidence matrix of the following items : 1 .all vectors belonging to the null space of the incidence matrix .these row vectors form a matrix .2 . construct a matrix representation of the possible corresponding quantum circuits ( under the mapping presented earlier ) . from .this matrix will have variables corresponding to all the possible ways that one can include or omit operations ( changing these affects the types of edge interactions that one obtains , if any . )form the vector whose entry is , where are the elements of the null space of .this is the left - hand side of eq .( [ eq : w ] ) .each entry of consists of linear equations whose variables represent the presence or absence of a operation in a quantum circuit that corresponds to .3 . form a matrix whose rows are all possible bonds .produce a matrix whose row is equal to .these are all possible values of the right hand side of eq .( eq : w ) .( note that due to symmetry there will be many repeats , so that the total number of possible bonds to check is far fewer than all possible bonds . ) 5 .attempt to solve the system of linear equations over ( where runs from to the number of rows of ) for the variables . a solution for some information for a specific circuit representation .if no solution for the exists , then there is no satisfying bond distribution for eq .( [ eq : w ] ) .this is indeed the case for , , and . if there is a solution for some fixed , continue .take this specific ( i.e. , this has no variables and corresponds to a specific circuit given by the solution of the above ) and now form again .this time however , contains no variables and is a numerical vector .thus , one now has the binary vector and the matrix .solve the linear system and output the edge interaction .
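the first step of the algorithm above , extracting the vectors of the null space of the incidence matrix over gf(2 ) , can be carried out with ordinary gaussian elimination modulo 2 . the sketch below is self - contained ; the helper names are ours , and the 4 - cycle at the end is only a toy check .

```python
import numpy as np

def incidence_matrix(n_vertices, edges):
    """Vertex-edge incidence matrix over GF(2): A[v, e] = 1 iff edge e touches vertex v."""
    A = np.zeros((n_vertices, len(edges)), dtype=np.uint8)
    for e, (u, v) in enumerate(edges):
        A[u, e] = 1
        A[v, e] = 1
    return A

def null_space_gf2(A):
    """Rows of the returned matrix form a basis of the null space of A over GF(2)."""
    A = A.copy() % 2
    n_rows, n_cols = A.shape
    pivot_row_of_col = {}
    row = 0
    for col in range(n_cols):
        pivot = next((r for r in range(row, n_rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(n_rows):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivot_row_of_col[col] = row
        row += 1
    free_cols = [c for c in range(n_cols) if c not in pivot_row_of_col]
    basis = []
    for fc in free_cols:
        v = np.zeros(n_cols, dtype=np.uint8)
        v[fc] = 1
        for c, r in pivot_row_of_col.items():
            v[c] = A[r, fc]           # back-substitute the pivot variables
        basis.append(v)
    return np.array(basis, dtype=np.uint8).reshape(len(basis), n_cols)

# toy check: a 4-cycle has a one-dimensional cycle space, the all-ones vector
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(null_space_gf2(incidence_matrix(4, edges)))   # -> [[1 1 1 1]]
```

each basis vector returned here marks the edges of an independent cycle of the graph , which is precisely the role the null - space vectors play in the construction above .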
|
we exploit a recently constructed mapping between quantum circuits and graphs in order to prove that circuits corresponding to certain planar graphs can be efficiently simulated classically . the proof uses an expression for the ising model partition function in terms of quadratically signed weight enumerators ( qwgts ) , which are polynomials that arise naturally in an expansion of quantum circuits in terms of rotations involving pauli matrices . we combine this expression with a known efficient classical algorithm for the ising partition function of any planar graph in the absence of an external magnetic field , and the robertson - seymour theorem from graph theory . we give as an example a set of quantum circuits with a small number of non - nearest neighbor gates which admit an efficient classical simulation .
|
a quantum noisy channel can be expressed with a trace - preserving completely positive map : here , and are density matrices in the hilbert space , and are parameters characterizing the channel . a single - parameter quantum noisy channel can thus be expressed as . owing to the complete positivity of the map , it can be expanded to composite quantum systems in . the composite channels in the expanded systems have two forms : one is the _ mixed _ noisy channel ; the other is the _ double _ noisy channel . here , are density matrices in , and are the single - quantum identity and noisy channel . for a known quantum noisy channel , at least two subjects are of interest . one is to determine its information capacities , that is , the best probability of identifying the input state under the assumption that the action of the quantum channel is known . although the capacities of quantum noisy channels have not been determined completely , much effort has been put into this topic and many results have been obtained . the other topic , the estimation of quantum noisy channels , has also attracted much attention in recent years because it is likewise important in quantum information theory . the estimation of a quantum noisy channel is to identify the channel when its type is known but its quality is unknown . the quality of the channel can be characterized with some parameters , so estimating a channel amounts to estimating certain of its parameters , a task that may be addressed with quantum estimation theory . quantum estimation theory is concerned with seeking the best strategy for estimating one or more parameters of a density operator of a quantum mechanical system . about how to estimate a quantum noisy channel we refer the readers to the refs . and recent . in this paper we restrict our attention to two aspects of the estimation of a quantum noisy channel , the _ lossy channel _ ( which will be described in section ii ) . because of the anisotropy of quantum lossy channels , different input states have different effects on the estimation of the channel . so , at first , we will discuss which _ coherent states _ are optimal for estimating the single quantum lossy channel and which bases of the input states are optimal for estimating the composite lossy channels . secondly , because entanglement has been taken as a kind of resource for processing quantum information , we will ask : can the estimation of the composite channels and be improved by using entangled input states ? this paper is organized as follows . in section ii we shall set up a model of the quantum lossy channel and explain why we choose coherent states to estimate these channels . in section iii we shall seek the optimal input coherent states or the optimal bases of input states for estimating the single and the composite lossy channels . in section iv we shall calculate the symmetric logarithmic derivative ( sld ) fisher information of the output states exported from the channels , and , and answer whether the entangled input states improve the estimation . a brief conclusion will close this paper in the last section . we assume that the lossy channel to be estimated is described by the following physical model . a quantum system , such as photons in state , is in a vacuum environment , and the evolution of the state is a completely positive map : .
in this model ,making use of the language of master equation we can obtain that the interaction of the in question system with its environment makes the system evolving according to is the energy decay rate .the formal solution of eq.([e4 ] ) may be written as leads to the solution for the initial single - mode \left| \alpha \right\rangle \left\langle \beta \right| = \left\langle \beta \right| \left .\alpha \right\rangle ^{t^{2}}\left| \alpha t\right\rangle \left\langle \beta t\right| , \label{e6}\]]where estimating this channel is equal to estimating the parameter as known , in order to estimate the channel , one , for example alice must prepare many identical initial states , input states and another one , for example bob must measure the output samples exported from this channel . for enhancing the detection efficiency we use coherent states to be the input states .because we do not know what coherent state is the optimal input state for the estimation in advance , we generally set this state be the superposition of coherent states and , namely , a schrdinger cat state state is considered one of realizable mesoscopic quantum systems yurkeetalprl1986 .zheng has shown the method for preparing this state and the measurement scheme of this state has been given in jeong01 .set ( in fact only if ) then thus , and can be taken into a pair orthogonal bases .setting we have in the time - varying bases ^{t}, ] of the estimator should be identical to and the estimator for the parameter satisfies the quantum cramr - rao inequality \geqslant \left ( j_{\varsigma } \right ) ^{-1}, ] is the variance of estimator and is the quantum sld fisher information with the symmetric logarithmic derivative . here, the hermitian operator satisfies the equation is important to notice that the lower bound in the quantum cramr - rao inequality is achievable ( at lest locally ) . in other words ,the inverse of the sld fisher information gives the ultimate limit of estimation .so in our problem the bigger of the sld fisher information is , the more accurately the estimation may be fujiwara042304 in the author has proved that the sld fisher information is convex so we only need to investigate the pure state inputs . in the followingwe will calculate the sld fisher information of output states for above three lossy channels .in the previous section we obtain that the equal probability schrdinger cat states are the optimal input states for estimating the single lossy channel , and they are also the optimal bases of the input states for estimating the composite channels . in the following we calculate the sld fisher information of the output state for above three lossy channels by use of optimal input states or optimal bases of the input states . at first, we calculate the sld fisher information of single lossy channel when the input state is in this problem , the parameter in above formulas is in the output states of lossy channels from eq.(e9 ) we have we can easily calculate the sld fisher information of output state as secondly , we calculate the sld fisher information of the mixed lossy channel by using the input state ( see eq.([e20 ] ) ) . 
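since the explicit formulae did not survive extraction , it is worth recording a generic numerical recipe for the quantity computed in the rest of this section . the sld l is defined through the equation \partial_\theta\rho = ( l\rho + \rho l ) / 2 , which in the eigenbasis of \rho gives l_{ij } = 2 ( \partial_\theta\rho)_{ij } / ( \lambda_i + \lambda_j ) , and the fisher information is j = tr ( \rho l^2 ) . the sketch below implements this on any finite - dimensional ( e.g. truncated fock ) representation ; it is a generic numerical stand - in , not the authors' analytic calculation , and the derivative is taken by finite differences .

```python
import numpy as np

def sld_fisher_information(rho_of_theta, theta, eps=1e-6, tol=1e-12):
    """Numerical SLD Fisher information J(theta) for a parametrised density matrix.

    rho_of_theta : callable returning a density matrix (NumPy array) for a given theta.
    The SLD L is obtained from d(rho)/d(theta) = (L rho + rho L)/2 in the eigenbasis
    of rho: L_ij = 2 (d rho)_ij / (lam_i + lam_j); terms with lam_i + lam_j ~ 0 dropped.
    """
    rho = rho_of_theta(theta)
    drho = (rho_of_theta(theta + eps) - rho_of_theta(theta - eps)) / (2 * eps)
    lam, U = np.linalg.eigh(rho)
    drho_eig = U.conj().T @ drho @ U
    L = np.zeros_like(drho_eig)
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:
                L[i, j] = 2 * drho_eig[i, j] / s
    return float(np.real(np.trace(np.diag(lam) @ L @ L)))

# toy check: a qubit in state diag(theta, 1 - theta) has J = 1/theta + 1/(1 - theta)
rho_qubit = lambda th: np.diag([th, 1.0 - th])
print(sld_fisher_information(rho_qubit, 0.3))   # ~ 4.7619
```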
here , we have here , the expression of hermitian operator is too complex , we do not give it .fortunately , by using eq.([eq43 ] ) we can obtain a simple expression of its sld fisher information as is shown that the sld fisher information of mixed channel do not vary with the changing of the entanglement degree of input state , namely , it does not vary with changing .the sld fisher information of mixed lossy channel is accurately equal to one of one - shot single lossy channel .it means that when we take the equal probability schrdinger cat states as the bases the entangled input states the estimation of the mixed lossy channel can not be improved .thirdly , we investigate the double lossy channel . by using the input state have the output state as $ % \end{tabular}% , \label{eq45}\]]where , by using the we can calculate its sld fisher information but the expression of is too complex and too long . here, we do not give it , too .we plot the with and time as fig.3 where and from fig.3 , we see that when we take the equal probability schrdinger cat states as the bases of the input states , the entangled inputs can not improve the estimation of the double lossy channel , too .before we end this subsection we analytically investigate two kinds of specific cases .namely , we calculate the sld fisher information of output state of double lossy channel when the input states are product state and maximally entangled state where optimal bases is still held .when the input state is a product state , namely , or , the density operator of the output state is `` + '' denotes a product state of , and `` - '' denotes a product state of .thus , we have when the input state is the product state , the sld fisher information of double lossy channel is result is reasonable , because or and is correspond to the input of product state of two - shot single lossy channel and the result eq.([eq48 ] ) is just the two times of the sld fisher information of single lossy channel .when we set the input state is the maximally entangled state and we can obtain and are uncertain matrix elements . fortunately , by using the uncertain we can calculate the sld fisher information as is the sld fisher information of channel when the input state is the maximally entangled state .a numerical work shows that is less than for in this subsection we obtained that when we take the equal probability schrdinger cat states as the bases of the input states the entangled input states can not improve the estimation of composite lossy channels . in this subsection, we discuss another case .when the input states are some non - optimal states , namely , when in the input states , whether the entangled inputs are better than the non entangled inputs for estimating these composite lossy channels ? 
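the question raised above can at least be explored numerically in a small toy model . the sketch below applies a single - qubit amplitude - damping ( lossy ) channel independently to each half of a two - qubit input , a finite - dimensional stand - in for the double channel , and compares the resulting fisher information for a product input and for a maximally entangled input . it reuses sld_fisher_information from the previous sketch and is not the authors' bosonic cat - state calculation .

```python
import numpy as np

# assumes sld_fisher_information from the sketch above is already in scope

def amplitude_damping_kraus(gamma):
    """Kraus operators of the single-qubit amplitude-damping (lossy) channel."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return [K0, K1]

def double_channel(rho, gamma):
    """Apply the damping channel independently to both qubits of a two-qubit state."""
    out = np.zeros_like(rho, dtype=complex)
    for A in amplitude_damping_kraus(gamma):
        for B in amplitude_damping_kraus(gamma):
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def pure(vec):
    vec = np.asarray(vec, dtype=complex)
    vec = vec / np.linalg.norm(vec)
    return np.outer(vec, vec.conj())

inputs = {
    "product":   pure(np.kron([1, 1], [1, 1])),   # |+>|+>
    "entangled": pure([1, 0, 0, 1]),              # (|00> + |11>)/sqrt(2)
}
gamma0 = 0.3
for name, rho_in in inputs.items():
    J = sld_fisher_information(lambda g, r=rho_in: double_channel(r, g), gamma0)
    print(name, J)
```

the same few lines can be repeated for any channel given in kraus form , but they are of course no substitute for the bosonic calculation carried out below .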
in the following , we only discuss the cases of thus , in this cases the input state eq.([e20 ] ) becomes the mixed channel , the output state is .\label{eq53}\]]thus we can calculate the sld fisher information of as , we do not give out the sld operator for its very complex .similarly , we can obtain the output state of the double lossy channel as .\label{eq55}\]]its sld fisher information is , we also do not give the the sld operator for its very complex .from eqs.([eq55 ] ) , ( [ eq56 ] ) we can easily obtain that and take their maximum at ( see figs.4 and 5 ) , which shows that when the bases of input states are not optimal bases ( equal probability schrdinger cat states ) the entangled input states may improve the estimation of the composite lossy channels .in this paper , we have discussed two corresponding problems to estimating quantum lossy channels .firstly , we have investigated what the optimal input states is for estimating the single lossy channel and what the best bases are of the input states for estimating the composite lossy channels .we obtain that the equal probability schrdinger cat states are the optimal input states for estimating the single lossy channel and they are also the optimal bases of input states for estimating the composite lossy channels . secondly , we have investigated that whether the entangled input states can improve the estimation of the quantum lossy channels . by calculating the sld fisher information of the output states we obtainedthat when we take the equal probability schrdinger states as the bases of the input states the entangled input states can not improve the estimation of the composite channel , however when we take the coherent states instead of the equal probability schrdinger cat states as the bases , the maximally entangled input states can improve the estimation of the composite lossy channels .99 a. s. holevo , ieee trans .inf . theory * 44 , * 269 ( 1998 ) ; p. hausladen , r. jozsa , b. schumacher , m. westmoreland , and w. k. wootters , phys .rev . * a 54 , * 1869 ( 1996 ) ; b. schumacher , and m. d. westmoreland , phys .* a 56 , * 131 ( 1997 ) ; c. h. bennett , p. w. shor , j. a. smolin , and a. v. thapliyal , phys .* 83 * , 3081 ( 1999 ) ; c. h. bennett , p. w. shor , j. a. smolin , and a. v. thapliyal , e - print quant - ph/0106052 ; a. s. holevo , e - print quant - ph/0106075 ; b. schumacher and m. a. nielsen , phys . rev . * a 54 * , 2629 ( 1996 ) ; j. harrington , and j. preskill , phys . rev . * a 64 * , 062301 ( 2001 ) .
|
due to the anisotropy of quantum lossy channels , one must choose optimal bases of input states in order to estimate them as well as possible . in this paper we show that the equal probability schrödinger cat states are optimal for estimating a single lossy channel and that they are also the optimal bases of input states for estimating composite lossy channels . on the other hand , using the symmetric logarithmic derivative ( sld ) fisher information of the output states exported from the lossy channels , we find that if the equal probability schrödinger cat states are taken as the bases of the input states , the maximally entangled inputs are not optimal , whereas if the bases of the input states are not the equal probability schrödinger cat states , the maximally entangled input states may be optimal for estimating the composite lossy channels .
|
since the seminal paper , association rule and frequent itemset mining received a lot of attention . by comparing five well - known association rule algorithms ( i.e. , apriori , charm , fp - growth , closet , and magnumopus ) using three real - world data sets and the artificial data set from ibm almaden , zheng et al . found out that the algorithm performance on the artificial data sets are very different from their performance on real - world data sets . thus there is a great need to use real - world data sets as benchmarks .however , organizations usually hesitate to provide their real - world data sets as benchmarks due to the potential disclosure of private information .there have been two different approaches to this problem .the first is to disturb the data before delivery for mining so that real values are obscured while preserving statistics on the collection .some recent work investigates the tradeoff between private information leakage and accuracy of mining results .one problem related to the perturbation based approach is that it can not always fully preserve individual s privacy while achieving precision of mining results . the second approach to addressthis problem is to generate synthetic basket data sets for benchmarking purpose by integrating characteristics from real - world basket data sets that may have influence on the software performance .the frequent sets and their supports ( defined as the number of transactions in the basket data set that contain the items ) can be considered to be a reasonable summary of the real - world data set . as observed by calders , association rules for basket data setcan be described by frequent itemsets .thus it is sufficient to consider frequent itemsets only .ramesh et al . recently investigated the relation between the distribution of discovered frequent set and the performance of association rule mining .it suggests that the performance of association rule mining method using the original data set should be very similar to that using the synthetic one compatible with the same frequent set mining results . informally speaking , in this approach , one first mines frequent itemsets and their corresponding supports from the real - world basket data sets .these frequent itemset support constraints are used to generate the synthetic ( mock ) data set which could be used for benchmarking . for this approach, private information should be deleted from the frequent itemset support constraints or from the mock database .the authors of investigate the problem whether there exists a data set that is consistent with the given frequent itemsets and frequencies and show that this problem is * np*-complete .the frequency of each frequent itemset can be taken as a constraint over the original data set .the problem of inverse frequent set mining then can be translated to a linear constraint problem .linear programming problems can be commonly solved today in hundreds or thousands of variables and constraints .however , the number of variables and constraints in this scenario is far beyond hundreds or thousands ( e.g. , , where is the number of items ) .hence it is impractical to apply linear programming techniques directly .recently , the authors of investigated a heuristic method to generate synthetic basket data set using the frequent sets and their supports mined from the original basket data set . 
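before turning to that heuristic , the notions of support and frequency used throughout this section can be pinned down in a few lines of code ; the transactions at the end are toy data and the helper names are ours .

```python
from itertools import combinations

def support(itemset, database):
    """Number of transactions in the database that contain the itemset."""
    s = set(itemset)
    return sum(1 for _tid, items in database if s <= set(items))

def frequent_itemsets(database, items, min_support):
    """Brute-force frequent itemset mining (only sensible for tiny examples)."""
    result = {}
    for k in range(1, len(items) + 1):
        for candidate in combinations(sorted(items), k):
            count = support(candidate, database)
            if count >= min_support:
                result[candidate] = count
    return result

db = [(1, {"a", "b"}), (2, {"a", "b", "c"}), (3, {"b", "c"}), (4, {"a", "c"})]
print(support({"a", "b"}, db))                      # 2
print(frequent_itemsets(db, {"a", "b", "c"}, 2))    # all singletons and pairs
```

it is exactly such support counts , mined from a real - world data set , that the heuristic method mentioned above takes as input when generating a synthetic data set .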
instead of applying linear programming directly on all the items , it applies graph - theoretical results to decompose items into independent components and then apply linear programming on each component .one potential problem here is that the number of items contained in some components may be still too large ( especially when items are highly correlated each other ) , which makes the application of linear programming infeasible .the authors of proposed a method to generate basket data set for benchmarking when the length distributions of frequent and maximal frequent itemset collections are available . though the generated synthetic data set preserves the length distributions of frequent patterns, one serious limitation is that the size of transaction databases generated is much larger than that of original database while the number of items generated is much smaller .we believe the sizes of items and transactions are two important parameters as they may significantly affect the performance of association rule mining algorithms . instead of using the exact inverse frequent itemset mining approach, we propose an approach to construct transaction databases which have the same size as the original transaction database and which are approximately consistent with the given frequent itemset constraints .these approximate transaction databases are sufficient for benchmarking purpose . in this paper, we consider the complexity problem , the approximation problem , and privacy issues for this approach .we first introduce some terminologies . is the finite set of items .a transaction over is defined as a pair where is a subset of and tid is a natural number , called the transaction identifier .a transaction database over is a finite set of transactions over . for an item set and a transaction , we say that contains if .the support of an itemset in a transaction database over is defined as the number of transactions in that contains , and is denoted .the frequency of an itemset in a transaction database over is defined as calders defined the following problems that are related to the inverse frequent itemset mining .freqsat _ instance _ : an item set and a sequence , , , , where are itemsets and are nonnegative rational numbers , for all ._ question _ : does there exist a transaction database over such that for all ?ffreqsat ( fixed size freqsat ) _ instance _ : an integer , an item set , and a sequence , , , , where are itemsets and are nonnegative rational numbers , for all ._ question _ : does there exist a transaction database over such that contains transactions and for all ?fsuppsat _ instance _ : an integer , an item set , and a sequence , , , , where are itemsets and are nonnegative integers , for all ._ question _ : does there exist a transaction database over such that contains transactions and for all ? obviously , the problem fsuppsat is equivalent to the problem ffreqsat .calders showed that freqsat is * np*-complete and the problem fsuppsat is equivalent to the intersection pattern problem * ip * : given an matrix with integer entries , do there exist sets such that ] be the closest integer to for and be the transaction database that contains copies of the itemset for each .then contains transactions and for all . 
in another word, the given instance of the approsuppsat problem is satisfiable if and only if there exist itemsets and an integer sequence such that the transaction database consisting of copies of itemset for each witnesses the satisfiability .thus approsuppsat which completes the proof of lemma .[ apphard ] approsuppsat is * np*-hard .* the proof is based on an amplification of the reduction in the * np*-hardness proof for freqsat in which is alike the one given for 2sat in . in the following ,we reduce the * np*-complete problem 3-colorability to approsuppsat . given a graph , is 3-colorable if there exists a 3-coloring function such that for each edge in we have .for the graph , we construct an instance of approsuppsat as follows .let , and for some large ( note that we need for the constant we will discuss later ) .let the itemset and the support constraints are defined as follows . for each vertex : , support(\{g_v\})=[\frac{n}{3 } ] , \\support(\{b_v\})=[\frac{n}{3}],\\ support(\{r_v , g_v\})=0 , support(\{r_v , b_v\})=0,\\ support(\{g_v , b_v\})=0 .\end{array}\ ] ] for each edge : in the following , we show that there is a transaction database satisfying this approsuppsat problem if and only if is 3-colorable .suppose that is a 3-coloring of .let be a transaction defined by letting where let transactions and be defined by colorings and resulting from cyclically rearranging the colors in the coloring .let the transaction database consist of ] ) .then satisfies the approsuppsat problem .suppose is a transaction database satisfying the approsuppsat problem .we will show that there is a transaction in from which a 3-coloring of could be constructed .let be the collection of itemsets defined as that is , is the collection of itemset that should have support according to the support constraints .since satisfies , for each , is approximately satisfied .thus there is a constant such that at most transactions in contain an itemset in .let be the transaction database obtained from by deleting all transactions that contain itemsets from .then contains at least transactions . for each vertex , we say that a transaction in does not contain if does not contain any items from . since satisfies , for each , approximately one third of the transactions contain ( , , respectively ) .thus there is a constant such that at most transactions in do not contain some vertex . in another word, there are at least transactions in such that contains for all .let be the transaction database obtained from by deleting all transactions such that does not contain some vertex .the above analysis shows that contains at least transactions .let .then we have by the assumption of at the beginning of this proof , we have . for any transaction in , we can define a coloring for by letting by the definition of , the coloring is defined unambiguously . that is , is 3-colorable .this completes the proof for * np*-hardness of approsuppsat .[ npcompletetheorem ] approsuppsat is * np*-complete .* this follows from lemma [ smallmodel ] and lemma [ apphard ] .we showed that the problem approsuppsat is * np*-hard . in the proof of lemma [ apphard ], we use the fact that the number of transactions of the target basket database is larger than the multiplication of the number of support constraints and the approximate error ( that is , is in the order of ) . in practice, the number may not be larger than . 
then one may wonder whether the problem is still * np*-complete .if is very small , for example , at the order of , then obviously , the problem approsuppsat becomes trivial since one can just construct the transaction database as the collection of copies of the itemset ( that is , the entire set of items ) .this is not a very interesting case since if is at the order of , one certainly does not want the approximate error to be at the order of also .a reasonable problem could be that one defines a constant number to replace the approximate error .then the proof in lemma [ apphard ] shows that the problem approsuppsat with approximate error ( instead of ) is still * np*-complete if .tighter bounds could be achieved if weighted approximate errors for different support constraints are given .in this section , we design and analyze a linear program based algorithm to approximate the * np*-complete problem approsuppsat .let be the collection of items , be the number of transactions in the desired database , and , , , be the sequence of support constraints . according to the proof of lemma [ smallmodel ] ,if this instance of approsuppsat is solvable , then there is a transaction database , consisting of at most itemsets , that satisfies these constraints .let be variables representing the numbers of duplicated copies of these itemsets in respectively .that is , contains copies of for each . for all and , let and be variables with the property that and then we have and the above given approsuppsat instancecould be formulated as the following question . subject to for .the condition set ( [ lpc3 ] ) contains the nonlinear equation and the nonlinear condition specified in ( [ lpc1 ] ) .thus in order to approximate the given approsuppsat instance using linear program techniques , we need to convert these conditions to linear conditions .we first use characteristic arrays of variables to denote the unknown itemsets . for any itemset ,let the -ary array be the characteristic array of .that is , the -th component =1 ] .it is straightforward to show that for two itemsets , we have and if and only if .now the following conditions in ( [ lpeq1 ] ) will guarantee that the condition in ( [ lpc1 ] ) is satisfied . for all , , and .the geometric interpretation of this condition is as follows .if we consider as a point in the 2-dimensional space shown in figure [ config1 ] , then defines points below the line passing the points and , and defines the points above the line passing through the points and .thus if and only if . that is , if and only if .the nonlinear equations can be converted to the following conditions consisting of inequalities . for all and .the constant is used in the inequalities due to the fact that for all . the geometric interpretation for the above inequalitiesis described in the following . if we consider as a point in a 3-dimensional space shown in figure [ fig1 ] , then 1 . defines the plane passing through points , , and ; thus guarantees that if .[ cond2 ] defines the points above the plane passing through points , , and .this condition together with the condition guarantees that when .3 . defines the points below the plane passing through points , and . this condition together with the condition guarantees that . together with the condition [ cond2 ] , we have when . * note * : for the reason of convenience , we introduced the intermediate variables . 
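the four inequalities whose geometric reading is given above are the standard linearisation of the product of a 0/1 variable u and a bounded integer variable x , with z standing for the product and n for the upper bound ( the number of transactions in the text ) . the exhaustive check below confirms , over toy ranges , that the inequalities admit exactly the integer points with z = u x ; the symbol names are ours since the original ones were lost .

```python
def satisfies_linearisation(z, u, x, N):
    """The four linear constraints that replace the nonlinear equation z = u * x."""
    return z <= N * u and z <= x and z >= x - N * (1 - u) and z >= 0

N = 7
ok = all(
    satisfies_linearisation(z, u, x, N) == (z == u * x)
    for u in (0, 1)
    for x in range(N + 1)
    for z in range(N + 1)
)
print(ok)   # True: for u = 0 the constraints force z = 0, for u = 1 they force z = x
```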
in order to improve the linear program performance, we may combine the conditions ( [ lpeq1 ] ) and ( [ lpeq2 ] ) to cancel the variables .thus the integer programming formulation for the given approsuppsat instance is as follows . subject to conditions ( [ lpeq1 ] ) , ( [ lpeq2 ] ) , and for .we first solve the linear relaxation of this integer program .that is , replace the second equation in the condition ( [ lpeq1 ] ) by and replace the third equation in the condition ( [ lpeq3 ] ) by let denote an optimal solution to this relaxed linear program .there are several ways to construct an integer solution from .let denote the optimal value of for a given approsuppsat instance and be the corresponding value for the computed integer solution .for an approximation algorithm , one may prefer to compute a number such that theorem [ npcompletetheorem ] shows that it is * np*-hard to approximate the approsuppsat by an additive polynomial factor .thus is not in the order of in the worst case for any polynomial time approximation algorithms , and it is not very interesting to analyze the worst case for our algorithm . in the following ,we first discuss two simple naive rounding methods to get an integer solution from .we then present two improved randomized and derandomized rounding methods .construct an integer solution by rounding to their closest integers , rounding to their almost closest integers so that , and computing , and according to their definitions .that is , for each and set for the rounding of , first round to their closest integers ] .then randomly add / subtract s to / from these values according to the value of until .now round as follows .let s could be computed by setting the values of and can be derived from easily . we still need to further update the values of by using the current values of since we need to satisfy the requirements . from the construction ,it is clear that is a feasible solution of the integer program .the rounding procedure will introduce the following errors to the optimal solution : 1 . by rounding , we need to update the values of , which again leads to the update of values of .2 . by rounding , the values in will change also .for quite a few * np*-hard problems that are reduced to integer programs , naive round methods remain to be the ones with best known performance guarantee .our methods 1 and 2 are based on these naive rounding ideas . in last decades , randomization and derandomization methods ( see , e.g. , ) have received a great deal of attention in algorithm design . in this paradigm for algorithm design , a randomized algorithm is first designed , then the algorithm is `` derandomized '' by simulating the role of the randomization in critical places in the algorithm . in this section , we will design a randomized and derandomized rounding approach to obtain an integer solution from with performance of at least the expectation .it is done by the method of conditional probabilities . in rounding method 1, we round to its closest integer . in a random rounding , we set the value of to with probability and to with probability ( independent of other indices ) . in rounding method 2, we round to the closest value among and . 
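the randomised versions of these roundings , and the derandomisation by conditional expectations described in the next paragraphs , can be sketched generically as follows : each fractional value is rounded up with probability equal to its fractional part , so its expectation is preserved , and the total is then repaired so that the rounded values still sum to the required number of transactions ; the derandomisation fixes the 0/1 variables one at a time , each time choosing the value whose conditional expectation of the objective is no larger . the helper names and the toy objective are ours , and the conditional expectations are estimated here by monte carlo sampling , whereas the paper computes them from the probabilities discussed below .

```python
import math
import random

def randomized_round(values, target_sum, rng=random):
    """Round each value up with probability equal to its fractional part, then repair the sum."""
    rounded = []
    for v in values:
        base = math.floor(v)
        rounded.append(base + (1 if rng.random() < v - base else 0))
    diff = target_sum - sum(rounded)
    while diff != 0:                                 # mimic the "almost closest" adjustment
        i = rng.randrange(len(rounded))
        step = 1 if diff > 0 else -1
        if rounded[i] + step >= 0:
            rounded[i] += step
            diff -= step
    return rounded

def derandomize(probs, objective, n_samples=2000, rng=random):
    """Fix 0/1 variables one by one, greedily minimising an estimated conditional expectation."""
    def estimate(prefix):
        total = 0.0
        for _ in range(n_samples):
            tail = [1 if rng.random() < p else 0 for p in probs[len(prefix):]]
            total += objective(prefix + tail)
        return total / n_samples

    fixed = []
    for _ in probs:
        fixed.append(0 if estimate(fixed + [0]) <= estimate(fixed + [1]) else 1)
    return fixed

x_star = [2.3, 0.7, 4.0, 1.0]                        # toy fractional LP values
print(randomized_round(x_star, target_sum=8))

u_star = [0.2, 0.8, 0.5, 0.6]                        # toy rounding probabilities
cost = lambda assign: (sum(assign) - sum(u_star)) ** 2
print(derandomize(u_star, cost))
```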
in a random rounding , we set the value of to with probability and to with probability ( independent of other indices ) .a random rounding approach produces integer solutions with an expected value for .an improved rounding approach ( derandomized rounding ) produces integer solutions with guaranteed to be no larger than the expected value . in the following ,we illustrate our method for the random rounding based on the rounding methods 1 and 2 . *randomized and derandomized rounding of .* we determine the value of an additional variable in each step .suppose that has already been determined , and we want to determine the value of with .we compute the conditional expectation for of this partial assignment first with set to zero , and then again with it set to .if we set according to which of these values is smaller , then the conditional expectation at the end of this step is at most the conditional expectation at the end of the previous step .this implies that at the end of the rounding , we get at most the original expectation . in the following ,we show how to compute the conditional expectation . at the beginning of each step , assume that for all entries in , has been determined already and we want to determine the value of for in this step . in order to compute the conditional expectation of , we first compute the probability ] .otherwise , continue with the following computation . by regarding as the probability that takes the value , we know that with at least probability we have . however , the actual probability may be larger since other entries with may contribute items to , which may lead to the inclusion of in .first we define the following sets . and for each , let the probability ] since in the computation , we assume that = \frac{x^*_{i',j}}{\bar{x}_j} ] from the previous round .if sufficient rounds are repeated , the probability will converge in the end .since we have the probabilities ] for all . assume and , , .set = \hat{u}_{j , i_1}\times\cdots\times\hat{u}_{j , i_{|i_i|}}\ ] ] where for .using ] . in another word, does not contain the confidential information if and only if there exists an integer with or such that is consistent .that is , there is a transaction database that satisfies all support constraints in . in the following ,we show that there is even no efficient way to approximately decide whether a given support constraint set contains confidential information .we first define the problem formally .approprivacy _ instance _ : an integer , an item set , a support constraint set , , , and a set _ question _ : for all transaction database of transactions over with for all , do we have ] .however , no one may be able to recover this information since it is * np*-hard to infer this fact .support constraint inference has been extensively studied by calders in .it would be interesting to consider conditional privacy - preserving synthetic transaction database generations .that is , we say that no private information is leaked unless some hardness problems are solved efficiently .this is similar to the methodologies that are used in public key cryptography .for example , we believe that rsa encryption scheme is secure unless one can factorize large integers . 
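Returning to the confidence-level computation above, a rough Monte Carlo reading of it is sketched below, under the simplifying assumption (as in the text) that item memberships within a transaction are treated as independent with the probabilities u-hat; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy inputs: u_hat[j, i] is treated as the probability that transaction
# template j contains item i, and x[j] is the number of copies of template j
# in the synthetic database.
u_hat = np.array([[0.9, 0.8, 0.1],
                  [0.2, 0.7, 0.6],
                  [0.5, 0.5, 0.5]])
x = np.array([4, 3, 3])          # copies per template, N = 10 transactions
sensitive = [0, 1]               # the itemset I whose support is confidential
threshold = 5                    # support level regarded as a leak

def leak_probability(u_hat, x, sensitive, threshold, rng, trials=20000):
    """Monte Carlo estimate of P[support(I) >= threshold] when item
    memberships are drawn independently with probabilities u_hat."""
    leaks = 0
    for _ in range(trials):
        support = 0
        for j, copies in enumerate(x):
            # each copy of template j contains I independently
            p_contains = np.prod(u_hat[j, sensitive])
            support += rng.binomial(copies, p_contains)
        leaks += support >= threshold
    return leaks / trials

print("estimated leakage confidence:",
      leak_probability(u_hat, x, sensitive, threshold, rng))
```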
in our case , we may assume that it is hard on average to efficiently solve integer linear programs .based on this assumption , we can say that unless integer linear programs could be solved efficiently on average , no privacy specified in is leaked by if the computed confidence level is small .privacy preserving data mining has been a very active research topic in the last few years .there are two general approaches mainly from privacy preserving data mining framework : data perturbation and the distributed secure multi - party computation approach .as the context of this paper focuses on data perturbation for single site , we will not discuss the multi - party computation based approach for distributed cases ( see for a recent survey ) .agrawal and srikant , in , first proposed the development of data mining techniques that incorporate privacy concerns and illustrated a perturbation based approach for decision tree learning .agrawal and agrawal , in , have provided a expectation - maximization ( em ) algorithm for reconstructing the distribution of the original data from perturbed observations .they provide information theoretic measures to quantify the amount of privacy provided by a randomization approach .recently , huang et al . in , investigated how correlations among attributes affect the privacy of a data set disguised via the random perturbation scheme and proposed methods ( pca based and mle based ) to reconstruct original data .the objective of all randomized based privacy - preserving data mining is to prevent the disclosure of confidential individual values while preserving general patterns and rules .the idea of these randomization based approaches is that the distorted data , together with the distribution of the random data used to distort the data , can be used to generate an approximation to the original data values while the distorted data does not reveal private information , and thus is _ safe _ to use for mining .although privacy preserving data mining considers seriously how much information can be inferred or computed from large data made available through data mining algorithms and looks for ways to minimize the leakage of information , however , the problem how to quantify and evaluate the tradeoffs between data mining accuracy and privacy is still open . in the context of privacy preserving association rule mining , there have also been a lot of active researches . 
in ,the authors considered the problem of limiting disclosure of sensitive rules , aiming at selectively hiding some frequent itemsets from large databases with as little impact on other , non - sensitive frequent itemsets as possible .the idea was to modify a given database so that the support of a given set of sensitive rules decreases below the minimum support value .similarly , the authors in presented a method for selectively replacing individual values with unknowns from a database to prevent the discovery of a set of rules , while minimizing the side effects on non - sensitive rules .the authors studied the impact of hiding strategies in the original data set by quantifying how much information is preserved after sanitizing a data set .the authors , in , studied the problem of mining association rules from transactions in which the data has been randomized to preserve privacy of individual transactions .one problem is it may introduce some false association rules .the authors , in , investigated distributed privacy preserving association rule mining .though this approach can fully preserve privacy , it works only for distributed environment and needs sophisticated protocols ( secure multi - party computation based ) , which makes it infeasible for our scenario .have proposed a general framework for privacy preserving database application testing by generating synthetic data sets based on some a - priori knowledge about the production databases .the general a - priori knowledge such as statistics and rules can also be taken as constraints of the underlying data records . the problem investigated in this papercan be thought as a simplified problem where data set here is binary one and constraints are frequencies of given frequent itemsets .however , the techniques developed in are infeasible here as the number of items are much larger than the number of attributes in general data sets .in this paper , we discussed the general problems regarding privacy preserving synthetic transaction database generation for benchmark testing purpose .in particular , we showed that this problem is generally * np*-hard .approximation algorithms for both synthetic transaction database generation and privacy leakage confidence level approximation have been proposed .these approximation algorithms include solving a continuous variable linear program . according to ,linear problems having hundreds of thousands of continuous variables are regularly solved .thus if the support constraint set size is in the order of hundreds of thousands , then these approximation algorithms are efficient on regular pentium - based computers . if more constraints are necessary , then more powerful computers are needed to generate synthetic transaction databases .r. agrawal , t. imilienski , and a. swami .mining association rules between sets of items in large databases . in _ proc . of acmsigmod international conference on management of database _ , pages 207216 , 1993 .a. evfimievski , j. gehrke , and r. srikant .limiting privacy breaches in privacy preserving data mining . in : _the 22nd acm sigmod - sigact - sigart symposium on principles of database system ( pods ) _ , pages 211222 , 2003 .a. evfimievski , r. srikant , r. agrawal , and j. gehrke .privacy preserving mining of association rules . in : _8th acm sigkdd international conference on knowledge discovery and data mining _ , pages 217228 , edmonton , canada , 2002 .m. kantarcioglu and c. 
clifton .privacy preserving distributed mining of association rules on horizontally partitioned data . in : _ proc .acm sigmod workshop on research issues on data mining and knowledge discovery _ ,pages 2431 , 2002 .h. kargupta , s. datta , q. wang , and k. sivakumar . on the privacy preserving properties of random data perturbation techniques . in : _3rd international conference on data mining _ , pages 99 - 106 , 2003 .g. ramesh , w. maniatty , and m. zaki .feasible itemset distributions in data mining : theory and application . in : _22nd acm sigmod - sigact - sigart symposium on principles of database systems ( pods ) _ , pages 284295 , 2003 .d. shmoys . computing near - optimal solutions to combinatorial optimization problems ._ dimacs series in discrete mathematics and theoretical computer science : combinatorial optimization _ , pages 355398 .ams press , 1995 .y. wang , x. wu , and y. zheng .privacy preserving data generation for database application performance testing . in : _ proc .1st int . conf . on trust and privacy in digital business( trustbus 04 , together with dexa ) _ , lecture notes in computer science 3184 , pages 142 - 151 , 2004 , springer - verlag .x. wu , y. wu , y. wang , and y. li .privacy aware market basket data set generation : a feasible approach for inverse frequent set mining . in : _proc . of the 5th siam international conference on data mining _ , april 2005 .z. zheng , r. kohavi , and l. mason .real world performance of association rule algorithms . in _ proc . of the acm - sigkdd international conference on knowledge discovery and data mining _ , pages 401406 .acm press , 2001 .
In order to generate synthetic basket data sets for better benchmark testing, it is important to integrate characteristics from real-life databases into the synthetic data. The characteristics that can be used for this purpose include frequent itemsets and association rules. The problem of generating synthetic basket data sets from frequent itemsets is generally referred to as inverse frequent itemset mining. In this paper, we show that the problem of approximate inverse frequent itemset mining is *np*-complete. We then propose and analyze an approximation algorithm for approximate inverse frequent itemset mining, and discuss privacy issues related to the synthetic basket data sets. In particular, we propose an approximation algorithm to determine the privacy leakage in a synthetic basket data set. Keywords: data mining, privacy, complexity, inverse frequent itemset mining
the recent financial crisis has been marked with series of sharp falls in asset prices triggered by , for example , the s&p downgrade of us debt , and default speculations of european countries .many individual and institutional investors are wary of large market drawdowns as they not only lead to portfolio losses and liquidity shocks , but also indicate potential imminent recessions .as is well known , hedge fund managers are typically compensated based on the fund s outperformance over the last record maximum , or the high - water mark ( see , among others ) . as such, drawdown events can directly affect the manager s income . also , a major drawdown may also trigger a surge in fund redemption by investors , and lead to the manger s job termination .hence , fund managers have strong incentive to seek insurance against drawdowns .these market phenomena have motivated the application of drawdowns as path - dependent risk measures , as discussed in , , among others . on the other hand, vecer argues that some market - traded contracts , such as vanilla and lookback puts , have only limited ability to insure the market drawdowns . "he studies through simulation the returns of calls and puts written on the underlying asset s maximum drawdown , and discusses dynamic trading strategies to hedge against a drawdown associated with a single asset or index . the recent work provides non - trivial static strategies using market - traded barrier digital options to approximately synthesize a european - style digital option on a drawdown event .these observations suggest that drawdown protection can be useful for both institutional and individual investors , and there is an interest in synthesizing drawdown insurance . in the current paper, we discuss the stochastic modeling of drawdowns and study the valuation of a number of insurance contracts against drawdown events .more precisely , the drawdown process is defined as the current relative drop of an asset value from its historical maximum . in its simplest form ,the drawdown insurance involves a continuous premium payment by the investor ( protection buyer ) to insure a drawdown of an underlying asset value to a pre - specified level . in order to provide the investor with more flexibility in managing the path - dependent drawdown risk, we incorporate the right to terminate the contract early .this early cancellation feature is similar to the surrender right that arises in many common insurance products such as equity - indexed annuities ( see e.g. , , ) . due to the timing flexibility , the investor may stop the premium payment if he / she finds that a drawdown is unlikely to occur ( e.g. when the underlying price continues to rise ) . in our analysis , we rigorously show that the investor s optimal cancellation timing is based on a non - trivial first passage time of the underlying drawdown process . in other words, the investor s cancellation strategy and valuation of the contract will depend not only on current value of the underlying asset , but also its distance from the historical maximum .applying the theory of optimal stopping as well as analytical properties of drawdown processes , we derive the optimal cancellation threshold and illustrate it through numerical examples .moreover , we consider a related insurance contract that protects the investor from a drawdown preceding a drawup .in other words , the insurance contract expires early if a drawup event occurs prior to a drawdown . 
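As a concrete, simulated illustration of the drawdown and drawup quantities just introduced, the following sketch computes them along a geometric Brownian motion path and records the first times they reach a given size k; all parameter values are arbitrary choices for this example.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a geometric Brownian motion log-price path (arbitrary parameters)
mu, sigma, dt, n = 0.05, 0.3, 1 / 252, 5 * 252
log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_S = np.cumsum(log_ret)

run_max = np.maximum.accumulate(log_S)      # running maximum of the log-price
run_min = np.minimum.accumulate(log_S)      # running minimum of the log-price
drawdown = run_max - log_S                  # absolute drawdown of the log-price
drawup = log_S - run_min                    # absolute drawup of the log-price

k = 0.2                                     # protected (log) drawdown size
tau_D = np.argmax(drawdown >= k) if (drawdown >= k).any() else None
tau_U = np.argmax(drawup >= k) if (drawup >= k).any() else None
print("first k-drawdown at step:", tau_D, " first k-drawup at step:", tau_U)
```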
from the investor s perspective , when a drawup is realized ,there is little need to insure against a drawdown .therefore , this drawup contingency automatically stops the premium payment and is an attractive feature that will potentially reduce the cost of drawdown insurance .our model can also readily extended to incorporate the default risk associated with the underlying asset . to this end, we observe that a drawdown can be triggered by a continuous price movement as well as a jump - to - default event . among other results ,we provide the formulas for the fair premium of the drawdown insurance , and analyze the impact of default risk on the valuation of drawdown insurance . in existing literature, drawdowns also arise in a number of financial applications .pospisil and vecer apply pde methods to investigate the sensitivities of portfolio values and hedging strategies with respect to drawdowns and drawups .drawdown processes have also been incorporated into trading constraints for portfolio optimization ( see e.g. ) .meilijson discusses the role of drawdown in the exercise time for a certain look - back american put option .several studies focus on some related concepts of drawdowns , such as maximum drawdowns , and speed of market crash . on the other hand , the statistical modeling of drawdowns and drawups is also of practical importance , and we refer to the recent studies , among others . for our valuation problems , we often work with the joint law of drawdowns and drawups . to this end , some related formulas from , , , and are useful . compared to the existing literature and our prior work ,the current paper s contributions are threefold .first , we derive the fair premium for insuring a number of drawdown events , with both finite and infinite maturities , as well as new provisions like drawup contingency and early termination .in particular , the early termination option leads to the analysis of a new optimal stopping problem ( see section [ sect - cancellableinsurance ] ) .we rigorously solve for the optimal termination strategy , which can be expressed in terms of first passage time of a drawdown process .furthermore , we incorporate the underlying s default risk a feature absent in other related studies on drawdown into our analysis , and study its impact on the drawdown insurance premium .the paper is structured as follows . in section [ sect - formulation ] , we describe a stochastic model for drawdowns and drawups , and formulate the valuation of a vanilla drawdown insurance . in sections [ sect - cancellableinsurance ] and [ sect - drawuptoo ] , we study , respectively , the cancellable drawdown insurance and drawdown insurance with drawup contingency . as extension , we discuss the valuation of drawdown insurance on a defaultable stock in section [ sect - def ] .section [ sect - conclude ] concludes the paper .we include the proofs for a number of lemmas in section [ sect_proofs ] .we fix a complete filtered probability space satisfying the usual conditions .the risk - neutral pricing measure is used for our valuation problems . 
under the measure , we model a risky asset by the geometric brownian motion where is a standard brownian motion under that generates the filtration .let us denote and , respectively , to be the processes for the running maximum and running minimum of .when writing the contract , the insurer may use the historical maximum and minimum recorded from a prior reference period .consequently , at the time of contract inception , the reference maximum , the reference minimum and the stock price need not coincide .this is illustrated in figure [ fig : spx ] . the running maximum and running minimum processes associated with follow and .] , }s_s\big),\quad \underline{s}_t=\underline{s}\wedge\big(\inf_{s\in[0,t]}s_s\big ) .\end{aligned}\ ] ] we define the stopping times respectively as the first times that attains a _ relative drawdown _ of units and a _ relative drawup _ of units . without loss of generality ,we assume that so that , almost surely . to facilitate our analysis, we shall work with log - prices .therefore , we define so that where and .denote by and to be , respectively , the running maximum and running minimum of the log price process .then , the relative drawdown and drawup of are equivalent to the absolute drawdown and drawup of the log - price , namely , where ( see ) , and note that under the current model the stopping times and , and they do not depend on or equivalently the initial stock price . and the log - price , so the initial drawdown .we remark that the large drawdown in august 2011 due to the downgrade of us debt by s&p.,scaledwidth=70.0% ] we now consider an insurance contract based on a drawdown event . specifically , the protection buyer who seeks insurance on a drawdown event of size will pay a constant premium payment continuously over time until the drawdown time . in return, the protection buyer will receive the insured amount at time . here, the values , and are pre - specified at the contract inception .the contract value of this drawdown insurance is where is the conditional laplace transform of defined by this amounts to computing the conditional laplace transform , which admits a closed - form formula as we show next .the conditional laplace transform function is given by where define the first time that the drawdown process decreases to a level by by the strong markov property of process at , we have that for , therefore , the problem is reduced to finding , which is known ( see ) : substituting this to yields . therefore , the contract value in is explicit given for any premium rate .the fair premium is found from the equation , which yields [ rem_1 ] our formulation can be adapted to the case when the drawdown insurance is paid for upfront . indeed , we can set in , then the price of this contract at time zero is . on the other hand ,if the insurance premium is paid over a pre - specified period of time , rather than up to the random drawdown time , then the present value of the premium cash flow will replace the first term in the expectation of ( 2.7 ) . in this case , setting the contract value zero at inception , the fair premium is given by . in section [ sect - drawuptoo ] , we discuss the case where the holder will stop premium payment if a drawup event occurs prior to drawdown or maturity . 
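Since the displayed formulas did not survive typesetting here, a Monte Carlo sketch may help fix ideas: estimate xi = E[exp(-r*tau_D)] by simulation and back out the premium implied by setting the contract value at inception to zero, which under the cash flows described above gives p* = r*alpha*xi/(1 - xi). The parameter values below are illustrative assumptions, not a calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

r, sigma, alpha, k = 0.02, 0.3, 1.0, 0.3       # illustrative parameters only
dt, horizon_years, n_paths = 1 / 252, 50, 2000
n_steps = int(horizon_years / dt)

def discounted_drawdown_time(rng):
    """Simulate the risk-neutral log-price until its drawdown first reaches k
    and return exp(-r * tau_D); paths that never reach k within the horizon
    contribute 0, a negligible truncation bias for these parameters."""
    incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    x = np.cumsum(incr)
    drawdown = np.maximum.accumulate(x) - x
    hit = np.nonzero(drawdown >= k)[0]
    return np.exp(-r * (hit[0] + 1) * dt) if hit.size else 0.0

xi = np.mean([discounted_drawdown_time(rng) for _ in range(n_paths)])
fair_premium = r * alpha * xi / (1.0 - xi)
print("estimated xi =", round(xi, 4), " implied fair premium p* =", round(fair_premium, 5))
```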
for both the insurer and protection buyer , it is useful to know how long the drawdown is expected to occur .this leads us to compute the expected time to a drawdown of size , under the physical measure .the measure is equivalent to , whereby the drift of is the annualized growth rate , not the risk - free rate . under measure ,the log price is where is a -brownian motion .[ exptime]the expected time to drawdown of size is given by where by the markov property of the process , we know that where and is the standard markov shift operator . if , applying the optional sampling theorem to uniformly integrable martingale with , we obtain that moreover , using the fact that , and eq .( 11 ) of : we conclude the proof for .the case of is obtained by taking the limit .as is common in insurance and derivatives markets , investors may demand the option to voluntarily terminate their contracts early .typical examples include american options and equity - indexed annuities with surrender rights . in this section ,we incorporate a cancelable feature into our drawdown insurance , and investigate the optimal timing to terminate the contract . with a cancellable drawdown insurance, the protection buyer can terminate the position by paying a constant fee anytime prior to a pre - specified drawdown of size . for a notional amount of with premium rate , the fair valuation of this contractis found from the optimal stopping problem : for .the fair premium makes the contract value zero at inception , i.e. .we observe that it is never optimal to cancel and pay the fee at since the contract expires and pays at .hence , it is sufficient to consider a smaller set of stopping times , which consists of -stopping times strictly bounded by . we will show in section [ sect - optcancel ] that the set of _ candidate _ stopping times are in fact the drawdown stopping times indexed by their respective thresholds ( see ) .next , we show that the cancellable drawdown insurance can be decomposed into an ordinary drawdown insurance and an american - style claim on the drawdown insurance .this provides a key insight for the explicit computation of the contract value as well as the optimal termination strategy .the cancellable drawdown insurance value admits the decomposition : where is defined in .let us consider a transformation of .first , by rearranging of the first integral in and using , we obtain note that the first term is explicitly given in and , and it does not depend on . since the second term depends on only through its truncated counterpart , and that is suboptimal , we can in fact consider maximizing over the restricted collection of stopping times . as a result ,the second term simplifies to then , using the fact that , as well as the strong markov property of , we can write where hence , we complete the proof by simply noting that ( compare and ) . using this decomposition , we can determine the optimal cancellation strategy from the optimal stopping problem , which we will solve explicitly in the next subsection . in order to determine the optimal cancellation strategy for in, it is sufficient to solve the optimal stopping problem represented by in for a fixed . 
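Before the analytical solution is developed below, the threshold structure of the optimal cancellation rule can be probed by brute force: for each candidate level theta, cancel the first time the drawdown falls to theta and estimate the buyer's value by simulation. The cash-flow convention used here (premium until the earlier of cancellation and drawdown, payout alpha at the drawdown, fee c at cancellation) is one plausible reading of the contract, and every parameter value is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

r, sigma, k = 0.02, 0.3, 0.3          # rate, volatility, protected drawdown size
alpha, fee, p = 1.0, 0.05, 0.01       # payout, cancellation fee, premium rate
x0 = 0.15                             # drawdown at contract inception
dt, n_steps, n_paths = 1 / 252, 30 * 252, 400

def buyer_value(theta, rng):
    """Estimated value to the buyer of cancelling the first time the drawdown
    falls to theta (one plausible reading of the contract's cash flows)."""
    vals = []
    for _ in range(n_paths):
        incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
        x = np.cumsum(incr)
        m = np.maximum(x0, np.maximum.accumulate(x))   # running max seeded at x0
        d = m - x                                      # drawdown path
        hit_D = np.nonzero(d >= k)[0]
        hit_T = np.nonzero(d <= theta)[0]
        tD = hit_D[0] + 1 if hit_D.size else n_steps + 1
        tT = hit_T[0] + 1 if hit_T.size else n_steps + 1
        tau = min(tD, tT)
        v = -(p / r) * (1 - np.exp(-r * tau * dt))     # premium paid until tau
        if tD < tT and tD <= n_steps:
            v += alpha * np.exp(-r * tD * dt)          # drawdown payout
        elif tT < tD and tT <= n_steps:
            v -= fee * np.exp(-r * tT * dt)            # cancellation fee
        vals.append(v)
    return float(np.mean(vals))

grid = np.linspace(0.0, x0, 7)
values = [buyer_value(th, rng) for th in grid]
print("thresholds:", np.round(grid, 3))
print("values    :", np.round(values, 4))
print("best theta on this grid:", round(grid[int(np.argmax(values))], 3))
```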
to simplify notations ,let us denote by and .our method of solution consists of two main steps : 1 .we conjecture a candidate class of stopping times defined by , where this leads us to look for a candidate optimal threshold using the principle of _ smooth pasting _ ( see ) .we rigorously verify via a martingale argument that the cancellation strategy based on the threshold is indeed optimal .* step 1 . * from the properties of laplace function ( see lemma [ lem1 ] below ) , we know the reward function in is a decreasing concave . therefore , if , then the second term of is non - positive , and it is optimal for the protection buyer to never cancel the insurance , i.e. , .hence , in search of nontrivial optimal exercise strategies , it is sufficient to study only the case with , which is equivalent to for each stopping rule conjectured in , we compute explicitly the second term of as the _ candidate optimal _ cancellation threshold is found from the smooth pasting condition : this is equivalent to seeking the root of the equation : where and are explicit in view of and .next , we show that the root exists and is unique ( see section [ proof - prop - unique ] for the proof ) .[ prop - unique ] there exists a unique satisfying the smooth pasting condition . * step 2 . * with the candidate optimal threshold from ( [ pasting ] ), we now verify that the candidate value function dominates the reward function .recall that for .[ lem2]the value function corresponding to the candidate optimal threshold satisfies we provide a proof in [ sect - appx - gf ] . by the definition of in , repeated conditioning yields that the stopped process is a martingale . for , we have where . as a result ,the stopped process is in fact a super - martingale . to finalize the proof, we note that for and any stopping time , maximizing over , we see that . on the other hand ,becomes an equality when , which yields the reverse inequality . as a result ,the stopping time is indeed the solution to the optimal stopping problem . in summary, the protection buyer will continue to pay the premium over time until the drawdown process either falls to the level in or reaches to the level specified by the contract , whichever comes first . in figure [ fig1 ]( left ) , we illustrate the optimal cancellation level .as shown in our proof , the optimal stopping value function connects smoothly with the intrinsic value at . in figure [ fig1 ]( right ) , we show that the fair premium is decreasing with respect to the protection downdown size .this is intuitive since the drawdown time is almost surely longer for a larger drawdown size and the payment at is fixed at .the protection buyer is expected to pay over a longer period of time but at a lower premium rate .lastly , with the optimal cancellation strategy , we can also compute the expected time to contract termination , either as a result of a drawdown or voluntary cancellation .precisely , we have for , we have where is defined in proposition [ exptime ] . according to the optimal cancellation strategy, we have where . 
applying the optional sampling theorem to the uniformly martingale with if , or , we obtain the result in the .[ remark - finitemat ] in the finite maturity case , the set of candidate stopping times is changed to in .like proposition [ decomp ] , the contract value at time zero for premium rate still admits the decomposition where and is the conditional laplace transform of : this problem is no longer time - homogeneous , and the fair premium can be determined by numerically solving the associated optimal stopping problem .we now consider an insurance contract that provides protection from any specified drawdown with a drawup contingency .this insurance contract may expire early , whereby stopping the premium payment , if a drawup event occurs prior to drawdown or maturity . from the investor s viewpoint ,the realization of a drawup implies little need to insure against a drawdown .therefore , this drawup contingency is an attractive cost - reducing feature to the investor .first , we consider the case with a finite maturity . specifically ,if a -unit drawdown occurs prior to a drawup of the same size or the expiration date , then the investor will receive the insured amount and stop the premium payment thereafter .hence , the risk - neutral discounted expected payoff to the investor is given by where the expectation is taken under the pricing measure .the fair premium is chosen such that the contract has value zero at time zero , that is , applying to , we obtain a formula for the fair premium : as a result , the fair premium involves computing the expectation and the laplace transform of . in order to determine the fair premium in , we first write the special case of the probability on the right - hand side , is derived using results from ( eq . ( 39)-(40 ) ) , namely , , \label{q1 } \end{aligned}\ ] ] where . therefore , the expectation can be computed via numerical integration . in the general case that , we have the following result . in the model , for and , we have where \bigg\}. \end{aligned}\ ] ] we begin by differentiating both sides of ( 2.7 ) in with respect to maturity to obtain that where for .the function , derived in theorem 5.1 of , is given by integration yields and this completes the proof .similarly , we express the laplace transform of as to compute this , we notice that the equivalence of the probabilities ( under the reflection of the processes about ) : therefore , we have where denotes the pricing measure whereby has drift .hence , we can again compute the laplace transform of by numerical integration , and obtain the fair premium for the drawdown insurance via . the expectation and the laplace transform of are in fact linked .this is seen through : where is the expectation defined in , and if the protection buyer pays a periodic premium at times , , with , then the fair premium is compared to the continuous premium case , the fair premium here involves a sum of the probabilities : , each given by above .now , we consider the drawdown insurance contract that will expire not at a fixed finite time but as soon as a drawdown / drawup of size occurs . to study this perpetual case ,we take in . 
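In this perpetual, drawup-contingent setting the key quantities are the probability that a k-drawdown precedes a k-drawup and the discounted indicator E[exp(-r*tau_D); tau_D < tau_U]. They can be sanity-checked by simulation against the closed-form expressions that follow; the sketch below starts the path at its running maximum and minimum and uses illustrative parameters only.

```python
import numpy as np

rng = np.random.default_rng(5)

r, sigma, k = 0.02, 0.3, 0.2
dt, n_steps, n_paths = 1 / 252, 60 * 252, 1000

dd_first, disc = [], []
for _ in range(n_paths):
    incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    x = np.cumsum(incr)
    drawdown = np.maximum.accumulate(x) - x
    drawup = x - np.minimum.accumulate(x)
    hit_d = np.nonzero(drawdown >= k)[0]
    hit_u = np.nonzero(drawup >= k)[0]
    tD = hit_d[0] if hit_d.size else np.inf
    tU = hit_u[0] if hit_u.size else np.inf
    dd_first.append(tD < tU)
    disc.append(np.exp(-r * (tD + 1) * dt) if tD < tU else 0.0)

print("P(drawdown of size k before drawup) ~", np.mean(dd_first))
print("E[exp(-r tau_D); tau_D < tau_U]     ~", np.mean(disc))
```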
as the next proposition shows, we have a simple closed - form solution for the fair premium , allowing for instant computation of the fair premium and amenable for sensitivity analysis .[ prop - perp1]the perpetual drawdown insurance fair premium is given by where with in the perpetual case , the fair premium is given by where and .to get formulas for and , we begin by multiplying both sides of by and integrate out .then we obtain that where . using formulas in , we have that for an integration yields .the computation of follows from the discussion in the proof of proposition 3.1 .this completes the proof of the proposition .finally , the probability that a drawdown is realized prior to a drawup , meaning that the protection amount will be paid to the buyer before the contract expires upon drawup , is given by let such that , then where is defined in proposition [ exptime ] . from the proof of proposition [ prop - perp1 ], we obtain that finally , lhpital s rule yields the last limit and . in figure [ fig2 ] ( left ), we see that the fair premium increases with the maturity , which is due to the higher likelihood of the drawdown event at or before expiration .for the perpetual case , we illustrate in figure [ fig2 ] ( right ) that higher volatility leads to higher fair premium . from this observation ,it is expected in a volatile market drawdown insurance would become more costly .in contrast to a market index , an individual stock may experience a large drawdown through continuous downward movement or a sudden default event .therefore , in order to insure against the drawdown of the stock , it is useful to incorporate the default risk into the stock price dynamics . to this end , we extend our analysis to a stock with reduced - form ( intensity based ) default risk . under the risk - neutral measure , the defaultable stock price evolves according to is the constant default intensity for the single jump process , with independent of the brownian motion under . at ,the stock price immediately drops to zero and remains there permanently , i.e. for a.e . , .similar equity models have been considered e.g. in and more recently , among others .the drawdown events are defined similarly as in where the log - price is now given by we follow a similar definition of the drawdown insurance contract from section [ sect - formulation ]. one major effect of a default event is that it causes drawdown and the contract will expire immediately . in the perpetual case ,the premium payment is paid until if it happens before both the default time and the maturity , or until the default time if .notice that , if no drawup or drawdown of size happens before , then the drawdown time will coincide with the default time , i.e. . the expected value to the buyer is given by again , the stopping times and based on do not depend on , and therefore , the contract value is a function of the initial drawdown and drawup . under this defaultable stock model ,we obtain the following useful formula for the fair premium . the fair premium for a drawdown insurance maturing at , written on the defaultable stock in is given by where and are given in and , respectively .as seen in , the fair premium satisfies we first compute the expectation in the numerator . next , the laplace transform of is given by rearranging and yields . 
by taking in , we obtain the fair premium for the perpetual drawdown insurance in closed form .the fair premium for the perpetual drawdown insurance written on the defaultable stock in is given by where and are given in .in figure [ fig_def ] , we illustrate for the perpetual case that the fair premium is increasing with the default intensity and approaches for high default risk .this observation , which can be formally shown by taking the limit in , is intuitive since high default risk implies that a drawdown will more likely happen and that it is most likely triggered by a default . , which dominates the straight dashed line .as , the fair premium .parameters : ., width=288 ]we have studied the practicality of insuring against market crashes and proposed a number of tractable ways to value drawdown protection . under the geometric brownian motion dynamics , we provided the formulas for the fair premium for a number of insurance contracts , and examine its behavior with respect to key model parameters . in the cancellable drawdown insurance , we showed that the protection buyer would monitor the drawdown process and optimally stop the premium payment as the drawdown risk diminished . also , we investigated the impact of default risk on drawdown and derived analytical formulas for the fair premium . for future research, we envision that the valuation and optimal stopping problems herein can be studied under other price dynamics , especially when drawdown formulas , e.g. for laplace transforms and hitting time distributions , are available ( see for the diffusion case ) .although we have focused our analysis on drawdown insurance written on a single underlying asset , it is both interesting and challenging to model drawdowns across multiple financial markets , and investigate the systemic impact of a drawdown occurred in one market .this would involve modeling the interactions among various financial markets and developing new measures of systemic risk .lastly , the idea of market drawdown and the associated mathematical tools can also be useful in other areas , such as portfolio optimization problems , risk management , and signal detection . + * acknowledgement .* we are grateful to the seminar participants at johns hopkins university and columbia university .we also thank informs for the best presentation award for this work at the 2011 annual meeting .tim leung s work is supported by nsf grant dms-0908295 .olympia hadjiliadis work is supported by nsf grants ccf - msc-0916452 , dms-1222526 and psc - cuny grant 65625 - 00 43 .finally , we thank the editor and anonymous referees for their useful remarks and suggestions .in order to prepare for our subsequent proofs on the cancellable drawdown insurance in section [ sect - cancellableinsurance ] , we now summarize a number properties of the conditional laplace transform of ( see ) . 1 . is positive and increasing on . satisfies differential equation with the neumann condition 3 . is strictly convex , i.e. , for .property ( i ) follows directly from the definition of and strong markov property .property ( ii ) follows directly from differentiation of . for property ( iii ) , the proof is as follows .if , then ( [ ode ] ) implies that if , then ( [ zd ] ) and ( [ ode ] ) imply that for , last inequality follows from the fact that and .hence , strict convexity follows . 
in view of, we seek the root of the equation : to this end , we compute since is monotonically decreasing from to , there exists a unique such that .we have by and , which implies that has at least one solution .moreover , for , and hence by , there is no root in .next , we show the root is unique by proving that for all . to this end, we first observe from that can be expressed as , for , where and .putting these into , we express in terms instead of . in turn , verifying is reduced to : we begin by using ( [ ode ] ) to rewrite the statement in the lemma as by the strong markov property of process , the function satisfies a more general version of .specifically , for , define for , then function satisfies ( see ( [ xy ] ) ) from which we can easily obtain that straightforward computation shows that thus , using ( [ zeta2lambda ] ) , the above inequality is equivalent to let us denote by we will show that notice that for $ ] , therefore moreover , as a result , this completes the proof of lemma [ lem1 ] . since our problem concerns , lemma [ lem1 ] says for , which confirms that there is at most one solution to equation .this concludes the uniqueness of smooth pasting point .using probabilistic nature of function we know that it is positive and decreasing .therefore , if , we have on the other hand , if , from ( [ xi ] ) we have so in either case ( or ) , is an increasing function , and which implies that this completes the proof .
This paper studies the stochastic modeling of market drawdown events and the fair valuation of insurance contracts based on drawdowns. We model the asset drawdown process as the current relative distance from the historical maximum of the asset value. We first consider a vanilla insurance contract whereby the protection buyer pays a constant premium over time to insure against a drawdown of a pre-specified level. This leads to the analysis of the conditional Laplace transform of the drawdown time, which serves as the building block for drawdown insurance with early cancellation or drawup contingency. For the cancellable drawdown insurance, we derive the investor's optimal cancellation timing in terms of a two-sided first passage time of the underlying drawdown process. Our model can also be applied to insure against the drawdown of a defaultable stock. We provide analytic formulas for the fair premium and illustrate the impact of default risk. Keywords: drawdown insurance; early cancellation; optimal stopping; default risk. JEL subject classification: C61, G01, G13, G22.
one of us , long , came to know dr .brandt first through his important works in quantum information and later his role as editor - in - chief of the journal quantum information processing(qip ) .long proposed a new type of quantum computer in 2002 , which employed the wave - particle duality principle to quantum information processing .his acquaintance with qip began in 2006 through the work of stan gudder who established the mathematical theory of duality quantum computer , which was accompanied by a different mathematical description of duality quantum computer in the density matrix formalism .the development of duality quantum computer owes a great deal to qip first in the term of dr .david cory as editor - in - chief , and then the term of dr .h. e. brandt as the editor - in - chief .for example , the zero - wave function paradox was pointed out firstly by gudder , and two possible solutions were given in refs . and .long has actively participated in the work of qip as a reviewer when dr .brandt was the editor - in - chief , and as a member of the editorial board from 2014 .at this special occasion , it is our great honor to present a survey of the duality quantum computer in this special issue dedicated to the memory of dr .howard e. brandt . as is well - known , a moving quantum object passing through a double - slit behaves like both waves and particles .the duality computer , or duality quantum computer exploits the wave - particle duality of quantum systems .it has been proven that a moving -qubit duality computer passing through a -slits can be perfectly simulated by an ordinary quantum computer with -qubit and an additional levels qudit .so we do not need to build a moving quantum computer device which is very difficult to realize .this also indicates that we can perform duality quantum computing in an ordinary quantum computer , in the so - called duality quantum computing mode .there have been intensive interests in the theory of duality computer in recent years , and experimental studies have also been reported .this article is organized as follows . in section[ s2 ] , we briefly describe the generalized quantum gate , the divider and combiner operations . section [ s3 ] reviews the duality quantum computing mode , which enables the implementation of duality quantum computing in an ordinary quantum computer . in section [ s4 ] , we outline the main results of mathematical theory of duality quantum computer . in section [ s5 ] ,we give the duality quantum computer description of the work of childs et al which simulates a quantum system with sparse hamiltonian efficiently . in section [ s6 ] ,we give the duality quantum computer description of the work of berry et al which simulates a quantum system having a sparse hamiltonian with exponential improvement in the precision . in section [ s7 ] ,we give a brief summary .a duality quantum computer is a moving quantum computer passing through a -slits . in fig .[ f1 ] , we give an illustration for a duality quantum computer with 3-slits .the quantum wave starts from the 0-th slit in leftmost wall , and then goes to the middle screen with three slits ( this is the divider operation ) . 
between the middle screen and the rightmost screen ,some unitary operations are performed on the sub - waves at different slits .they are then collected at the 0-slit in the rightmost screen , and this is the quantum wave combiner operation .the result is then read - out by the detector placed at the 0-slit on the right wall ., width=340 ] in a duality quantum computer , the two new operations are the quantum wave divider ( qwd ) operation and quantum wave combiner ( qwc ) operation .the qwd operation divides the wave function into many identical parts with reduced amplitudes .the physical picture of this division is simple and natural : a quantum system passing through a -slits with its wave function being divided into sub - waves .each of the sub - waves has the same internal wave function and are different only in the center of mass motion characterized by the slit .conversely , the combiner operation adds up all the sub - waves into one wave function .it should be noted that one divides the wave function of the same quantum system into many parts in quantum divider , whereas in quantum cloning one copies the state of one quantum system onto another quantum system (which may also holds true for classical systems ) .so , the division operation does not violate the non - cloning theorem .considering a quantum wave divider corresponding to a quantum system passing through a -slits . writing the direct sum of hilbert space as the form of where , .the divider structure characteristics which describes the properties of a quantum wave divider can be denoted as where each is a complex number with a module less than 1 and satisfy .the divider operator which maps is defined by where , .this is the most general form of the divider operator , and it describes a general multi - slits . in a special case ,the multi - slits are identical slits , then .the corresponding combiner operation can be defined as follows where denotes the combiner structure that describes the properties of a quantum wave combiner .each is a complex number that satisfy the module less than 1 , and . in the case of ,the combiner structure is uniform .now , we consider the uniform divider and combiner structures which correspond to and , respectively . in this case , the combined operations of divider and combiner leave the state unchanged .the process can be described as follows if the divider structure and combiner structure satisfy certain relation , this property also holds .the details will be given in the next section .it will be shown later in the next section that the divider and combiner structure and can be expressed by a column or a row of elements of a unitary matrix respectively . for dualityquantum gates with the form of in eq.([e3 ] ) , the relation of the two unitary matrices makes the structures of and adjoint of each other .the most general form of duality quantum gates has been given in refs . . for the convenience of readers, we use the expressions from duality quantum computing mode . 
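A toy numerical reading of the divider and combiner just defined is sketched below: a uniform structure on a 3-slit device, arbitrary single-qubit unitaries acting on the sub-waves, and a check that a divider followed immediately by a combiner leaves the state unchanged, as stated above. The gates and the state are invented for illustration.

```python
import numpy as np

# The divided state is stored as the list of sub-waves sitting at the d slits.
d = 3                                   # number of slits
p = np.full(d, 1 / np.sqrt(d))          # uniform divider structure
q = np.full(d, 1 / np.sqrt(d))          # uniform combiner structure

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # internal state of the duality computer

def divide(psi, p):
    """Quantum wave divider: one wave becomes d sub-waves with reduced amplitudes."""
    return [pi * psi for pi in p]

def combine(sub_waves, q):
    """Quantum wave combiner: collect the sub-waves at slit 0."""
    return sum(qi * wi for qi, wi in zip(q, sub_waves))

# duality parallelism: act with a different unitary on each sub-wave
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
unitaries = [I, X, Z]

out = combine([U @ w for U, w in zip(unitaries, divide(psi, p))], q)
print("generalized gate output:", out)        # here (1/d) * (I + X + Z) |psi>
print("divider then combiner  :", combine(divide(psi, p), q))  # returns |psi>
```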
compared to ordinary quantum computer where only unitary operators are allowed ,the duality quantum computer offers an additional capability in information processing : one can perform different gate operations on the sub - wave functions at different slits .this is called the duality parallelism , and it enables the duality quantum computer to perform non - unitary gate operations .the generalized quantum gate , or duality gate is defined as follows where is unitary and is a complex number and satisfies the duality quantum gate is called real duality gate or real generalized quantum gate when it is restricted to positive real . in this case, is denoted by , and they are constrained by the condition of . the real duality gate is denoted as .so , the real duality quantum gate can be rewritten as this corresponds to a physical picture of an asymmetric -slits , and is the probability that the duality computer system passes through the -th slit . according to the definition of duality quantum gates , they are generally non - unitary. it naturally provides the capability to perform non - unitary evolutions .for instance , dynamic evolutions in open quantum systems should be simulated in such machines .more interestingly , it is an important issue to study the computing capabilities of duality quantum computing .an important step toward this direction is that wang , du and dou proposed an theorem which limits what can not be a duality gate in a hilbert space with infinite degrees of freedom .the divider operation can be expressed by a general unitary operation and the combiner operation can be expressed by another general unitary operation .the two unitary operations are implemented on an auxiliary qudit which represents a -slits .the quantum circuit of duality quantum computer is shown in fig .[ f2 ] . denotes the initial state of duality quantum computer with as the controlling auxiliary qudit .the squares represent unitary operations and the circles represent the state of the controlling qudit , unitary operations , , are activated only when the qudit holds the respective values indicated in circles .[ f2],width=340 ] there are controlled operations between the operations and .the energy levels of the qudit represent the -slits .we divide the duality computing processing into four steps to reveal the computing theory in a quantum computer .step one : preparing the initial quantum system where is the initial state . then performing the divider operation by implement the on the auxiliary qudit , and this operation transforms the initial state into is the first column element of the unitary matrix representing the coefficient in each slit . 
denotes the divider structure .note that is a complex number with and satisfies .so is a generalized quantum division operation .step two : we perform the qudit controlled operations , , on the target state which leads to the following transformation this corresponds to the physical picture that implements unitary operations simultaneously on the sub - waves at different slits .step three : we combine the wave functions by performing the unitary operation .the following state is obtained , step four : detecting the final wave function when the qudit is in state by placing a detector at slit 0 as shown in fig .the wave function becomes it should be noted that is a generalized quantum combiner operation and which is the combiner structure in eq .( [ ecm ] ) .hence is the coefficient in the generalized duality gate defined in eq .( [ e1 ] ) .now , we have successfully realized the duality quantum computing in an ordinary quantum computer .considering a special case that , the coefficients satisfy where is defined in eq .( [ e3 ] ) corresponding to the real duality gate . generally speaking , is a complex number .the sum of s can be denoted as the value of is just an element of a unitary matrix , and naturally has the constraint .hence the most general form of duality gates allowable by the principles of quantum mechanics is the form of ( [ e1 ] ) in a recent study , zhang et al has proved that it is realizable and necessary to decompose a generalized quantum gate in terms of two unitary operators and in eq .( [ eqvw ] ) if and only if the coefficients satisfy obtaining the explicit form of the decomposition is a crucial step in duality quantum algorithm design and related studies .the mathematical theory of duality quantum computer has been the subject of many recent studies . herewe briefly review the mathematical description in duality quantum computing . in this case , the mathematical theory of the divider and combiner operations are restricted to a real structure , namely each is real and positive , and the uniform structure is also a special case of the combiner structure .the following results are from ref . and we label the corresponding lemma , theorem and corollaries by a letter g , and the corresponding operators are labeled with a subscript . hereare the properties of generalized quantum gates and related operators . defining the set of generalized quantum gates on as which turns out to be a convex set .then we have * theorem g 4 . 1 * the identity is an extreme point of , where is the identity operator on .any unitary operator is in and .the identity is an extreme point of which indicates if and only if for all .* corollary g 4 . 2 * the extreme points of are precisely the unitary operators in . we can conclude from * theorem g 4 . 1 * and * corollary g 4 . 
2 * that the ordinary quantum computer is included in the duality quantum computer .denoting by the set of bounded linear operators on and let be the positive cone generated by .that is * theorem g 4 .3 * if dim , then .this theorem shows us that the duality quantum computer is able to simulate any operator in a hilbert space if dim .it should be pointed out that these lemmas , corollary and theorems also hold for divider and combiner with a general complex structure .one limitation has been given explicitly by wang , du and dou that what can not be a generalized quantum gate when the dimension is infinite .it is an interesting direction to study the computing ability of duality quantum computer in terms of this theorem .simulating physics with quantum computers is the original motivation of richard feynman to propose the idea of quantum computer .benioff has constructed a microscopic quantum mechanical model of computers as represented by turing machines .quantum simulation is apparently unrealistic using classical computers , but quantum computers are naturally suited to this task .simulating the time evolution of quantum systems or the dynamics of quantum systems is a major potential application of quantum computers .quantum computers accelerate the integer factorization problem exponentially through the use of shor algorithm , and the unsorted database search problem in a square - root manner through the grover s algorithm ( see also the improved quantum search algorithms with certainty ) .quantum computers can simulate quantum systems exponentially fast .lloyd proposed the original approach to quantum simulation of time - independent local hamiltonians based on product formulas which attracted many attentions .however , in this formalism , high - order approximations lead to sharply increased algorithmic complexity , the performance of simulation algorithms based on product formulas is limited .for instance , the lie - trotter - suzuki formulas , which is high - order product formulas , yields a new efficient approach to approximate the time evolution using a product of unitary operations whose length scales exponentially with the order of the formula . 
in contrast , classical methods based on multi - product formulas require a sum of unitary operations only in polynomially scales to achieve the same accuracy .however , due to the unclosed property of unitary operations under addition , these classical methods can not be directly implemented on a quantum computer .the duality quantum computer can be used as a bridge to transform classical algorithms in to quantum computing algorithms .duality parallelism in the duality quantum computer enables us to perform the non - unitary operations .moreover , duality quantum gate has the form .this is the linear combinations of unitary operations .duality quantum computer is naturally suitable for the simulation algorithms of hamiltonians based on multi - product formulas .childs and wiebe proposed a new approach to simulate hamiltonian dynamics based on implementing linear combinations of unitary operations .the resulting algorithm has superior performance to existing simulation algorithms based on product formulas and is optimal among a large class of methods .their main results are as follows [ thm : mainresult ] let the system hamiltonian be where each is hermitian and satisfies for a given constant .then the hamiltonian evolution can be simulated on a quantum computer with failure probability and error at most as a product of linear combinations of unitary operators . in the limit of large , this simulation uses elementary operations and exponentials of the . considering this simulation algorithmis based on implementing linear combinations of unitary operations , it can be implemented by duality quantum computer .now , we give the duality quantum computer description of this simulation algorithm . the evolution operator satisfies the schrdinger equation [ eq : schrodinger ] and time evolution operator can be formally expressed as .the lie trotter suzuki formulas approximate time evolution operator for as a product of the form these formulas can be defined for any integer by )\left(s_{\chi-1}(s_{\chi-1}t)\right)^2\label{eq:12},\end{aligned}\ ] ] where for any integer .this choice of is made to ensure that the taylor series of matches that of to . with the values of large enough and the values of small enough, the approximation of can reach arbitrary accuracy .childs et al have simulated using iterations of for some sufficiently large : where represent distinct natural numbers and . in ,the and are defined as and the formula is accurate to order , namely , the basic idea of this simulation algorithm is that dividing evolution time into segments and approximating each time evolution operator segment by a sum of multi - product formula , namely , now , we give a duality quantum computer description of the implementation of this simulation algorithm of time evolution .the quantum circuit is the same as fig .[ f2 ] . 
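As a sanity check of the circuit of fig. [f2], the following numpy sketch simulates its two-slit version: an ancilla qubit is rotated by V, controlled unitaries implement the duality parallelism, V-dagger recombines, and post-selecting the ancilla on |0> leaves the system in a state proportional to the weighted sum of the unitaries applied to |psi>. The weights and gates below are arbitrary choices for illustration, not those of the Childs-Wiebe decomposition.

```python
import numpy as np

# d = 2 "slits": ancilla qubit rotated by V, controlled-U_i, then V^dagger,
# post-selected on ancilla |0>.  The surviving system state is proportional
# to c0*U0|psi> + c1*U1|psi>, i.e. a duality quantum gate.

c = np.array([0.7, 0.3])                           # assumed positive weights, sum to 1
U = [np.eye(2, dtype=complex),                     # U_0 = identity
     np.array([[0, 1], [1, 0]], dtype=complex)]    # U_1 = Pauli X

# V: any unitary whose first column is sqrt(c_i); here a real rotation.
theta = np.arccos(np.sqrt(c[0]))
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

psi = np.array([1.0, 0.0], dtype=complex)          # system input state

state = np.kron(np.array([1.0, 0.0]), psi)         # ancilla starts in |0>
state = np.kron(V, np.eye(2)) @ state              # divider
ctrl_U = np.zeros((4, 4), dtype=complex)
for i in range(2):
    P = np.zeros((2, 2)); P[i, i] = 1
    ctrl_U += np.kron(P, U[i])                     # duality parallelism
state = ctrl_U @ state
state = np.kron(V.conj().T, np.eye(2)) @ state     # combiner
post = state[:2]                                   # ancilla found in |0>

print("post-selected (unnormalized):", post)
print("c0*U0|psi> + c1*U1|psi>     :", c[0] * U[0] @ psi + c[1] * U[1] @ psi)
```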
according to, is an unitary operation .let and , can be rewritten as where is an unitary operation .it is obvious that is a duality quantum gate .the qwd is simulated by the unitary operation and the qwc is simulates by unitary operation on a qudit .the auxiliary qudit controlled operations is .the matrix element of the unitary matrix and the matrix element of the unitary matrix satisfy : as defined in , is the product of two unitary matrix elements : the sum of s is in the special case , the simulation algorithm has the maximum success probability .the expression of and can be simplified into the form : after implementing the qwd operation , the auxiliary qudit controlled operations and the qwc operation , detecting the final wave function when the auxiliary qudit is in state .the initial state has been transformed into the approximated evolution operator is implemented successfully by the duality quantum computer .implementing segments of , we can get the approximation of by .thus , this algorithm is clearly realized by the duality quantum computer in straightforward way .the essential idea of this algorithm is an iterated approximation , with each controlled adding an additional high order approximation to the evolution operator .berry and childs provided a quantum algorithm for simulating hamiltonian dynamics by approximating the truncated taylor series of the evolution operator on a quantum computer .this method is based on linear combinations of unitary operations and it can simulate the time evolution of a class of physical systems .the performance of this algorithm has exponential improvement over previous approaches in precision .hamiltonian can be decomposed into a linear combinations of unitary operations : dividing the evolution time into segments of length .the time evolution operator of each segment can be approximated as where the taylor series is accurate to order . substituting the hamiltonian in terms of a sum of into, we can rewrite the truncated taylor series as for convenience , we can set each .considering is an unitary operation , we can conclude that the approximation is a linear combinations of unitary operations .the expression has a quantum duality gate form .the truncated taylor series index can be defined as then , the expression of can be simplified as where \alpha_{\ell_1 } \cdots\alpha_{\ell_k}$ ] and .it should be noted that is not normalized .we define the normalization constant as .according to , is a quantum duality gate .we let , , then it comes back to the duality quantum gate form in , where . 
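to make the divider / combiner mechanism above more tangible, here is a small numpy sketch (a hedged illustration of our own, not code from the cited papers) in which an ancilla prepared with amplitudes sqrt(a_i / s), a block of ancilla-controlled unitaries, the inverse preparation and postselection of the ancilla on its initial state together apply (1/s) sum_i a_i U_i to the system ; the unitaries U_i and the coefficients a_i below are arbitrary placeholders .

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(dim):
    # Haar-like random unitary from the QR decomposition of a complex Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

dim, m = 2, 3                         # system dimension and number of unitaries
unitaries = [random_unitary(dim) for _ in range(m)]
a = np.array([0.5, 0.3, 0.2])         # positive combination coefficients
s = a.sum()

v = np.sqrt(a / s)                    # "wave divider": ancilla amplitudes sqrt(a_i / s)

# ancilla-controlled unitaries: block-diagonal sum_i |i><i| (x) U_i
controlled = np.zeros((m * dim, m * dim), dtype=complex)
for i, u in enumerate(unitaries):
    controlled[i * dim:(i + 1) * dim, i * dim:(i + 1) * dim] = u

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

joint = controlled @ np.kron(v, psi)      # divider, then the controlled gates
out = np.zeros(dim, dtype=complex)
for i in range(m):                        # "wave combiner" plus projection of the ancilla onto |0>
    out += v[i] * joint[i * dim:(i + 1) * dim]

target = sum(ai * u for ai, u in zip(a, unitaries)) @ psi / s
print(np.allclose(out, target))           # True: this branch carries (1/s) sum_i a_i U_i |psi>
print("success probability of the postselected branch:", np.linalg.norm(out) ** 2)
```

the squared norm of the (unnormalized) postselected output is the probability that the detection step succeeds, which is why the efficiency of such constructions depends on the size of s = sum_i a_i .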
to give the duality quantum computer description ,we need to realize the following processing is the initial state of duality quantum computer and there are of auxiliary controlling qubit and an level auxiliary qudit auxiliary controlling qudit .unitary operations are activated only when the qubit and qudit holds the respective values indicated in circles .the each unitary operation is composed of .[ f3],width=340 ] in fig .[ f3 ] , we give an illustration for our method to perform the algorithm in the form of quantum circuit .the unitary operation corresponds to the decomposing form of hamiltonians : and the quantum circuit in fig .[ f3 ] implements .the implementation of operation need an level auxiliary qudit and auxiliary qubits which correspond to implementation of two qwd operations and qwc operations .actually , the equation of indicates that we need summarize twice to realize the right side of this equation .we express the initial state as firstly , we transform the part of the initial state into the normalized state using the qwd operation .we have we let and define the qwd as , which can be expressed as a matrix .the elements of the matrix satisfy where after implementing the first unitary operations in the part of the initial state , we can get normalized state . secondly , using the qwd operation once again to transform the part of initial state into the normalized state .we let and define the second qwd operation as , which can be expressed as a matrix .the elements of the matrix satisfy where after implementing the second unitary operations on part of initial state , we can get normalized state .we perform the level auxiliary qudit and auxiliary qubits controlled operation on the computer .the processing can be described as corresponding to the qwd operations and respectively .we set , and perform the the qwc operations on the state and , respectively .after qwc operations , we detect the final wave function when the auxiliary system is in state . in the final state, we only focus our attention on the terms with the level auxiliary qudit in state and auxiliary qubits in state .we have the following it should be noted that the summation parts and already have been combined with .the initial state is transformed into latexmath:[\[\begin{aligned } where corresponds to some and .it is obvious that and .consequently , we have successfully realized the following process : latexmath:[\[\begin{aligned } finally , the robust form of obvious amplitude amplification procedure of enables us to deterministically implement through amplifying the amplitude of .the approximation accuracy of can be quantified by approximation error . according to the chernoff bound , as studied in ,the query complexity is and the error of approximation in each segments satisfy : the total number of gates in the simulation for time in each segment is in the duality quantum computer description , our method gives a slight improvement than , which uses gates . 
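as a numerical companion to the truncated-taylor-series construction just described (again a sketch of our own, with a toy hamiltonian chosen purely for illustration rather than one of the systems treated in the cited work), the snippet below writes a two-qubit hamiltonian as a positive combination of pauli unitaries and checks how fast the truncation error falls with the order of the series for one short segment .

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Pauli matrices used as the unitary building blocks of the decomposition
i2 = np.eye(2, dtype=complex)
x = np.array([[0, 1], [1, 0]], dtype=complex)
z = np.array([[1, 0], [0, -1]], dtype=complex)

# toy two-qubit Hamiltonian written as a positive combination of unitaries
terms = [np.kron(z, z), np.kron(x, i2), np.kron(i2, x)]
alphas = [1.0, 0.5, 0.5]
h = sum(al * term for al, term in zip(alphas, terms))

t = 0.2                                # one short evolution segment
exact = expm(-1j * h * t)

for k_max in range(1, 7):
    approx = sum((-1j * t) ** k * np.linalg.matrix_power(h, k) / factorial(k)
                 for k in range(k_max + 1))
    err = np.linalg.norm(approx - exact, 2)
    print(f"truncation order K = {k_max}   error = {err:.2e}")
```

each power of the hamiltonian expands into products of the unitary terms, so the truncated series is itself a linear combination of unitaries and therefore has exactly the generalized-quantum-gate form that the qwd / qwc construction implements ; the rapid (factorial) decay of the error with the truncation order is the source of the exponential improvement in precision noted above .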
thus , we have given a standard program in the duality quantum computer to realize the simulation methods of hamiltonians based on linear combinations of unitary operations .the physical picture of our description of the algorithm is clear and simple : each of qwd and qwc operations will lead to one summation of linear combinations of unitary operations .our method is intuitive and can be easily performed based on the form of time evolution operator .in the present paper , we have briefly reviewed the duality quantum computer .quantum wave can be divided and recombined by the qwd and qwc operations in a duality quantum computer .the divider and combiner operations are two crucial elements of operations in duality quantum computing and they are realized in a quantum computer by unitary operators . between the dividing and combining operations ,different computing gate operations can be performed at the different sub - wave paths which is called the duality parallelism .it enables us to perform linear combinations of unitary operations in the computation , which is called the duality quantum gates or the generalized quantum gates .the duality parallelism may exceed quantum parallelism in quantum computer in the precision aspect .the duality quantum computer can be perfectly simulated by an ordinary quantum computer with -qubit and an additional qudit , where a qudit labels the slits of the duality quantum computer .it has been shown that the duality quantum computer is able to simulate any linear bounded operator in a hilbert space , and unitary operators are just the extreme points of the set of generalized quantum gates .simulating the time evolution of quantum systems or the dynamics of quantum systems is a major potential application of quantum computers .the property of duality parallelism enables duality quantum computer to simulate the dynamics of quantum systems using linear combinations of unitary operations .it is naturally suitable to realize the simulation algorithms of hamiltonians based on multi - product formulas which are usually adopted in classical algorithms .the duality quantum computer can be used as a bridge to transform classical algorithms into quantum computing algorithms. we have realized both childs - wiebe algorithm and berry - childs simulation algorithms in a duality quantum computer .we showed that their algorithm can be described straightforwardly in a duality quantum computer .our method is simple and has a clearly physical picture .consequently , it can be more easily realized in experiment .this work was supported by the national natural science foundation of china grant nos .( 11175094 , 91221205 ) , the national basic research program of china ( 2011cb9216002 ) , the specialized research fund for the doctoral program of education ministry of china .brandt , h.e .: aspects of the riemannian geometry of quantum computation .b. * 26 * , 1243004 ( 2012 ) long , g.l .: general quantum interference principle and duality computer .. phys . * 45 * , 825 - 844 ( 2006 ) ; also see arxiv : quant - ph/0512120 . it was briefly mentioned in an abstract ( 5111 - 53 ) ( tracking no .fn03-fn02 - 32 ) submitted to spie conference `` fluctuations and noise in photonics and quantum optics '' in 18 oct 2002 .gudder , s. : mathematical theory of duality quantum computers .quantum inf . process .* 6 * , 37 - 48 ( 2007 ) long , g.l .: mathematical theory of the duality computer in the density matrix formalism .inf . process .* 6*(1 ) , 49 - 54 ( 2007 ) zou , x.f . 
, qiu , d.w ., wu , l.h . , li , l.j . ,: on mathematical theory of the duality computers .quantum .inf . process .* 8 * , 37 - 50 ( 2009 ) cui , j.x . ,zhou , t. , long , g.l . :density matrix formalism of duality quantum computer and the solution of zero - wave - function paradox .quantum . inf . process .* 11 * , 317 - 323 ( 2012 ) long , g.l . :duality quantum computing and duality quantum information processing . int .50 * , 1305 - 1318 ( 2011 ) du , h.k . , wang , y.q ., xu , j.l .: applications of the generalized lders theorem .phys . * 49 * , 013507 ( 2008 ) zhang , y. , cao , h.x . , li , l. : realization of allowable qeneralized quantum gates .sci china - phys mech astron * 53 * , 1878 - 1883 ( 2010 ) long , g.l . ,liu , y. : duality quantum computing . front . comput .* 2 * , 167 - 178 ( 2008 ) long , g.l . , liu , y. : general principle of quantum interference and the duality quantum computer .* 28 * , 410 - 431 ( 2008)(in chinese ) li , c.y . , li , j.l . :general : allowable generalized quantum gates using nonlinear quantum optics .53 * , 75 - 77 ( 2010 ) liu , y. , zhang , w.h , zhang , c.l ., long , g.l . :quantum computation with nonlinear optics .. phys . * 49 * , 107 - 110 ( 2008 ) chen , z.l , cao , h.x . : a note on the extreme points of positive quantum operations .48 * , 1669 - 1671 ( 2010 ) hao , l. , liu , d. , long , g.l .: an n/4 fixed - point duality quantum search algorithm .sci china - phys mech astron * 53 * , 1765 - 1768 ( 2010 ) liu , y. : deleting a marked state in quantum database in a duality computing mode .sci . bull . * 58 * , 2927 - 2931 ( 2013 ) hao , l. , long , g.l .: experimental implementation of a fixed - point duality quantum search algorithm in the nuclear magnetic resonance quantum system .sci china - phys mech astron .* 54 * , 936 - 941 ( 2011 ) .feynman , r.p.:simulating physics with computers .* 21 * , 467 ( 1982 ) benioff , p. : the computer as a physical system : a microscopic quantum mechanical hamiltonian model of computers as represented by turing machines .* 22 * , 563 - 591 ( 1980 ) .shor , p.w . : polynomial - time algorithms for prime factorization and discrete logarithms on a quantum computer .siam j. comput .* 26 * , 1484 - 1509 ( 1997 ) lu , y. , feng , g.r , li , y.s , long , g.l . : experimental digital quantum simulation of temporal spatial dynamics of interacting fermion system . sci . bull . * 60 * , 241 - 248 ( 2015 ) sornborger , a.t .: quantum simulation of tunneling in small systems .* 2 * , 597 ( 2012 ) .childs , a.m. , cleve , r. , deotto , e. , farhi , e. , gutmann , s. , spielman , d.a . :exponential algorithmic speedup by quantum walk . in proceedings of the 35th acm symposium on theory of computing , pp.59 - 68 ( 2003 ) aharonov , d. , ta - shma , a. : adiabatic quantum state generation and statistical zero knowledge . in proceedings of the 35th acm symposium on theory of computing , pp .20 - 29 ( 2003 ) feng , g.r , xu , g.f , long , g.l . : experimental realization of nonadiabatic holonomic quantum computation .lett . * 110 * , 190501 ( 2013 ) .suzuki , m. : general theory of fractal path integrals with applications to many - body theories and statistical physics . j. math .phys . * 32 * , 400 ( 1991 ) blanes , s. , casas , f. , ros , j. : extrapolation of symplectic integrators.celest .* 75 * , 149 ( 1999 ) wiebe , n. , kliuchnikov , v.:floating point representations in quantum circuit synthesis .new j. phys . * 15 * , 093041 ( 2013 ) berry , d.w . ,childs , a.m. , cleve , r. , kothari , r. , somma , r.d . 
: in proceedings of the 46th annual acm symposium on theory of computing ( acm press , new york , 2014 ) , pp . 283 - 292
|
in this paper , we first briefly review the duality quantum computer . notably , the generalized quantum gates , the basic evolution operators in a duality quantum computer , are no longer unitary , and they can be expressed in terms of linear combinations of unitary operators . all linear bounded operators can be realized in a duality quantum computer , and unitary operators are just the extreme points of the set of generalized quantum gates . a d - slit duality quantum computer can be realized in an ordinary quantum computer with an additional qudit using the duality quantum computing mode . the duality quantum computer provides flexibility and a clear physical picture in designing quantum algorithms , serving as a useful bridge between quantum and classical algorithms . in this review , we will show that the duality quantum computer can simulate quantum systems more efficiently than ordinary quantum computers by providing descriptions of the recent efficient quantum simulation algorithms of childs et al . [ quantum information & computation , 12(11 - 12 ) : 901 - 924 ( 2012 ) ] for the fast simulation of quantum systems with a sparse hamiltonian , and the quantum simulation algorithm by berry et al . [ phys . rev . lett . * 114 * , 090502 ( 2015 ) ] , which provides an exponential improvement in precision for simulating systems with a sparse hamiltonian .
|
in mark hoffman s words , post - structuralist ir asks `` how '' questions , rather than `` what '' or `` why '' questions .those `` how '' questions include : `` how are structures and practices replicated ?how is meaning fixed , questioned , reinterpreted , and refixed ? '' to answer these `` how '' questions , post - structuralist ir rejects `` the modernist belief in our ability to rationally perceive and theorize the world '' in favor of `` dis - belief in unproblematic notions of modernity , enlightenment , truth , science , and reason . ''this move leads post - structuralists , methodologically , to a sort of scholarship which `` does not look for a continuous history , but for discontinuity and forgotten meanings ; it does not look for an origin , indeed , it is assumed one can not be found ; and it does not , finally , focus on the ` object of geneaology ' itself , but on the conditions , discourses , and interpretations surrounding it . ''this scholarship pays particular attention to discourses in global politics , where ( actual and conceptual ) `` boundaries are constantly being redrawn and transgressed . ''we argue that this _ methodological _ approach can be matched fairly easily with the capacities of the _ methods _ of geometric and computational topology .if critical ir rejects the objective and the linear , and with them their dichotomized frame of reference , topological analysis can accommodate significantly more complexity .if critical ir tries to unmask and deconstruct hidden meanings , there could be some power in representing those meanings geometrically .if critical ir embraces undecidability , is at home with liminality , is wary of metanarratives , and is attached to the `` how '' questions discussed above , the instability , creativity , and formalism of theoretical geometry may be a good fit .in fact , this is not the first time a critical theorist has suggested that there might be some benefit to thinking geometrically about the concepts being explored and critiqued .deleuze and guattari see concepts as rhizomes , biological entities endowed with unique properties .they see concepts as spatially representable , where the representation contains `` principles of connection and heterogeneity : any point of a rhizome must be connected to any other . ''deleuze and guattari list the possible benefits of spatial representation of concepts , including the ability to represent complex multiplicity , the potential to free a concept from foundationalism , and the ability to show both breadth and depth . 
in this view , geometric interpretations move away from the insidious understanding of the world in terms of dualisms , dichotomies , and lines , to understand conceptual relations in terms of space and shapes .the _ ontology _ of concepts is thus , in their view , appropriately geometric a multiplicity `` defined not by its elements , nor by a center of unification and comprehension '' and instead measured by its dimensionality and its heterogeneity .the conceptual multiplicity , `` is already composed of heterogeneous terms in symbiosis , and is continually transforming itself '' such that it is possible to follow , and map , not only the relationships between ideas but how they change over time .in fact , the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference .they outline the unique contribution of geometric models to the understanding of contingent structure : principle of cartography and decalcomania : a rhizome is not amenable to any structural or generative model .it is a stranger to any idea of genetic axis or deep structure .a genetic axis is like an objective pivotal unity upon which successive stages are organized ; deep structure is more like a base sequence that can be broken down into immediate constituents , while the unity of the product passes into another , transformational and subjective , dimension . "( deleuze and guattari 1987 , 12 ) the word that deleuze and guattari use for ` multiplicities ' can also be translated to the topological term ` manifold . 'this is how we propose looking at concepts : as manifolds .with such a dimensional understanding of concept - formation , it is possible to deal with complex interactions of like entities , and interactions of unlike entities .critical theorists have emphasized the importance of such complexity in representation a number of times , speaking about it in terms compatible with mathematical methods if not mathematically .for example , michel foucault s declaration that practicing criticism is a matter of making facile gestures difficult both reflects and is reflected in many critical theorists projects of revealing the complexity in ( apparently simple ) concepts deployed both in global politics and ir scholarship .david campbell s reading of the state in writing security is a good example of this : campbell makes the argument that the notion of the state appears to be both simple and _ a priori _ , it is really danger built over other danger where the constant articulation of danger through foreign policy is thus not a threat to a state s identity or existence : it is its condition of possibility .this leads to a shift in the concept of danger as well , where danger is not an objective condition but an effect of interpretation . critical thinking about how - possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses , sending a wave of added complexity through other concepts as well .this work _ seeking complexity_ serves one of the major underlying functions of critical theorizing : finding invisible injustices in ( modernist , linear , structuralist ) givens in the operation and analysis of global politics . in a geometric sense , this complexity could be thought about as multidimensional mapping . 
in theoretical geometry, the process of mapping conceptual spaces is not primarily empirical but for the purpose of representing and reading the relationships between information , including identification , similarity , differentiation , and distance .the reason for defining topological spaces in math , the essence of the definition , is that there is no absolute scale for describing the distance or relation between certain points , yet it makes sense to say that an ( infinite ) sequence of points approaches some other ( but again , no way to describe how quickly or from what direction one might be approaching ) .this seemingly weak relationship , which is defined purely locally , i.e. , in a small locale around each point , is often surprisingly powerful : using only the relationship of approaching parts , one can distinguish between , say , a balloon , a sheet of paper , a circle , and a dot . to each delineated concept , one should distinguish and associate a topological space , in a ( necessarily ) non - explicit yet definite manner .whenever one has a relationship between concepts ( here we think of the primary relationship as being that of constitution , but not restrictively , we ` specify ' a function ( or inclusion , or relation ) between the topological spaces associated to the concepts ) . in these terms , a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented . such an approach can be leveraged for thinking about conceptual components , dimensionality , and structure . in these terms, dimensions can be thought of as properties or qualities , each with their own ( often - multidimensional ) properties or qualities .a key goal of the modeling of conceptual space being representation means that a key ( mathematical and theoretical ) goal of concept space mapping is associationism , where associations between different kinds of information elements carry the main burden of representation . " to this end , objects in conceptual space are represented by points , in each domain , that characterize their dimensional values ." these dimensional values can be arranged in relation to each other , as gardenfors explains that distances represent degrees of similarity between objects represented in space " and therefore conceptual spaces are suitable for representing different kinds of similarity relation . "these similarity relationships can be explored across ideas of a concept and across contexts , but also over time , since with the aid of a topological structure , we can speak about continuity , e.g. , a _ continuous change _ " a possibility which can be found _ only _ in treating concepts as topological structures and not in linguistic descriptions or set theoretic representations .such an approach is both complex and _ anexact _ suiting it well for the contingent explorations of critical ir . 
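a minimal sketch of this similarity-as-distance reading of conceptual spaces is given below (with entirely made-up feature values, used only to show the mechanics) : each "concept" is a point whose coordinates are scores on a handful of hypothetical quality dimensions, and nearness in the resulting space stands for similarity .

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# hypothetical placeholder scores: four invented quality dimensions per concept
labels = ["concept_a", "concept_b", "concept_c", "concept_d"]
points = np.array([
    [0.9, 0.1, 0.4, 0.7],
    [0.8, 0.2, 0.5, 0.6],
    [0.1, 0.9, 0.9, 0.2],
    [0.2, 0.8, 0.7, 0.3],
])

# pairwise Euclidean distances: a small distance is read as a high degree of similarity
dist = squareform(pdist(points))
for i, name in enumerate(labels):
    order = np.argsort(dist[i])
    print(f"{name} is most similar to {labels[order[1]]} (distance {dist[i, order[1]]:.2f})")
```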
_a formalization of concept relationships _ the first step might be to gain information about the ( actual , representational , or potential ) relationship between a concept being examined and another concept that contributes something to the essence of how it is understood .assume a complex concept k composed of ( but not necessarily limited to ) component parts .the concept can be explored as a simplicial homology , where an _ abstract simplicial complex _ is specified by the following data : * a vertex set ; * a rule specifying when a -simplex ] is such a simplex , we define to be the element of given by the formula ,\ ] ] where ] , ] , ] , ] , ] , ] , and for , - [ v_0 v_1 v_3 ] + [ v_0 v_2 v_3 ] - [ v_1 v_2 v_3] ] corresponds to a class that appears at and dies at .classes that live to are usually represented by the infinite interval to indicate that such classes are real features of the full complex . as an example , consider the tetrahedron with filtration defined by , , ] , ] belongs to if and only if there exists a data point such that in this case the point is called a _ witness _ for .3 . the -simplex ] . in this way, analysis of persistent homologies can be used to show both similarities among concepts _ and _ similarities across components of concepts in particular cases to which those concepts and their components are applied . for critical ir ,mapping conceptual spaces like this can provide a framework for representation " that demonstrates relationships among concepts without reaching necessary or essential conclusions about their genesis or origin ( gardenfors 1996 ) . analyzingthe geometric complexity of concepts could lead to the ability to gain leverage on how understanding conceptual dimensionality could make current research better , more interesting , or deeper , either on its own terms or in terms of understanding complexity , hybridity , marginalization , social disadvantages , or other areas of global politics of interest to critical theory .scholars in comparative politics and ir have long been interested in the nature and existence of democracy in the global political arena .scholars of comparative politics investigate the structure and function of democratic institutions in states that they see as transitioning to democracy and in states they see as developing or mature democracies .scholars of ir look to understand the ways in which states regime types may affect their foreign policy propensities , including but not limited to trade patterns , likelihood to be involved in conflict , and conflict opponents .scholars in comparative politics and international relations make a number of common assumptions about democracy that permeate a significant amount of the research which is read , cited , and engaged in their respective fields .each assumes that democracy is an extant and practiced form of government .each assumes that such a form of government is _ measurably _ different than other forms of government , which may vary but share the label non - democratic .though not all scholars make the assumption that democracy is a more desirable form of government than autocracy , oligarchy , theocracy , or other possible forms of government , many if not most of them do .that said , while scholars interested in either the internal or external politics of either democracies or democratizing states agree on the existence and distinguishability of democracy , many of them disagree on the components of democracy which are distinguishable , or on the specific places and 
times in which democracy exists . in other words , while the idea of democracy is common among scholars interested in comparative or international politics , many of them disagree on what makes a democracy , which countries are democracies , which components of democracy are measurable , and which measurable components of democracy are most central to the concept of democracy .critical theorists have been interested in the concept of democracy in the international arena in a number of different ways for an extended period of time .of course , neither the breadth nor theoretical depth of critical analyses of democracy ( even in ir ) can be done justice in this small section of a chapter .that said , some of the common critiques ( and resultant how - possible questions ) can be explored briefly to give a sense of what the stake(s ) in the concept of democracy may be for ( different ) critical ir researchers .critical theorists have been concerned with the meaning of the concept of democracy , about the structure and normative value of the signification of the concept , and about the potential to revise and reappropriate the concept in search of a more just global political arena .some critical theorists have been interested in the way that democracy as a concept signifies the success of ` the west ' and distinguishes that from an othered rest of the world . as amitav acharya and barry buzan note , the contemporary equivalent of ` good life ' in international relations democratic peace , interdependence and integration , and institutionalized orderliness , as well as the ` normal relationships and calculable results ' is mostly found in the west , while the non - west remains the realm of survival . "accordingly acharya and buzan characterize democracy as a western idea . " as fabrizio eva notes , these origins hold in contemporary global politics , and the model for democracy nevertheless remains the western version , which is tied in with capitalist economics .there is an acceptance , therefore , of the central ideology of liberal democracy . "many critical theorists concern with the western - centric nature of deployed concepts of democracy in the international arena causes them ( especially those interested in the question from a postcolonial perspective ) to be critical of the use of the idea of democracy _ writ large _ in global politics or the analysis thereof .for example , inayatullah and blaney argue that democracy is less a form of governance than a value that must be moderated , a set of practices to be disciplined by some prior claim to authority . "another reading of this analysis might suggest that democracy is not _ a thing _ out there to be analyzed , achieved , or deconstructed , but instead _ a signifier _ by which participants in the global political arena are organized hierarchically .roxanne doty sees this both in the policy arena and in the academic study of democracy .doty explains that work in ir interested in democracy often presumes that some subjects were definers , delimiters , and boundary setters of important and that others not capable themselves of making such definitions , would have things bestowed upon them and would be permitted to enjoy them only under the circumstances deemed suitable by the united states . 
"this is why tanji and lawson argue that the ` answer ' to the question of what constitutes ` true democracy ' is implicit in the model of democracy assumed by the thesis [ which is ] authoritatively assumed in advance , posited as an unassailable universal , and deployed as the foundation of the moral high ground in the global sphere . "these readings suggest a critical stance not only towards current concepts of and wieldings of democracy , but also towards the use of the word and idea in general .this work in critical ir , then , is less concerned with reviving and rehabilitating the notion of democracy in global politics and more concerned with remedying the exclusions and silences produced by its current significations .still , even the most skeptical of critical theorists pay attention to the multiple significations of the term and idea of democracy in global politics .doty draws on laclau s use of the signifier of democracy , explaining that it acquires particular meanings when it is associated with other signifiers , " that is , that anticommunist democracy and antifascist democracy are signified differently , even if each is an attempt to constitute a hegemonic formation . "internally linked significations include democracy s perceived opponents and its perceived components and benefits .for example , doty suggests that american masculinity is a key component and tie - in to 19th century notions of democracy in global politics , where american manhood was also linked to democracy " and this link served to construct a distinctly american version of masculinity that was part and parcel of american exceptionalism . " in other words , democracy and masculinity were co - constituted in a particular instantiation of democracy in global politics .more recently , zalewski and runyan suggest that the signification of democracy has come to be tied to how states treat their women , where gender quotas have been enacted by a range of states as a sign of democracy and a method for reducing government corruption . "richard ashley suggests that democracy can also be conceptually linked with its goals , using the democratic peace as an example .ashley explains that the academically certified version of the democratic peace has led to a securitization of democracy " which is deeply problematic .andrew linklater , on the other hand , suggests that it is the tie to western liberalism that can be most insidious for the concept of democracy , and advocates for theorizing democracy without assuming that western liberal democracy is the model of government which should apply universally . " while many critical theorists agree that , in a variety of ways , the notion of democracy that is deployed in contemporary global politics and in contemporary ir research is both empirically and normatively problematic , they disagree strongly on how to handle it .some suggest that the concept of democracy is now itself part of the problem ( perhaps what baudrillard in _ the mirror of production _ would call a repressive simulation ) , while others are interested in reviving a different understanding of democracy . 
for example, bieler and morton suggest that the problem is not the concept of democracy itself , but the hollowing of that concept .they argue that there is a politics of supremacy that has come to replace democracy which involves a hollowing out of democracy and the affirmation , in matters of political economy , of a set of macro - economic policies such as market efficiency , discipline and confidence , policy credibility and competitiveness . "some critical ir theorists , then , look to rescue the concept of democracy from that hollowness .for example , ken booth ( 2007 : 55 , citing murphy 2001 : 67 ) suggests that what unifies critical theory in ir in addition to its post - marxist sensibility , is democracy .craig murphy got it exactly right when he saw this emerging critical theory project being ` today s manifestation of a long - standing democratic impulse in the academic study of international affairs . ' in other words , it was academe s contribution to ` egalitarian practice . ' " to follow up , booth goes over a number of different types of possible democracy , arguing that finding a good notion of democracy is key to the emancipatory mission he attributes to critical ir .booth explains that there will be no emancipatory community without dialogue , no dialogue without democracy . " in emancipatory critical theorists terms , though , this is a different type of democracy , one that begins with greater recognition , representation , and access within existing institutions and demands new mechanisms for popular control of local , global , and security issues . "following william connolly , richard shapcott describes the democracy favored by critical ir as a democratic ethos " which is an ethos of pluralisation " focused on creating room for difference .this leads shapcott to express interest in an attempt to provide an account of democracy that does not privilege the ` abstract ' other and a universal subjectivity or the territorial restrictions of the nation - state . " while the dividing line is not perfect , it might be worth thinking about these differences in terms of poststructuralist and emancipatory critical theory . both argue that there are problematic significations of current deployments of the notion of democracy in global politics .the latter is interested in reviving a more just concept of democracy , where the former is more interested in mapping the injustice that may well be inherent in the utterance and reification of the concept .what both share , in addition to critiquing current instantiations of the concept , is an interest in how democracy is being constituted , read , reproduced , and reified as a concept , both among states in global politics and among scholars of global politics interested in understanding state ( and non - state ) interaction .it is possible , then , to find a number of how - possible questions in critical ir analyses of democracy .what are the conditions of possibility of current understandings of what constitutes democracy ?how are the indicators of democracy that are recognized by various scholars and policy makers chosen to the exclusion of those which are not recognized ?what are the relationships between various ( recognized and unrecognized ) indicators of democracy? 
how is the concept of democracy deployed ( and deployable ) for ( and against ) certain political interests ?what if anything is it about the idea of democracy that allows for hollowing , encroachment , supremacy , and/or western dominance , if such moves happen ?how are relationships between the concept of democracy and its antagonists , its component parts , and/or its results formed and cemented ? what is possible ( or impossible ) with particular conceptions of what democracy is that could change with the change ( or even elimination ) of the idea ? certainly, answers to these how - possible questions can not be supplied easily with extant research , much less in the scope of this chapter .that said , what the how - possible questions listed above share is an interest in _ how democracy is being read _ across a variety of audiences in a variety of different ways .the remainder of this chapter suggests the plausibility and particular advantages of the formalization of concept relationships for gaining leverage on different questions about how democracy is being read .we collected more than 100 indicators used to measure democracy over eight datasets in order to gain interpretive leverage over what political scientists tend to think democracy is , how they tend to measure it , and how countries come to be classified as democracies and non - democracies . [2010 ] 426 - 449 ) . ] for the purposes of our pilot analysis , we used 14 of the variables for a particular year to map country - data - points and look for commonalities . the variables that we used were from the polity iv and miller - boix - rosato datasets . from the polityiv dataset , we used chief executive recruitment regulation , competitiveness of executive recruitment , openness of executive recruitment , executive constraints , participation regulation , participation competitiveness , as well as concept indicators for executive recruitment , executive constraint , and political competition .we used these executive - specific , individual level variables next to the miller - boix - rosato macro - level variables about democratic status and change over time , including a dichotomous measure of democracy , a measure of sovereignty , a measure of democratic transition , the previous number of democratic breakdowns , and the duration of democracy in the state ( consecutive years of a particular regime type ) .geometrically , there are a number of countries that represent the same point that is that their values on all 14 included indicators are the same . for the purposes of differing interpretations of what democracy is and the indicators of democracy , then , the countries that represent the same data point are flat : they represent the same configuration of indicators , definitions , variables , and conclusions for the purposes of understanding the dimensionality of the concept of democracy . using the 14 indicators that we selected in the test - year of 2007 ,we find 88 unique data points that is , 88 different configurations of the 14 variables .we then looked to analyze the relationships between those data points geometrically . using the javaplex package in matlab, we computed the persistent homology of the rips complex for the 88 data points .the topology of that rips complex shows a number of distinct features .first , there are 10 connected components , represented by the u.s . 
,cuba , the dominican republic , equatorial guinea , swaziland , ivory coast , mauritania , togo , tanzania , and guinea .a connected component is a maximal subset of a space that can not be covered by the union of two disjoint open sets in other words , it is a distinct and distinguishable group .eight of those connected components are contractible that means that they have stronger relationships than non - contractible connected components would . the barcodes for the relationships between datapoints can be seen in figure [ 14dbarcodes ] .the barcodes for the -dimensional data , width=384 ] what these barcodes represent is the durability of certain relationships over the addition of data - points in the construction of a multi - dimensional space descriptive of the concept of democracy _ through _ collected data about its indicators . within the topology of the data what we analyzed ,there are seven two - dimensional spheres .six of them lie in the connected component represented by the dominican republic while the seventh lies in the component represented by swaziland .the representatives of components are the states that have the most in common with the most states within the geometric component , so they matter a little as signifiers . here , dominican republic is generally considered a durable if imperfect democracy , and swaziland is generally considered autocratic .some of the shapes formed , then , have both clear relationships and clear implications about what ideas of democracy might be within the limitations of our truncated pilot study .tetrahedrons formed by the dominican republic , albania , latvia , and botswana and by the dominican republic , latvia , botswana , and comoros are groupings of democracies with some durability and sustainability , but with ( perceived ) weakness on one or more indicators here , largely , either indicators that have to do with the availability and health of political competition , or having had a democratic breakdown in the past . in this sense , they are differentiable from the democracies that constitute a single point both in theory empirically ( assuming the measurements are accurate ) and conceptually ( that is , that there is a substantive difference about what sort of democracy a state is based on a difference on those indicators ) .another tetrahedron is composed of paraguay , ukraine , malawi , and east timor . using their polity iv scores ( not included in the geometric analysis ) , those countries are considered democracies , but do not score as strongly as other countries on the democracy scale .polity iv scores range from ( purely autocratic ) to ( understood as a full democracy ) , but many studies that analyze democratic behavior require a score of or higher to consider a country as a democracy .these four states often rank right around a 6 ( either a little above or a little below ) .these states differ from those ranked higher on the polity iv scale primarily on three of the 14 indicators on used in this pilot study : competitiveness of participation in governance , regulation of participation in governance , and breakdowns in democratic governance . 
on the other indicators ,their scores are the same or substantially the same as ` full ' democracies .thinking about what the components of democracy are , this suggests that democratic breakdowns distinguish countries that have had them from countries that have not , even among democracies .it also suggests that different levels of struggle with competitiveness in participation have different significations for problems with democracy .a fourth tetrahedron is formed by russia , congo kinshasa , mozambique , and namibia .this tetrahedron has similar polity iv scores to the group that was just discussed , but fares less well on a number of the indicators that we randomly selected . in the other data from the polity dataset, there must be a counterbalance to these countries negative scores not only on competitiveness and democratic breakdowns but also on openness and regulation of competition .three more complex shapes also emerge , and are depicted in figure [ 2cycles ] .the first is an octahedron formed by the dominican republic , el salvador , colombia , guyana , georgia , and sierra leone .this is the figure to the left in figure [ 2cycles ] .these countries match each other perfectly in regulation of participation in executive elections , competitiveness of executive elections , openness of executive elections , and the competitiveness of executive recruitment .they exhibit small variations on competitiveness and regulation of participation in elections .these traits make them closely related but not collapsible into one data point .the six states differ on the existence of democratic breakdowns in their recent history , which is one of the factors that creates space among them .what distinguishes this group as a group from other democracies is imperfect scores on participation competitiveness and regulation of executive recruitment so it is a group generally understood to be democracies with particular weaknesses vis a vis certain indicators of democracy .the second more complex shape is an unnamed irregular polyhedron with ten triangular faces , composed on dominican republic , colombia , bolivia , albania , brazil , solomon islands , and sierra leone .this is the figure in the center of figure [ 2cycles ] .it shares a triangular face with the octahedron on the left .this means that the two are related , but not the same .this shape includes three of the countries in the octahedron above , with four different ones .while , in the octahedron , those countries ( which score a 6 on both executive constraints indicators , that is , slightly more constrained than not ) are paired with countries that score a 5 ( that is , in the middle of the scale ) , in this irregular polyhedron , they are grouped with countries that score a 7 on those same indicators .the combination of these shape - relationships suggests that there is both a middle ground and a threshold level for executive constraints that may be meaningful in the constitution what makes a democracy .the third more complex shape is unrelated to the first two .it is a triangular bipyramid of swaziland , morocco , kuwait , bahrain , and oman .this is the figure on the right of figure [ 2cycles ] .these states rank low on most of the indicators of democracy that were included in our pilot study . that said , their low scores vary and are substantially less low in the area of regulation of participation in executive selection . 
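the computation described above was run with the javaplex package in matlab ; purely as an illustration of the same pipeline, the sketch below uses the python ripser package on randomly generated placeholder profiles (88 points with 14 ordinal scores each), since the actual polity iv / boix - miller - rosato values are not reproduced here, so the counts it prints will not match the figures reported in the text .

```python
import numpy as np
from ripser import ripser      # used here as a stand-in for the javaplex/MATLAB pipeline

rng = np.random.default_rng(2007)

# placeholder stand-in for the 88 unique country profiles: 14 ordinal indicator
# scores per point (the real Polity IV / BMR values are not reproduced here)
profiles = rng.integers(0, 8, size=(88, 14)).astype(float)

dgms = ripser(profiles, maxdim=2)["dgms"]     # Vietoris-Rips persistence up to H2
h0, h1, h2 = dgms

# H0 bars that never die correspond to connected components of the full complex
print("connected components:", int(np.isinf(h0[:, 1]).sum()))
print("1-dimensional cycles (H1 bars):", len(h1))
print("2-dimensional voids, e.g. 2-spheres (H2 bars):", len(h2))

# the longest-lived finite H1 bars are the most persistent cycles, the analogue
# of the octahedron and irregular polyhedra discussed in the text
finite = h1[np.isfinite(h1[:, 1])]
if len(finite):
    lifetimes = np.sort(finite[:, 1] - finite[:, 0])
    print("longest 1-cycle lifetimes:", lifetimes[-3:])
```

counting the bars that never die recovers the number of connected components of the full complex, while the finite one- and two-dimensional bars correspond to the cycles and two-spheres discussed above .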
in other words , these are countries considered autocratic that lean more in the direction of democracy when it comes to the clarity of rules of executive selection .while that does not stop them from being classified as non - democracies , it does place them in a group among those non - democracies distinguishable either from states with mediocre scores on all of the indicators or those with low scores on all of the indicators . some -cycles in the -dimensional data .note that these are representations of the figures in -dimensional space ; the actual cycles are embedded in .,width=528 ] this is , of course , a very limited exploration of a few relationships between a few states on a few indicators over the course of one year .and one might ask the advantage of this sort of analysis over just looking at the state s polity iv score , or some other aggregation of these individual indicators , rather than looking at indicators that are used to make composite measures .after all , are composite measures not their composers full definitions of democracy , and the ( sometimes weighted ) component parts their understanding of the dimensionality ? would more information not be gained by comparing composite measures , then ? certainly , such a comparison would be fruitful , and is in our future plans .the information such analysis would provide would be different , and no doubt an addition , as would being able to use the other hundred indicators that we collected .the nature of this pilot study and therefore the information it can provide is very limited .it was able to show some contours of groups of states on the basis of some commonality about particular indicators of democracy that may serve as grouping , or even tipping , points . to know _ a meaning _, much less _ the meaning _ of democracy to ir scholars would require significantly more in - depth analysis .yet such a project is well within the methodological capacity of this approach .for example , if composite measures are by definition a combination of indicators _ flattened _ into one number , this sort of analysis can replicate those composite measures in geometric analysis so that a composite score means more than its statistical significance .it would then be possible to group states based not only on their composite scores but on the indicators on which they have the most similarity , or even on indicators on which states with widely different composite numbers share common ground .bending those models over time can show complex configurations that may be indicators of the change of form of a state .in other words , this sort of complex mapping is capable of expanding the possibilities for categorization of states among the multiple meanings of ` democracy , ' and therefore of providing insight into multiple ways in which the concept is ( and could be ) thought about that are not always explicit in quantitative or qualitative analysis in comparative politics and ir .the biggest possible payoff in terms of looking at understandings of democracy , however , is the next four possible steps .first , looking at the data on democracy as abstract simplical complexes and examining the filtrations and barcodes that comes out of those constructions for the relationships _ among states on indicators _ can provide a basis for multidimensional mapping _ of indicators in relation to each other_. 
in other words , it would be possible to go beyond the positivist tendency to test collinearity in order to look at the relationships among particular variables held to be indicators of democracy both definitionally and as operationalized and applied in the field .this could be done with existing data on democracy in the field of political science .a second step would be to compare the homological analysis of the indicators , measures , and definitions of democracy in political science to those used in the policy world using critical discourse analysis to collect data from state policy statements , press releases , and leader quotes in news publications .a third step could be comparing that to survey data collected on the ground in a number of states around the world , where respondents were asked about what democracy is , how you can tell that a democracy exists , and whether neighboring states are democracies or not .a fourth step could compare these concept maps to concept maps for other , related but distinguishable , concepts , like those discussed by doty , zalewski and runyan , and ashley .it is possible to be critical of these ideas for topological data analysis of democracy in ir by asking what is going on here ?is it not just a hyperactive process to critique ( or even perfect ) the positivist operationalization of democracy ?certainly , these methods could be used to such an end , making space for hypertechnical representations in regressions looking to figure out how regime type influences foreign policy , or in predictions about the evolutions of forms of state government .but that is neither our intent , nor , in our view , the primary benefit of this methodological innovation .instead , we see the primary utility of an expansion of this sort of analysis in shapcott s understanding that democracy must forever be questioning itself and the boundaries that it invokes ." mapping meanings of democracy , measurements of democracy , and comparisons on indicators of democracy in multidimensional space can help us understand the ways that various concepts are leveraged in favor of , and related to , certain notions of democracy , as well as maps and relationships of inclusion and exclusion .if it is possible to have some understanding of axes of rotation and points of engagement by just looking at a few datapoints on a few indicators in one year , the analytical possibility of a full exploration mapping understandings of democracy is almost unlimited .such mappings could contribute not only to the analysis of some of the specific _ how - possible _ questions above , but also to other questions yet unasked interested in relationships ( tensions , similarities , and the simultaneous presence of both ) between different ways democracy is read in global politics and in disciplinary ir .if the above formalizations of concept vectors , concept spaces , concept topologies , and contexts are applicable to any concept in any theoretical context ( in the informal sense ) , why deploy them for use in poststructuralist ir ?in other words , even if these methods _ could work _ for poststructuralist analysis , and provide some value added , why use them ? is the value - added enough ?after all , even if the ontological and epistemological positions of mathematical formalism and poststructuralist ir have commonalities , those commonalities do not dictate the fruitfulness of the two working together . 
while those commonalities ( which we point out above ) are the basis for our claim that the two are compatible despite a general association of quantitative work ( of whatever flavor ) and positivism , and poststructuralism with qualitative methods , our argument that this analysis is usefully employed in poststructuralist analysis is more based on the capacities that these sorts of representations have that the tools traditionally available to poststructuralist scholars do not easily replace and/or replicate . particularly , we argue that there are three principal potential benefits to the deployment of this methodology for poststructuralist ir .the first is that the complex concept modeling can be used to reveal dimensions of concepts ( formally and informally ) previously underexamined , either in mainstream or in critical analysis .thinking about the contours of concepts helps to understand not only the ideas that go into them and/or their underlying assumptions .if topological concept mapping can capture the complexity of relations between features , then this provides a different way to think about the underlying assumptions , building blocks , and inscriptions , and fixings of meanings in poststructuralist terms .multidimensional concept modeling also provides a tool to think about the change of concepts over time , over place , and in the ways that they are thought about either in the discipline or in the policy world .second , and perhaps more interestingly , the tool of topological concept mapping can be based in empirical and representative studies but is not confined to them .that is , a model built to represent the dimensionality of a concept can be studied with changes to that dimensionality to see about potential changes in the concept .this serves the purposes of emancipatory critical theorizing would the world be a better place if we thought about things differently ?it also serves the purposes of poststructuralist critical theorizing how do concept structures become sticky and reified ?what would it look like to un - stick a particular dimension of a concept ? if concept mapping can be manipulated temporally , utilized to analyze transitional effects , tessellated to unpack the relations between different compiled meanings , morphologized to explore metaphorical relationships , and translated to fuzzy geometry to understand liminality , there is significant potential for developing critical analysis of what global politics is , how it is possible , and how it is constituted , reified , and performed .this could be done in a way that emphasizes relative relationality which , in our view , is the third major potential payoff for critical theorizing .while the descriptors for conceptual relationships are limited in terms of the sorts of relationships we can think about in between , close to , far from , etc ; the topological descriptors are in theory both unlimited , and more clearly specifiable ( given the potential for multidimensionality ) .in fact , the complexity both of representation and exploration is in theory unlimited . in practice ,it is limited only by the possible accessibility of time and information , and the possible specification of data points .neither of these limits is concerning , though , given that even the least complex representations have potential exploratory value if not empirical value .
|
we use the theory of persistent homology to analyze a data set arising from the study of various aspects of democracy . our results show that most mature " democracies look more or less the same , in the sense that they form a single connected component in the data set , while more authoritarian countries cluster into groups depending on various factors . for example , we find several distinct -dimensional homology classes in the set , uncovering connections among the countries representing the vertices in the representative cycles . as was discussed in the introduction to this book , the ` quantitative ' methods traditionally used in the social sciences represent a limited subset of available methods in mathematics , statistics , and computational analysis , and the positivist ends for which they are usually deployed in the social science community represent a limited subset of the purposes for which they are intended and deployed in the philosophy of mathematics . if it were to be oversimplified for explanatory purposes , math is the study of patterns . discrete ( if such things exist ) ones are arithmetic . continuous ones are geometry . immeasurable ones are symbolic logic . patterns not dependent on the empirical existence of their component parts are formalist . in the philosophy - of - math sense , this chapter takes a formalist approach to mathematical symbolism . accessed 15 march 2014 at http://plato.stanford.edu/entries/formalism-mathematics/ ) , such an approach does not see mathematics as `` a body of propositions representing an abstract sector of reality , '' but `` much more akin to a game , bringing with it no commitment to an ontology of objects or properties . '' in this sense , formalists `` do not imply that these formulas are true statements '' but instead see that `` such intelligibility as mathematics possesses derives from the syntactical or metamathematical rules governing those marks . '' ( nelson goodman and w. v. quine , `` steps toward a constructive nominalism , '' journal of symbolic logic 12 [ 1947]:97 - 122 , 122 , 111 ) . ] for the purpose of this analysis , that means that we argue that mathematical work is axioms with rules of inference that make possible thought experiments and string manipulation games of almost infinite complexity . there is no ` true meaning ' underlying mathematical symbols instead equations , formalizations , and quantifications are representations from which we learn about relationships -homologies , homeomorphisms , maps , dimensionality , commutativity , factorization . in this view , the rules , laws , and procedures of mathematics are socially constructed interested not only in quantity but also in structure , space , change , stochasticity , relationality , and formalization for its own sake . there are , for sure , many practitioners of math who think of it as science , as discovery , and as progress ; but there are also many practitioners of math who see it as art , as creation , as signification , and as representation . the overwhelming majority of quantitative methods that are used in international relations ( ir ) are statistical in nature . when the tools of theoretical mathematics are used , they are most often deployed in predictive , descriptive , or heuristic uses of game theory . that is , the tools of mathematics are used to try to gain leverage on the causal empirical realities that neopositivist ir scholars see as the important substance of global politics to know and understand . 
in this chapter , we argue that , not only can the tools of theoretical mathematics be utilized for post - positivist ends , but that is where many of those tools would be most at home in ir . particularly , in this chapter , we look to make an initial case for the argument that the tools of computational topology can be effectively utilized to explore questions of constitution , textuality , and performativity for critical ir . to that end , the first section lays out an argument about the possible utility of thinking geometrically about concept formation and reification for post - structuralist ir , and possible ways to do that work . the second section introduces the concept of democracy in ir , and argues that it might be possible to gain leverage on the dimensions of the concept using computational topology to evaluate existing data . the third section shows the method in action , and sketches out some of the possible ramifications for studying democracy from a critical perspective . the concluding section makes a case for the value - added both for political methodology and critical theory of methodological explorations like this .
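as a concrete illustration of the kind of computation invoked above, the sketch below tracks connected components of a hypothetical country-by-indicator point cloud as a distance scale grows, which is exactly the 0-dimensional part of a persistent homology analysis. it is not the analysis of this chapter: the indicator matrix is random and purely illustrative, the choice of euclidean distance and single-linkage clustering is ours, and the higher-dimensional homology classes mentioned above would require a dedicated library such as ripser or gudhi.

```python
# Minimal sketch: 0-dimensional persistence (connected components) of a
# hypothetical democracy-indicator point cloud.  The data here is random
# and purely illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 40 hypothetical countries described by 6 hypothetical indicator scores.
scores = rng.normal(size=(40, 6))

# Single-linkage merge heights coincide with the death times of the
# 0-dimensional homology classes of the Vietoris-Rips filtration.
Z = linkage(pdist(scores), method="single")
deaths = Z[:, 2]

for eps in (0.5, 1.0, 2.0, 4.0):
    n_components = int(np.sum(deaths > eps)) + 1
    print(f"scale {eps:>4}: {n_components} connected component(s)")

# Countries that only merge into the main component at a large scale would
# be the 'outlying' regimes discussed in the text.
labels = fcluster(Z, t=2.0, criterion="distance")
print("cluster sizes at scale 2.0:", np.bincount(labels)[1:])
```

because single-linkage merge heights coincide with the death times of the 0-dimensional classes of the vietoris-rips filtration, this minimal version needs nothing beyond scipy; loops and higher classes are where the specialized topology software becomes necessary.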
|
polar codes , first discovered by arkan , are the first capacity - achieving codes for binary - input discrete memoryless channels with an explicit and deterministic structure .in addition , it was shown that a simple successive cancellation ( sc ) decoder asymptotically achieves the capacity with low complexity , of order where is the block - length . due to these extraordinary properties ,polar codes have captured the attention of both academia and industry alike . motivated by the fact that the sc decoder tends to exhibit less promising performance with finite - length block codes , an important line of current research is to seek efficient decoders with better performance for polar codes . in and ,the authors proposed the successive cancellation list ( scl ) decoder , which was shown to approach the performance of maximum - likelihood ( ml ) decoding in the high signal - to - noise ratio ( snr ) regime , albeit at the cost of higher processing complexity of , where is the list size . later in , it was further demonstrated that polar codes concatenated with a high rate cyclic redundancy check ( crc ) code outperform turbo and ldpc codes by applying an adaptive scl decoder with sufficiently large list size .trading storage complexity for computational reduction , the authors in and proposed the successive cancellation stack ( scs ) decoder , which was shown to have much lower computational complexity compared with the scl decoder , especially in the high snr regime , where its complexity becomes close to that of the sc decoder . more recently ,a novel successive cancellation hybrid decoder was proposed in , which essentially combines the ideas of scl and scs decoders and provides a fine balance between the computational complexity and storage complexity . as discussed above, the scl decoder achieves superior performance compared to the sc decoder at the price of increased complexity , especially when the list size is very large , which has prohibited its widespread implementation in practice . as such , reducing the computational complexity of the scl decoder is of considerable importance , motivating the current research . for the conventional scl decoder , each decoding pathwill be split into two paths when decoding an unfrozen bit and the number of `` best paths '' remains at until the termination of decoding , which causes an increased complexity of . to reduce the decoding complexity, we argue that it is unnecessary to split all the decoding paths , supported by the key observation that splitting can be avoided if the reliability of deciding the unfrozen bit or is sufficiently high .a direct consequence of such a split - reduced approach is that many fewer paths are likely to survive after pruning , i.e. , the number of `` best paths '' is much smaller than the list size , which results in further complexity reduction .the main contributions of this paper are summarized as follows : 1 .taking advantage of the fact that splitting is unnecessary if the unfrozen bit can be decoded with high reliability , a novel splitting rule is defined . moreover ,the behavior of the correct and incorrect decoding paths are characterized under the new splitting rule .based on which , a split - reduced scl decoder is proposed . 
by avoiding unnecessary path splitting as well as efficiently reducing the number of surviving paths ,the proposed split - reduced scl decoder can achieve significant reduction of complexity while retaining a similar error performance compared with the conventional scl decoder .2 . furthermore , we prove the existence of a particular unfrozen bit , after which the sc decoder achieves the same error performance as the ml decoder if all the prior unfrozen bits are correct , and show how to locate the particular unfrozen bit .then , exploiting this crucial property , an enhanced version of the split - reduced scl decoder is proposed .the rest of the paper is organized below . in section, we provide some basic concepts and notation for polar codes and the scl decoder . in section, we present a novel split - reduced scl decoder and provide an analysis of its decoding behavior .an enhanced version of the split - reduced scl decoder is proposed in section while the simulation results are provided in section .finally , section gives a brief summary of the paper .in this section , we provide a brief introduction to polar codes , the sc decoder and the scl decoder , and explain the notation adopted in the paper . for a polar code with block - length and dimension , the generator matrix can be written as , where ] , can be calculated in a similar manner .then , based on the assumption that satisfies the gaussian distribution , the error probability of each subchannel can be calculated by using the -function as /2}),\end{aligned}\ ] ] where . since ] with , where is the noise variance , it becomes clear that is also snr dependent .in addition , it is worth pointing out that , given , can be calculated in an off - line manner .having defined both the measure of reliability and the threshold , the splitting rule is given as follows : if either of the following two inequalities holds : the -th path does not split , otherwise , the -th path splits into two paths . for instance , if eq .( [ p_e(u_i0 ) ] ) holds , then we directly set instead of splitting the -th path . according to bayes rule , a more convenient splitting rule can be found , as follows where denotes the llr of in the -th decoding path and can be calculated in a recursive manner . for simplicity , we drop the subscript in the ensuing analysis .we now investigate the implications of the newly defined splitting rule . as we mainly focus on awgn channels ,the gaussian approximation method is adopted in the ensuing analytical derivation , i.e. , all the propositions in this subsection are based on the assumption that the llr follows gaussian distribution . for the purpose of clear exposition , we assume that an all - zero codeword is transmitted . please note that , according to the following proposition , using the all - zero codeword does not cause any loss of generality of the ensuing analysis , since the distribution of is symmetric for and . under the gaussian approximation , i.e. , ,2|\textbf{e}[l(u_i)]|) ] where ] , and .the first interval denotes the event in which no splitting is performed and is incorrectly decoded , i.e. 
, .the probability of such event occurring can be computed as +\mathrm{log}(1-p_e(u_i))-\mathrm{log}(p_e(u_i))}{\sqrt{2\textbf{e}[l_0(u_i)]}}\big)\\ & = q\big(\sqrt{\frac{\textbf{e}[l_0(u_i)]}{2}}+\frac{\mathrm{log}(1/q(\sqrt{\frac{\textbf{e}[l_0(u_i)]}{2}})-1)}{\sqrt{2\textbf{e}[l_0(u_i)]}}\big)\\\label{perror } & = q\big(q^{-1}(p_e(u_i))+\frac{\mathrm{log}(1/p_e(u_i)-1)}{2q^{-1}(p_e(u_i))}\big ) .\end{aligned}\ ] ] under the gaussian approximation . ]similarly , the last interval corresponds to the event in which no splitting is performed and is correctly decoded , i.e. , , and the probability associated with such event can be computed as now , let and be the mean and standard deviation of when the all - zero codeword is transmitted , respectively , i.e. , ]. then we have since the proposed splitting rule becomes activated when the decoding reliability is high , i.e. , is small , it is of particular interest to see the error performance in this regime , and we have the following important results . [ lemmap ] under the gaussian approximation , i.e. , ,2|\textbf{e}[l(u_i)]|) ] for the -th path at stage ( corresponding to ) , which counts the number of stages that the -th path survives without splitting . for the -th path ,if it proceeds to without splitting , then =\omega_l[i-1]+1 ] . now , utilizing the fact that the correct path seldom splits , while the incorrect path tends to split at a certain stage, we argue that if ] and =\frac{2}{\sigma_n^2} ] and =-(-\frac{2}{\sigma_n^2})+\frac{2}{\sigma_n^2}=\textbf{e}[l_0(u_2)] ] , i.e. , /2}=\sigma/2 ] to denote the component with index , and we have =(1 - 2\beta_{v_1}[i])\textbf{e}[\alpha_{v}[2i]]+\textbf{e}[\alpha_v[2i+1]] ] holds for any , and thus =0 ] . in ( [ elu ] ) , one can check that if only ] ) is zero , we have =0 ] ( or =\textbf{e}[l(y_1)] ] and =0 ] .thus , the number of llrs whose means are zero - valued remains the same after the calculation defined by ( [ elu ] ) .as node would pass another two llr vectors computed according to ( [ elu ] ) to its left child node and right child node respectively , by some simple induction , we can conclude that there would be at least leaf nodes that have zero - valued means .for an unfrozen bit , =0 ] to denote the related llrs .the codeword of root node just corresponds to , the last stage output of the encoder , while ,l_a[2], ... ,l_a[n]) ] , which is known since ] equals the llr calculated by the sc decoder according to equation ( 76 ) in .obviously , to maximize the summation in ( [ llr_metric_for ml_decoding ] ) , it requires that the binary codeword of are decided according to the signs of , l_b[2 ] , ... , l_b[k_1])$ ] , which are just equivalent to the one - by - one hard decisions in sc decoding , except that an inverse encoding operation is needed to obtain the desired .therefore , sc decoder achieves exactly the same performance as ml decoder provided the real values of are known .it is obvious that before is processed , the enhanced split - reduced scl decoder achieves exactly the same performance as the original one .suppose that paths survive when is reached . 
for each surviving path , there should be possible paths which all originate from the nodes at the -th level ( just corresponds to the -th bit , see ) in the list decoding framework .according to theorem [ theorem_mlvssc ] , for any particular path , the conventional sc decoding suffices to achieve the ml decoding performance ( note that this is not the overall ml decoding performance since the estimated unfrozen bits before are not guaranteed to be correct ) .thus , for each particular path arriving at , the conventional sc decoding algorithm would select the best path among all possible ones , i.e. , the best estimate for each surviving path can be obtained directly .thus , the overall best estimate of must be involved in these surviving candidate codewords .finally , the candidate codeword that has the smallest distance from the received symbols is selected as the decoding output .the authors sincerely thank the guest editor and the anonymous reviewers for their constructive suggestions which helped us to improve the manuscript .e. arkan , `` channel polarization : a method for constructing capacity - achieving codes for symmetric binary - input memoryless channels , '' _ ieee trans .inf . theory _ ,55 , no . 7 , pp . 3051 - 3073 , jul .i. tal and a. vardy , list decoding of polar codes , " in _ proc .inf . theory ( isit ) _ , pp . 1 - 5 , aug .2011 .i. tal and a. vardy , how to construct polar codes , " _ ieee trans .inf . theory _ ,6562 - 6582 , oct .p. trifonov , efficient design and decoding of polar codes , " _ ieee trans . commun .60 , no . 11 , pp .3221 - 3227 , nov .d. wu , y. li , and y. sun , construction and block error rate analysis of polar codes over awgn channel based on gaussian approximation , " _ ieee commun .18 , no . 7 , pp .1099 - 1102 , jul . 2014 . t. j. richardson , a. shokrollahi , and r. urbanke , design of capacity - approaching low - density parity - check codes , " _ ieee trans .inf . theory _ ,47 , pp . 619 - 637 , feb . 2001 .chung , t. j. richardson , and r. urbanke , analysis of sum - product decoding of low - density parity - check codes using a gaussian approximation , " _ ieee trans .inf . theory _ ,657 - 670 , feb . 2001 .a. alamdar - yazdi and f. r. kschischang , a simplified successive - cancellation decoder for polar codes , " _ ieee commun .1378 - 1380 , dec . 2011s. h. hassani and r. urbanke , on the scaling of polar codes : i. the behavior of polarized channels , " in _ proc .inf . theory ( isit ) _ , pp .874 - 878 , jun . 2010 .s. h. hassani , k. alishahi , and r. urbanke , on the scaling of polar codes : ii .the behavior of un - polarized channels , " in _ proc .inf . theory ( isit ) _ , pp .879 - 883 , jun . 2010 .j. g. proakis , _ digital communications ._ mcgraw hill , 1995 .
|
this paper focuses on low-complexity successive cancellation list (scl) decoding of polar codes. in particular, using the fact that splitting may be unnecessary when the reliability of decoding the unfrozen bit is sufficiently high, a novel splitting rule is proposed. based on this rule, it is conjectured that, if the correct path survives at some stage, it tends to survive until termination without splitting with high probability; the incorrect paths, on the other hand, are more likely to split at the following stages. motivated by these observations, a simple counter that counts the number of successive stages without splitting is introduced for each decoding path to facilitate the identification of the correct and incorrect paths. specifically, any path whose counter value is larger than a predefined threshold is deemed to be the correct path, which will survive the decoding stage, while other paths, whose counter values are smaller than the threshold, will be pruned, thereby reducing the decoding complexity. furthermore, it is proved that there exists a unique unfrozen bit after which the successive cancellation decoder achieves the same error performance as the maximum likelihood decoder if all the prior unfrozen bits are correctly decoded, which enables further complexity reduction. simulation results demonstrate that the proposed low-complexity scl decoder attains performance similar to that of the conventional scl decoder, while achieving substantial complexity reduction. polar codes, gaussian approximation, split-reduced successive cancellation list decoder.
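a minimal sketch of how the splitting decision and the per-path survival counter summarized in this abstract might look in code. it is not the authors' implementation: the threshold form |l| >= log((1 - p_e)/p_e) is one reading of the (partially garbled) splitting rule in the article, the subchannel error probabilities p_e(u_i) are assumed to have been computed offline (for example by gaussian-approximation density evolution), and all numerical values below are made up.

```python
# Illustrative sketch of the split decision and the survival counter.
import math

def no_split(llr: float, p_e: float) -> bool:
    """Return True if the path should NOT be split for this unfrozen bit."""
    if p_e <= 0.0 or p_e >= 0.5:
        return False                      # unreliable (or degenerate) bit: always split
    threshold = math.log((1.0 - p_e) / p_e)
    return abs(llr) >= threshold

def update_counter(counter: int, split_performed: bool) -> int:
    """Number of consecutive stages this path has survived without splitting."""
    return 0 if split_performed else counter + 1

# Toy walk of a single decoding path over three unfrozen bits.
p_e_list = [1e-4, 1e-2, 1e-6]             # hypothetical offline error probabilities
llr_list = [12.3, 0.7, -15.1]             # hypothetical path LLRs
counter = 0
for llr, p_e in zip(llr_list, p_e_list):
    keep = no_split(llr, p_e)
    counter = update_counter(counter, split_performed=not keep)
    print(f"LLR={llr:+6.1f}  p_e={p_e:.0e}  split={not keep}  counter={counter}")
# A path whose counter exceeds a predefined threshold would be declared the
# correct path, and the remaining paths could be pruned.
```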
|
_ securities markets _ play a fundamental role in economics and finance .a securities market offers a set of _ contingent securities _ whose payoffs each depend on the future state of the world .for example , an arrow - debreu security pays 0 otherwise .consider an arrow - debreu security that will pay off in the event that a category 4 or higher hurricane passes through florida in 2011 . a florida resident who is worried about his homebeing damaged might buy this security as a form of insurance to hedge his risk ; if there is a hurricane powerful enough to damage his home , he will be compensated .additionally , a risk neutral trader who has reason to believe that the probability of a category 4 or higher hurricane landing in florida in 2011 is should be willing to buy this security at any price below ( or sell it at any price above ) to capitalize his information .for this reason , the market price of the security can be viewed as the traders collective estimate of how likely it is that a powerful hurricane will occur .securities markets thus have dual functions : risk allocation and information aggregation .insurance contracts , options , futures , and many other financial derivatives are examples of contingent securities .a _ prediction market _ is a securities market primarily focused on information aggregation . for a future event with mutually exclusive and exhaustive possible outcomes ,a typical prediction market offers arrow - debreu securities , each corresponding to a particular outcome .the prices of these securities form a probability distribution over the outcome space of the event , and can be viewed as the traders collective estimate of the likelihood of each outcome .market - based probability estimates have proved to be accurate in a variety of domains including business , entertainment , and politics .denote a set of mutually exclusive and exhaustive states of the world as .a securities market is _ complete _ if there are linearly independent securities .for example , a prediction market with arrow - debreu securities for an -outcome event is complete . with a complete securities market ,any desired future payoff over the state space can be constructed by linearly combining these securities , which allows a trader to hedge any possible risk he may have .furthermore , traders can change the market prices to reflect any valid probability distribution over the state space , allowing them to reveal any information .completeness therefore provides expressiveness for both risk allocation and information aggregation , making it a desirable property .however , completeness is not always achievable . in many real - world settings, the state space can be exponentially large or infinite .for instance , a competition among candidates results in a state space of rank orders , while the future price of a stock has an infinite state space . in such situations ,operating a complete securities market is not practical due to the notorious difficulties that humans have estimating small probabilities and the computational intractability of managing a large security set .it is natural to offer a smaller set of structured securities instead .for example , instead of having one security for each rank ordering , pair betting allows securities of the form `` 1 if and only if candidate a beats candidate b '' ) or subset bets ( e.g. 
, `` 1 if and only if a democrat wins florida and ohio '' ) .this line of research has led to some positive results when the uncertain event enforces particular structure on the outcome space . in particular , for a single - elimination tournament of teams , securities such as `` 1 if and only if team a beats team b given they face off '' can be priced efficiently in lmsr . for a taxonomy tree on some statistic where the value of the statistic of a parent node is the sum of those of its children , securities such as `` ] '' can be priced efficiently in lmsr .our paper takes a drastically different approach . instead of searching for supportable spaces of securities for existing market makers , we design new market makers tailored to any security space of interest . additionally , rather than requiring that securities have a fixed ] .let denote a convex hull .we characterize the form of the cost function under these conditions . under conditions [ cond : smooth]-[cond :express ] , must be convex with .[ thm : characterization ] specifically , the existence of instantaneous prices implies that is well - defined .the incorporation of information condition implies that is convex .the convexity of and the no arbitrage condition imply that .finally , the expressiveness condition is equivalent to requiring that .this theorem tells us that to satisfy our conditions , the set of reachable prices of a market should be _ exactly _ the convex hull of . for complete markets , this would imply that the set of reachable prices should be precisely the set of all probability distributions over the outcomes .the natural conditions we introduced above imply that to design a market for a set of securities with payoffs specified by an arbitrary payoff function , we should use a cost function based market with a convex , differentiable cost function such that .we now provide a general technique that can be used to design and compare properties of cost functions that satisfy these criteria . in order to accomplish this ,we make use of tools from convex analysis . it is well known that any closed, convex , differentiable function can be written in the form for a strictly convex function called the _ conjugate _ of .( the strict convexity of follows from the differentiability of . )furthermore , any function that can be written in this form is convex . as we will show in section [ sec : conjprops ], the gradient of can be expressed in terms of this conjugate : . to generate a convex cost function such that for all for some set , it is therefore sufficient to choose an appropriate conjugate function , restrict the domain of to , and define as we call such a market a _complex cost function based market_. 
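to make the conjugate-duality construction concrete, the sketch below instantiates it with the textbook lmsr, which is cited above as an example of a complex cost function based market: taking r(x) = b * sum_i x_i log x_i on the probability simplex gives the closed form c(q) = b log sum_i exp(q_i / b), with instantaneous prices given by the softmax of q / b. the code is illustrative only; the liquidity parameter b, the three-outcome market and the traded bundle are arbitrary choices, not anything from the original text.

```python
# Illustrative sketch: the LMSR as a complex cost function based market.
# Conjugate R(x) = b * sum_i x_i log x_i on the simplex gives
# C(q) = b * log(sum_i exp(q_i / b)) and prices grad C(q) = softmax(q / b).
import numpy as np

def cost(q: np.ndarray, b: float) -> float:
    m = q.max() / b
    return b * (m + np.log(np.exp(q / b - m).sum()))   # numerically stable log-sum-exp

def prices(q: np.ndarray, b: float) -> np.ndarray:
    z = np.exp((q - q.max()) / b)
    return z / z.sum()

def trade_cost(q: np.ndarray, bundle: np.ndarray, b: float) -> float:
    """Amount a trader pays to buy `bundle` when the current state is q."""
    return cost(q + bundle, b) - cost(q, b)

b = 10.0
q = np.zeros(3)                      # three mutually exclusive outcomes
print("initial prices:", prices(q, b))               # uniform, sums to 1
bundle = np.array([5.0, 0.0, 0.0])
pay = trade_cost(q, bundle, b)
q += bundle
print("paid %.3f, new prices:" % pay, prices(q, b))
# Worst-case loss of this market maker: sup R - inf R = b * log(number of outcomes).
print("worst-case loss bound:", b * np.log(3))
```

the printed b * log(3) is the familiar lmsr worst-case loss, which reappears below as a special case of the conjugate-based loss bound.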
to generate a cost function satisfying our five conditions , we need only to set and select a strictly convex function .this method of defining is convenient for several reasons .first , it leads to markets that are efficient to implement whenever can be described by a polynomial number of simple constraints .similar techniques have been applied to design learning algorithms in the online convex optimization framework , where plays the role of a regularizer , and have been shown to be efficient in a variety of combinatorial applications , including online shortest paths , online learning of perfect matchings , and online cut set .second , it yields simple formulas for properties of markets that help us choose the best market to run .two of these properties , worst - case monetary loss and worst - case information loss , are analyzed below .note that both the lmsr and quad - scpm are examples of complex cost function based markets , though they are designed for the complete market setting only . before discussing market properties , it is useful to review some helpful properties of conjugates .the first is a convenient duality : for any convex , closed function , the conjugate of the conjugate of is itself .this implies that if is defined as in equation [ eqn : costfunc ] , we may write . since this maximization is unconstrained , the maximum occurs when .( note that this may hold for many different values of . )suppose for a particular pair we have .we can then rewrite this equation as , which gives us that . from equation [ eqn : costfunc ] ,this tells us that must be a maximizer of .in fact , it is the unique maximizer due to strict convexity .this implies , as mentioned above , that . by a similar argumentwe have that for any , if then maximizes and therefore , as we have just shown , .however , the fact that _ does not _ imply that ; in the markets we consider , it is generally the case that for multiple .we also make use of the notion of bregman divergence .the _ bregman divergence _ with respect to a convex function is given by .it is clear by convexity that for all and .when comparing market mechanisms , it is useful to consider the market maker s worst - case monetary loss , .this quantity is simply the worst - case difference between the maximum amount that the market maker might have to pay the traders ( ) and the amount of money collected by the market maker ( ) .the following theorem provides a bound on this loss in terms of the conjugate function .consider any complex cost function based market with .let denote the vector of quantities sold and denote the true outcome .the monetary loss of the market maker is no more than consequently , the worst - case market maker loss is no more than .[ thm : worstcaseloss ] this theorem tells us that as long as the conjugate function is bounded on , the market maker s worst - case loss is also bounded .furthermore , it quantifies the intuitive notion that the market maker will have higher profits when the distance between and the final vector of prices is large .viewed another way , the market maker will pay more when is a good estimate of .information loss can occur when securities are sold in discrete quantities ( for example , single units ) , as they are in most real - world markets . 
without the ability to purchase arbitrarily small bundles, traders may not be able to change the market prices to reflect their true beliefs about the expected payoff of each security , even if expressiveness is satisfied .we will argue that the amount of information lost is captured by the market s bid - ask spread for the smallest trading unit .given some , the current bid - ask spread of security bundle is defined to be .this is simply the difference between the current cost of buying the bundle and the current price at which could be sold . to see how the bid - ask spread relates to information loss ,suppose that the current vector of quantities sold is .if securities must be sold in unit chunks , a rational , risk - neutral trader will not buy security unless she believes the expected payoff of this security is at least .similarly , she will not sell security unless she believes the expected payoff is at most .if her estimate of the expected payoff of the security is between these two values , she has no incentive to buy or sell the security . in this case , it is only possible to infer that the trader believes the true expected payoff lies somewhere in the range ] , where is the final position of , with being best , and ] .unfortunately , the computation of this convex hull is necessarily hard : if given only a separation oracle for the set , we could construct a linear program to solve the `` minimum feedback arcset '' problem , which is known to be np - hard . on the positive side ,we see from the previous section that the market maker can work in a larger feasible price space without risking a larger loss .we thus relax our feasible price region to the set of matrices satisfying \\x(i , j ) & = 1 - x(j , i ) & \forall i , j \in [ n ] \\ x(i ,j ) + x(j , k ) + x(k , i ) & \geq 1 & \forall i , j , k \in [ n ] \end{aligned}\ ] ] this relaxation was first discussed by meggido , who referred to such matrices as _generalized order matrices_. he proved that , for , we do have , but gave a counterexample showing strict containment for . by using this relaxed price space , the market maker allows traders to bring the price vector outside of the convex hull , yet includes a set of basic ( and natural ) constraints on the prices .such a market could be implemented with any strongly convex conjugate function ( e.g. , quadratic ) .let us return our attention to proposition [ prop : wcl_relax ] , which bounds the worst - case loss of the market maker .notice the term is strictly positive , and accounts for the market maker s risk in offering the market .on the other hand , the term is non - positive , representing the potential profit that the market maker can earn if the final price vector is far from . the potential to make a profit may be appealing , but note that this term will approach if approaches as traders gain information .( consider the behavior of traders in an election market as votes start to be tallied . ) as discussed in section [ sec : infoloss ] , the market maker can reduce his worst - case loss by adjusting the depth parameter , but this will result in a shallow market with a larger bid - ask spread . 
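the sketch below spells out the relaxed pair-betting price region described above: every off-diagonal entry lies in [0, 1], prices of opposite orderings sum to one, and every directed triangle of prices sums to at least one. the membership test and the two example matrices are ours and purely illustrative; x[i, j] is read as the price of the security paying 1 if candidate i finishes ahead of candidate j.

```python
# Illustrative membership test for the relaxed pair-betting price region
# (generalized order matrices).
import itertools
import numpy as np

def is_generalized_order_matrix(x: np.ndarray, tol: float = 1e-9) -> bool:
    n = x.shape[0]
    for i, j in itertools.permutations(range(n), 2):
        if not (-tol <= x[i, j] <= 1.0 + tol):
            return False                          # prices must lie in [0, 1]
        if abs(x[i, j] + x[j, i] - 1.0) > tol:
            return False                          # x(i, j) = 1 - x(j, i)
    for i, j, k in itertools.permutations(range(n), 3):
        if x[i, j] + x[j, k] + x[k, i] < 1.0 - tol:
            return False                          # triangle constraint
    return True

# Prices induced by a genuine distribution over full rankings always pass.
rankings, probs, n = [(0, 1, 2), (2, 0, 1)], [0.5, 0.5], 3
x = np.zeros((n, n))
for order, p in zip(rankings, probs):
    position = {candidate: rank for rank, candidate in enumerate(order)}
    for i, j in itertools.permutations(range(n), 2):
        if position[i] < position[j]:
            x[i, j] += p
print(is_generalized_order_matrix(x))             # True

# A strongly cyclic "rock-paper-scissors" matrix violates a triangle constraint.
y = np.array([[0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9],
              [0.9, 0.1, 0.0]])
print(is_generalized_order_matrix(y))             # False
```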
to combat these problems , we propose a technique that allows us to guarantee lower worst - case loss without creating a shallow market by relaxing the feasible price region .this relaxation is akin to introducing a transaction cost .by constructing the correct conjugate function , we may obtain a market with a bounded worst - case loss , a potential profit , and market depth that grows with the number of trades . for simplicity, we consider the complete market scenario in which the marker maker offers an arrow - debreu security for every outcome .let denote outcome and . in this case , , and , the -simplex .we define , where is some maximal transaction cost .the transaction cost is not imposed on individual traders or individual securities , but is split among all securities .we also introduce the requirement that traders can only purchase _ positive _ bundles . in general , this restriction could prevent the traders from expressing `` negative '' beliefs , as we are disallowing the explicit shorting of securities , but in this particular market traders still have the ability to effectively short a security by purchasing equal quantities of the other securities .we must now choose any conjugate function satisfying the the following conditions : 1 . grows no larger than a constant within , so the worst - case loss remains small .2 . outside of , becomes increasingly curved as approaches the constraint .( notice that the price vector is guaranteed to approach this constraint as purchases are made since we allow only positive bundles to be purchased and thus can only grow . ) hence , the smallest eigenvalue of , which is the market depth at , must grow large as approaches this boundary .the construction we have proposed here has several nice properties .it has bounded worst - case loss due to condition 1 , and increasing market depth by condition 2 .it imposes a transaction cost by letting the prices leave the simplex , but it does so in a smooth fashion ; the sum of prices only approaches the value after many trades have occurred , and at this point the market depth will have become large .lastly , as a result of condition 2 , we know that the market maker s earnings increase as more trades occur , since must eventually increase as the price vector approaches the constraint .the idea of introducing transaction costs to allow increasing market depth was also proposed by othman et al . , who introduced a modified lmsr market maker with a particular cost function .their market can be viewed as a special case of our approach , although it is not defined via conjugate duality .in particular , they set the feasible price region of their market maker as a convex subset of for a positive parameter .below we provide all proofs that were omitted from the paper .when , this holds trivially .assume that equation [ eq : cbased_ind ] holds for all bundle sequences of any length .by condition [ cond : pind ] , and we see that equation [ eq : cbased_ind ] holds for too . we first prove convexity .assume is non - convex somewhere .then there must exist some and such that .this means , which contradicts condition [ cond : info ] , so must be convex .now , condition [ cond : smooth ] trivially guarantees that is well - defined for any . 
to see that , let us assume there exists some for which .this can be reformulated in the following way : there must exists some halfspace , defined by a normal vector , that separates from every member of .more precisely on the other hand , letting , we see by convexity of that . combining these last two inequalities, we see that the price of bundle purchased with history is always smaller than the payoff for _ any _ outcome .this implies that there exists some arbitrage opportunity , contradicting condition [ cond : arbfree ] .since is the final quantity vector , is the final vector of instantaneous prices . from equation [ eqn : costfunc ], we have that and .the difference between the amount that the market maker must pay out and the amount that the market maker has previously collected is then where is the bregman divergence with respect to , as defined above .the inequality follows from the first - order optimality condition for convex optimization : the bound on the bid - ask spread follows immediately from lemma [ lem : dcbound ] and the argument above . the value lower - bounds the eigenvalues of everywhere on .hence , if we do a quadratic lower - bound of from the point with hessian defined by , then we see that . in the worst - case , , which finishes the proof . consider some outcome such that .the feasible price set is compact . because , there exists a hyperplane that _ strongly _ separates and .in other words , there exists an such that .when outcome is realized , is the market maker s loss given .we have , which represents the instantaneous change of the market maker s loss . for infinitesimal ,let . then , =b(\q ) + \epsilon||\rho(\o ) - \nabla c(\q)||^2 \leq b(\q ) + \epsilon k^2.\ ] ] this shows that for any we can find a such that the market maker s worst - case loss is at least increased by .this process can continue for infinite steps .hence , we conclude that the market maker s loss is unbounded .this proof is nearly identical to the proof of theorem [ thm : worstcaseloss ] .the only major difference is that now instead of , but this is equivalent since we have assumed that . is still well - defined and finite since we have assumed that . a trader looking to earn a guaranteed profit when the current quantity is hopes to purchase a bundle so that the worst - case profit is as large as possible .notice that this quantity is strictly positive since , which always has 0 profit , is one option .thus , a trader would like to solve the following objective : the first equality with the swap holds via sion s minimax theorem .the last inequality was obtained using the first - order optimality condition of the solution for the vector which holds since .
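as a small numerical sanity check of the worst-case-loss statements proved above, the snippet below replays many random trading sequences against the lmsr instance sketched earlier and compares the realized loss with the bound sup r - inf r, which for the lmsr equals b times the logarithm of the number of outcomes. the simulation parameters are arbitrary and the check is illustrative, not part of the paper's argument.

```python
# Numerical sanity check (illustrative only) of the worst-case loss bound
# for the LMSR instance sketched earlier.
import numpy as np

def cost(q: np.ndarray, b: float) -> float:
    m = q.max() / b
    return b * (m + np.log(np.exp(q / b - m).sum()))

rng = np.random.default_rng(1)
b, n_outcomes, n_trades = 5.0, 4, 200
worst = -np.inf
for _ in range(500):                       # 500 random trading sequences
    q = np.zeros(n_outcomes)
    collected = 0.0
    for _ in range(n_trades):
        bundle = rng.integers(-2, 3, size=n_outcomes).astype(float)
        collected += cost(q + bundle, b) - cost(q, b)
        q += bundle
    # Realised loss for the outcome that is worst for the market maker:
    # maximum payout minus the total amount collected.
    worst = max(worst, q.max() - collected)
print("largest observed loss:      %.4f" % worst)
print("theoretical bound b*log(N): %.4f" % (b * np.log(n_outcomes)))
```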
|
building on ideas from online convex optimization , we propose a general framework for the design of efficient securities markets over very large outcome spaces . the challenge here is computational . in a complete market , in which one security is offered for each outcome , the market institution can not efficiently keep track of the transaction history or calculate security prices when the outcome space is large . the natural solution is to restrict the space of securities to be much smaller than the outcome space in such a way that securities can be priced efficiently . recent research has focused on searching for spaces of securities that can be priced efficiently by existing market mechanisms designed for operating complete markets . while there have been some successes , much of this research has led to hardness results . in this paper , we take a drastically different approach . we start with an arbitrary space of securities with bounded payoff , and establish a framework to design markets tailored to this space . we prove that any market satisfying a set of intuitive conditions must price securities via a convex potential function and that the space of reachable prices must be precisely the convex hull of the security payoffs . we then show how the convex potential function can be defined in terms of an optimization over the convex hull of the security payoffs . the optimal solution to the optimization problem gives the security prices . using this framework , we provide an efficient market for predicting the landing location of an object on a sphere . in addition , we show that we can relax our `` no - arbitrage '' condition to design a new efficient market maker for pair betting , which is known to be # p - hard to price using existing mechanisms . this relaxation also allows the market maker to charge transaction fees so that the depth of the market can be dynamically increased as the number of trades increases .
|
this work is motivated by the availability of very large data sets to compare biological species , and by the current lack of asymptotic theory for the models that are used to draw inference from species comparisons .for instance , studied the evolution of body size in mammals using data from 3473 species whose genealogical relationships are depicted by their family tree in figure [ figmammaltree ] . even from this abundance of data , cooper and purvis found a lack of power to discriminate between a model of neutral evolution versus a model with natural selection . .branch lengths indicate estimated diversification times on the horizontal axis .the cretaceous / tertiary mass extinction event marked the extinction of dinosaurs 65.5 million years ago .cooper and purvis ( ) used body mass data available for 77% of these species to infer the mode of evolution : neutral evolution ( bm ) versus natural selection ( ou ) . ] to model neutral evolution , body size is assumed to follow a brownian motion ( bm ) along the branches of the tree , with observations made on present - day species at the tips of the tree . to modelnatural selection , body size is assumed to follow an ornstein uhlenbeck ( ou ) process , whose parameters represent a selective body size ( ) and a selection strength ( ) .the lack of power observed by cooper and purvis suggests a nonstandard asymptotic behavior of the model parameters , which is the motivation for our work .and are parametrized in the ou model as a function of the tree distance between and and of the length of their shared path from the root .for instance , cooper and purvis ( ) considered body mass ( ) across 3473 mammal species ( ) . ]hierarchical autocorrelation , as depicted in the mammalian tree , arises whenever sampling units are related to each other through a vertical inheritance pattern , like biological species , genes in a gene family or human cultures . in the genealogical tree describing the relatedness between units , internal nodes represent ancestral unobserved units ( like species or human languages ) .branch lengths measure evolutionary time between branching events and define a distance between pairs of sampling units .this tree and its branch lengths can be used to parametrize the expected autocorrelation . for doing so ,the bm and the ou process are the two most commonly used models .they are defined as usual along each edge in the tree . at each internal node, descendant lineages inherit the value from the parent edge just prior to the branching event , thus ensuring continuity of the process .conditional of their starting value , each lineage then evolves independently of the sister lineages .bm evolution of the response variable ( or of error term ) along the tree results in normally distributed errors and in a covariance matrix governed by the tree , its branch lengths and a single parameter .the covariance between two tips and is simply , where is the shared time from the root of the tree to the tips ( figure [ figfig1 ] ) . under the more complex ou process, changes toward a value are favored over changes away from this value , making the ou model appropriate to address biological questions about the presence or strength of natural selection .this model is defined by the following stochastic equation [ ] : where is the response variable ( such as body size ) , is the selection strength and is a bm process . 
inwhat follows , is called the `` mean '' even though it is not necessarily the expectation of the observations .it is the mean of the stationary distribution of the ou process , and it is the mean at the tips of the tree if the state at the root has mean . in the biology literature, is called the `` optimal '' value or `` adaptive optimum '' in reference to the action of natural selection , but this terminology could cause confusion here with likelihood optimization .the parameter measures the strength of the pull back to .high values result in a process narrowly distributed around , as expected under strong natural selection if the selective fitness of the trait is maximized at and drops sharply away from .simple mathematical models of natural selection at the level of individuals result in the ou process for the population mean [ ] . if , the ou process reduces to a bm with no pull toward any value , as if the trait under consideration does not affect fitness .while some applications focus on the presence of natural selection such as , other applications are interested in models where takes different values along different branches in the tree , to model different adaptation regimes [ e.g. , ] .other applications assume a randomly varying along the tree , varying linearly with explanatory variables [ ] . in our work, we develop an asymptotic theory for the simple case of a constant over the whole tree .the covariance between two observed tips depends on how the unobserved response at the root is treated .it is reasonable to assume that this value at the root is a random variable with the stationary gaussian distribution with mean and variance . with this assumption ,the observed process is gaussian with mean and variance matrix where is the tree distance between tips and , that is , the length of the path between and .therefore , the strength of natural selection provides a direct measure of the level of autocorrelation .if instead we condition on the response value at the root , the gaussian process has mean for tip and variance matrix , again , is the distance from the root to tip , and is the shared time from the root to tips and ( figure [ figfig1 ] ) .in contrast to autocorrelation in spatial data or time series , hierarchical autocorrelation has been little considered in the statistics literature , even though tree models have been used in empirical studies for over 25 years .the usual asymptotic properties have mostly been taken for granted .recently , showed that the maximum likelihood ( ml ) estimator of location parameters is not consistent under the bm tree model as the sample size grows indefinitely , proving that the basic consistency property should not be taken for granted .however , did not consider the more complex ou model , for which the ml estimator admits no analytical formula .in the spatial infill asymptotic framework when data are collected on a denser and denser set of locations within a fixed domain , can be consistently estimated , but can not under an ou spatial autocorrelation model in dimension [ ] .recently , has been proved to be consistently estimated under ou model when [ ] .we uncover here a similar asymptotic behavior under the ou tree model . 
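a minimal sketch of the two covariance parameterizations just described, written out for a small balanced ultrametric tree. the closed forms used below are the standard hansen-type expressions for an ou process on a tree (random root drawn from the stationary distribution, versus conditioning on the root value); the article's own formulas are garbled in this extraction, so the code should be read as the usual parameterization rather than a verbatim transcription, and the values of the selection strength, the bm variance, the tree depth and the node ages are made up.

```python
# Illustrative OU covariance matrices on a 4-tip balanced ultrametric tree.
import numpy as np

alpha, sigma2, T = 1.5, 2.0, 1.0          # selection strength, BM variance, tree depth

# Shared time from the root to the most recent common ancestor of tips i, j.
# Tree: ((1,2),(3,4)), with the two cherries splitting at time 0.7.
t_shared = np.array([[1.0, 0.7, 0.0, 0.0],
                     [0.7, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.7],
                     [0.0, 0.0, 0.7, 1.0]]) * T
d = 2.0 * (T - t_shared)                   # tree distance between tips (ultrametric)

gamma = sigma2 / (2.0 * alpha)             # stationary variance of the OU process
V_random_root = gamma * np.exp(-alpha * d)
V_fixed_root = gamma * np.exp(-alpha * d) * (1.0 - np.exp(-2.0 * alpha * t_shared))

print(np.round(V_random_root, 4))
print(np.round(V_fixed_root, 4))
# In both cases the correlation between two tips decays exponentially with
# their tree distance, so alpha directly controls the autocorrelation level.
```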
just like in infill asymptotics, the tree structure implies that all sampling units may remain within a bounded distance of each other , and that the minimum correlation between any pair of observations does not go down to zero with indefinitely large sample sizes .it is therefore not surprising that some properties may be shared between these two autocorrelation frameworks . under infill asymptotics ,microergodic parameters can usually be consistently estimated [ see ] while nonmicroergodic parameters can not ( e.g. , ) .a parameter is microergodic when two different values for it lead to orthogonal distributions for the complete , asymptotic process [ ] . in section [ secmic ], we prove that the mean is nonmicroergodic under the ou autocorrelation framework , and we provide a lower bound for the variance of the mle of .we also give a sufficient condition for the microergodicity of the ou covariance parameters and ( or ) based on the distribution of internal node ages . the microergodic covariance parameter under spatial infill asymptotics with ou autocorrelation , , is recovered as microergodic if is a limit point of the sequence of node ages , that is , with dense sampling near the tips .our condition for microergodicity suggests that some parameters may not be estimated at the same rate as others . in section [ secrates ], we illustrate this theoretically for a symmetric tree asymptotic framework , where we show that the reml estimator of converges at a slower rate than that of the generally microergodic parameter .we also illustrate that the ml estimate convergence rate of is slower than that of , through simulations on a large 4507-species real tree showing dense sampling near the tips .in most of this work , we only consider ultrametric trees , that is , trees in which the root is at equal distance from all the tips .this assumption is very natural for real data .we also focus on model ( [ eqrandomrootv ] ) , because the model matrix is not of full rank under model ( [ eqfixedrootv ] ) on an ultrametric tree .trees have already been used for various purposes in spatial statistics .when considering different resolution scales , the nesting of small spatial regions into larger regions can be represented by a tree .the data at a coarse scale for a given region is the average of the observations at a finer scale within this region .for instance , use this `` resolution '' tree structure to obtain consistent estimates at different scales , and otherwise use a traditional spatial correlation structure between locations at the finest level .in contrast , the tree structure in our model is the fundamental tool to model the correlation between sampling units , with no constraint between values at different levels .trees have also been used to capture the correlation among locations along a river network [ , and discussion ] .a river network can be represented by a tree with the associated tree distance . 
to ensure that the covariance matrix is positive definite , moving average processeshave been introduced , either averaging over upstream locations or over downstream locations , or both .there are two major differences between our model and these river network models .first , the correlation among moving averages considered in and decreases much faster than the correlation considered in this work .most importantly , any location along the river is observable , while observations can only be made at the leaves of the tree in our framework .the concept of microergodicity was formalized by in the context of spatial models .this concept was especially needed in the infill asymptotic framework , when some parameters can not be consistently estimated even if the whole process is observed .specifically , consider the complete process where is the space of all possible observation units .in spatial infill asymptotics , can be the unit cube ^d ] .the condition in [ thmaddmicc ] implies that or }}(t_i - t_0)^2=\infty ] , hence } } ( t_i - t)^2 \leq \sum_{k=1}^m ( t_{i_k}-t)^2 + ( t_{i'_k}-t)^2 \leq2 \sum_{k=1}^m ( t_{i_k}-t)^2 = 2 \sum_{c \in\mathscr{c } } ( t_c - t)^2.\ ] ] \(b ) contrasts are chosen by induction , starting with .let be the root of .if } ] .let be the sibling of in ( could be a leaf ) . by construction , .let be the set of contrasts obtained from .we have } \subset\ { r^{\mathbb{t } } , i_1 , i_2\ } \bigcup_{k=1}^l \mathscr{i}^{\mathbb { t}_k}_{(t , t ] } \cup\ { s_k \} ] where ] .therefore there exists such that where the term is bounded uniformly in . we can now combine this with ( [ eqlambdakm ] ) , where and only depend on and are defined by , and . because the values are bounded as grows , we get where the term is bounded uniformly in , and the same formula holds when and are switched .lemma [ lemmic02 ] then follows immediately because we assume that .[ [ criterion - for - the - consistency - and - asymptotic - normality - of - reml - estimators ] ] criterion for the consistency and asymptotic normality of reml estimators ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ in appendix [ appendix01 ] , we showed that is an eigenvector of for symmetric trees , independently of .therefore , the reml estimator of based on is the ml estimator of based on the transformed data where is the matrix of all eigenvectors but . is gaussian centered with variance where is the diagonal matrix of all eigenvalues of but .following and like , we use a general result from .the following conditions , c1c2 , ensure the consistency and asymptotic normality of the ml estimator [ reworded from ] .assume there exists nonrandom continuous symmetric matrices such that : 1 .\(i ) as goes to infinity converges to .+ \(ii ) converges in probability to a positive definite matrix , where is the second - order derivative of the negative log likelihood function . is twice continuously differentiable on with continuous second derivatives . under these conditions ,the mle satisfies .a standard choice for is the inverse of the square - root of the fisher information matrix . because ( c1)(ii ) is usually difficult to verify , suggest using a stronger -convergence conditionthis approach was later taken by cressie and lahiri ( ) .unfortunately , their conditions for establishing ( c1 ) do not hold here , because the largest eigenvalues and the ratio of the largest to the smallest eigenvalues are both of order . 
inwhat follows , we will check ( c1 ) for the particular choice of and and where we replace ( c1)(ii ) by the stronger condition _ proof of theorem [ thmreml01 ] ._ it is convenient here to re - parametrize the model using .the diagonal elements in are with multiplicity .the smallest is ( for ) with multiplicity , which is conveniently independent of . with this parametrization ,the inverse of the fisher information matrix is the symmetric matrix where , and the variance is taken with respect to . when the degree at the last level near the tips becomes large then , that is , the distribution is concentrated around the high end .it is then useful to express where the expectation and variance are now taken with respect to for , that is , . to verify conditions ( c1)(i ) and( ii ) , we will use the following lemmas ._ proof of lemma [ lemreml01 ] ._ denote .it is easy to see that then for all .it follows that now let for and let . by applying the previous inequality with , and , we get that recall that .the monotonicity of in follows easily from combining the inequality above with the fact that if and if , then .the proof of the second part of lemma [ lemreml01 ] is easy and left to the reader .the following lemma results directly from lemma [ lemreml01 ] .[ correml02 ] with fixed and parametrization , the quantities , and the trace of are bounded in uniformly on any compact subset of . therefore , and are satisfied if is of order greater than , that is , if . it is easy to see that with defined later .indeed , converges to when , where is the largest level such that goes to infinity and are fixed . for , converges to , and converges to for .note that are the asymptotic relative frequencies of node ages at levels .if goes to infinity , then with .if is fixed , clearly , because is fixed and is easily checked .so is of order .the consistency and asymptotic normality of follows from applying lemma [ correml02 ] . for the second part of the theorem, we obtain the asymptotic normality of through that of for every . for thiswe apply the following -method .its proof is similar to that of the classical -method [ ] and is left to the reader .[ lemreml04 ] assume that converges in distribution to , with , and .suppose that is a continuous differentiable function such that .then also converges to a centered normal distribution with variance .
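relating back to the non-microergodicity of the mean established earlier in the article, the snippet below gives a small numerical illustration (not taken from the paper): on a star-shaped ultrametric tree of depth t, all tips are equicorrelated under the stationary ou model, and the variance of the generalized least squares estimator of the mean, 1/(1' v^{-1} 1), converges to the common between-tip covariance rather than to zero as the number of tips grows. the star tree and all parameter values are our own illustrative choices.

```python
# Illustrative check: the GLS estimator of the mean does not become
# consistent as tips are added to a star-shaped ultrametric tree.
import numpy as np

alpha, sigma2, T = 1.0, 2.0, 1.0
gamma = sigma2 / (2.0 * alpha)            # variance of each tip
c = gamma * np.exp(-2.0 * alpha * T)      # covariance between any two tips

for n in (10, 100, 500, 2000):
    V = np.full((n, n), c) + (gamma - c) * np.eye(n)
    ones = np.ones(n)
    var_mu_hat = 1.0 / (ones @ np.linalg.solve(V, ones))
    print(f"n = {n:>4}:  Var(mu_hat) = {var_mu_hat:.6f}")
print("limit (common between-tip covariance):", round(c, 6))
```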
|
hierarchical autocorrelation in the error term of linear models arises when sampling units are related to each other according to a tree. the residual covariance is parametrized using the tree distance between sampling units. when observations are modeled using an ornstein uhlenbeck (ou) process along the tree, the autocorrelation between two tips decreases exponentially with their tree distance. these models are most often applied in evolutionary biology, where tips represent biological species and the ou process parameters represent the strength and direction of natural selection. for these models, we show that the mean is not microergodic: no estimator can ever be consistent for this parameter, and we provide a lower bound for the variance of its mle. for covariance parameters, we give a general sufficient condition ensuring microergodicity. this condition suggests that some parameters may not be estimated at the same rate as others. we show that, indeed, maximum likelihood estimators of the autocorrelation parameter converge at a slower rate than those of generally microergodic parameters. we show this theoretically in a symmetric tree asymptotic framework and through simulations on a large real tree comprising 4507 mammal species.
|
capillary waves represent a conceptual problem for the interpretation of the properties of liquid - liquid or liquid - vapor planar interfaces , because long - wave fluctuations are smearing the density profile across the interface and all other quantities associated to it .this is usually overcome by calculating the density profile using a local , instantaneous reference frame located at the interface , commonly referred to as the intrinsic density profile , , where ( ,, ) is the position of the -th atom or molecule , and the local elevation of the surface is , assuming the macroscopic surface normal being aligned with the z axis of a simulation box with cross section area . during the last decadeseveral numerical methods have been proposed to compute the intrinsic density profiles at interfaces . despite several differences in these approaches , they are , in general , providing consistent distributions of interfacial atoms or molecules and density profiles . among these methods ,itim proved to be an excellent compromise between computational cost and accuracy , but it is limited to macroscopically flat interfaces , therefore there is a need to generalize it to arbitrary interfacial shapes .before these works , albeit for other purposes , several surface - recognition algorithms have been devised , and will be briefly mentioned below .all of them are possible starting points for the sought generalization under the condition that , once applied to the special case of a planar interface , they lead to consistent results with existing algorithms for the determination of intrinsic profiles .historically , the first class of algorithms addressing the problem of identifying surfaces was developed to determine molecular areas and volumes .the study of solvation properties of molecules and macromolecules ( usually , proteins ) might require the identification of molecular pockets , or the calculation of the solvent - accessible surface area for implicit solvation models .two intuitive concepts are commonly used to describe the surface properties of molecules , namely , that of solvent - accessible surface ( sas ) , and that of molecular surface ( ms , also known as solvent excluded surface , or connolly surface ) .the ms can be thought as the surface obtained by letting a hard sphere roll at close contact with the atoms of the molecule , to generate a smooth surface made of a connection of pieces of spheres and tori , which represents the part of the van der waals surface exposed to the solvent . during the process of determining the surface ,interfacial atoms can be identified using a simple geometrical criterion .many approximated or analytical methods have been developed to compute the ms or the sas . in general , these methods are based on discretization or tessellation procedures , requiring therefore the determination of the geometrical structure of the molecule .other methods which allow to identify molecular surfaces include the approaches of willard and chandler or the circular variance method of mezei .incidentally , the way the ms is computed in the early work of greer and bush resembles very closely the itim algorithm . from the late 1970s ,the problem of shape identification had started being addressed by a newly born discipline , computational geometry . 
in this different framework, several algorithms have been actively pursued to provide a workable definition of surface , and in particular the concept of -shapes showed direct implications for the determination of the molecular surfaces .the approach based on -shapes is particularly appealing due to its generality and ability to describe , besides the geometry , also the intermolecular topology of the system .noticeably , none of these methods to the best of our knowledge has ever been employed for the determination of intrinsic properties at liquid - liquid or liquid - gas interfaces .prompted by the apparent similarities between the usage of the circumsphere in the alpha shapes and that of the probe sphere in the itim method , as we will describe in the next section , we investigated in more detail the connection between these two algorithms . as a result, we developed a generalized version of itim ( gitim ) based on the -shapes algorithm .the new gitim method consistently reproduces the results of itim in the planar case while retaining the ability to describe arbitrarily shaped surfaces . in the followingwe describe briefly the alpha shapes and the itim algorithms , explain in detail the generalization of the latter to arbitrarily shaped surfaces , and present several applications .the concept of -shapes was introduced several decades ago by edelsbrunner . to date the methodis applied in computer graphics application for digital shape sampling and processing , in pattern recognition algorithms and in structural molecular biology .the starting point in the determination of the surface of a set of points in the -shapes algorithm is the calculation of the delaunay triangulation , one of the most fruitful concepts for computational geometry , which can be defined in several equivalent ways , for example , as the triangulation that maximizes the smallest angle of all triangles , or the triangulation of the centers of neighboring voronoi cells .the idea behind the -shapes algorithm is to perform a delaunay triangulation of a set of points , and then generate the so - called -complex from the union of all k - simplices ( segments , triangles and tetrahedra , for the simplex dimension k=1,2 and 3 , respectively ) , characterized by a k - circumsphere radius ( which is the length of the segment , the radius of the circumcircle and the radius of the circumsphere for k=1,2 and 3 , respectively ) smaller than a given value , ( hence the name ) .the -shape is then defined as the border of the -complex , and is a polytope which can be , in general , concave , topologically disconnected , and composed of patches of triangles , strings of edges and even sets of isolated points . 
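before turning to the pictorial description and to the gitim modifications, the sketch below shows one straightforward way the construction just described can be realized numerically with a standard delaunay triangulation: keep the tetrahedra whose circumsphere radius is below alpha and take as surface the triangular faces belonging to exactly one kept tetrahedron. this is a simplified illustration restricted to tetrahedra (in the spirit of the gitim variant discussed below) and it ignores atomic radii; the random point cloud is purely illustrative and none of this is the authors' code.

```python
# Illustrative alpha-complex surface from a Delaunay triangulation,
# restricted to tetrahedra only.
from collections import Counter
from itertools import combinations
import numpy as np
from scipy.spatial import Delaunay

def circumradius(p: np.ndarray) -> float:
    """Circumsphere radius of a tetrahedron given as a (4, 3) array."""
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    center = np.linalg.solve(A, b)
    return float(np.linalg.norm(center - p[0]))

def alpha_surface(points: np.ndarray, alpha: float):
    tri = Delaunay(points)
    kept = [t for t in tri.simplices if circumradius(points[t]) < alpha]
    faces = Counter()
    for t in kept:
        for face in combinations(sorted(t), 3):
            faces[face] += 1
    boundary = [f for f, count in faces.items() if count == 1]
    surface_atoms = sorted({i for f in boundary for i in f})
    return boundary, surface_atoms

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 3.0, size=(200, 3))     # toy "droplet" of 200 atoms
faces, surf = alpha_surface(pts, alpha=0.6)
print(len(faces), "boundary triangles,", len(surf), "surface atoms")
```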
in a pictorial way, one can imagine the -shape procedure as growing probe spheres at every point in space until they touch the nearest four atoms .these spheres will have , in general , different radii .those atoms that are touched by spheres with radii larger than the predefined value are considered to be at the surface .o / ccl system .the oxygen atoms at the interface between the h phase ( inner ) and ccl phase ( outer ) as recognized by the gitim algorithm are represented with an additional halo .unconnected points belong to molecules which cross periodic boundary conditions.[fig : mixture ] ] an example of the result of the -shapes algorithm in two dimensions is sketched in fig .[ fig : sketch]a .the itim algorithm is based instead on the idea of selecting those atoms of one phase that can be reached by a probe sphere with fixed radius streaming from the other phase along a straight line , perpendicular to the macroscopic surface .an atom is considered to be reached by the probe sphere if the two can come at a distance equal to the sum of the probe sphere and lennard - jones radii , and no other atom was touched before along the trajectory of the probe sphere . in practice , one selects a finite number of streamlines , and if the space between them is considerably smaller than the typical lennard - jones radius , the result of the algorithm is practically independent of the location and density of the streamlines .the same is not true regarding the orientation of the streamlines ; this is a direct consequence of the algorithm being designed for planar surfaces only .the basic idea behind the itim algorithm are sketched in fig .[ fig : sketch]b .a closer inspection reveals that the condition of being a surface atom for the itim algorithm resembles very much that of the -shapes case .quadruplets of surface atoms identified by the itim algorithm have the characteristic of sharing a common touching sphere having the same radius as the probe sphere . in this way, one can see the analogy with the -shapes algorithm , the parameter being used instead of .the most important differences in the -shapes algorithm with respect to itim are the absence of a volume associated with the atoms , and its independence from any reference frame .we devised , therefore , a variant of the -shapes algorithm that takes into account the excluded volume of the atoms . in the approach presented here the usual delaunay triangulationis performed , but the -complex is computed substituting the concept of the circumsphere radius with that of the radius of the touching sphere , thus introducing the excluded volume in the calculation of the -complex . note that this is different from other approaches that are trying to mimic the presence of excluded volume at a more fundamental level , like the weighted -shapes algorithm , which uses the so - called regular triangulation instead of the delaunay one .in addition , in order to eliminate all those complexes , such as strings of segments or isolated points , which are rightful elements of the shape , but do not allow a satisfactory definition of a surface , the search for elements of the -complex stops in our algorithm at the level of tetrahedra , and triangles and segments are not checked . 
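the streamline - based probe test described above can be sketched in a few lines ; the grid spacing , the uniform radii , the restriction to the upper side and the neglect of periodic boundary conditions are simplifications of this illustration , not features of the actual itim implementation .

```python
# schematic ITIM pass: probe spheres of radius r_probe move down vertical test lines;
# for each line, the atom that stops the probe first (highest contact point) is interfacial.
import numpy as np

def itim_surface(pos, radii, box_xy, r_probe=0.2, grid=0.05):
    """pos: (N,3) coordinates with z along the macroscopic surface normal."""
    surface = set()
    for x in np.arange(0.0, box_xy[0], grid):
        for y in np.arange(0.0, box_xy[1], grid):
            d2 = (pos[:, 0] - x)**2 + (pos[:, 1] - y)**2
            reach = (radii + r_probe)**2
            hit = d2 < reach                       # atoms this streamline can touch at all
            if not np.any(hit):
                continue
            # height of the probe centre when it first contacts each reachable atom
            z_contact = pos[hit, 2] + np.sqrt(reach[hit] - d2[hit])
            surface.add(int(np.flatnonzero(hit)[np.argmax(z_contact)]))
    return sorted(surface)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 2.0, size=(400, 3)); pos[:, 2] *= 0.5   # a 2 x 2 x 1 nm toy slab
radii = np.full(len(pos), 0.15)                              # uniform "LJ" radii, nm
print(len(itim_surface(pos, radii, box_xy=(2.0, 2.0))), "atoms tagged as interfacial (upper side)")
```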
in this sense gitim can provide substantially different results from the original -shapes algorithm .the equivalent of the -complex is then realized by selecting the tetrahedra from the delaunay triangulation whose touching sphere is smaller than a probe sphere of radius , and the equivalent of the -shape is just its border , as in the original -shapes algorithm .the procedure to compute the touching sphere radius is described in the appendix . in the implementation presented here , in order to compute efficiently the delaunay triangulation ,we have made use of the quickhull algorithm , which takes advantage of the fact that a delaunay triangulation in dimensions can be obtained from the ridges of the lower convex hull in dimensions of the same set of points lifted to a paraboloid in the ancillary dimension .the quickhull algorithm employed here has the particularly advantageous scaling of its computing time with the number and of input points and output vertices , respectively .a separate issue is represented by the calculation of the intrinsic profiles ( whether profiles of mass density or of any other quantity ) as the distance of an atom in the phase of interest from the surface is not calculated as straightforwardly as in the respective non - intrinsic versions . for each atom in the phase , in fact , three atoms among the interfacial ones have to be identified in order to determine by triangulation the instantaneous , local position of the interface .this issue will be discussed in sec .[ sec : comparison ] for the planar , for the spherical or quasi - spherical and for the general case : here we simply note that we turned down an early implementation of the algorithm that searches for these surface atoms , based on the sorting of the distances using algorithms like quicksort , in favor of a better performing approach , based on kd - trees , a generalization of the one - dimensional binary tree , which are still built in a time , but allow for range search in ( typically ) time .we have compared the results of the itim and gitim algorithms applied to the water / carbon tetrachloride interface composed of 6626 water and 966 ccl molecules .the water and ccl molecules have been described by the tip4p model , and by the potential of mcdonald and coworkers , respectively .the molecules have been kept rigid using the shake algorithm .this simulation , as well as the others reported in this work have been performed using the gromacs simulation package employing an integration time step of 1 fs , periodic boundary conditions , a cutoff at 0.8 nm for lennard - jones interactions and the smooth particle mesh ewald algorithm for computing the electrostatic interaction , with a mesh spacing of 0.12 nm ( also with a cut - off at 0.8 nm for the real - space part of the interaction ) .all simulations were performed in the canonical ensemble at a temperature of 300k using the nos hoover thermostat with a relaxation time of 0.1 ps .a simulation snapshot of the h / ccl interface is presented in fig .[ fig : mixture ] , where the surface atoms identified by the gitim algorithm using a probe sphere radius of 0.25 nm are highlighted using a spherical halo . ]we have used the itim and gitim algorithms to identify the interfacial atoms of the water phase in the system , for different sizes of the probe sphere . in general, gitim identifies systematically a larger number of interfacial atoms than itim for the same value of the probe sphere radius , as it is clearly seen in fig . [fig : nvsa ] . 
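the kd - tree based neighbour search mentioned in this passage can be illustrated with scipy s cKDTree ; the data below are random , and only the raw k - nearest query is shown , while the triangle - selection rule actually used for the interpolation is described later in the text .

```python
# neighbour search used when assigning bulk atoms to nearby surface atoms:
# build a kd-tree on the surface atoms once, then query it for every bulk atom.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
surface_atoms = rng.uniform(0, 5.0, size=(2000, 3))   # positions of interfacial atoms, nm
bulk_atoms = rng.uniform(0, 5.0, size=(10000, 3))     # atoms whose intrinsic distance is wanted

tree = cKDTree(surface_atoms)              # built once per frame, O(n log n)
dist, idx = tree.query(bulk_atoms, k=3)    # 3 nearest surface atoms per bulk atom
# dist[i], idx[i] give candidate surface atoms for interpolating the local interface position
print(dist.shape, idx.shape)               # (10000, 3) (10000, 3)
```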
remarkably , for values of the probe sphere radius smaller than about 0.2 nm ( compare , for example , with the optimal itim parameter nm suggested in ref . ) , the interfacial atoms identified by gitim show the onset of percolation .the reason for this behavior traces back to the fact that itim is unable to identify voids buried in the middle of the phase , as it is effectively probing only the cross section of the voids along the direction of the streamlines .this difference could explain the higher number of surface atoms identified by gitim , as voids in a region with high local curvature ( or , in other words , with a local surface normal which deviates significantly from the macroscopic one ) will not be identified as such by itim . in gitim , on the contrary , probe spheres can be thought as inflating at every point in space instead of moving down the streamlines , and this is the reason why the algorithm is able to identify also small pockets inside the opposite phase .it is possible to make a rough but enlightening analytical estimate of the probability for a probe sphere of null radius in the itim algorithm to penetrate for a distance in a fluid of hard spheres with diameter and number density .using the very crude approximation of randomly distributed spheres , the probability to pass the first molecular layer , at a depth is the effective cross section , and that of reaching a generic depth can be approximated as , where defines a penetration depth .therefore , using a probe sphere with a null radius , itim will identify a ( diffuse ) surface at a depth , while gitim will identify every atom as a surface one . for water at ambient conditions , the penetration is nm , a distance smaller than the size of a water molecule itself .this could explain why in ref ., even using a probe sphere radius as small as 0.05 nm , almost only water molecules in the first layer were identified as interfacial ones by itim ( see the almost perfectly gaussian distribution of interfacial water molecules in fig.9 of ref . ) .o / ccl system in one simulation snapshot as recognized by gitim exclusively ( small spheres ) , itim exclusively ( large spheres ) or by both methods ( sphere with halo).[fig : recognition ] ] nevertheless , it is important for practical reasons to be able to match the outcome of both algorithms .it turns out that choosing so that the average number of interfacial atoms identified by both algorithms is roughly the same leads also , not surprisingly , to very similar distributions . 
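the hard - sphere penetration estimate discussed above can be put into numbers ; the density of water , the effective molecular diameter and the convention used for the effective cross section are assumptions of this sketch , so the result is only indicative of the order of magnitude .

```python
# crude penetration estimate for a zero-radius ITIM probe in a random hard-sphere fluid:
# p(z) ~ exp(-rho * sigma_eff * z), so the penetration depth is lambda = 1/(rho * sigma_eff).
import numpy as np

rho = 33.4                            # number density of water, molecules / nm^3 (assumed)
for d in (0.28, 0.32):                # assumed effective molecular diameters, nm
    sigma_eff = np.pi * d**2          # one possible convention for the effective cross section
    lam = 1.0 / (rho * sigma_eff)
    print(f"d = {d:.2f} nm  ->  penetration depth ~ {lam:.3f} nm")
# with these assumptions the depth comes out well below a molecular diameter,
# in line with the qualitative statement above.
```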
the probe sphere radius required for gitim to obtain a similar average number of surface atom as in itimcan be obtained by an interpolation of the values reported in fig .[ fig : nvsa ] .an example showing explicitly the interfacial atoms identified by the two methods ( nm for itim and nm for gitim ) is presented in fig .[ fig : recognition ] : roughly 85% of surface atoms are identified simultaneously by both methods , demonstrating the good agreement between the two methods once the probe sphere radius has been re - gauged .the condition of identifying the same atoms as interfacial ones is much more strict that any condition on average quantities , like the spatial distribution of interfacial atoms or intrinsic density profiles .hence , it is expected that a good agreement on such quantities can also be achieved ..[fig : density ] ] the intrinsic density profiles of water and carbon tetrachloride are reported in fig .[ fig : density ] , as computed by itim and gitim , respectively , with the interfacial water molecules as reference .the procedure for identifying the local distance of an atom from the surface is in its essence the same as described in ref . .starting from the projection of the position of the given atom to the macroscopic interface plane , the two interfacial atoms closest to are found ( their position on the interface plane being and , respectively ) .the third closest atom with projection has then to be found , with the condition that the triangle contains the point .a linear interpolation of the elevation of from those of the other points is eventually performed , and employed to compute the distance which is used to compute the intrinsic density profile . efficient neighbor search for the , and candidate atoms is implemented using kd - trees as discussed before .the two pairs of profiles are very similar , besides a small difference in the position and height of the main peak of the ccl profile ( curves on the right in fig .[ fig : density ] ) and in the minimum of the water profile ( curves on the left in fig .[ fig : density ] ) right next to the surface position , which are anyway compatible with the differences observed between various methods for the calculation of intrinsic density profiles .the delta - like contribution of the water molecules at the surface is included in the plot in fig .[ fig : density ] , and defines the origin of the reference system .negative values of the signed distance from the interface correspond to the aqueous phase .before applying gitim to non - planar interfaces , one important issue has still to be solved , namely that of the proper calculation of intrinsic density profiles in non - planar geometries . in general , one uses one - dimensional density profiles ( intrinsic or non - intrinsic ) when the system is , or is assumed to be , invariant under displacements along the interface , so that the orthogonal degrees of freedom can be integrated out .when the interface has a non - planar shape , one needs to use a different coordinate system . in the case of a quasi - spherical object for example, one could use the spherical coordinate system to compute the non - intrinsic density profile , and normalize each bin by the integral of the jacobian determinant , that is the volume of the shell at constant distance from the origin . 
in the intrinsic case , however , it is necessary to know at every time step the volume of the shells at constant distance from the interface .the volume of shells at constant intrinsic distance can , in principle , be calculated at each frame by regular numerical integration , but this would require a large computing time and storage overhead . here , instead , we propose to employ an approach based on simple monte carlo integration : in parallel with the calculation of the histograms for the various phases , we compute also that of a random distribution of points , equal in number to the total atoms in the simulation . the volume of a shell can be estimated as box volume times the ratio of the number of points found at a given distance and the total number of random points drawn .we are following the heuristic idea that for each frame one does not need to know the volume of the shell with a precision higher than that of the average number of atoms in it , .in addition , we assume that the surface area of the interface is large enough for the shell volume variations to be small with respect to its average value . the average density can be approximated as .\ ] ] when the relative volume changes are small , one can therefore simply normalize the histogram by the average volume obtained by the monte carlo procedure , disregarding the terms of order .the correctness of our assumption is demonstrated incidentally by the application of this normalization once again to the planar case .the thin lines in fig .[ fig : density ] represent the itim intrinsic mass density profile of water and carbon tetrachloride , using the monte carlo normalization scheme instead of the usual normalization with box cross sectional area and slab width .close to the interface , the monte carlo normalization gives results which are fully compatible with the usual method , showing that the accuracy of the volume estimate is adequate . onthe other hand one can see that far from the interface the two profiles behave quite differently .the case with usual normalization decays slowly to zero : this effect is due to the presence of the second interface , whose profile is smeared again by capillary waves . the case with monte carlo normalization , on the contrary , shows that it is possible to recover the proper intrinsic density also at larger distances , and features such as the fourth peak at 2 nm , which are completely hidden in the normal picture , can be revealed .this shows that the use of the proper , curvilinear coordinate system is of fundamental importance also for macroscopically planar interfaces .the calculation of the monte carlo normalization factors does not change the typical scaling of the algorithm , as it consists in calculating the histogram for an additional phase of randomly distributed points ( which effectively behaves as an ideal gas ) . the better accuracy at larger distances ,however , demonstrates that the use of the monte carlo normalization is much more efficient than the standard approach , as it requires much smaller systems to be able to extract the same information ( e.g. , to resolve the fourth peak in fig .[ fig : density ] , an additional slab of about 2 - 3 nm would have been needed ) . in this sense , the monte carlo normalization procedure can be even beneficial in terms of performance .] 
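the monte carlo normalization can be demonstrated on a toy system in which the intrinsic distance is known analytically ( the signed distance from a sphere ) and the `` solvent '' is an ideal gas of uniform density , which the normalized profile should recover ; bin edges and particle numbers are arbitrary .

```python
# Monte Carlo normalization of an intrinsic profile: histogram the atoms and, in parallel,
# a set of uniformly distributed random points; the latter histogram estimates the bin volumes
# (including bins whose shells are clipped by the box, which a naive Jacobian would miss).
import numpy as np

rng = np.random.default_rng(3)
box = np.array([6.0, 6.0, 6.0])                    # nm
center, R = box / 2, 1.5                           # toy "interface": a sphere of radius R

def intrinsic_distance(r):
    return np.linalg.norm(r - center, axis=1) - R  # signed distance from the toy surface

atoms = rng.uniform(0, box, size=(60000, 3))       # ideal-gas "solvent", known uniform density
ideal = rng.uniform(0, box, size=(60000, 3))       # reference points for the volume estimate

bins = np.linspace(-1.5, 2.5, 41)
h_atoms, _ = np.histogram(intrinsic_distance(atoms), bins=bins)
h_ideal, _ = np.histogram(intrinsic_distance(ideal), bins=bins)

shell_volume = box.prod() * h_ideal / len(ideal)   # V_box * (fraction of random points in bin)
with np.errstate(divide="ignore", invalid="ignore"):
    density = h_atoms / shell_volume               # number density per bin, atoms / nm^3

print("bulk density put in :", round(len(atoms) / box.prod(), 1))
print("recovered profile   :", np.round(density[20:25], 1))   # ~ flat, up to statistical noise
```

with the monte carlo estimate of the shell volumes the recovered profile is flat at the known bulk density , which is the behaviour exploited above for the macroscopically planar case as well .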
, solid line , , dashed line ) .the vertical dashed lines marks the position of the interface .[ fig : order ] ] dodecylphosphocholine ( dpc ) is a neutral , amphiphilic molecule with a single fatty tail that can form micelles in solution : these play a relevant role in biochemistry , especially for nmr spectroscopy investigations aiming at understanding the structure of proteins or peptides bound to an environment that is similar to the biological membrane .the molecular structure of dpc is shown in fig.[fig : micelle ] .we have simulated for 500 ps a micelle of 65 dpc and 6305 water molecules using the force field and configurations from tieleman and colleagues , and have calculated the intrinsic mass density profiles of both phases ( dpc and water ) using gitim and the monte carlo normalization procedure , with a probe sphere radius .the result of the interfacial atoms identification on the dpc micelle for a single frame is shown in fig . [fig : micelle ] , where water molecules have been removed for the sake of clarity , and interfacial atoms are highlighted as usual with a halo .the intrinsic mass density profile , calculated relative to the dpc surface , is reported in fig .[ fig : order ] , with the dpc mass density profile shown on the left , and the water profile on the right . as usual, the delta - like contribution at identifies the contribution from interfacial dpc atoms .in addition , we have calculated , for the first time , the intrinsic profiles of the orientational order parameters and of the water molecules around the dpc micelle .the two parameters are defined as and , where and are the angles between the water molecule position vector ( with respect to the micelle center ) , and the water molecule symmetry axis and molecular plane normal , respectively .the orientation is taken so that when the hydrogen atoms are farther from the micelle than the corresponding oxygen .the complete picture of the orientation of water molecules would be delivered by the calculation of the probability distribution , but here we limit our analysis to the two separate order parameters and their intrinsic profiles . note that , since these quantities are computed per particle , there is no need to apply any volume normalization .the polarization of water molecules , which is proportional to , appears to be different from zero only very close to the micellar surface . in particular, has a correlation with the main peak of the intrinsic density profile in the proximity of 0.4 nm .water molecules located closer to the interface show a first change in the sign of the polarization and a subsequent one when crossing the interface . farther than 0.25 nm inside the micelle ,not enough water molecules are found to generate any meaningful statistics .also the order parameter is practically zero beyond 0.6 nm , and again a correlation is seen with the main peak of the intrinsic density profile , and the maximum in the orientational preference is found just next to the interface , where , showing that water molecules are preferentially laying parallel to the interfacial surface .soot model represented in section ( right , triangulated surface ) and in whole ( left , wireframe ) with the atoms identified by gitim as surface ones highlighted using thicker , red elements . 
besides surface atoms ,also chemical bonds between surface atoms are highlighted , as well as five , six and seven membered rings ( filled surfaces).[fig : soot ] ] one of the main byproducts of hydrocarbon flames , soot is thought to have a relevant impact on atmospheric chemistry and global surface warming .electron , uv , and atomic force microscopy have revealed the size and structure of soot particles from different sources at different scales . in particular , soot emitted by aircraftis found to be made of several , quasi - spherical , concentric graphitic layers of size in the range from 5 to 50 nm .we have used four model structures ( s,s , s and s from ref . ) to demonstrate the ability of gitim to identify surface atoms in complex geometries . in fig .[ fig : soot ] , the s model is represented in section as a triangulated surface ( right ) , showing the four concentric layers , and in whole ( left ) showing the surface atoms as detected by gitim using nm .the histograms of the total number of atoms and of the surface ones , as a function of the distance from the center of the soot particles , are shown in fig .[ fig : sootdens ] for the four different models , where it is seen how particles of the size of a water molecule have mostly access only to the inner and outer parts of the innermost and outermost shell , respectively , and cover them almost completely .this finding is in a clear accordance with the results of the void analysis and adsorption isotherm calculations presented in ref . .] bile acids , such as cholic acid are biological amphiphiles built up by a steroid skeleton and side groups attached to it .the organization of these side groups is such that hydrophilic and hydrophobic groups are located at the two opposite sides of the steroid ring .thus , bile acids have a hydrophilic and a hydrophobic face ( often referred to as the and side , respectively ) rather than a polar head and an apolar tail , as in the case of other surfactants like , for example , dpc .the unusual molecular shape leads to peculiar aggregation behavior of bile acids . at relatively low concentrations they form regular micelles with an aggregation number of 2 - 10 , while above a second critical micellar concentrationthese primary micelles form larger secondary aggregates by establishing hydrogen bonds between the hydrophilic surface groups of the primary micelles .these secondary micelles are of rather irregular shape, which makes them an excellent test system for our purposes . 
herewe analyze the surface of a secondary cholic acid micelle composed of 35 molecules , extracted from a previous simulation work and simulated for the present purposes for 500 ps in aqueous environment .an instantaneous snapshot of the micelle is shown in fig.[fig : bile ] ( water molecules are omitted for clarity ) together with a schematic structure of the cholic acid molecule .we calculated the density profile of water as well as of cholic acid relative to the intrinsic surface of the micelle by the gitim method .the resulting profiles are shown in fig.[fig : biledens ] .the micelle has a characteristic elongated shape , which exposes a large part of its components to the solvent , so that roughly 80% of the micelle atoms are identified as surface ones .the small volume to surface ratio of the micelle is at the origin of the rather noisy intrinsic density profile for the micelle itself .the profile , in addition to the delta - like contribution at the surface , presents another very sharp peak located at a distance of about 0.18 nm inside the surface , due to the rather rigid structure of the bile molecule .the water intrinsic density profile , on the contrary , shows a marked peak at 0.25 nm , absent in the dpc micelle case , due to the presence of hydrogen bonds between water molecules and the hydroxyl groups of cholic acid . ] ]in this paper we presented a new algorithm that combines the advantageous features of both the itim method and the -shapes algorithm to be used in determining the intrinsic surface in molecular simulations .thus , unlike the original variant , this new , generalized version of itim , dubbed gitim , is able to treat interfaces of arbitrary shapes and , at the same time , to take into account the excluded volumes of the atoms in the system .it should be emphasized that the gitim algorithm is not only able to find the external surface of the phase of interest , but it also detects the surface of possible internal voids inside the phase .the method , based on inflating probe spheres up to a certain radius in points inside the phase turned out to provide practically identical results with the original itim analysis for planar interfaces .further , its applicability to non - planar interfaces was shown for three previously simulated systems , i.e. 
, a quasi - spherical micelle of dpc , molecular models of soot , and a secondary micellar aggregate of irregular shape built up by cholic acid molecules .another important result of this paper concerns the correct way of calculating density profiles relative to intrinsic interfaces , irrespective of whether they are macroscopically planar or not .thus , here we proposed a monte carlo - based integration algorithm to estimate the volume elements in which points of the profile are calculated , in order to normalize them correctly .the issue of normalization with the volume elements in macroscopically flat fluid interfaces originates from the fact that these interfaces are rough on the molecular scale , namely , at the length scale of the calculated profiles .we clearly demonstrated that using this new normalization the artificial smearing of the intrinsic density profiles far from the intrinsic interface can be avoided .two computer programs that implement , respectively , an optimized version of itim and the new gitim algorithm , as well as the calculation of intrinsic density and order parameters profiles , are made available free of charge at ` http://www.gitim.eu/ ` .the programs are compatible with the trajectory and topology file formats of the gromacs molecular simulation package .m.s . acknowledges fp7-ideas - erc grant droemu : `` droplets & emulsions : dynamics & rheology '' for financial support , and gyrgy hantal for providing the soot structures .part of this work has been done by m.s . at icp , stuttgart university .p.j . is grateful for financial support from the hungarian otka foundation under project nr .s.k . is supported by fp7-ideas - erc grant patchycolloids and rfbr grant mol_a 12 - 02 - 31374 .simulation snapshots were made with vmd .here , following ref .we derive the expressions for the radius and position of the center of the sphere which is touching four other ones , having given radii and center positions and ( or 4 ) , respectively .the conditions of touching can be expressed with the following nonlinear system of four equations : by subtracting one of them from the other three ( without loss of generality we subtract the one with ) , the quadratic term , , will be eliminated and the system eq.([eq : constr ] ) would become linear with respect to : where the matrix and the vectors and are defined as and \mathbf{r}_1 ^ 2-\mathbf{r}_3 ^ 2-r_1 ^ 2+r_3 ^ 2 \\[0.2em ] \mathbf{r}_1^ 2-\mathbf{r}_4 ^ 2-r_1 ^ 2+r_4 ^ 2 \\[0.2em ] \end{array } \right).\ ] ] equation ( [ eq : linear ] ) has a unique solution if matrix is non - singular ( the singularity of corresponds to the case when all 4 spheres are co - planar , which means that the unknown sphere either does not exist , or is not unique ) : where and .once eq.([eq : r ] ) is substituted into the first of the constraints eq.([eq : constr ] ) , it leads to the quadratic algebraic equation with respect to : where .the solution of eq .( [ eq:2ndorder ] ) can be found in the following form : if is not equal to unity ( which corresponds to the case when the 4 spheres are tangential to one plane ) , then eq.([eq : r_pm ] ) provides two different solutions , and the positive one provides the radius of the touching sphere as a function of the centre position .eventually , the positions of their centres can be obtained by inserting into eq.([eq : r ] ) . in the present manuscript in case of two possible solutions we choose the sphere with minimal radius ., m. facello , p. fu , and j. 
liang , measuring proteins and voids in proteins , in _ proc . 28th annu . hawaii intl . conf . on system sciences _ , vol . 5 , pp . 256 - 264 , los alamitos , california , 1995 , ieee computer society press .
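for completeness , the appendix derivation of the touching - sphere radius can be implemented directly ; the numpy sketch below follows the linearization and the quadratic equation described above and , as in the text , keeps the solution with the smaller non - negative radius . the tetrahedral self - check at the end is an illustrative test case , not taken from the simulations .

```python
# touching sphere of four given spheres (appendix): |c - r_i|^2 = (R + R_i)^2, i = 1..4.
# subtracting the i = 1 equation gives a linear system M c = R u + v, i.e. c = a R + b;
# substituting back into the first constraint leaves a quadratic equation for R.
import numpy as np

def touching_sphere(centers, radii):
    r, rad = np.asarray(centers, float), np.asarray(radii, float)
    M = 2.0 * (r[1:] - r[0])                       # singular if the four centres are coplanar
    u = 2.0 * (rad[0] - rad[1:])
    v = (np.sum(r[1:]**2, axis=1) - np.sum(r[0]**2)) + rad[0]**2 - rad[1:]**2
    a = np.linalg.solve(M, u)                      # c = a R + b
    b = np.linalg.solve(M, v)
    w = b - r[0]
    A = np.dot(a, a) - 1.0
    B = 2.0 * (np.dot(a, w) - rad[0])
    C = np.dot(w, w) - rad[0]**2
    roots = np.roots([A, B, C]) if abs(A) > 1e-12 else np.array([-C / B])
    R = min(x.real for x in roots if abs(x.imag) < 1e-9 and x.real >= 0.0)
    return R, a * R + b

# self-check: four unit spheres at the vertices of a regular tetrahedron
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
R, c = touching_sphere(verts, [1.0, 1.0, 1.0, 1.0])
print("R =", round(R, 4), " residuals:",
      [round(np.linalg.norm(c - p) - (R + 1.0), 6) for p in verts])
```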
|
we present a generalized version of the itim algorithm for the identification of interfacial molecules , which is able to treat arbitrarily shaped interfaces . the algorithm exploits the similarities between the concept of probe sphere used in itim and the circumsphere criterion used in the -shapes approach , and can be regarded either as a reference - frame independent version of the former , or as an extended version of the latter that includes the atomic excluded volume . the new algorithm is applied to compute the intrinsic orientational order parameters of water around a dpc and a cholic acid micelle in aqueous environment , and to the identification of solvent - reachable sites in four model structures for soot . the additional algorithm introduced for the calculation of intrinsic density profiles in arbitrary geometries proved to be extremely useful also for planar interfaces , as it allows to solve the paradox of smeared intrinsic profiles far from the interface .
|
a series of numerical simulations of an upward bubbly flow with a void fraction of 3% between two vertical walls using direct numerical simulation with a front tracking method is performed . a rectangular computational domain with two no - slip vertical wall and periodic boundary conditions in streamwise and spanwise directions is used .the effect of bubble deformability is isolated through controlling the surface tension while keeping other flow parameters fixed .the flow is driven upward by a specified pressure gradient .the dimensionless numbers in the problem are reynolds number , , eotvos number , , and density and viscosity ratios . here , , and are the liquid density and viscosity , surface tension coefficient , bubble diameter , channel width , average vertical velocity in the channel , and acceleration due to gravity , respectively .the pressure gradient and eotvos number are specified _ a priori _ and the reynolds number is calculated from the average velocity _ a posteriori_. eotvos number is decreased from 4.5 to 0.9 . the decrease in the deformability of bubbles changes the lateral lift force on the bubbles .therefore , the distribution of the bubbles changes from a uniform distribution in the middle of the channel to a wall - peak distribution with only a few bubbles in the center .the reduction of the number of the bubbles in the center increases the average density there which leads to a reduction in flow rate since the driving pressure gradient is kept fixed . as a result, the reynolds number drops from 3750 to 1700 .
|
this article describes the fluid dynamics video : `` effect of bubble deformability on the vertical channel bubbly flow . '' the effect of bubble deformability on the flow rate of bubbly upflow in a turbulent vertical channel is examined using direct numerical simulations . a series of simulations with bubbles of decreasing deformability reveals a sharp transition from a flow with deformable bubbles uniformly distributed in the middle of the channel to a flow with nearly spherical bubbles with a wall - peak bubble distribution and a much lower flow rate .
|
the game developed by walter penney is not particularly well known and we think that half the fun is to be found in playing the game with your friends . therefore we would like to take this first foray into the development of the game as an introduction and a lesson . the game is classically not played with suited cards ( as is its treatment in ) , but instead with a fair coin . we discuss penney s probabilistic game in the light of the coin . imagine that two players , and , are to play penney s game . then we may assume , without loss of generality , that has challenged , and for that reason it falls first to to elect a `` binary '' ( though it is perhaps more natural to consider it either `` heads '' or `` tails '' ) sequence of length three . it is easy to see that there are eight such sequences , and let us suppose that has ( lucklessly ) chosen , that is , the sequence of three heads one after the other . in response , also chooses a sequence . let us pointedly suppose that chooses the sequence . then the game proceeds as follows : a coin is flipped repeatedly until either the first or second player s sequence is observed . in that case , the player whose sequence is found first is declared the winner , and we say that this player has won the `` trick '' .
as many tricks as one likes may be played , and the player who wins the most tricks wins overall . for example , the following sequence may result from a fair coin , then we can by inspection confirm that the sequence chosen by occurs at the eighth 3-sequence ( immediately preceding elected 3-sequence ) . then we have that has won this particular trick . what makes this game particularly interesting is that there exists an _ optimal _ strategy that may be taken by the second player ( in this case ) such that they may gain a probabilistic advantage over the first player ( here , ) . we formalize this notion in the following definition . we say that a 3-sequence of binary random variables is a strategy . furthermore , such a strategy is optimal for another strategy iff the probability that the sequence will occur before the sequence ( and thereby winning the trick ) is larger than . we denote this probability using the notation , the word `` non - transitive '' may at first be rather intimidating , but as it happens almost everyone has been met with a non - transitive game at one point in their lives or another . we begin by defining a non - transitive relationship . a relationship between strategies , , and is said to be non - transitive if is optimal for and is optimal for , but is not optimal for . this is quite easy to find in everyday life , especially among young children ! if one considers carefully , then it should be apparent that the classical game of rock - paper - scissors constitutes a non - transitive game . as we are playing a slightly different game in rock - paper - scissors , we shall for this section only consider a _ strategy _ to be one of the game s namesake possibilities . then we know that rock is optimal for scissors , and that scissors is optimal for paper . but if rock - paper - scissors were a transitive game , then rock would be optimal for paper . indeed , this is not the case , and in fact just the opposite since paper is optimal for rock ! therefore , rock - paper - scissors is a non - transitive game . similarly , the penney ante game is also a non - transitive game , which is what makes it interesting . unlike rock - paper - scissors , however , where rock _ always _ beats scissors , it is conceivable in the penney ante game that a sub - optimal strategy may still win ! thus , optimality must be defined in terms of probabilities rather than in terms of absolute properties . the penney ante game is a non - transitive game . as will follow from an exhaustive proof later in the paper , we will have that has an optimal counterpart in . furthermore , is optimal for .
in a transitive game, this would suggest that is optimal for , but indeed they are not , for each element in the strategy is simply the `` negation '' of the corresponding element of the other .thus , if one strategy were sub - optimal , one could make it optimal by relabeling the sides of the coin .interestingly , penney s game exhibits what we suppose might be called a four - directional non - transitive relationship . continuing where we left off ,we will demonstrate later that the strategy is optimal for .but crucially we have at last that is optimal for .this is a result precisely opposite of what one would expect in a transitive game .[ lemma : hhh_response ] assume that elects the sequence and that in response chooses the sequence .then will win iff comprises the first sequence to appear .the direction is quite immediate as if the first sequence to appear is then has won by definition .the direction requires closer examination .suppose that the first sequence was not .then we have the implication that there must therefore be at least one within the first 3-sequence .then we have the following four possibilities for the next 2-sequence following the : : : : in this case , the total 3-sequence is , and so wins the trick . : : : since the sequence ends on a , nothing has changed since we began , and we may take the sequence to `` reset '' in a sense on the last occurring . : : : from here , the sequence can either transition to a ( in which case wins the trick ) , or to a , in which case we can apply the logic in the point above . : : : in this case , nothing has changed , so we may take the sequence to reset on the last again .notice that in each of the four exhaustive conditions above , none allow for the possibility of winning the trick .therefore , we conclude that can not win if a has occurred within the first 3-sequence. therefore , can only win when the first three sequence does not contain , or in other words , if the first 3-sequence is .the probability that wins given that he selects the sequence and that plays optimally is , incidentally , the probability that wins under these conditions is .[ theorem : optimal ] denote by a bernoulli random variable with parameter .then the sequence is a binary sequence of length three , which is analogous to the sequence chosen by in penney s game .then an optimal strategy for to assume is the sequence , where is the logical negation such that , the proof is fortunately very straightforward , and it requires only the confirmation that sequence has consistently a higher probability of occurring first than does s .the eight possible strategies can be enumerated completely . from itis immediately clear that is optimal for .if we assume that takes the strategy , then predicts that the strategy is optimal .we can confirm this by calculating the probability that will occur before the sequence . 
therefore , we have via complementary events that , this confirms that is an optimal strategy for . this concludes the cases for which we provide a proof for the moment . let us next examine the case where has elected the 3-sequence and chooses . since both strategies depend on an initial result , we may assume without loss of generality that the first coin flip in the sequence yields . this proof is most easily understood via a markov chain argument , so we diverge here briefly to discuss markov chains and their relationship to penney s game . let be the transition matrix of a markov chain . let the rows and columns of the transition matrix correspond to particular 3-sequences arising in the penney ante game . then the probability of transition from the 3-sequence to the sequence is given by the entry . [ theorem : markov ] let be the transition matrix of a markov chain and let the row - vector denote a probability vector of the state distributions initially . then the probability of arriving in the 3-sequence after iterations of the game is , in fact , the states may be generalized beyond 3-sequences to any number of states existing in an abstract system . it is this property that makes the theorem so useful in applications . [ lemma : markov_penney ] we can now demonstrate how markov chains can be leveraged to address two further cases in penney s game . in particular , is optimal for both and . in the first case , we consider . then predicts that should choose the strategy . in this case , we wish to understand . notice that the transition matrix , , for this game may be written , then it may be demonstrated computationally that we have asymptotically the following behavior of : let us denote . notice then that each of the initial states is equally likely , each occurring with probability . therefore , provides , this shows that and that , therefore , strategy is optimal for under these circumstances . to demonstrate that is optimal for , we proceed simply in the same manner . we do not reproduce here the calculations as we did in the previous case , but they are easy to check using a programming language such as matlab or python ( a sketch of this computation is given below ) . let be the transition matrix for the markov chain corresponding to the game where is strategy and is sequence . then we have that , it follows from here that , in turn , this provides the desired result . as it happens , it is possible to demonstrate for each of the eight strategies in penney s game that the corresponding optimal strategy _ is _ optimal using a markov chain argument . however , in order to demonstrate a broader range of mathematics , we do not take that approach here . [ figure : a digraph on the eight 3-sequences ( hhh , hht , hth , htt , thh , tht , tth , ttt ) , with edges linking each sequence to the strategy that is optimal against it ] we have already demonstrated proofs that yields an optimal strategy for the cases where chooses either of the 3-sequences or . by , we have immediately that gives the correct strategy for the cases where elects either of the strategies or . the remainder of the proof is not much work .
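a sketch of the markov - chain machinery used here and in the following passages is given below : the states are the eight 3-sequences , the two chosen sequences are made absorbing , and the absorption probabilities , the fundamental matrix and the expected number of flips are read off ; a small monte carlo cross - check is included . the uniform start over the first three flips follows the convention of the text , while variable names and the number of simulated trials are illustrative .

```python
# Markov-chain machinery for a Penney game between sequences sa and sb:
# states are the eight 3-sequences; the two chosen sequences are absorbing.
import itertools, random
import numpy as np

STATES = ["".join(s) for s in itertools.product("HT", repeat=3)]

def transition_matrix(sa, sb):
    P = np.zeros((8, 8))
    for i, s in enumerate(STATES):
        if s in (sa, sb):
            P[i, i] = 1.0                          # absorbing states
            continue
        for c in "HT":
            P[i, STATES.index(s[1:] + c)] += 0.5   # drop the oldest flip, append a new one
    return P

def analyse(sa, sb):
    P = transition_matrix(sa, sb)
    transient = [i for i, s in enumerate(STATES) if s not in (sa, sb)]
    absorbing = [STATES.index(sa), STATES.index(sb)]
    Q, R = P[np.ix_(transient, transient)], P[np.ix_(transient, absorbing)]
    N = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix
    B = N @ R                                      # absorption probabilities
    t = N @ np.ones(len(transient))                # expected steps to absorption
    start = np.full(8, 1.0 / 8)                    # uniform over the first three flips
    p_b = start[absorbing[1]] + start[transient] @ B[:, 1]
    flips = 3 + start[transient] @ t               # three flips set up the initial state
    return p_b, flips

def simulate(sa, sb, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        s = "".join(rng.choice("HT") for _ in range(3))
        while s not in (sa, sb):
            s = s[1:] + rng.choice("HT")
        wins += s == sb
    return wins / trials

for sa in ("HHH", "HTH"):
    sb = ("T" if sa[1] == "H" else "H") + sa[0] + sa[1]   # the not-b2, b1, b2 response
    p_exact, flips = analyse(sa, sb)
    print(f"{sb} vs {sa}:  P(win) = {p_exact:.4f} (sim {simulate(sa, sb):.4f}),"
          f"  E[flips] = {flips:.2f}")
```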
in order to avoid the tedious repetition inherent in enumerating the remaining four cases, one can get away with simply observing that we could have arbitrarily relabeled the sides of our coin so that our `` heads '' side becomes our and the `` tails '' face represents .therefore , we may argue the last four cases via symmetry as follows . : : : by symmetry and the result in , we have that is optimal for . : : : by the first part of the proof for , is an optimal strategy . : : : by we have by symmetry that is optimal for . : : : by the same logic as in the prior point , is also optimal for the strategy .this completes the proof that does indeed produce an optimal strategy for given any of the eight strategies that can elect . provides a visual reference for understanding the relationships between the strategies in the the penney ante gamelet be the transition matrix of a markov chain .we say that a state is absorbing if such that the state may never be left .states that are not absorbing are called transient .a markov chain is itself absorbing if it contains at least one such state and if it is possible to transition to one of those states from any of the other states .be the transition matrix for an absorbing markov chain .then it is possible to `` shuffle around '' the states of such that has the form , let the matrix have absorbing states and transient states . then is a matrix , and is a matrix . as usual, indicates a identity matrix and is a matrix of zeros .our analysis relies primarily on the matrix .[ theorem : fundamental ] let be the transition matrix for a markov chain in canonical form . then there exists a matrix called the fundamental matrix which is , furthermore , the entry is the expected number of times that a markov process with transition matrix visits the transient state conditioned on the fact that it began in transient state .[ theorem : expectation ] leveraging the idea that penney s game may be viewed in the light of markov chains , we are to compute the expected number of rounds that will be played before a winner is determined in the case of a single trick .let be a vector whose entry is the expected number of iterations of the markov process beginning in state undergoes before it enters an absorbing state .then we may write that , by we have that represents the expected number of times that a process beginning in any of the transient states visits each of the other transient states .thus , taking the summation of the rows of yields the expected number of times that the process beginning in state is in any of the other transient states .this is the expected number of iterations before that process is absorbed .notice that is precisely the summation of the rows expressed as a matrix multiplication .c|cccccccc + & & & & & & & & + & 0 & 6 & 4 & 6 & 0 & 6 & 4 & 6 + & 2 & 0 & 4 & 6 & 0 & 6 & 4 & 6 + & 2 & 0 & 0 & 6 & 2 & 4 & 4 & 6 + & 2 & 0 & & 0 & 2 & & & + & & & & 2 & 0 & & 0 & 2 + & 6 & 4 & 4 & 2 & 6 & 0 & 0 & 2 + & 6 & 4 & 6 & 0 & 6 & 4 & 0 & 2 + & 6 & 4 & 6 & 0 & 6 & 4 & 6 & 0 + we have now established the mathematical foundation that will allow us to present our main result .we calculate the expected number of flips of a coin that will occur before a winner is declared in penney s game .we assume that given the first player s sequence the second player will choose their strategy according to .this is , of course , dependent on knowing which of the eight possible initial circumstances ended up occurring .we exhibit these results in .we wish to draw attention to 
interesting symmetry that exists in the tabulation of the results that results from the symmetric nature of the game . [theorem : expected_length ] for an instance of penney s game in which assumes the strategy and takes the optimal strategy as in , then the expected number of coin flips before a winner is declared is , consider that each of the initial states are equally probable , each occurring with frequency equal to .then the average time to absorption for a process beginning in either of or is clearly zero .additionally , the vector presents the average time to absorption for each of the six `` non - immediate - victory '' states .if we do not condition on any particular initial sequence at the beginning of the game , we may simply average together all of the expected times to absorption .however , it is important to note that in this case averaging these expected times to absorption will not yield the expected number of coins flipped before a winner is declared .this is because we are assumed to start with a given 3-sequence in each of these cases , yet it takes three coin flips to arrive there in the first place !hence , we introduce a supplement of three to each of our expected times to absorption to obtain the expected number of coins we must flip .this represents an approach that can serve as an alternative to the method using conway numbers proposed by nishiyama in .we show the expected number of coin tosses for each of the eight possible games in .if we assume that each of the strategies that can assume are equally likely , then we have that the expected number of coins flipped in penney s ante game is given by , it is possible to derive the probability for any pair of strategies arising from penney s game . here, we present a method that uses the fundamental matrix and another theorem from markov chain theory to derive these same probability values . butfirst , we present a critical result .let be a matrix such that the value of is the probability that a markov process will be absorbed into the absorbing state given that it started in the transient state .then we may calculate as , consider that if the game begins in the 3-sequence , then the probability of being absorbed into that state is one , yielding a probability of zero for the event that the process is absorbed into .the matrix gives the probability that the process is absorbed into given that it starts in any of the transient states .therefore , given that all eight initial states are equally likely , we have that the probability of the process terminating in is the average of the probabilities that it terminates in given an initial state .this yields the equation , to see that the equation for must hold in we need only confirm that is a complementary event to .this is easily seen to be true and so we obtain , this completes the proof . 5 nishiyama , yutaka . _ pattern matching probabilities and paradoxes as a new variation on penney s coin game_. international journal of pure and applied mathematics .volume 59 no .3 2010 , pages 357 - 366 .charles grinstead and laurie snell . _ginstead and snell s introduction to probabilitiy_. second edition .american mathematical society .july 4 , 2006 .walter penney , 95 penney - ante , _ journal of recreational mathematics _, 2 ( 1969 ) , 241 .
|
in late may of 2014 i received an email from a colleague introducing to me a non - transitive game developed by walter penney . this paper explores this probability game from the perspective of a coin tossing game , and further discusses some similarly interesting properties arising out of a markov chain analysis . in particular , we calculate the number of `` rounds '' that are expected to be played in each variation of the game by leveraging the fundamental matrix . additionally , i derive a novel method for calculating the probabilistic advantage obtained by the player following penney s strategy . i also produce an exhaustive proof that penney s strategy is optimal for his namesake game .
|
interaction free measurement ( ifm ) originated with the elitzur - vaidman `` bomb '' gedanken experiment that showed it was possible to detected a single - photon , hair - triggered bomb in an interferometer without setting it off by exploiting single particle interference combined with the presence of quantum `` which - path '' information .the original bomb protocol had a success probability of only 25% .( in another 50% of the runs the bomb was detonated , and in the remaining 25% no information about the bomb was obtained . ) the protocol was improved upon by kwiat , et al . , who combined lossless ifm with a multi - pass quantum zeno effect .in our work presented here , we discovered a curious nonlinear behaviour of photon s transmission in a zeno enhanced but lossy ifm apparatus .this discovery leads us to an ifm protocol robust against photon loss and dephasing .in addition , we recast the entire protocol in terms of statistical hypothesis testing , allowing us to quantify the operation of the device as a reliable yet undetectable intruder alert system the invisible quantum tripwire .the elitzur - vaidman `` bomb '' gedanken experiment posits that there exists a bomb with a single - photon sensitive detonator and the goal is to optically detect the presence of such a bomb without detonation .in contrast to the expectations of the classical approach , where such a goal could not be reached , quantum optics allows for a solution measurement without interaction . a lossless mach - zehnder interferometer in a dark port arrangement , , and a zero phase difference between its arms , constitutes a simple ifm setup with efficiency .this scheme allows for interaction - free hypotheses testing of a path being blocked ( ) or it being clear ( ) . ]this measurement is based on a fascinating property of a single photon to interfere with itself while being indivisible .imagine a lossless mach - zehnder interferometer ( mzi ) with beam splitters described by a two mode coupling matrix \ ] ] and the possibility of a photon - sensitive object to be placed in the detection arm ( see figure [ fig : simplesetup ] ) .this detection arm stays invisible to the object for as long as a photon has not been absorbed by the object .there are two possible scenarios : the path is blocked or it is clear . if the path is clear , a single photon , after the first beam splitter , can travel both arms of an interferometer and interfere with itself at the second beam splitter . under a proper choice of beam splitters , , and a zero phase difference, such an interference will result in zero probability of the photon to leave the mzi in mode a ( dark port ) , that is .if the path is blocked by an object , there is a definite destruction of the interference as well as the probability for an object to absorb a photon , .loss of the photon tells us that an object is there , but this is a measurement with an interaction . 
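the single - pass probabilities discussed above can be tabulated for a general beam - splitter reflectivity ; the placement of the object in the reflected arm , the phase convention that keeps the dark port dark in the absence of the object , and the efficiency definition used in the last line are assumptions of this sketch , which reproduces the familiar 25% / 50% / 25% split for 50 - 50 beam splitters .

```python
# single-pass IFM outcome probabilities for a lossless Mach-Zehnder interferometer,
# with the object (if present) placed in the arm fed by the first beam splitter's reflection.
def single_pass(R, blocked):
    """R: intensity reflectivity of both beam splitters (phases tuned for a dark port)."""
    T = 1.0 - R
    if not blocked:
        return {"absorbed": 0.0, "dark port": 0.0, "bright port": 1.0}
    return {"absorbed": R,            # photon took the blocked arm and was absorbed
            "dark port": T * R,       # which-path information destroys the interference
            "bright port": T * T}

for R in (0.5, 0.1, 0.01):
    p = single_pass(R, blocked=True)
    eta = p["dark port"] / (p["dark port"] + p["absorbed"])   # one common efficiency measure
    print(f"R = {R:5.2f}  ", {k: round(v, 4) for k, v in p.items()}, f" efficiency = {eta:.3f}")
```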
without interference thereexists a non - zero probability for a photon exiting the mzi through the dark port , .detection of a photon in a dark port constitutes a measurement without an interaction .the efficiency of a given measurement is ] is known as the chernoff distance .our ifm apparatus performs interaction free hypotheses testing based on three possible outcomes : ( ) the probability of absorption because of photon loss or a measurement with an interaction , ( ) the probability of an ifm , and ( ) the probability of learning nothing where the photon exits through the bright port of the interferometer .the importance of no photon loss without an object , , and the dark - port condition , , becomes now obvious in the light of .these assumptions ensure that the error of false acceptance comes from the probability of a photon to exit through the bright port in the presence of an object and is equal to due to a 50 - 50 chance of wrongly choosing after such an outcome .the error of false acceptance in a lossless mzi with a dark port is minimized by an increase of the first beam splitter s reflectance ( ) .it means that all the photons are routed into the detection arm .hence , interaction with an object becomes unavoidable and the photon path becomes visible . in the opposite case , , the probability of an interaction with the object is significantly reduced , at the expense of high statistical error . in order to compensate for the increased statistical error ,multiple trials are required . for the photon path to stay invisible to the object, every photon must be received at the output , which happens with the probability , where the visibility distance , , is introduced for an easy comparison with the chernoff distance , .judging by these distances , it is possible for the detection to be hidden from the object , , while revealing the presence of the object with a high level of certainty .sadly , any deviation from the ideal setup such as loss , phase shifts , or non - perfect dark port arrangement makes the chernoff and visibility distances comparable ; thus effectively preventing the invisibility of a tripwire based on ifm in a setup presented in figure [ fig : simplesetup ] .iqt apparatus based on a -pass ifm in the polarization interferometer . with each pass , a photon s polarization is rotated by an angle . the presence of an object prevents accumulation of polarization rotation and is similar to the quantum zeno effect .an additional beam splitter inside the polarization interferometer models unavoidable loss in the arm accessible by the object as well as controlled loss that is adjusted to provide best performance of the iqt apparatus . ]nevertheless , an invisible quantum tripwire ( iqt ) is possible . 
we realize it through a combination of an efficient ifm apparatus and a proper interrogation technique .a possible iqt apparatus is presented in figure [ fig : zenobasedifm ] and is based on a -pass ifm apparatus , which offers improved efficiency due to the quantum zeno effect .a crucial part of iqt apparatus is , however , a quantum interrogation technique that deals much better with high sensitivity of the -pass ifm to photon loss , as well as eliminates the dark - port condition .this technique is based on the partial zeno effect and actually adds a controllable amount of loss to the detection arm by means of a beam splitter with tunable reflectivity .any attempt to register a photon ( that constitutes a tripwire ) as well as crossing the path of a photon , would immediately engage the quantum zeno effect resulting in drastic reduction of the photon loss .this effect will increase the rate at which photons exit the system and trigger the alarm , with a confidence level given by the chernoff bound .the -pass ifm apparatus itself is based on a polarization interferometer that operates in the basis of linear polarizations and .the path of vertical polarization constitutes a tripwire .the evolution of a photon s polarization state is described by successive multiplication of matrices , , corresponding to polarization rotation by and loss , , of a photon in the detection arm : \qquad { \rm and } \qquad\hat{l}\left ( \lambda\right ) = \left[\matrix { 1 & 0\cr 0 & \sqrt{1-\lambda}\cr } \right ] , \ ] ] as well as the presence or absence of an object .if the input state of a photon is then after a single pass it will be latexmath:[$|\psi_1\rangle=\hat{o}(h)\hat{l}\hat{u}\left(\theta_n\right ) after passes is , while the probability of total transmission is , where is obtained by repeating a single - pass evolution times . the single - photon transmission probability in a -pass iqt apparatus for as a function of single - cycle probability of photon loss in the detection arm .loss in a -pass iqt is optimized for this partial zeno effect to take place .the detection of an object is based on increase of transmission . ] in the ifm apparatus , a photon is initially horizontally polarized , . with each pass ,polarization is rotated by an angle , which increases a photon s probability to be in the detection arm , where the photon interacts with a beam splitter before being sent along the tripwire .we present the transmission probability as a function of a single - cycle probability of photon loss in the detection arm , , in the absence of an object ( see figure [ fig : transmissions ] ) . 
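the pass - by - pass evolution written above can be iterated numerically ; in the sketch below the total rotation angle ( taken here as pi/2 ) and the convention that the object simply removes the vertical component each pass are assumptions , but the qualitative shape of the transmission curve , including its minimum at intermediate loss and the zeno - like recovery for a blocked path , is reproduced .

```python
# N-pass evolution of the polarization state (a_H, a_V): each pass applies a rotation by
# theta = Theta/N followed by a loss factor sqrt(1 - lam) on the V (tripwire) component;
# the transmission is the surviving norm after N passes.
import numpy as np

def transmission(N, lam, Theta=np.pi / 2, blocked=False):
    th = Theta / N
    U = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    L = np.diag([1.0, 0.0 if blocked else np.sqrt(1.0 - lam)])  # the object removes V entirely
    psi = np.array([1.0, 0.0])                                  # start horizontally polarized
    for _ in range(N):
        psi = L @ (U @ psi)
    return float(psi @ psi)

for N in (5, 13, 50):
    lams = np.linspace(0.0, 1.0, 201)
    t = np.array([transmission(N, l) for l in lams])
    print(f"N = {N:2d}:  empty-path transmission has a minimum of {t.min():.3f} "
          f"at lam = {lams[t.argmin()]:.2f};  blocked-path transmission = "
          f"{transmission(N, 1.0, blocked=True):.3f}")
```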
is given for a different number of passes but with the same angle of evolution .a 100% photon loss corresponds to the presence of an object in the detection arm .one can see , transmission in this case improves with the number of passes due to the quantum zeno effect .the region of small demonstrates how an artificial lossless case behaves since even a small amount leads to a significant drop in the transmission probability .interestingly , the smallest transmission probability is for relatively high loss , but it is not high enough for the quantum zeno effect to become apparent .this partial zeno effect corresponds to a special type of quantum state evolution in the presence of a probabilistic measurement .our quantum interrogation technique is based on this special evolution .a controllable amount of loss is introduced in the detection arm by means of a beam splitter with tunable reflectivity .this additional loss in the presence of an object reduces the probability of a photon striking the object during a trial , .furthermore , we assume that reflectivity and phase shift of the additional beam splitter ( inside the interferometer ) are constantly adjusted such that detection of a photon at the output is minimal in order to operate the device at the minimum of the curve shown in figure [ fig : transmissions ] .such an adjustment is made in order to counteract changes in the environment as well as for the partial zeno effect to be maintained , which would obviously not be possible in the presence of an object .thus hypotheses testing is based on two outcomes : a low probability to detect a photon at the output in the absence of an object and 100% in its presence .the chernoff distance , in the case of a hypotheses testing apparatus with only two outcomes , registered with probabilities and or and , is where and .therefore , knowing and is sufficient for error estimation .the transmission probability could be calculated analytically only in the presence of the object , . however , in the absence of an object , the transmission probability , , is experimentally available information , which is constantly provided by the iqt apparatus .there are two primary goals of the iqt apparatus : detection of an object with high certainty , , while staying invisible , .although satisfying both goals is , in principle , possible ( see figure [ fig : result ] ) , its success is limited by the number of passage performed in practice .thus the following compromise between confidence level and invisibility is assumed .we would like ( in fact ) , which means a higher likelihood of not hitting the object with a photon than accepting the wrong hypothesis , while maintaining a confidence level above a blind guess : .in our apparatus , it is assumed that a tripwire becomes visible after a single event of a photon striking an object .therefore , the probability of a tripwire to stay invisible after trials is as before where the visibility distance , , is defined in terms of the probability to strike an object , as described earlier .chernoff and visibility distances as a function of number of passes as well as the amount of loss , in the detection arm required for partial zeno to take place .inset is a difference between those distances .invisible detection becomes possible when this difference becomes positive . 
]we numerically simulated the performance of the iqt apparatus based on the state evolution described above .for a given number of passes and , we numerically found the optimal value of loss that minimizes the single - trial transmission probability ( in the absence of an object ) .then we used this value ( ) to calculate the chernoff and visibility distances and .figure [ fig : result ] summarizes these results for a total angle of evolution as a function of number of passes .this reveals that at least 13 passes are necessary for visibility distance to become smaller than chernoff distance thus allowing for . while the iqt operating at experimentally feasible with current technology , the plots are extended over the range in order to demonstrate the asymptotic behavior .probability for the tripwire being invisible and the maximum error bound are given as functions of the number of trials , for given number of passes and .as gets larger , will stay closer to one while will go faster to zero . ]the chernoff and visibility distances are directly translated to the probability for the tripwire being invisible , and the maximum error bound of probability making the wrong decision .figure [ fig : result2 ] shows the dependence of and on the number of trials , for given numbers of passes and .we note that , as the number of passes gets larger , stays closer to one and goes faster to zero allowing the ideal iqt .ccccccc & & + n & & & & & & + 5 & 0.29 & 0.184 & 0.575 & 0.28 & 0.057 & 0.523 + 10 & 0.75 & 0.154 & 0.349 & 0.79 & 0.042 & 0.314 + 11 & 0.85 & 0.147 & 0.324 & 0.92 & 0.039 & 0.291 + 12 & 0.96 & 0.140 & 0.302 & 1.00 & 0.038 & 0.271 + 13 & 1.07 & 0.133 & 0.282 & 1.14 & 0.035 & 0.253 + 20 & 1.91 & 0.098 & 0.195 & 2.08 & 0.025 & 0.174 + 50 & 6.16 & 0.045 & 0.084 & 6.73 & 0.011 & 0.075 + table [ tab : results ] presents numerical values of the visibility distance , the ratio of the distances , as well as the operational amount of loss in the detection arm , .it again shows that at least 13 passes are required before the statistical error starts going to zero faster than the probability of staying invisible .it also shows that a requirement of the total angle of rotation to be , which is a requirement for the standard -pass ifm apparatus , could be dropped .one can actually use as an additional parameter for the optimization of iqt apparatus . 
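The scan summarized in figure [ fig : result ] and in the table can be organized along the following lines. This is our own sketch rather than the authors' code: for each number of passes the single-cycle loss is tuned to minimize the no-object transmission, the two-outcome Chernoff distance is obtained by a one-dimensional minimization over the exponent, and the per-trial probability of striking the object is modelled as (1 - lambda)(1 - T_obj), i.e. the diverted photon either is removed at the tunable beam splitter or reaches and hits the object. We have not attempted to reproduce the exact table values.

```python
# Sketch (ours, Python/SciPy) of the optimization loop described in the text:
# for each number of passes N, tune the single-cycle loss lam to minimize the
# no-object transmission, then compare the two-outcome Chernoff distance with
# the visibility distance.
import numpy as np
from scipy.optimize import minimize_scalar

def transmission_no_object(lam, n_passes, theta):
    psi = np.array([1.0, 0.0])
    u = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    l = np.diag([1.0, np.sqrt(1.0 - lam)])
    for _ in range(n_passes):
        psi = l @ (u @ psi)
    return float(psi @ psi)

def distances(n_passes, theta_total=np.pi / 2):
    theta = theta_total / n_passes
    # loss that minimizes the no-object transmission (partial Zeno regime)
    res = minimize_scalar(transmission_no_object, bounds=(1e-6, 1 - 1e-6),
                          method="bounded", args=(n_passes, theta))
    lam, p = res.x, res.fun                    # p: photon detected, no object
    q = np.cos(theta) ** (2 * n_passes)        # q: photon detected, object present
    p_strike = (1.0 - lam) * (1.0 - q)         # photon reaches and hits the object
    # two-outcome Chernoff distance, minimized over the exponent s in [0, 1]
    def chernoff_arg(s):
        return p**s * q**(1 - s) + (1 - p)**s * (1 - q)**(1 - s)
    xi = -np.log(minimize_scalar(chernoff_arg, bounds=(0.0, 1.0),
                                 method="bounded").fun)
    d_vis = -np.log(1.0 - p_strike)            # visibility distance per trial
    return lam, xi, d_vis

if __name__ == "__main__":
    for n in (5, 10, 13, 20, 50):
        lam, xi, d_vis = distances(n)
        print(f"N={n:3d}  lam*={lam:.3f}  Chernoff={xi:.4f}  visibility={d_vis:.4f}")
```

Changing theta_total in this scan is all that is needed to explore the reduced-rotation case discussed next.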
in the case of ,the visibility distance is shortened by a factor of four .the shorter the distance the more trials are necessary , thus allowing for longer acquisition times ( with larger ) and better averaging out of any additional errors acquired in a single trial .in addition , one can see that the chernoff distance actually becomes greater relative to the visibility distance , which signifies that for the same probability of invisibility , statistical error could be made smaller for the case than it was possible with a greater total angle of rotation .finally , the amount of controlled loss in the detection arm is relatively high , which is comforting for practical realizations .in conclusion , we have presented an iqt apparatus that is robust against both loss of photons and random phase accumulations in the detection arm due to a built - in feedback .interaction - free hypotheses testing in an iqt apparatus allows for stealth operation : detection of an intrusion while being virtually undetectable by an intruder .in addition , our apparatus does not require analyzing a photon s polarization state and does not rely on an exact rotation , thus allowing for the fine tuning of the performance .therefore such an iqt apparatus holds great promise for practical applications related to security .this work was supported by the army research office , the boeing corporation , the department of energy , the foundational questions institute , the intelligence advance research projects activity , and the northrop - grumman corporation .
|
we present here a quantum tripwire , which is a quantum optical interrogation technique capable of detecting an intrusion with very low probability of the tripwire being revealed to the intruder . our scheme combines interaction - free measurement with the quantum zeno effect in order to interrogate the presence of the intruder without interaction . the tripwire exploits a curious nonlinear behaviour of the quantum zeno effect we discovered , which occurs in a lossy system . we also employ a statistical hypothesis testing protocol , allowing us to calculate a confidence level of interaction - free measurement after a given number of trials . as a result , our quantum intruder alert system is robust against photon loss and dephasing under realistic atmospheric conditions and its design minimizes the probabilities of false positives and false negatives as well as the probability of becoming visible to the intruder .
|
during the last two decades a major research effort has been conducted in the emerging field of quantum information theory .much of this activity starts with the observation that the capacity of physical systems to process , store and transmit information depends on thir classical or quantum nature .quantum algorithms , that is , algorithms based on the laws of quantum mechanics , show an enhancement of information processing capabilities over their classical counterparts .a large collection of quantum communication protocols such as quantum teleportation , entanglement swapping , quantum cloning and quantum erasing reveal new forms of transmitting and storing classical and quantum information .most of these protocols have already been experimentally implemented .a common assumption concerning quantum algorithms and quantum communication protocols is the capacity of performing transformations belonging to a fixed but arbitrary set of unitary transformations together with measurements on a given basis . in this articlewe study the problem of mapping a mixed initial state onto a known pure state using measurements as the only allowed resource , that is , a measurement driven quantum evolution .we show how this problem connects naturally to generation of quantum copies , quantum deleting and entangled states generation .this article is organized as follows : in section [ 2d ] we study the problem considering states belonging to a two - dimensional hilbert space . in section [ ddimension ]we generalize to the case of a -dimensional hilbert space and show that mutually unbiased bases optimize the overall success probability .section [ d - d and m / d ] presents the case of target states in a -dimensional hilbert space . in section [ conclusions ] we summarize our results .let us consider a quantum system described by a two - dimensional hilbert space . initially , the system is in a mixed state .our goal consists in mapping this state onto the known target state by using quantum measurements as the only allowed resource . in order to accomplish this taskwe define a non - degenerate observable .its spectral decomposition is where the and states are eigenstates of with eigenvalues and respectively .thereby , the target state must belong to the spectral decomposition .a measurement of the observable onto the state projects the system to the target state with probability . in this casewe succeed and no further action is required. however , the process fails with probability when the measurement projects the system onto the state .since this state can not be projected to , the target state , by means of another measurement of , it is necessary to introduce a second observable whose nondegenerate eigenstates and are with and being real numbers . 
a measurement of projects the state onto the state or .since both states have a component on the state , a second measurement of allows us to project again , with a certain probability , to the target state .the probability that this procedure fails after a first measurement of but is successful after a consecutive measurement of the and operators is ) .then , the success probability in the sequence of measurements is similarly , the success probability of mapping the initial state onto , the target state , after applying the consecutive measurement proceses ^nm(\hat{\varphi)}$ ] , that is , a measurement of followed by measurement processes each one composed of followed by , is given by or equivalently clearly , the extreme values and correspond to observables and defining the same basis .consequently , in this case the success probability becomes simply .the expression ( [ pe ] ) indicates that the success probability can be maximized by choosing .in this case we obtain fig .( [ fig1]a ) illustrates the behavior of the maximum success probability as a function of for different values of .we observe that quickly converges to almost independently of the even if the initial state belongs to a subspace orthogonal to .for instance , in this particular case , if the success probability for the first measurement of vanishes , after four successive measurements of and the success probability has increased to approximately , while after twelve measurement processes it reaches approximately the value .the fact that the success probability is maximized for indicates that in this case the and observables define two mutually unbiased bases in a two - dimensional hilbert space .( [ fig1]b ) shows versus for equal to ( circle ) , ( square ) , and ( triangle ) . since mutually unbiased bases give the optimal process for each , approaches to faster than in the other cases . as is apparent from fig .( [ fig1]b ) , the convergence of the success probability strongly depends on the relation between the involved bases . in the following section we study this relation and generalize the results of this section to the -dimensional case . as a function of n for three values of : (triangle ) , ( square ) , ( circle ) , ( b ) as a function of n for three values of : (triangle ) , ( square ) , ( circle ) , with .,scaledwidth=40.0% ]two noncommuting and nondegenerate observables defined on a -dimensional hilbert space can have at most equal eigenstates with .thus , on this -dimensional subspace both observables can be well defined simultaneously , that is , the system can be described by one of the common eigenstates . however , on the -dimensional subspace only one of them can be well defined .this property for two noncommuting observables turns up in a hilbert space only when its dimension is higher that .one can easily conclude that observables having some common eigendirections are not useful for our purpose . in this casethe noncommutativity of the observables is a necessary but not sufficient condition , as in the two - dimensional case studied in the previous section .this motivates us to study the scheme of driving a quantum state by means of measurement in the general case of a -dimensional hilbert space .we now generalize the previous results to the case of a target state , , belonging to a -dimensional hilbert space .let and be orthonormal bases defined by the spectral decompositions of the non - degenerate observables and respectively . 
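The two-dimensional protocol is also easy to simulate directly, which makes the special role of mutually unbiased bases visible. The Monte Carlo sketch below is ours; the initial state, the trial counts and the parametrization of the B basis by a rotation angle x are illustrative choices. The estimated success probability is compared with the closed form 1 - (1 - p1)(1 - (1/2)sin^2 2x)^N, which is our reading of the expression derived above and which reduces to 1 - (1 - p1) 2^{-N} at the mutually unbiased point x = pi/4.

```python
# Monte Carlo sketch (ours, not from the paper) of the qubit protocol: measure
# A; on failure, alternate measurements of B and A, where the B basis is
# rotated by an angle x with respect to the A basis (x = pi/4 is the MUB case).
import numpy as np

rng = np.random.default_rng(1)

def measure(state, basis):
    """Projective measurement; returns (outcome index, post-measurement state)."""
    probs = np.abs(basis.conj().T @ state) ** 2
    k = rng.choice(len(probs), p=probs / probs.sum())
    return k, basis[:, k]

def run_protocol(psi0, x, n_rounds):
    """True if the state is driven onto the target |a_0> within n_rounds."""
    a = np.eye(2)                                    # A basis; |a_0> is the target
    b = np.array([[np.cos(x), -np.sin(x)],
                  [np.sin(x),  np.cos(x)]])          # B basis, rotated by x
    k, state = measure(psi0, a)
    if k == 0:
        return True
    for _ in range(n_rounds):
        _, state = measure(state, b)                 # measure B ...
        k, state = measure(state, a)                 # ... then A again
        if k == 0:
            return True
    return False

if __name__ == "__main__":
    psi0 = np.array([0.0, 1.0])       # worst case: initial state orthogonal to target
    n_rounds, trials = 6, 20000
    for x in (np.pi / 8, np.pi / 4, 3 * np.pi / 8):
        est = np.mean([run_protocol(psi0, x, n_rounds) for _ in range(trials)])
        pred = 1 - (1 - 0.5 * np.sin(2 * x) ** 2) ** n_rounds   # p1 = 0 here
        print(f"x={x:.3f}  simulated={est:.3f}  closed form={pred:.3f}")
```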
initially the system is described by the state to be mapped onto the known pure state .the probability of success after measurement processes of followed by , is given by , \label{se}\ ] ] where is defined as the process which maps the state with onto the state is fundamental in this protocol because it is repeated when we do not succeed .the success probability of this process is .this can be seen as an inner product between vectors whose real , non - negative components are ( ) , that is , .this product is maximum when both vectors are parallel , which implies that for all .since these vectors are real and the sum of their components is unitary , we deduce that for all and that .therefore , we conclude that .this property indicates that , in the optimal case , the two and observables define two mutually unbiased bases . an alternative proof can be obtained by noting that it suffices to optimize the first step . that is , we need to project a state onto some element of the basis in order to take the state out from the subspace orthogonal to the desired direction .the resulting density matrix of this process is now we look for the basis which leads to the state nearest to the target state .this can be quantified by means of the hilbert - schmidt distance . in this case, we need to minimize the expression considering the state , this expression becomes which means that the basis must be complementary to the original basis .that is , the two required bases are related by means of the discrete fourier transformation . in this scheme only two complementary basesare required , which can always be found . a different proof can be obtained by interpreting as a correlation function and considering the property . for mutually unbiased bases the success probability , eq .( [ se ] ) , simplifies considerably to in the limit , , this expression becomes thus , in the case of higher dimensions , in order to reach a success probability close to , it is required that the number of measurement processes of the observable followed by must be larger than the dimension of the hilbert space . otherwise , the term entering in eqs .( [ pk ] ) and ( [ pke ] ) dominates .we now proceed to obtain an average success probability which does not depend on the initial pure state .this is achieved by integrating over the whole hilbert space , that is where denotes the haar integration measure and we consider initially pure states only . in this casethe average probability is where we have considered the case of mutually unbiased bases .the starting point is the identity where with is an arbitrary base for a -dimensional hilbert space . taking the trace of this identity we obtain making and considering as belonging to the basis ,we obtain , for any state , the identity thereby , the average success probability becomes thus , if we randomly select an initial state , the average success probability of mapping this state onto the target state is given by . in the limit of large , becomes these results are equal to the case when the initial state is , see eqs .( [ pk ] ) and ( [ pke ] ) . an interesting application of this result arises when we study the case of a target state belonging to a bipartite system , each system being described by a -dimensional hilbert space . 
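Before turning to the bipartite application just mentioned, the d-dimensional statement can be checked numerically. In the sketch below (ours), the A basis is the computational basis, the B basis is its discrete Fourier transform (so the two bases are mutually unbiased), the initial states are Haar-random pure states, and the estimated average success probability is compared with 1 - (1 - 1/d)^{n+1}; that closed form is our reconstruction of the average-probability expression quoted above, whose inline form was lost in extraction.

```python
# Numerical check (our sketch): with the computational basis for A and its
# discrete-Fourier-transform partner for B, every B-then-A round succeeds with
# probability 1/d, so for Haar-random initial pure states the average success
# probability after n rounds should behave as 1 - (1 - 1/d)**(n + 1).
import numpy as np

rng = np.random.default_rng(7)

def haar_random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def measure(state, basis):
    probs = np.abs(basis.conj().T @ state) ** 2
    k = rng.choice(len(probs), p=probs / probs.sum())
    return k, basis[:, k]

def success(d, n_rounds, dft):
    a = np.eye(d, dtype=complex)                # A basis; the target is |0>
    state = haar_random_state(d)
    k, state = measure(state, a)
    if k == 0:
        return True
    for _ in range(n_rounds):
        _, state = measure(state, dft)          # measure B (Fourier basis)
        k, state = measure(state, a)            # measure A
        if k == 0:
            return True
    return False

if __name__ == "__main__":
    d, trials = 8, 5000
    dft = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
    for n in (4, 8, 16, 32):
        est = np.mean([success(d, n, dft) for _ in range(trials)])
        print(f"d={d} n={n:2d}  simulated={est:.3f}"
              f"  1-(1-1/d)^(n+1)={1 - (1 - 1/d)**(n + 1):.3f}")
```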
in particular assuming a factorized initial state of the form the success probability of the process which maps this state onto the state is given by ( 1-\frac{1}{d^2})^n,\ ] ] where the coefficient gives account of the initial decoherence process affecting the state .this coefficient relates the diagonal coefficients of to the non - diagonal coefficients through the relation with .it can be shown that the success probability can be upper bounded as ( 1-\frac{1}{d^2})^n.\ ] ] thereby , when , the maximum probability for fixed is achieved under the condition , that is , for a pure initial state . however , if , the probability is maximum when .this means that , for states fulfilling the condition , such as the singlet state , the success probability is higher in the case of total initial decoherence than in any other case , corresponding the smallest probability to an initially pure state .this scheme can also be connected to the application of quantum erasure .if we fix the target state , and consequently the and operators , then the sequence of measurements will map any initial state onto that same target state .thereby , the overall effect will correspond to probabilistically erasing the information content of the initial state .the success probability of this probabilistic erasure will be given by eq .( [ average success probability ] ) .the above results can be generalized to the case of orthogonal target states belonging to a -dimensional hilbert space .here we consider again the bases and of the observables and respectively .our aim is to map the initial state onto any of the target states . after measuring the observable and failing ,the state of the system is in one of the states .the probability of mapping the system from any one of these states onto any one of the target states by a consecutive measurement of the and observables is given by this probability can be written as a sum of scalar products of the vectors defined in the previous section , that is the maximum value of this quantity is achieved when each scalar product involves two parallel vectors , that is since the sum of the elements of each vector is unity , we obtain .thus , it holds that , that is , all the vectors are equal .this implies that any state has the same projection onto all the states belonging to the basis , which is possible only if .therefore , the and observables define mutually unbiased bases . considering these types of bases , which optimize the process ,the probability of mapping the initial state onto any of states after measurement processes of and is given by as an application , let us suppose that we want to generate copies of each of the orthogonal states , belonging to a -dimensional hilbert space .so , we can assume that the target states belong to a multipartite system composed of identical systems and that they have the form , where .then the probability of generating any of these `` state - copies '' after processes of measurement of followed by is where we have assumed that the initial state is factorized and that each of the systems is in the state . independently of the initial condition ,this probability is closer to unity when and . 
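A small helper (again ours, not the authors') makes the scaling with the number of copies explicit: for the "state-copy" targets there are M = d target states in a space of dimension D = d^K, the per-round recovery probability in the mutually unbiased case is M/D, and the closed form 1 - (1 - p0)(1 - M/D)^n, our reconstruction of the expression quoted above, implies that the number of rounds needed for a fixed confidence grows roughly like d^{K-1}.

```python
# Helper (ours) illustrating how the required number of B-then-A rounds grows
# with the number of copies K, assuming the reconstructed closed form
# 1 - (1 - p0) * (1 - M/D)**n with M = d targets in dimension D = d**K.
import math

def rounds_needed(d, k_copies, p_goal=0.95, p0=0.0):
    q = d / d**k_copies                  # per-round success probability M/D
    return math.ceil(math.log((1 - p_goal) / (1 - p0)) / math.log(1 - q))

if __name__ == "__main__":
    for k in (2, 3, 4):
        print(f"d=4, copies K={k}: about {rounds_needed(4, k)} rounds for 95% success")
```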
on the other hand , randomly selecting an initially pure state the average success probability of mapping this state onto one of the target `` state - copies '' is given by the expression which in the limit behaves as thus , in this limit the more the number of copies , the probability of success decreases or converges more slowly to .we have studied a scheme to map an unknown mixed state of a quantum system onto an arbitrary state belonging to a set of known pure quantum states .this scheme is based on a sequence of measurements of two noncommuting observables .the target states are eigenstates of one of the two observables , while the other observable maps the states out of the subspace orthogonal to the one defined by the target states . the success probability turns out to be maximal under the condition that the observables define mutually complementary bases . in other words ,both required bases are always related by a discrete fourier transform .we have also shown that these results hold in the case of arbitrary but finite dimensions .the scheme consists of applying a measurement of the observable followed by measurement processes each one composed of followed by , this is , .the target states belong to the spectral descomposition of the observable .the success probability quickly converges to unity when the number of the sequences of measurement processes is larger than the dimension of the hilbert space .we have connected these results to the generation of quantum copies , quantum deleting and pure entangled states generation .the extension of these results to the case of continuous variables is under study . c. h. bennett et al .* 70 * , 1895 ( 1993 ) .m. zukowsky et al . , phys .* 71 * , 4287 ( 1993 ) .w. k. wootters and w. h. zurek , nature * 299 * , 802 ( 1982 ) . l. m. duan and g. c. guo , phys .. lett . * 80 * , 499 ( 1998 ) .a. k. pati and s. l. braunstein , nature * 404 * , 164 ( 2000 ) .z. zhao , y. a. chen , a. n. zhang , t. yang , h. j. briegel , and j. w. pan , nature * 430 * , 54 ( 2004 ) .z. zhao et al .91 * , 180401 ( 2003 ). j. w. pan , m. daniell , s. gasparoni , g. weihs , and a. zeilinger , phys .86 * , 4435 ( 2001 ). j. w. pan , d. bouwmeester , m. daniell , h. weinfurter , and a. zeilinger , nature * 403 * , 515 ( 2000 ) .d. bouwmeester , j. w. pan , m. daniell , h. weinfurter , and a. zeilinger , phys .lett . * 82 * , 1345 ( 1999 ) .d. boschi , s. branca , f. de martini , l. hardy , and s. popescu , phys .lett . * 80 * , 1121 ( 1998 ) .d. bouwmeester , j. w. pan , k. mattle , m. eibl , h. weinfurter , and a. zeilinger , nature * 390 * , 575 ( 1997 ) .j. lee , m. s. kim , and c. brukner , phys .. lett . * 91 * , 087902 ( 2003 ) . c. archer , j. math. phys . * 46 * , 022106 ( 2005 ) .s. chaturvedi , phys .a * 65 * , 044301 ( 2002 ) .t. s. santhanam , proc .opt . eng . * 5815 * , 215 ( 2005 ) .f. reif , _ fundamentals of statistical and thermal physics _ ( mcgraw - hill , 1976 ) .k. banaszek , phys .a * 62 * , 024301 - 1 ( 2000 ) .
|
we study the problem of mapping an unknown mixed quantum state onto a known pure state without the use of unitary transformations . this is achieved with the help of sequential measurements of two non - commuting observables only . we show that the overall success probability is maximized in the case of measuring two observables whose eigenstates define mutually unbiased bases . we find that for this optimal case the success probability quickly converges to unity as the number of measurement processes increases and that it is almost independent of the initial state . in particular , we show that to guarantee a success probability close to one the number of consecutive measurements must be larger than the dimension of the hilbert space . we connect these results to quantum copying , quantum deleting and entanglement generation .
|
long - haul communication research has during the latest decade become completely focused on coherent transmission .the data throughput has been increased by better utilization of the available bandwidth and receiver digital signal processing ( dsp ) has allowed many signal impairments to be compensated for .further significant increase of the data rate is expected through the use of multicore / multimode fibers , but while this development is exciting , there are also significant challenges . for example , these approaches call for the deployment of new fibers .it is also important to use available resources to the greatest possible extent .the increase in spectral efficiency enabled by coherent communication is actually an example of this .for example , using a channel spacing of 50 ghz , it is now possible to transmit 100 gbit / s using polarization - multiplexed quadrature phase - shift keying . comparing this with the traditional 10 gbit / s using on off keying ,the data throughput is increased by a factor of ten . in this work ,the focus is on optical networking . by developing routing algorithms with awareness of the nonlinear physical properties of the channel, it would be possible to operate optical links closer to the optimum performance , and this would further increase the throughput in optical communication networks .optical networks are not as flexible as their electronic counterparts , but there are several efforts that aim at improving the situation by investigating , _e.g. _ , cognitive and elastic optical networks .an increased flexibility requires , _e.g. _ , hardware routing of channels in the optical domain and a monitoring system responsible for the scheduling of the data streams .it is also desirable that the coherent transmission and detection can be done using a number of different modulation formats and symbol rates , chosen dynamically in response to time - varying traffic demands and network load .hence , the traditional wavelength - division multiplexing ( wdm ) paradigm , in which the available spectrum is divided into a fixed grid of equal - bandwidth channels , is being replaced by the concept of _ flexible - grid _wdm .the scheduling algorithm requires a nonlinear model of the physical layer .while the problem of linear routing and wavelength assignment is well investigated , see , _e.g. _ , , no nonlinear model that combines reasonable accuracy and low computational complexity seems to have been published .both these properties are necessary in order to be able to find a close - to - optimal solution in real time .this excludes simulations of the nonlinear schrdinger equation as an alternative , but the recently suggested gaussian noise ( gn ) model provides a tool to approach this question .unfortunately , also the general formulation of the gn model is computationally complex and further simplification is necessary . in this paper, we start from a description of how a model suitable for network optimization can be formulated and derive such a model from the gn model .the organization of this paper is as follows . in sec .[ sec_approach ] , the considered problem is stated and the approach is outlined . the main assumptions and approximationsare given in sec .[ sec_model ] . the model for a multichannel fiber spanis derived in sec .[ sec : span ] , and it is expanded into a network model in sec .[ sec : network - model ] . 
after a brief discussion about the validity of the model assumptions ,the paper is concluded in sec .[ sec_discussion ] .a network consists of a number of _ nodes , _ _ i.e. _ , transceivers or routing hardware components , connected by optical communication _ links_.each link consists of concatenated _ fiber spans , _ which each consists of an optical fiber followed by an erbium - doped fiber amplifier ( edfa ) .each link can transmit simultaneous _ channels _ using wdm .in such a network , a large number of _ connections _ between transceiver nodes are established . for each connectionthere is a _ route , _ _i.e. _ , a set of fiber links that connect the transmitting and receiving nodes via a number of intermediate nodes .we consider an all - optical network , implying that the signal is in the optical domain throughout the path .the intermediate nodes typically consist of reconfigurable add / drop multiplexers and may also include wavelength conversion. for any connection through the network , there will be signal degradation caused by a number of mechanisms .the two most fundamental ones are amplifier noise and nonlinear signal distortion due to the kerr nonlinearity .while the amplified spontaneous emission from optical amplifiers is easy to model , the latter presents a big challenge .additional degrading effects include the finite signal - to - noise ratio ( snr ) already at the transmitter , which may be important for large quadrature amplitude modulation constellations , and the crosstalk between different wdm channels in routing components and in the receiver . however , the efforts to increase the spectral efficiency have led to sophisticated shaping of the optical spectrum .techniques such as orthogonal frequency - division multiplexing and nyquist wdm have demonstrated optical signal spectra that are very close to rectangular . using a dsp filter , the channel crosstalkcan then be very small in the receiver .optical routing components , such as reconfigurable optical add - drop multiplexers , are more difficult to realize as the filter function is implemented in optical hardware , but the lack of wdm channel spectral overlap reduces the crosstalk also here .signal degradation due to , _e.g. _ , polarization - mode dispersion is neglected as it is compensated for by the receiver equalizer .thus , we here choose to focus on the nonlinear effects generated as the interplay between the kerr nonlinearity and the chromatic dispersion , known as the nonlinear interference ( nli ) within the gn model . the aim of the modeling effort is to find an approximate quantitative model for the nli for a large number of connections between network transceivers .for each connection there is a route , _i.e. _ , a set of fiber links that connect the transmitting and the receiving nodes via a number of intermediate nodes . each link can transmit wdm channels and for each channel , the center frequency , the bandwidth , and the power are chosen .this is summarized as the set of channel parameters . for a given link , the wdm channelscan then be written as the set .the physical parameters of a link are the power attenuation , the group - velocity dispersion , and the nonlinearity . here ] , ] depends on both the signal and the system ( * ? ? 
?* section ix ) .there seems to be no exact analytical expression for available and this approach would also require all spans in the links to be identical .we see no obvious way to improve this approximation but it should be remembered that the wdm channel switching will reduce the error from this assumption . finally , approximations were introduced in the integration of . as seen , the value of is quickly reduced as the frequency separation is increased .thus we expect the choice to include only sci and xci to lead to very small error , as long as not too narrow channel bandwidths are considered , but this is , as discussed , an inherent assumption of the gn model . for the same reason , the error introduced by approximating the integration polygons by rectangles is small. we do not expect the proposed model to be the last word in the development of nonlinear fiber - optic network models , but rather a starting point .it is , to our knowledge , the first model that can predict the signal quality independently for a number of heterogeneous channels in a flexible - grid wdm system .such a model is essential for efficient , if not optimal , resource management in elastic optical networks , which is an interesting and emerging area for future research .for , the dilog function has the asymptotic expansion ^{2 - 2 k}}{\gamma(3 - 2 k)},\end{aligned}\ ] ] where are the bernoulli numbers . as the functionis not defined for negative integers , there are only two terms in the expansion , giving asymptotically however , assuming , we have where is the sign of . we get and which gives i. de miguel , r. j. durn , r. m. lorenzo , a. caballero , i. t. monroy , y. ye , a. tymecki , i. tomkos , m. angelou , d. klonidis , a. francescon , d. siracusa , and e. salvadori , `` cognitive dynamic optical networks , '' in _ optical fiber communication conference ( ofc ) _ , 2013 , p. ow1h.1 .p. poggiolini , a. carena , v. curri , g. bosco , and f. forghieri , `` analytical modeling of nonlinear propagation in uncompensated optical transmission links , '' _ ieee photon .technol ._ , vol .23 , no . 11 , pp . 742744 , june 2011 .a. carena , v. curri , g. bosco , p. poggiolini , and f. forghieri , `` modeling of the impact of nonlinear propagation effects in uncompensated optical coherent transmission links , '' _ j. lightw. technol ._ , vol . 30 , no . 10 , pp .15241539 , may 2012 .r. schmogrow , m. winter , m. meyer , d. hillerkuss , s. wolf , b. baeuerle , a. ludwig , b. nebendahl , s. ben - ezra , j. meyer , m. dreschmann , m. huebner , j. becker , c. koos , w. freude , and j. leuthold , `` real - time nyquist pulse generation beyond 100 gbit / s and its relation to ofdm , '' _ opt . express _20 , no . 1 ,317337 , jan . 2011 .p. johannisson and m. karlsson , `` perturbation analysis of nonlinear propagation in a strongly dispersive optical communication system , ''_ j. lightw ._ , vol . 31 , no . 8 , pp . 12731282 , apr. 2013 .pontus johannisson received his ph.d .degree from chalmers university of technology , gothenburg , sweden , in 2006 .his thesis was focused on nonlinear intrachannel signal impairments in optical fiber communications systems . in 2006, he joined the research institute imego in gothenburg , sweden , where he worked with digital signal processing for inertial navigation with mems - based accelerometers and gyroscopes . 
in 2009 , he joined the photonics laboratory , chalmers university of technology , where he currently holds a position as assistant professor .a significant part of his time is spent working on cross - disciplinary topics within the fiber - optic communications research center ( force ) at chalmers .his research interests include , _e.g. _ , nonlinear effects in optical fibers and digital signal processing in coherent optical receivers . from 1997 to 1999 , he was a postdoctoral researcher with the university of california , san diego and the university of illinois at urbana - champaign . in 1999, he joined the faculty of chalmers university of technology , first as an associate professor and since 2009 as a professor in communication systems . in 2010, he cofounded the fiber - optic communications research center ( force ) at chalmers , where he leads the signals and systems research area .his research interests belong to the fields of information theory , coding theory , and digital communications , and his favorite applications are found in optical communications .agrell served as publications editor for the ieee transactions on information theory from 1999 to 2002 and is an associate editor for the ieee transactions on communications since 2012 .he is a recipient of the 1990 john ericsson medal , the 2009 itw best poster award , the 2011 globecom best paper award , the 2013 ctw best poster award , and the 2013 chalmers supervisor of the year award .
|
a low - complexity model for signal quality prediction in a nonlinear fiber - optical network is developed . the model , which builds on the gaussian noise model , takes into account the signal degradation caused by a combination of chromatic dispersion , nonlinear signal distortion , and amplifier noise . the center frequencies , bandwidths , and transmit powers can be chosen independently for each channel , which makes the model suitable for analysis and optimization of resource allocation , routing , and scheduling in large - scale optical networks applying flexible - grid wavelength - division multiplexing . optical fiber communication , optical fiber networks , fiber nonlinear optics , wavelength division multiplexing .
|
the enhancement of x - ray and ultraviolet ( uv ) emission that is observed during chromospheric flares on the sun immediately causes an increase in electron density in the ionosphere .these density variations are different for different altitudes and are called sudden ionospheric disturbances , sid , ( davies , 1990 ) , ( donnelly,1969 ) .sids are generally recorded as the short wave fadeout , swf , ( stonehocker , 1970 ) , sudden phase anomaly , spa , ( ohshio , 1971 ) , sudden frequency deviation , sfd , ( donnelly , 1971 ) , ( liu et al . , 1996 ) , sudden cosmic noise absorption , scna , ( deshpande and mitra , 1972 ) , sudden enhancement / decrease of atmospherics , ses , ( sao et al . , 1970 ) .much research is devoted to sid studies , among them a number of thorough reviews ( mitra , 1974 ) , ( davies , 1990 ) .sfd are caused by an almost time - coincident increase in -and -region electron densities at over 100 km altitudes covering an area with the size comparable to or exceding that of the region monitored by the system of hf radio paths ( davies , 1990 ) , ( donnelly,1969 ) , ( liu et al . ,a limitation of this method is the uncertainty in the spatial and altitude localization of the uv flux effect , the inadequate number of paths , and the need to use special - purpose equipment .the effect of solar flares on the ionospheric -region is also manifested as a sudden increase of total electron content , sitec , which was measured previously using continuously operating vhf radio beacons on geostationary satellites ( mendillo et al . , 1974 ) , ( davies , 1980 ) .a serious limitation of methods based on analyzing vhf signals from geostationary satellites is their small and ever increasing ( with the time ) number and the nonuniform distribution in longitude .hence it is impossible to make measurements in some geophysically interesting regions of the globe , especially in high latitudes .a further , highly informative , technique is the method of incoherent scatter - is ( mendillo et al . , 1974 ) , ( thome et al . 
, 1971 ) .however , the practical implementation of the is method requires very sophisticated , expensive equipment .an added limitation is inadequate time resolution .since the relaxation time of electron density in the and regions is also less than 5 - 10 min , most is measurements lack time resolution needed for the study of ionospheric effects of flares .consequently , none of the above - mentioned existing methods can serve as an effective basis for the radio detection system to provide a continuous , global sid monitoring with adequate space - time resolution .furthermore , the creation of these facilities requires developing special purpose equipment , including powerful radio transmitters contaminating the radio environment .it is also significant that when using the existing methods , the inadequate spatial aperture gives no way of deducing the possible spatial inhomogeneity of the x - ray and uv flux .the advent and evolution of a global positioning system ( gps ) and also the creation on its basis of widely branched networks of gps stations ( at least 800 sites at the february of 2001 , the data from which are placed on the internet ) opened up a new era in remote ionospheric sensing .high - precision measurements of the tec along the line - of - sight ( los ) between the receiver on the ground and transmitters on the gps system satellites covering the reception zone are made using two - frequency multichannel receivers of the gps system at almost any point of the globe and at any time simultaneously at two coherently coupled frequencies mhz and mhz .the sensitivity of phase measurements in the gps system is sufficient for detecting irregularities with an amplitude of up to of the diurnal tec variation .this makes it possible to formulate the problem of detecting ionospheric disturbances from different sources of artificial and natural origins .the tec unit ( tecu ) which is equal to and is commonly accepted in the literature , will be used throughout the text .afraimovich et al .( 2000a , 2000b , 2001 ) developed a novel technology of a global detection of ionospheric effects from solar flares and presented data from first gps measurements of global response of the ionosphere to powerful impulsive flares of july 29 , 1999 , and december 28 , 1999 , were chosen to illustrate the practical implementation of the proposed method .authors found that fluctuations of tec , obtained by removing the linear trend of tec with a time window of about 5 min , are coherent for all stations and los on the dayside of the earth .the time profile of tec responses is similar to the time behavior of hard x - ray emission variations during flares in the energy range 25 - 35 kev if the relaxation time of electron density disturbances in the ionosphere of order 50 - 100 s is introduced .no such effect on the nightside of the earth has been detected yet .the objective of this paper is to use this technology for analysing the ionosphere response to faint and bright solar flares .following is a brief outline of the global monitoring ( detection ) technique for solar flares . 
a physical groundwork for the method is formed by the effect of fast change in electron density in the earth s ionosphere at the time of a flare simultaneously on the entire sunlit surface .essentially , the method implies using appropriate filtering and a coherent processing of tec variations in the ionosphere simultaneously for the entire set of visible ( during a given time interval ) gps satellites ( as many as 5 - 10 satellites ) at all global gps network stations used in the analysis . in detecting solar flares ,the ionospheric response is virtually simultaneous for all stations on the dayside of the globe within the time resolution range of the gps receivers ( from 30 s to 0.1 s ) .therefore , a coherent processing of tec variations implies in this case a simple addition of single tec variations .the detection sensitivity is determined by the ability to detect typical signals of the ionospheric response to a solar flare ( leading edge duration , period , form , length ) at the level of tec background fluctuations .ionospheric irregularities are characterized by a power spectrum , so that background fluctuations will always be distinguished in the frequency range of interest .however , background fluctuations are not correlated in the case of beams to the satellite spaced by an amount exceeding the typical irregularity size . with a typical length of x - ray bursts and uv emission of solar flares of about 5 - 10 min ,the corresponding ionization irregularity size does normally not exceed 30 - 50 km ; hence the condition of a statistical independence of tec fluctuations at spaced beams is almost always satisfied .therefore , coherent summation of responses to a flare on a set of los spaced throughout the dayside of the globe permits the solar flare effect to be detected even when the response amplitude on partial los is markedly smaller than the noise level ( background fluctuations ) .the proposed procedure of coherent accumulation is essentially equivalent to the operation of coincidence schemes which are extensively used in x - ray and gamma - ray telescopes .if the sid response and background fluctuations , respectively , are considered to be the signal and noise , then as a consequence of a statistical independence of background fluctuations the signal / noise ratio when detecting the flare effect is increased through a coherent processing by at least a factor of , where is the number of los .the gps technology provides the means of estimating tec variations on the basis of phase measurements of tec in each of the spaced two - frequency gps receivers using the formula ( hofmann - wellenhof et al . , 1992 ) , ( calais and minster , 1996 ) : \ ] ] where and are the increments of the radio signal phase path caused by the phase delay in the ionosphere ( m ) ; and stand for the number of complete phase rotations , and and are the wavelengths ( m ) for the frequencies and , respectively ; is some unknown initial phase path ( m ) ; and is the error in determining the phase path ( m ) .phase measurements in the gps system are made with a high degree of accuracy where the error in tec determination for 30-second averaging intervals does not exceed , although the initial value of tec does remain unknown ( hofmann - wellenhof et al . 
, 1992 ) .this permits ionization irregularities and wave processes in the ionosphere to be detected over a wide range of amplitudes ( as large as of the diurnal variation of tec ) and periods ( from several days to 5 min ) .the tec unit , , which is equal to m and is commonly accepted in the literature , will be used throughout the text .the data analysis was based on using the stations , for which the loval time during the flare was within 10 to 17 lt . from 50 to 150 los were processed for each flare .primary data include series of slant values of tec , as well as the corresponding series of elevations and azimuths along los to the satellite calculated using our developed convtec program which converts the gps system standard rinex - files on the internet ( gurtner , 1993 ) .the determination of sid characteristics involves selecting continuous series of measurements of at least a one - hour interval in length , which includes the time of the flare .series of elevations and azimuths of the los are used to determine the coordinates of subionospheric points . in the case under consideration , all results were obtained for elevations larger than . the method of coherent summation of time derivatives of the series of variations of the `` vertical '' tec value was employed in studying the ionospheric response to solar flares .our choice of the time derivative of tec was motivated by the fact this derivative permits us to get rid of a constant component in tec variations ; furthermore , it reflects electron density variations that are proportional to the flux of ionizing radiation .the coherent summation of time derivatives of the series of variations of the `` vertical '' tec value was made by the formula : where is the number of los . the correction coefficient is required for converting the slant tec to an equivalent `` vertical '' value ( klobuchar , 1986 ) ,\ ] ] where is earth s radius ; and is the height of the ionospheric -layer maximum . next the trend determined as a polynomial on a corresponding time interval is removed from the result ( normalized to the number of los ) of the coherent summation of the time derivatives . after that , the calculated time dependence is integrated in order to obtain the mean integral tec increment on the time interval specified . this technique is useful for identifying the ionospheric response to faint solar flares ( of x - ray class c ) when the variation amplitude of the tec response to separate los is comparable to the level of background fluctuations .an example of a processing of the data for a faint solar flare july 29 , 1999 ( c2.7/ sf , 11:11 gt , s16w11 ) is given in figure 1 .one hundred los were processed for the analysis of this event .panels ( a ) and ( b ) present the typical time dependencies of tec variations for separate los , and their time derivatives . the brus ( prn14 , thick line ) and bahr ( prn29 , thin line ) stations are taken as our example .it is apparent from these dependencies that no response to the flare is distinguished in the tec variations and in their time derivatives for the individual los , because the amplitude of the tec response for the individual los is comparable to the level of background fluctuations .a response to the solar flare is clearly seen in the time dependence ( figure 1c ) which is a normalized result of a coherent summation of the time derivatives of the tec variations for all los . 
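For reference, the two ingredients that produce the "vertical" TEC series used in these figures, namely the slant TEC inferred from the two carrier phases and the elevation-dependent correction coefficient, can be written out as follows. This is our schematic reading of the formulas quoted above, not the authors' CONVTEC code: the constant 40.308 m^3 s^-2, the L1/L2 carrier frequencies and the 300 km shell height are the values conventionally used with these expressions, sign conventions differ between references, and since the initial phase term is unknown only relative TEC variations are meaningful, as the text stresses.

```python
# Sketch (ours) of the slant-TEC phase combination and the thin-shell
# slant-to-vertical factor described above.  Constants and shell height are
# conventional values, not taken verbatim from the paper.
import numpy as np

C = 299_792_458.0                 # speed of light, m/s
F1, F2 = 1_575.42e6, 1_227.60e6   # GPS L1 / L2 carrier frequencies, Hz
K = 40.308                        # ionospheric refraction constant, m^3/s^2
TECU = 1e16                       # 1 TEC unit, electrons/m^2

def slant_tec(phase_l1_cycles, phase_l2_cycles):
    """Relative slant TEC (in TECU) from the two carrier-phase series."""
    path_diff = phase_l1_cycles * C / F1 - phase_l2_cycles * C / F2   # metres
    tec = (1.0 / K) * (F1**2 * F2**2 / (F1**2 - F2**2)) * path_diff
    return tec / TECU             # unknown constant offset is ignored

def vertical_factor(elevation_rad, h_max=300e3, r_earth=6371e3):
    """Thin-shell slant-to-vertical conversion factor (cf. Klobuchar, 1986)."""
    return np.cos(np.arcsin(r_earth / (r_earth + h_max) * np.cos(elevation_rad)))

if __name__ == "__main__":
    print(vertical_factor(np.deg2rad(45.0)))   # illustrative elevation only
```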
upon subtracting the trend determined as a polynomial of degree 3 on the time interval 10:07 - 10:39 ut , the same curve ( c )is presented in figure 1d as .next the calculated time dependence was integrated over the time interval 10:07 - 10:39 ut to give the mean integral increment of tec ( figure 1e , thick line ) .a comparison of the resulting dependence with the values of the soft x - ray emission flux ( goes-10 ) in the range 1 - 8 ( figure 1e , thin line ) reveals that it has a more flattened form , both in it rise and fall .a maximum in x - rays is about 6 minutes ahead of that in tec .examples of the application of our technology for the analysis of the ionospheric response to faint solar flares are given in figures 2 and 3 .figure 2 gives the data processing results on tec variations for solar flares : july 29 , 1999 ( c2.2 , 11:00 ut ) on panels a , b , c , d , and july 29 , 1999 ( c6.2/sn , 15:14 ut , n25e33 ) on panels e , f , g , h. figure 3 shows the results of a data processing of tec variations for solar flares of november 17 , 1999 ( c7.0/1n , 09:38 ut , s15w53 ) on panels a , b , c , d , and november 11 , 1999 ( c5.0 , 15:40 ut ) on panels e , f , g , h.an example of a processing of the data for the bright solar flare of july 14 , 1998 ( m4.6/1b , 12:59 ut , s23e20 ) is given in figure 4 .fifty los were used in the analysis of this event .figure 4a presents the time dependencies of hard x - ray emission ( cgro / batse , 25 - 50 kev , thick line on panels a ) and of the uv line ( soho / summer 171 , thin line ) in arbitrary units ( aschwanden et al . , 1999 ) .it should be noted that the time dependence of the uv 171 line is more flattened , both in the rise and in the fall , when compared with the hard x - ray emission characteristic .the increase in the uv 171 line starts by about 1.8 minute earlier , and the duration of its disturbance exceeds considerably that of the hard x - ray emission disturbance .panel ( b ) presents the typical time dependencies of the tec variations for separate los .the aoml ( prn24 , thick line ) and acsi ( prn18 , thin line ) stations are taken as examples . a response to the bright flareis clearly distinguished for separate los .the normalized sum of the time derivatives of the tec variations for all los is presented in figure 4c ; panel ( d ) plots the same curve ( c ) , upon subtracting the trend determined as a polynomial of degree 3 on the time interval 12:48 - 13:12 ut . nextthe resulting time dependence was integrated in order to obtain the mean integral increment of tec ( figure 4e ) .it might be well to point out that the time dependence of the mean integral increment of tec has a more flattened form in the rise than the emission flux characteristics ; however , the onset time of its increase coincides with that of hard x - ray emission , and is delayed by about 1.8 minutes with respect to the uv 171 line . a total of 11 events was processed .the class of x - ray flares was from m4.5 to m7.4 .it was found that the mean tec variation response in the ionosphere depends on the flare lecation on the sun ( central meridian distance , cmd ) - figure 5a .our results is consistent with the findings reported by ( donnelly , 1969 ) , ( donnelly , 1971 ) , ( donnelly , 1976 ) , ( donnelly et al . , 1986a ) , where a study of extreme uv ( euv ) flashes of solar flares observed via sfd was made . 
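The processing chain walked through above (differentiate each line-of-sight series, sum the derivatives coherently, remove a low-order polynomial trend, and integrate to obtain the mean TEC increment) can be sketched as below. The 30 s sampling interval and the degree-3 detrending polynomial follow the text; the synthetic test signal in the demo is our own assumption, included only to show how a weak common response emerges from uncorrelated background fluctuations.

```python
# Sketch (ours) of the coherent-summation pipeline illustrated in figures 1-3.
import numpy as np

def flare_response(vertical_tec, dt=30.0, detrend_degree=3):
    """
    vertical_tec : array of shape (n_los, n_samples), equivalent vertical TEC
                   per line of sight (TECU), already elevation-corrected.
    Returns the mean integral TEC increment (TECU) versus time.
    """
    d_tec = np.gradient(vertical_tec, dt, axis=1)        # time derivatives
    mean_derivative = d_tec.mean(axis=0)                 # coherent summation
    t = np.arange(mean_derivative.size) * dt
    trend = np.polyval(np.polyfit(t, mean_derivative, detrend_degree), t)
    detrended = mean_derivative - trend                  # remove slow background
    return np.cumsum(detrended) * dt                     # integrate the residual

if __name__ == "__main__":
    # synthetic demo: weak flare-like pulse common to all lines of sight,
    # buried in fluctuations that are uncorrelated between lines of sight
    rng = np.random.default_rng(0)
    t = np.arange(0, 3600, 30.0)
    pulse = 0.02 * np.exp(-((t - 1800) / 240.0) ** 2)    # ~0.02 TECU bump
    los = np.cumsum(rng.normal(0, 0.01, (100, t.size)), axis=1) + pulse
    print(flare_response(los).max())
```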
in the cited references it was shown that the relative strength of impulsive euv emission from flares decreases with increasing cmd and average peak frequency deviation is also significantly lower for sfd s associated with flares at large cmd .( donnelly , 1971 ) is of the opinion that percentage of flares with sfd s tends to decrease for large cmd of flare location - figure 5b .similar effects at the center and limb were observed in the ratio of euv flux to the concurrent hard x ray flux ( kane and donnelly , 1971 ) .using a fourth - order polynomial to fit the results in figure 5b with cmd in degrees ( donnelly , 1976 ) gives equation ( 5 ) implies that on the average the impulsive euv emission is more than an order of magnitude weaker for flares near the solar limb than for flares at the central meridian .( donnelly , 1976 ) have assumed that it is result from the low - lying nature of the flare source region and from absorption of euv emission in the surrounding cool nonflaring atmosphere .figure 5c presents the result of a modeling of the sfd occurrence probability at the time of the solar flare as a function of cmd ( solid line ) in arbitrary units , as well as the values amplitude of the tec response in the ionosphere to solar flares ( in the range of x - ray class m4.9-m5.7 ( dots ) , and m6.6-m7.4 ( grosses ) as a function of cmd . the modeling used equation ( 5 ) .it figure suggests that the results of our measurements do not contradict the conclusions drawn by donnelly ( 1976 ) that the relative strength of impulsive euv emission from flares decreases with increasing cmd .it should be noted that in the case of solar flares whose class is similar in x - ray emission , the dependence under study resembles cos(cmd ) rather than a polynomial of degree 4 .the fitting cos(cmd ) curve for solar x - ray m4.4-m5.7 flares is plotted in figure 5a ( solid line ) and 5c ( dashed line ) .this conclusion is consistent with the findings reported by ( donnelly and puga , 1990 ) . in the cited referenceit was found the empirical curves of the average dependence of active region emission on its cmd ( figure 6a ) for several wavelengs .assume a quiet sun plus one average active region that starts at the center of the backside of the sun and rotates across the center of the solar disk with a 28-day period .6(b - e ) presents our obtained dependencies of as a function of the angular distance of the flaring region from the central solar meridian ( cmd ) for different classes of flares in the x - ray range : x2.0-x5.7 ( fig .6b , 6 flares ) , m2.0-m7.4 ( fig .6c , 26 flares ) , m1.1 ( fig .6d , 14 flares ) , and c2.7-c10 ( fig . 6e , 7 flares ) .because of lack of space , we do not give here any detailed characteristics of the flares .our analysis was based on using sets of gps stations and los similar to those described in the previous section .6a presents the appropriate dependencies of x - ray flux emission intensities ( mosher , 1979 ) , f10 cm ( riddle , 1969 ; vauquois , 1955 ) , and uv ( samain , 1979 ) .it is evident from fig . 
6that the character of the response amplitude corresponds to the behavior of ultraviolet lines .hence it follows that the main contribution to the ionospheric tec response to solar flares is made by the f region and the upper part of the e region where solar ultraviolet radiation is absorbed .7 plots , on a logarithmic scale , the dependence of the response amplitude to solar flares as a function of their peak power f in the x - ray range for flares located near the center of the solar disk ( ) .this dependence illustrates a wide dynamic range of the proposed method ( three orders in the flare power ) , and is quite well approximated by the power - law function paper suggests a new method for investigating the ionospheric response to faint solar flares ( of x - ray class c ) when the variation amplitude of the tec response to individual los is comparable to the level of background fluctuations .the dependence of the tec variation response amplitude on the flare location on the sun is investigated . in the case of solar flareswhose class is similar in x - ray emission , the dependence under study resembles cos(cmd ) .the high sensitivity of our method permits us to propose the problem of detecting , in the flare x - ray and euv ranges , emissions of non - solar origins which are the result of supernova explosions . for powerful solar flaresit is not necessary to invoke a coherent summation , and the ionospheric response can be investigated for each beam .this opens the way to a detailed study of the sid dependence on a great variety of parameters ( latitude , longitude , solar zenith angle , spectral characteristics of the emission flux , etc . ) . with current increasing solar activity ,such studies become highly challenging .in addition to solving traditional problems of estimating parameters of ionization processes in the ionosphere and problems of reconstructing emission parameters , the data obtained through the use of our method can be used to estimate the spatial inhomogeneity of emission fluxes at scales of the earth s radius .authors are grateful to e.a . kosogorov and o.s .lesuta for preparing the input data .thanks are also due v.g .mikhalkovsky for his assistance in preparing the english version of the manuscript .this work was done with support under rfbr grant of leaping scientific schools of the russian federation no . 00 - 15 - 98509 and russian foundation for basic research ( grants 99 - 05 - 64753,00 - 05 - 72026 , 00 - 07 - 72026 and 00 - 02 - 16819a ) , gntp astronomy. afraimovich , i. l. , kosogorov e. a. and leonovich l. a.,2000b .the use of the international gps network as the global detector ( globdet ) simultaneously observing sudden ionospheric disturbances , earth , planets , and space , 52 , 10771082 .afraimovich , e.l .altyntsev , e.a .kosogorov , n.s .larina , and l.a .leonovich , 2001 .detecting of the ionospheric effects of the solar flares as deduced from global gps network data , geomagnetism and aeronomy , 41 , 2 , 208214 .deshpande s. d. and mitra a. p. , 1972 .ionospheric effects of solar flares , iv , electron density profiles deduced from mlasurements of scna s and vlf phase and amplitude , j. atmos . terr . phys .34 , 255 .mendillo m. , klobuchar j. a. , fritz r. b. , da rosa a.v ., kersley l. , yeh k. c. , flaherty b. j. , rangaswamy s. , schmid p. e. , evans j. v. , schodel j. p. , matsoukas d. a. , koster j. r. , webster a. r. , chin p. , 1974bbehavior of the ionospheric f region during the great solar flare of august 7 , 1972 , j. geophys .res . 
, 79, 665-672.
|
results derived from analysing the ionospheric response to faint and bright solar flares are presented. the analysis used the technology of global detection of ionospheric effects from solar flares as developed by the authors, on the basis of phase measurements of the total electron content (tec) in the ionosphere using an international gps network. the essence of the method is that use is made of appropriate filtering and a coherent processing of variations in the tec, which is determined from gps data simultaneously for the entire set of visible gps satellites at all stations used in the analysis. this technique is useful for identifying the ionospheric response to faint solar flares (of x-ray class c) when the variation amplitude of the tec response for a separate line of sight to a gps satellite is comparable to the level of background fluctuations. the dependence of the tec variation response amplitude on the flare location on the sun is investigated.
|
measurement-based feedback control is the process of making measurements on a quantum system, and using the results of the measurements to apply forces to the system to control it. an alternative means of realizing feedback control is to have the system interact with a second ``auxiliary'' quantum system. the auxiliary quantum system can extract information from the primary system, via the interaction, and apply forces back onto the system, also via the interaction, thus implementing feedback control without the use of measurements. this paradigm for controlling quantum systems is referred to as _coherent feedback control_. since many control processes that involve auxiliary systems are designed without reference to feedback processes, coherent feedback should be viewed more as a way of thinking about quantum control processes than as a distinct control technique. from a purely theoretical point of view coherent feedback subsumes control that uses measurements: any measurement-based process can be implemented in a coherent manner, technology permitting. the discovery by nurdin, james, and petersen that a coherent feedback process can outperform measurement-based feedback for linear quantum systems began a quest to understand the relationship between the two forms of control. while the difference discovered by nurdin _et al._ was small, hamerly and mabuchi subsequently showed that coherent feedback could significantly outperform its measurement-based counterpart for cooling oscillators when the controller was restricted to linear interactions and controls. following this it was shown that when the strength of the system/auxiliary interaction hamiltonian is bounded, coherent feedback can significantly outperform measurement-based feedback even when the controller has access to arbitrary (non-linear) interactions and controls. in this case the difference between the two is due to a fundamental restriction on the paths in hilbert space that measurement-based control can use. we note also that, in addition to the quantitative relationships between coherent and measurement-based control determined in the works mentioned above, yamamoto and wiseman have also determined important qualitative differences between the two forms of control. here we consider a somewhat different, and arguably more experimentally relevant, constraint on control resources than that used in the works above. the forces that can be applied to a system, or more specifically the physical coupling between a system and an external controller, are an important and fundamental resource for control. a key question in quantum control is therefore how a constraint on this coupling affects the optimal control that can be achieved for the system. in previous work the best possible control was considered that could be obtained when the norm of the coupling hamiltonian with the system is bounded. this particular choice of constraint on the coupling is most appropriate when the quantum system is finite dimensional and is coupled directly to another mesoscopic system that is also finite dimensional. in this case the coupling hamiltonian is finite dimensional and the norm of this hamiltonian characterizes well the maximum forces that the controller can apply to the system. a quite different, but also physically natural, way to control a system is to couple it to a traveling-wave field, and such a field is effectively infinite dimensional.
in this case the norm of the coupling hamiltonian may be unbounded, and no longer characterizes the rate at which the controller can modify the state of the system. instead the forces that can be applied by the controller, certainly in the limit in which the coupling to the field is broadband (markovian), can be characterized solely by the system operator that couples the system to the field, and in particular by the norm of this operator. (note that the overall size of any coupling between the system and field can always be absorbed into the system observable.) it is a coupling to a traveling-wave field that is used to make a continuous measurement on a system, and it is therefore this type of coupling that is relevant for continuous-measurement-based feedback control. a field coupling can also be used to couple mesoscopic systems together and thus implement coherent feedback control. here we compare the performance of measurement-based and coherent feedback control when the fundamental limitation is the magnitude of the coupling to the field. specifically we will characterize this magnitude using a measure of the rate at which the field extracts information about the system (see below). more simply this can be thought of merely as the norm of the operator that appears in the master equation for the system when the field is traced out. we will refer to this norm as the _strength_ of the field coupling. in order to compare measurement-based feedback (mbf) to coherent feedback we take into account the following important difference between the two. to extract information from a mesoscopic system the system must interact with another system that is also mesoscopic, by which we mean it has a similar energy scale to the first. this is because in order for the second ``probe'' system to learn about the state of the first, the latter must change the state of the probe appreciably. that is, it must have an appreciable effect on the probe. however, to reliably store and process the information obtained by the probe, this information must, at least with present technology, be stored on circuits that have a much higher energy scale than that of typical mesoscopic quantum systems. the information in the probe must somehow be transferred to a much more macroscopic system, and this is the process of _amplification_. note that the process of amplification effectively allows a mesoscopic system (the probe) to have an appreciable effect on a macroscopic system. the fact that this process is most readily performed in stages motivates our previous assertion that the measurement must be initially realized by coupling the system to another mesoscopic system, rather than directly to a macroscopic one. as something of an aside, it is worth noting that the amplification involved in a measurement is the only part of the process of measurement that distinguishes measurement from any other quantum dynamical process. the fact that the results of a measurement are amplified to a macroscopic level means that the mechanism by which mbf can apply feedback forces to a system is quite different from that available to coherent feedback control (cfc).
because coherent feedback must maintain coherence, and thus quantum behavior, during the feedback loop, all the control must be implemented using mesoscopic systems (at least given present technology), and thus the feedback forces must be applied using an interaction between mesoscopic systems. therefore, if there is an experimental limitation on the coupling strength that can be achieved between a mesoscopic system and a field, the feedback forces applied by cfc must be subject to this same limitation. the feedback forces that are applied by mbf, on the other hand, can be applied using fields with macroscopic amplitudes, and as a result are not subject to the same constraint as the strength of the coupling via which a mesoscopic system affects a field. the reason for this is that the interaction strength, or the force, by which the macroscopic field affects the system is proportional not only to the system operator that couples to the field, but also to the ``coherent-state'' amplitude of the field. thus while the forces applied by a mesoscopic system to a field are ``weak'', those applied by the field to the system can be ``strong'' if the field has sufficient amplitude. given the above discussion of the physical difference between mbf and cfc, we conclude that a fair comparison between the two is obtained, at least for the purposes of current experimental technology, by placing the same limit on the strength with which the measurement component of mbf interacts with a given system as that with which a cfc controller interacts with the system, but allowing the feedback forces applied by mbf to be as large as desired. that is the basis we will use for our comparison here. in the next section we define the task, or ``control problem'', for which we will compare mbf and cfc, and define what we mean by ``perfect'' or ``ideal'' controllers. in section [bounds] we define precisely how we quantify the strength of a markovian coupling between a system and a field, and thus how we quantify the constraint in our control problem for both classical and quantum controllers. we also explain why we restrict the coupling between the system-to-be-controlled and the fields to couplings in which the coupling operator is hermitian. in section [hpic] we briefly review the heisenberg-picture quantum noise equations that we use to describe the coupling between the systems and the fields, and introduce some useful notation. in section [secmbf] we describe the control of a single qubit with a continuous measurement and discuss briefly what is known about the performance of this kind of control. in particular, we review the optimal performance of this control method in the limit in which the feedback force is infinite, which has been established in previous work, and present numerical results on the optimal performance when the feedback force is finite. in section [cfc] we introduce the cfc configurations that we consider, which cover all configurations in which both the system and controller have two interactions with a field, and discuss some of their properties. in section [numcfc] we use numerical optimization to explore the performance of these configurations when the controller is a single qubit, and we compare this performance to that of the continuous measurement-based control described in section [secmbf].
finally, we summarize the results and some open questions in section [conc], and the appendix presents further details of the method we use for the numerical optimization. here we consider the control of a single qubit, which provides not only a system that is experimentally relevant, but also one that is relatively simple dynamically and thus a good platform for evaluating the relative performance of various control systems. as the task for our controllers we choose that of maintaining the qubit in its ground state in the presence of thermal noise. this task is thus one of steady-state control. as our measure of performance (strictly, lack of performance) we choose the steady-state probability that the qubit is in its excited state. denoting the ground and excited states of the qubit by $|g\rangle$ and $|e\rangle$, respectively, the master equation for the qubit in the absence of any control is $$\dot{\rho} = -\frac{i}{\hbar}[h,\rho] - \frac{\gamma}{2} \left[ (n_t + 1) \mathcal{k}(\sigma) + n_t \mathcal{k}(\sigma^\dagger) \right]\rho \label{therm}$$ in which $h$ is the qubit hamiltonian, $\gamma$ is the thermal relaxation rate of the qubit, $\sigma = |g\rangle\langle e|$ is the qubit lowering operator, and the superoperator $\mathcal{k}$ is defined by $\mathcal{k}(c)\rho \equiv c^\dagger c \rho + \rho c^\dagger c - 2 c \rho c^\dagger$ for an arbitrary operator $c$. the temperature of the bath is characterized by the parameter $n_t = [\exp(\hbar\omega/k_b t) - 1]^{-1}$, where $\omega$ is the qubit transition frequency, $k_b$ is boltzmann's constant and $t$ is the temperature. in the absence of any control the steady-state (thermal) population of the excited state is $n_t/(2 n_t + 1)$. in our analysis here we assume that the controllers are perfect, since we are interested in the best control that can be achieved by both methods when the only constraint is the speed of the interaction with the system. this means specifically that, for measurement-based control, we assume that the measurement is perfectly efficient, meaning that there is no additional noise on the measurement result over the noise which is purely a result of the uncertainty inherent in the quantum state of the system. in addition we assume that the feedback forces applied by the classical control system have no errors, and the processing of the measurement results is effectively instantaneous. for coherent feedback control, since the controller is an auxiliary quantum system, the assumption of a perfect controller means that the auxiliary does not feel any thermal noise or decoherence beyond that which we choose to maximize its ability to effect control. that is, we are able to completely isolate it from the environment. while the interaction of the auxiliary with the traveling-wave fields that it uses to talk to the system is subject to the same bound as the system, for consistency, the internal dynamics of the controller is unrestricted, which is equivalent to the instantaneous processing allowed by the classical controller. finally, the traveling-wave fields that connect the system and the auxiliary are assumed to have no loss, which is equivalent to the assumption that the measurements made by the classical controller are perfectly efficient. a continuous measurement of an observable of a quantum system is obtained by coupling the system to a probe system via this observable and coupling the probe to a traveling-wave field. the reason that we use this two-stage process for coupling a system to a field, and thus for making a continuous measurement, as opposed to coupling the system directly to the field, is the following.
to obtain a simple markovian process whereby the field continuously carries information away from the probe, the frequency of the photons emitted by the probe must be large compared to the rate at which the probe emits photons into the field. the emitted field then contains a signal whose bandwidth is small compared to the frequency of the photons, and it is within this bandwidth that the field must carry the signal containing the information about the measured observable of the system. we can achieve this by using a probe to mediate the interaction, because we can choose the frequency of the probe to be significantly larger than the characteristic frequencies of the evolution of the system. explicit treatments of this continuous measurement process can be found in the literature. the result of the coupling to the probe and the subsequent coupling of the probe to the field is a master equation that describes the evolution of the system given the continuous stream of measurement results obtained by detecting the field. conditioned on this stream of results, the master equation for the system density matrix, $\rho$, is $$d\rho = -\frac{i}{\hbar}[h,\rho]\,dt - k\,[\tilde{a},[\tilde{a},\rho]]\,dt + \sqrt{2 k}\left( \tilde{a}\rho + \rho \tilde{a} - 2\,\mbox{tr}[\rho \tilde{a}]\,\rho \right) dw. \label{sme}$$ here we have written the coupling operator as $\sqrt{k}\,\tilde{a}$, so that if $\tilde{a}$ is a dimensionless operator then $k$ is a rate constant. we will find this definition useful below. in the above master equation, $h$ is the hamiltonian of the system and $dw$ is the stochastic increment of wiener noise. this noise describes the random fluctuations in the stream of measurement results about the mean value determined by $\rho$, for a vacuum input field. note that if $\tilde{a}$ is hermitian then the coupling to the field does not damp the system. [feedback is implemented by adding a term to the system hamiltonian that is a hermitian operator depending on the measurement record.] in fig. [fig1] we depict a feedback control system that uses continuous measurement.
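as a concrete illustration of the kind of equation involved, the following is a minimal numerical sketch (not the authors' code) of integrating a diffusive stochastic master equation of the same general form for a single qubit: a hermitian measured observable, a thermal dissipator, and an innovation term driven by the wiener increment. the choice of observable, the parameter values, the crude euler-type update with renormalization, and the omission of the feedback hamiltonian are all simplifying assumptions made purely for illustration.

```python
import numpy as np

# minimal sketch: euler-type integration of a diffusive stochastic master equation
#   d(rho) = -i[h, rho] dt - k [a,[a, rho]] dt
#            + sqrt(2k) (a rho + rho a - 2 tr(a rho) rho) dW
# plus a thermal dissipator. all parameter values below are placeholders, and the
# system hamiltonian is taken to be zero for simplicity.
sz = np.array([[1, 0], [0, -1]], dtype=complex)      # basis: |e> = (1,0), |g> = (0,1)
sm = np.array([[0, 0], [1, 0]], dtype=complex)       # lowering operator |g><e|

def diss(c, rho):                                    # lindblad dissipator d[c]rho
    return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

k, gamma, n_t = 1.0, 0.1, 0.05                       # measurement rate, thermal rate, occupation
a = sz                                               # hermitian measured observable (assumed)
dt, n_steps = 1e-4, 100_000
rng = np.random.default_rng(0)
rho = 0.5 * np.eye(2, dtype=complex)                 # start maximally mixed

for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal()
    drho = -k * (a @ a @ rho - 2 * a @ rho @ a + rho @ a @ a) * dt        # -k[a,[a,rho]]dt
    drho += gamma * ((n_t + 1) * diss(sm, rho) + n_t * diss(sm.conj().T, rho)) * dt
    mean_a = np.real(np.trace(a @ rho))
    drho += np.sqrt(2 * k) * (a @ rho + rho @ a - 2 * mean_a * rho) * dw  # innovation term
    # a feedback term -i[f, rho]dt, with f chosen from the record, would be added here
    rho = rho + drho
    rho = 0.5 * (rho + rho.conj().T)                 # keep hermitian
    rho /= np.real(np.trace(rho))                    # renormalize (crude stabilization)

print("excited-state population (single trajectory):", np.real(rho[0, 0]))
```

a production simulation would use a more careful, positivity-preserving integrator (such as the scheme of rouchon and collaborators mentioned below) and would average over many trajectories, but the sketch shows how the deterministic and stochastic pieces of the equation enter.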
in this figure a field traveling to the right, which might physically be an optical beam or a superconducting transmission line, interacts with the system via a system observable and is subsequently measured. the continuous classical measurement signal is processed by a classical computing device to produce a (possibly vector-valued) control signal. this signal is then used to control the system by applying control fields to it. the hamiltonian of the system becomes $$h(t) = h_{{\mbox{\scriptsize s}}} + f(t), \label{mbf}$$ in which $h_{{\mbox{\scriptsize s}}}$ is the time-independent hamiltonian of the system in the absence of the control, and the hermitian operator $f(t)$ can in general be chosen to be any function of the measurement record up until the current time. we will quantify the strength (or speed) of the feedback using the norm of $f(t)$, and denote the limit to this strength by $\mu$. we now review the level of control that can be obtained with continuous-time measurement-based feedback, as described by eq.([mbf]). the question of the optimal mbf control protocol for a single qubit is remarkably complex, and the optimum is not known in general. however an optimal protocol is known in the limit of strong feedback ($\mu \rightarrow \infty$). this protocol involves using the measurement record to continually calculate the density matrix, and continually modifying the feedback hamiltonian and measured observable so that i) in each time-step the total hamiltonian rotates the bloch vector towards the target state (in our case the ground state) as fast as possible, and ii) the eigenbasis of the measured observable remains unbiased with respect to that of the density matrix. this second condition can also be stated by saying that the bloch vectors of the eigenstates of the measured observable remain orthogonal to those of the eigenstates of the density matrix. under the above mbf protocol, in the limit in which $\mu \rightarrow \infty$ and in the absence of any noise, the purity of the system density matrix increases deterministically, and the density matrix remains diagonal in the z-basis. adding the thermal noise described by the master equation in eq.([therm]), it is simple to calculate the resulting steady-state value of the excited-state probability, which is given by eq.([infmu]). previous results indicate that the above feedback protocol remains optimal so long as $\mu$ is sufficiently large, but it is not possible to obtain an analytic solution for the performance for general $\mu$. we therefore simulate the protocol numerically, choosing fixed values for the thermal relaxation rate $\gamma$ and for the temperature parameter $n_t$. note that the control problem is defined by four parameters, and by only the three parameters $\gamma$, $n_t$, and the feedback strength $\mu$ if we scale everything by the measurement rate (that is, measure all rates in units of $k$). we plot the performance of the protocol in fig. [fig2] as a function of $\mu$. for these simulations we used the recent numerical method devised by rouchon and collaborators, which is a tremendous advance on previous methods for simulating the sme, both in terms of stability and accuracy. to compare cfc protocols with the mbf protocol depicted in fig. [fig1], the cfc protocols must interact with the system via traveling-wave fields. three possible configurations that we can use to implement field-mediated cfc are shown in fig. [fig3]. in configurations (a) and (b) the system and controller are connected by a single traveling-wave field.
in (a) this field interacts with the system via one system operator, goes on to interact with the controller (a second quantum system), and then returns to interact with the system via a second system operator before passing out to be discarded. the only difference between (a) and (b) is that after the field has provided feedback to the system via the second operator it is then allowed to interact with the controller once more before being discarded. phase shifters are also included that can apply shifts $\theta$ and $\phi$ to the field. configuration (c) is different from the others in that two distinct fields are used: the field that carries the control signal from the controller to the system is not the field that carries information the other way. in all three configurations the interaction operators that mediate interactions between the system and controller must be bounded as in the mbf protocol. further, we should not allow the system to be damped arbitrarily by the field; since the field is at zero temperature such damping would provide an entropy-dumping mechanism for free that we did not allow in the mbf protocol. this condition can be imposed merely by demanding that the system coupling operators are hermitian, as we did for the mbf protocol. we can allow the controller as much damping as we want, however, and so the operators that couple the controller to the field can be non-hermitian. we will decompose the controller coupling operator into a pair of hermitian operators, $\tilde{c}$ and $\tilde{c}'$. with this decomposition we will find that only one of these hermitian components appears in the resulting field-mediated interaction with the system. thus while that component must be subject to the same bound as in the mbf protocol, the other can be left unbounded, since it does not play any role in mediating a mesoscopic interaction. for the same reason the hamiltonian of the controller is unbounded. we are also free to include additional arbitrary damping channels for the controller if we wish. for configuration (a) the heisenberg equations of motion for an arbitrary system operator, $s$, and an arbitrary auxiliary operator, $x$, are given by $$\begin{aligned} ds = & \; i[\,\cdots,s]\,dt + d\mathcal{q}[\sqrt{k}\tilde{d}, 0, a_{{\mbox{\scriptsize in}}}]\,s, \label{s1a} \\ dx = & \; i\!\left[\frac{h_{{\mbox{\scriptsize a}}}}{\hbar} + 4 k \tilde{a}\,\mbox{im}[e^{-i\theta}\tilde{l}],\, x\right]\!dt + d\mathcal{q}[\sqrt{k}\tilde{l}, \theta, a_{{\mbox{\scriptsize in}}}]\,x, \label{x1a}\end{aligned}$$ in which the combination $\sin(\theta\!+\!\phi)\,\tilde{c}' - \cos(\theta\!+\!\phi)\,\tilde{c}$ appears. for configuration (b) the equations of motion for the system are identical to those for (a), while the equation of motion for the auxiliary becomes $$dx = i[\,\cdots,x]\,dt + d\mathcal{q}[\sqrt{k}\tilde{v}, \theta, a_{{\mbox{\scriptsize in}}}]\,x \label{xb}$$ in which the term ``h.c.'' in the expression defining $\tilde{v}$ represents the hermitian conjugate of the expression that appears before it within the parentheses. the equations of motion for (c) are $$\begin{aligned} ds = & \; i[\,\cdots,s]\,dt + d\mathcal{q}[\sqrt{k}\tilde{a}, 0, a_{{\mbox{\scriptsize in}}}]\,s + d\mathcal{q}[\sqrt{k}\tilde{b}, \phi, b_{{\mbox{\scriptsize in}}}]\,s, \\ dx = & \; i\!\left[\frac{h_{{\mbox{\scriptsize a}}}}{\hbar} + 2 k \tilde{a}\left(e^{i\theta}\tilde{m}^\dagger - e^{-i\theta}\tilde{m}\right),\, x\right]dt + d\mathcal{q}[\sqrt{k}\tilde{l}, 0, b_{{\mbox{\scriptsize in}}}]\,x + d\mathcal{q}[\sqrt{k}\tilde{m}, \theta, a_{{\mbox{\scriptsize in}}}]\,x.\end{aligned}$$
by examining the equations of motion for (a) we see that the field mediates an effective coupling between the two systems. specifically, in eq.([s1a]) a hamiltonian term proportional to a product of a system operator and a controller operator appears. this effective hamiltonian is generated by the field that flows from the auxiliary operator to the system operator, and we note that it does not appear in the equation of motion for the auxiliary. instead the auxiliary sees an effective hamiltonian proportional to $\tilde{a}$ and $\mbox{im}[e^{-i\theta}\tilde{l}]$ that comes from the field that flows from the system to the auxiliary. the equations of motion are thus asymmetric in a way that a purely hamiltonian coupling never is. in configuration (a) it is the fact that the _same_ field interacts with the system via both of its coupling operators that causes the system to see an effective interaction with the field given by the single lindblad operator $\tilde{d}$. note that in (c), where separate fields interact with the two system operators, the decoherence due to the fields must instead be described by two separate superoperators. the fact that the same field interacts with the system twice allows the noise introduced at the second input to cancel that at the first input (assuming no significant time-delay between the two points of input). this cancellation can be achieved, for example, by an appropriate choice of the phase shifts, with the result that the system's effective lindblad operator vanishes. this possibility makes configurations (a) and (b) very different from (c). [the field(s) are represented by the wavy lines and interact with the systems via the operators in the circles. the arrows indicate the direction in which the field(s) carry information away from the systems, and $\theta$ and $\phi$ denote phase shifts applied to the field(s). the system coupling operators are hermitian while the controller coupling operators may be non-hermitian, with the subscripts indicating possible time-dependence. it is convenient to write an arbitrary controller coupling operator in terms of the hermitian operators $\tilde{c}$ and $\tilde{c}'$ and a phase shift.] we now examine some of the properties of the three cfc configurations in fig. [fig3] so as to gain some insight into the possible control mechanisms. first, as we noted above, configurations (a) and (b) have the potential to cancel the noise that is fed into the system. if we take (a) and choose the coupling operators and phase shifts appropriately, the noise terms arising from the two interactions with the field cancel, and the equations of motion become $$\begin{aligned} ds = & \; i[h_{{\mbox{\scriptsize s}}}/\hbar + 4 k \tilde{b}\tilde{c},\, s]\,dt, \\ dx = & \; i[h_{{\mbox{\scriptsize a}}}/\hbar + 4 k \tilde{b}\tilde{c},\, x]\,dt + d\mathcal{q}[\sqrt{k}\tilde{l}, \theta, a_{{\mbox{\scriptsize in}}}]\,x. \label{halfh}\end{aligned}$$ interestingly, when we cancel the input noise for the system, both the system and the controller see the same hamiltonian interaction, which is the term proportional to $\tilde{b}\tilde{c}$.
in this case the dynamics could be reproduced instead by coupling the two systems together using a direct interaction hamiltonian proportional to $\tilde{b}\tilde{c}$, and then separately coupling the controller to a field via the operator $\tilde{l}$. this kind of _direct coupling_ configuration is depicted in fig. [fig4]. configuration (b) has the ability to cancel the noise input to both the system and the controller, which is achieved by making the same choices as before and, in addition, analogous choices for the second pair of couplings. the resulting equations of motion are $$\begin{aligned} \dot{s} = & \; i[h_{{\mbox{\scriptsize s}}}/\hbar + 4 k \tilde{b}\tilde{c},\, s], \\ \dot{x} = & \; i[h_{{\mbox{\scriptsize a}}}/\hbar + 4 k \tilde{b}\tilde{c},\, x]. \label{fullh}\end{aligned}$$ we see that these equations describe two systems coupled together solely by the interaction hamiltonian. thus from a theoretical point of view the use of field-mediated coupling to connect quantum systems subsumes the use of direct coupling, because the former can simulate the latter. the fact that the field-coupling configurations can reproduce the direct-coupling scenario in fig. [fig4] means that they can extract entropy from the system using a ``state-swapping'' procedure. the effective direct coupling, along with the control of the auxiliary hamiltonian, can be used to realize a joint unitary operation that swaps the states of the system and controller. this method of control is discussed elsewhere, and is the mechanism used in resolved-sideband cooling. the latter is presently the state-of-the-art for cooling nano-mechanical oscillators and the external motion of trapped ions. if one prepares the controller in the state in which one wishes to prepare the system, then swapping the state of the system with that of the controller prepares the system in the desired state, automatically transferring any entropy in the system to the controller. in this case, assuming a perfect controller, the fidelity with which the system can be prepared in the desired state (that is, the degree of control that can be obtained) is determined by the speed at which the swap can be implemented. the faster the swap is performed, the less time the noise that drives the system has to degrade the state as it is loaded into the system. of course, to continually re-prepare the system in the desired state the controller must get rid of the entropy it extracts from the system. this is the reason that we include in fig. [fig4](b), in addition to the direct coupling between the system and controller, a coupling between the controller and a field that can act as a zero-temperature bath. the limit with which a coherent interaction can prepare a system in its ground state in the presence of thermal noise was explored by wang _et al._ there the authors presented a simple analytic expression as a bound on the minimum achievable excited-state probability, for which they provided strong evidence, and which is expected to be valid when the interaction rate is much greater than the thermal relaxation rate. the bound is expressed in terms of the norm of the interaction hamiltonian between the system and the controller. for our coherent control configurations, in which the field-mediated coupling is bounded, the resulting value of the bound is a constant factor higher than that achievable by mbf. we now note that the coherent interaction provided by the effective direct coupling is not the only control mechanism available to the coherent control configurations given in fig. [fig3] (a) and (b).
by choosing the hermitian system coupling operators appropriately, along with the phases $\theta$ and $\phi$, these configurations can create an effectively dissipative interaction with the field. further, this can be achieved with the simple feedback loop depicted in fig. [fig4](a), in which we have removed the quantum auxiliary system. the equation of motion for the system in this case, obtained from eq.([s1a]) by removing the auxiliary, is $$\begin{aligned} ds = & \; i[\,\cdots,s]\,dt - k\,\mathcal{k}(\tilde{d})\,s\,dt + \sqrt{2k}\left([s,\tilde{d}^\dagger]\,da_{{\mbox{\scriptsize in}}} - [s,\tilde{d}]\,da_{{\mbox{\scriptsize in}}}^\dagger\right). \label{noc}\end{aligned}$$ if we now choose the coupling operators and the feedback phase appropriately, the effective operator $\tilde{d}$ becomes a decay operator for the qubit, and the loop results in damping of the qubit. if we include thermal noise for the qubit in eq.([noc]), then the steady-state population of the excited state, assuming the damping rate is much larger than the thermal relaxation rate, is twice the minimum value for mbf with infinite feedback strength; yet it doesn't require any quantum auxiliary! to summarize the above discussion, we see that the coherent configurations (a) and (b) in fig. [fig3] possess two separate mechanisms by which they can prepare the system in its ground state. while the best performance that can be achieved by each of these mechanisms individually is less than that achievable with mbf, presumably both can be used simultaneously, at least to some degree. [(a) in this configuration the auxiliary system has been removed; the processing of the output is merely a phase shift that is applied before the field is fed back to the system. (b) in this scenario, which is subsumed by that in fig. [fig3](a), the system is coupled directly to the auxiliary via an interaction hamiltonian, which is indicated diagrammatically by the horizontal line. the auxiliary is also coupled to an output field to which it can discard entropy. these two configurations elucidate two control mechanisms that are available to the configurations in fig. [fig3] (a) and (b).] we have obtained some insight into the power of coherent feedback, but we have not found an analytic expression for the maximum performance of cfc under our constraint. to explore this question further we now turn to numerical optimization. for each of the configurations in fig. [fig3] the space of possible options for implementing control is large. in (a), for example, we have four hermitian operators (the two system coupling operators and the two hermitian components of the controller coupling operator) that we can vary in an essentially arbitrary way with time, as well as the hamiltonian of the auxiliary system and the phases $\theta$ and $\phi$. the purpose of numerical optimization is to search over the space of functions of time, called the _control functions_, that we can choose for the above quantities in order to maximize the performance of the cfc configuration. to perform such a search we must characterize the control functions in terms of a finite set of parameters. the optimization procedure then performs a search over the space of parameters. a given set of control functions that describes how all of the time-variable quantities change with time, defined over some duration, is called a _control protocol_.
a simple way to parametrize our control functions is to divide the time over which the control will be applied into equal intervals, and make the control functions piecewise-constant on these intervals. this is the parametrization we use here. since we are interested in steady-state control we choose a duration over which to define the control functions, and we then apply this control repeatedly until the steady state is obtained. thus our control protocol will be periodic, with the protocol duration as its period. the total number of real parameters over which we must search is the number of (real-valued) control functions multiplied by the number of intervals. each hermitian operator appearing in the equation of motion for an $n$-dimensional system is defined by $n^2 - 1$ real parameters (one less than $n^2$ because the motion generated by an operator is unaffected by adding to it a multiple of the identity). for our numerical exploration of the performance of cfc we use only the simplest auxiliary system for the controller, namely a single qubit. with this choice, and allowing the energy gap of the system as an additional control function, the total number of parameters can be counted for each of configurations (a), (b), and (c) in fig. [fig3]. the corresponding counts for configurations (a) and (b) in fig. [fig4] follow in the same way; note that in fig. [fig4](b) the operators defining the direct coupling only appear in the evolution as a product, so that together they require 5 rather than 6 parameters. further details of the procedure we use for numerical optimization are given in the appendix. for the numerical analysis we must choose values for the parameters of our control problem, and we use the same ones we used for the simulation of mbf in section [secmbf], namely the same values of the thermal relaxation rate $\gamma$ and the temperature parameter $n_t$. recall that the coherent feedback control problem is completely specified by these two parameters when we measure all rates in units of $k$. we first perform a numerical search for protocols that employ the direct-coupling configuration (fig. [fig4](b)). for the chosen parameter values the lower bound for direct coupling, given in eq.([lowb]), can be evaluated, and the best excited-state probability we are able to achieve for the direct-coupling configuration using numerical optimization lies above this bound. it is expected that the achievable value should be higher than the lower bound in eq.([lowb]), as this bound is not expected to be achievable in the steady state but only for preparation at a single instant. our results are thus consistent with the claims made in the work of wang _et al._
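to make the structure of this search concrete, the following is an illustrative sketch (not the authors' appendix procedure) of a piecewise-constant, periodic protocol optimized with a derivative-free method. the model is a stand-in inspired by the direct-coupling configuration: a system qubit subject to thermal noise, an auxiliary qubit with a zero-temperature decay channel, and a single excitation-exchange coupling whose strength takes one constant value per interval. the rates, interval count, crude clipping used to mimic a coupling bound, and optimizer settings are all assumptions chosen only for illustration.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# sketch of the piecewise-constant parametrization: a protocol of period `period`
# is split into n_int equal intervals; on each interval the exchange coupling takes
# one constant, optimizable value. model and numbers are placeholders.
n_int, period = 8, 4.0
k, gamma, n_t = 1.0, 0.1, 0.05
dt = period / n_int

sm = np.array([[0, 0], [1, 0]], dtype=complex)            # lowering op, |e> = (1,0)
id2, id4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)
sm_s, sm_a = np.kron(sm, id2), np.kron(id2, sm)           # system (x) auxiliary ordering
h_int = sm_s.conj().T @ sm_a + sm_s @ sm_a.conj().T       # excitation-exchange coupling

def dissipator(c):                                        # vectorized lindblad dissipator
    cdc = c.conj().T @ c
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(id4, cdc) - 0.5 * np.kron(cdc.T, id4))

def liouvillian(g):
    g = float(np.clip(g, -2 * k, 2 * k))                  # crude stand-in for a coupling bound
    h = g * h_int
    lv = -1j * (np.kron(id4, h) - np.kron(h.T, id4))
    lv += gamma * (n_t + 1) * dissipator(sm_s) + gamma * n_t * dissipator(sm_s.conj().T)
    lv += 4 * k * dissipator(sm_a)                        # zero-temperature dump for the auxiliary
    return lv

def excited_population(params, n_periods=300):
    prop = np.eye(16, dtype=complex)
    for g in params:                                      # one propagator per interval
        prop = expm(liouvillian(g) * dt) @ prop
    rho0 = np.kron(0.5 * np.eye(2), np.diag([0.0, 1.0])).astype(complex)
    vec = rho0.flatten(order='F')                         # column-stacked density matrix
    for _ in range(n_periods):                            # iterate the period map to steady state
        vec = prop @ vec
    rho = vec.reshape((4, 4), order='F')
    rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # trace out the auxiliary
    return float(np.real(rho_sys[0, 0]))

res = minimize(excited_population, x0=0.5 * np.ones(n_int), method='Nelder-Mead',
               options={'maxiter': 3000, 'xatol': 1e-4, 'fatol': 1e-7})
print("optimized steady-state excited population:", res.fun)
```

in the actual problem each interval would carry the full set of operator and phase parameters discussed above rather than a single coupling value, and the coupling bound would be imposed exactly rather than by clipping, but the overall structure of the search is the same.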
turning to the field-mediated cfc protocols in fig. [fig3], we first evaluate the performance of (a) and (b). the results of our numerical optimization, presented in detail in the appendix, indicate that the best possible performance of both (a) and (b) is exactly the same as that of the continuous mbf protocol with infinite feedback force. this result is interesting for at least two reasons. first, it seems somewhat remarkable that the cfc protocol is able to perform as well as the mbf protocol when its feedback interaction forces are limited precisely to those of its measurement interaction forces, while the mbf protocol can use infinitely fast hamiltonian control. second, it would seem rather coincidental that a two-level auxiliary would exactly match the performance of mbf unless this performance is a bound that is independent of the size of the auxiliary. this suggests that the bound previously established for mbf with infinite feedback, eq.([infmu]), may be the upper limit for any control under a bound on the markovian coupling to a field. certainly this would appear to be an interesting question for future work. the fact that (a) and (b) give the same performance suggests that the additional controller-field interaction in (b) may not provide any additional power for the control process, at least when the dynamics of the auxiliary is unconstrained. this too may be a question worth pursuing in future work. finally we evaluate the performance of configuration (c) in fig. [fig3]. we find that this configuration is not able to provide _any_ control over the entropy of the qubit, in that it is unable to reduce the probability of the excited state below the uncontrolled thermal value. this shows that the ability to cancel the input noise to the system, a property possessed by configurations (a) and (b), is essential for performing non-trivial control when the interactions with the system and the fields are hermitian (and thus non-dissipative), and when the auxiliary has only two levels. it seems likely that this will remain true for couplings that are dissipative but in which the field is at the same temperature as the noise one wishes to control. we expect, however, that configuration (c) of fig. [fig3] will be able to perform non-trivial control when the auxiliary has more than two levels, since such a scenario should be able to mimic the functioning of mbf. here we plot the result of 1000 independent searches for a range of values of the protocol period, logarithmically equally spaced (the plot actually shows the subset of results that lie within a slightly smaller interval). we see from this that while not every search finds a good protocol (that is, a near-optimal value of the steady-state excited-state probability), a significant fraction of them do. for the field-mediated configurations (fig.
[fig3](a) and (b)) we find that the second method is superior. for (a) we performed 5 scans starting with values for t in the range
|
we compare the performance of continuous coherent feedback , implemented using an ideal single - qubit controller , to that of continuous measurement - based feedback for the task of controlling the state of a single qubit . here the basic dynamical resource is the ability to couple the system to a traveling - wave field ( for example , a transmission line ) via a system observable , and the fundamental limitation is the maximum rate that is available for this coupling . we focus on the question of the best achievable control given ideal controllers . to obtain a fair comparison we acknowledge that the amplification involved in measurement - based control allows the controller to use macroscopic fields to apply feedback forces to the system , so it is natural to allow these feedback forces to be much larger than the mesoscopic coupling to the transmission line that mediates both the measurement for measurement - based control and the coupling to the mesoscopic controller for coherent control . interestingly our numerical results indicate that under the above platform for comparison , coherent feedback is able to exactly match the performance of measurement - based feedback given ideal controllers . we also discuss various properties of , and control mechanisms for , coherent feedback networks .
|
although the molecular biology paradigm has been quite successful in solving a number of problems in the understanding of cell structure and function , important problems remain open .some of the most notable of these are in the area of cell division . given the constraints wittingly or unwittingly imposed by the molecular cell biology paradigm , it would seem that models of mitotic motions and events have become more and more complex , and therefore to this observer at least more and more unsatisfactory .insisting on a molecular biology approach to explaining mitotic chromosome motions ( and a number of other cellular processes ) has striking similarities to the ancient greeks insisting on perfect circles to explain planetary motions .this paper reviews an alternative based on classical electrostatics expressed in terms of stably bound continuum surface and volume charge densities ( _ charge distributions _ ) .this sort of approach makes it possible to describe the dynamics , including timing and sequencing , of post - attachment mitotic motions within a comprehensive approach .the known charge in mitotic chromosome motions is the negative charge on chromosome arms and centrosomes , and positive charge at kinetochores .negative charge at and near _ plus _ ends of microtubules and positive charge at _ minus _ ends of microtubules will be assumed .( according to existing convention , one end is designated _ plus _ because of its more rapid growth , there being no reference to charge in the use of this nomenclature . ) arguments for these two assumptions will be presented ; however , they are to be viewed here as the sole postulates within which a comprehensive model for post - attachment mitotic movements and events can be framed . in 2002 and 2005 papers , i argued that indirect experimental evidence indicates that pole - facing plates of kinetochores manifest positive charge [ 1 - 3 ] and interact with negatively charged microtubule free ends to provide the motive force for poleward force generation at kinetochores .this has subsequently been supported by experiments [ 4 - 6 ] implicating positively charged molecules at kinetochores in establishing a dynamic coupling to negative charge on microtubules during mitosis . assuming a volume positive charge at kinetochores and negative charge at and near the free plus ends of microtubules, it was possible to derive a magnitude of the maximum ( tension ) force per microtubule for poleward chromosome motions that falls within the experimental range [ 3 ] . in these papers , i also proposed that indirect experimental evidence is consistent with a negative charge distribution on centrosomes [ 1 - 3 ] .recent direct experimental measurements have confirmed this [ 7 ] . 
as noted above, a major advantage of focusing on cellular charge distributions is that it appears to offer the possibility of discovering a minimal assumptions model for post - attachment chromosome motions [ 3,8 ] .a model of this sort can point the way to the eventual discovery of specific molecules , and their biochemistries , that are responsible for the various mitotic motions .this is what is now happening .as mentioned above , a number of recent experiments have shown that certain kinetochore molecules that bind with microtubules have a net positive charge , and that poleward force for chromosome motions at kinetochores may be due to electrostatic interactions between these molecules and negative charge on microtubules [ 4 - 6 ] .these discoveries might have been made sooner if the above mentioned 2002 and 2005 papers had been duly noted .in the cytoplasmic medium ( cytosol ) within biological cells , it has been generally thought that electrostatic fields are subject to strong attenuation by screening with oppositely charged ions ( counterion screening ) , decreasing exponentially to much smaller values over a distance of several _debye lengths_. the debye length within cells is typically given to be of order 1 nm [ 9 ] , and since cells of interest in the present work ( i.e. eukaryotic ) can be taken to have much larger dimensions , one would be tempted to conclude that electrostatic force could not be a major factor in providing the cause for mitotic chromosome movements in biological cells .however , the presence of microtubules , as well as other factors to be discussed shortly , change the picture completely .microtubules can be thought of as intermediaries that extend the reach of the electrostatic interaction over cellular distances , making the second most potent force in the universe available to cells in spite of their ionic nature .microtubules are 25 nm diameter cylindrical structures comprised of _ protofilaments _ , each consisting of tubulin dimer subunits , 8 nm in length , aligned lengthwise parallel to the microtubule axis .the protofilaments are bound laterally to form a sheet that closes to form a cylindrical microtubule .the structure of microtubules is similar in all eukaryotic cells .cross sections reveal that the wall of a microtubule consists of a circle of 4 to 5 nm diameter subunits .the circle typically contains 13 subunits as observed in vivo .neighboring dimers along protofilaments exhibit a small ( b - lattice ) offset of 0.92 nm from protofilament to protofilament .microtubules continually assemble and disassemble , so the turnover of tubulin is ongoing .the characteristics of microtubule lengthening ( polymerization ) and shortening ( depolymerization ) follow a pattern known as dynamic instability " : that is , at any given instant some of the microtubules are growing , while others are undergoing rapid breakdown . 
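returning for a moment to the screening estimate above, the following small sketch puts a rough number on the bulk debye length and on the attenuation it implies over cellular distances. the ionic strength, temperature, and dielectric constant used are generic assumed values for cytosol-like conditions, not numbers taken from the cited references.

```python
import numpy as np

# rough debye-length estimate for cytosol-like conditions. the ionic strength,
# temperature, and dielectric constant below are generic assumed values.
eps0, k_b, e, n_a = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23
t, eps_r, ionic = 310.0, 80.0, 0.15        # K, bulk water, mol/L (assumed)

# symmetric 1:1 electrolyte: lambda_d = sqrt(eps0*eps_r*k_b*t / (2*n_a*e^2*c)),
# with c the ion concentration converted to ions per cubic meter.
lam = np.sqrt(eps0 * eps_r * k_b * t / (2 * n_a * e**2 * ionic * 1e3))
print("debye length ~ %.2f nm" % (lam * 1e9))            # ~0.8 nm, of order 1 nm
for r_nm in (2.0, 5.0, 10.0):
    print("screening factor exp(-r/lambda) at %4.1f nm: %.1e"
          % (r_nm, np.exp(-r_nm * 1e-9 / lam)))
```

this is the naive bulk estimate that motivates the skepticism described above; the discussion of layered water and the reduced local dielectric constant in the following paragraphs explains why the estimate can fail near charged protein surfaces.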
in general, the rate at which microtubules undergo net assembly or disassembly varies with mitotic stage [10]. changes in microtubule dynamics are integral to changes in the motions of chromosomes during the stages of mitosis. poleward and antipoleward chromosome motions occur during prometaphase and metaphase. antipoleward motions dominate during the _congressional_ movement of chromosomes to the cell _equator_, and poleward motion prevails during anaphase-a. it is assumed here that poleward chromosome motions are in response to disassembling kinetochore microtubules at kinetochores and poles, and antipoleward chromosome motions are in response to assembling microtubules at chromosome arms. experiments have shown that the intracellular ph of many cells rises to a maximum at the onset of mitosis, subsequently falling during later stages [11,12]. studies [13] have shown that _in vivo_ microtubule growth (polymerization) is favored by higher ph values. it should be noted that _in vitro_ studies of the role of ph in regulating microtubule assembly indicate a ph optimum for assembly in the range of 6.3 to 6.4. the disagreement between _in vitro_ and _in vivo_ studies has been analyzed in relation to the nucleation potential of microtubule organizing centers like centrosomes [13], and it has been suggested that intracellular ph regulates the nucleation potential of microtubule organizing centers [14-16]. this favors the more complex physiology characteristic of _in vivo_ studies for resolving this question. kinetochore molecules are known to self-assemble onto condensed, negatively charged dna at centromeres [17], indicating that kinetochores may exhibit positive charge at their pole-facing plates. this is an example of an important aspect of electrostatic interactions within cells: namely their longer range compared to other intracellular molecular interactions, and the resulting capacity of electrostatic forces to organize molecules and structures within cells. this line of reasoning was the basis for my assuming positive charge on the ``pole-facing plates'' of kinetochores in a previous paper [3]. in earlier works i had assumed positive charge at kinetochores for different reasons [1,2]. cellular electrostatics is strongly influenced by significantly reduced counterion screening due to layered water adhering to charged molecules.
such water layering with consequent reduction or elimination of debye screening to charged proteins has long been theorized [ 18,19 ] and has been confirmed by experiment [ 20 ] .additionally , water between sufficiently close ( up to 4 nm ) charged proteins has a dielectric constant that is considerably reduced from the _ bulk _ value far from charged surfaces [ 3,8,21 ] .as will be discussed in the next section , this would further increase the tendency for an electrostatic assist to aster and spindle self - assembly .the combination of these two effects ( or conditions ) water layering and reduced dielectric constant can significantly influence cellular electrostatics in a number of important ways .this is especially true in relation to mitosis [ 8,21 ] .the aster s pincushion - like appearance is consistent with electrostatics since electric dipole subunits will align radially outward about a central charge with the geometry of the resulting configuration resembling the electric field of a point charge .from this it seems reasonable to assume that the pericentriolar material , the _ centrosome matrix _ within which the microtubule dimer dipolar subunits assemble in many cell types to form an aster [ 22 ] , carries a net charge .this agrees with observations that the microtubules appear to start in the centrosome matrix [ 23 ] .one may assume that the sign of this charge is negative [ 1,2 ] .this assumption is consistent with experiments [ 24 ] revealing that mitotic spindles can assemble around dna - coated beads incubated in _ xenopus_ egg extracts .the phosphate groups of the dna will manifest a net negative charge at the ph of this experimental system .this experimental result was cited in my 2002 and 2005 papers to conclude that centrosomes are negatively charged [ 1 - 3 ] . as noted above ,centrosomes have recently been shown to have a net negative charge by direct measurement [ 7 ] .a number of investigations have focused on the electrostatic properties of microtubule tubulin subunits [ 25 - 28 ] .large scale calculations of the tubulin molecule have been carried out using molecular dynamics programs along with protein parameter sets .the dipole moment of tubulin has been calculated to be as large as 1800 debye ( d ) [ 25,29 ] . 
in experiments carried out at nearly physiological conditions, the dipole moment has been determined to be 36 d [30], corresponding to a dipole charge of approximately 0.1 electron per dimer. experiments [29,31] have shown that tubulin net charge depends strongly on ph, varying quite linearly from 12 to 28 (electron charges) between ph 5.5 and 8.0. this could be significant for microtubule dynamics during mitosis because, as noted above, many cell types exhibit a decrease of 0.3 to 0.5 ph units from a peak at prophase during mitosis. it has been determined that tubulin has a large overall negative charge of 20 (electron charges) at ph 7, and that as much as 40% of the charge resides on c-termini [32]. the c-termini can extend perpendicularly outward from the microtubule axis as a function of ph, extending 4 - 5 nm at ph 7 [32]. it would therefore seem reasonable to assume that an increased tubulin charge and the resulting greater extension of c-termini may be integral to an increased probability for microtubule assembly during prophase, when the intracellular ph is highest. as noted above, in addition to addressing force generation for post-attachment chromosome motions, a continuum electrostatics approach to mitotic motions can also account for the timing and sequencing of the detailed changes in these motions. these changes can be attributed to changes in microtubule dynamics based on a progressively increasing microtubule disassembly to assembly ratio for kinetochore microtubules that is caused by a steadily decreasing intracellular ph during mitosis [2,8]. a higher intracellular ph during prophase is consistent with an enhanced interaction between highly extended c-termini of tubulin dimers and positively charged regions of neighboring dimers. this enhanced interaction is due to their greater extension as well as increased expression of negative charge, with both favoring microtubule growth. it would therefore seem reasonable to expect that prophase high-ph conditions and the electrostatic nature of tubulin dimer subunits greatly assist in their self-assembly into the microtubules of the asters and spindle [1,2,8].
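as a rough numeric illustration of why weakened screening and a reduced dielectric constant matter for this assembly argument, the comparison below evaluates the coulomb interaction energy of two charge patches of roughly tubulin scale at nanometer separation, in bulk water versus a low-dielectric layered-water environment. the charge values, separation, reduced dielectric constant of 10, and the 0.8 nm screening length carried over from the earlier sketch are assumptions chosen only for illustration and are not taken from the cited references.

```python
import numpy as np

# order-of-magnitude comparison: interaction energy of two charge patches at
# nanometer separation, in bulk water versus layered (low-dielectric) water.
# charges, separation, and the reduced eps_r are assumed for illustration.
e, eps0 = 1.602e-19, 8.854e-12
kt = 1.381e-23 * 310.0                        # thermal energy at 310 K

q1 = q2 = 10 * e                              # assumed effective charges
r = 3e-9                                      # 3 nm separation (assumed)
lam_debye = 0.8e-9                            # bulk screening length from the earlier estimate

for eps_r, label in ((80.0, "bulk water"), (10.0, "layered water (assumed)")):
    u = q1 * q2 / (4 * np.pi * eps0 * eps_r * r)          # unscreened coulomb energy
    print("%-26s U ~ %6.1f kT" % (label, u / kt))
print("bulk value with debye screening: ~%.1f kT"
      % (q1 * q2 * np.exp(-r / lam_debye) / (4 * np.pi * eps0 * 80.0 * r * kt)))
```

on these assumed numbers an interaction that is marginal against thermal energy in screened bulk cytosol becomes many times the thermal energy when screening is suppressed and the dielectric constant is reduced, which is the qualitative point made in the surrounding text.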
as discussed in the previous section , this self - assembly would be aided by significantly reduced counterion screening due to layered water and the reduced dielectric constant between charged protein surfaces .an electrostatic component to the biochemistry of the microtubules in assembling asters is consistent with experimental observations of ph effects on microtubule assembly [ 13 ] , as well as the sensitivity of microtubule stability to calcium ion concentrations [ 33,34 ] .the two effects ( or conditions ) discussed in the last section would be expected to significantly increase the efficiency of microtubule self - assembly in asters and spindles by ( 1 ) allowing electrostatic interactions over greater distances than debye ( counter - ion ) screening dictates , and ( 2 ) increasing the strength of these interactions by an order of magnitude due to a corresponding order of magnitude reduction in the cytosolic dielectric constant between charged protein surfaces separated by critical distances or less .thus it would seem reasonable to assume that , over distances consistent with the reduced dielectric constant and modified counterion screening discussed above , the electrostatic nature of tubulin dimers would allow tubulin dimer microtubule subunits ( 1 ) to be attracted to and align around charge distributions within cells in particular , as mentioned above , around centrosomes and ( 2 ) to align end to end and laterally , facilitating the formation of asters and mitotic spindles [ 1,2,8 ] .the motive force for the migration of asters and assembling spindles during prophase can also be understood in terms of nanoscale electrostatics . as a consequence of the negative charge on microtubules at , and on c- termini near , the plus free ends of microtubules of the forming asters and half - spindles , the asters / half - spindles would be continuously repelled electrostatically from each other and drift apart .specifically , as microtubule assembly proceeds , a subset of the negatively charged microtubule free ends at and near the periphery of one of the growing asters / forming half - spindles would mutually repel a subset of the negatively charged free ends at and near the periphery of the other growing aster / half - spindle , causing the asters / half - spindles to drift apart as net assembly of microtubules continues and subsets of interacting microtubules are continually replaced [ 1,2,8 ] .microtubules disassembling from previously overlapping configurations could also generate repulsive force between asters / half - spindles , but net microtubule assembly will dominate during prophase . as discussed above , because of significantly reduced counterion screening and the low dielectric constant of layered water adhering to charged tubulin dimers , the necessary attraction and alignment of the dimers during spindle self - assembly would be enhanced by the considerably increased range and strength of the electrostatic attraction between oppositely charged regions of nearest - neighbors .similarly , the mutually repulsive electrostatic force between subsets of like - charged free plus ends of interacting microtubules from opposite half - spindles in the growing mitotic spindle would be expected to be significantly increased in magnitude and range . 
thus mutual electrostatic repulsion of the negatively charged microtubule plus ends distal to centrosomes in assembling asters/half-spindles could provide the driving force for their poleward migration in the forming spindle [1,2]. a subset of interacting microtubules in a small portion of a forming spindle is depicted in figure 1. as noted above, it is important to recognize that interacting microtubules can result from either growing or shrinking microtubules, but polymerization probabilities will dominate during prophase. as cited above, experiments have shown that the ph_i of many cell types rises to a maximum at the onset of mitosis, subsequently falling steadily through mitosis. although it is experimentally difficult to resolve the exact starting time for the beginning of the decrease in ph_i during the cell cycle, it appears to decrease 0.3 to 0.5 ph units from the typical peak values of 7.3 to 7.5 measured earlier during prophase. the further decrease in ph_i through metaphase [12] would result in increased instability of the microtubules comprising the spindle fibers. previously, i noted that in vivo experiments have shown that microtubule stability is related to ph_i, with a more basic ph favoring microtubule assembly. it is important to note that the ph in the vicinity of the negatively charged plus ends of microtubules (see the discussion of net charge at microtubule free ends below) will be even lower than the bulk ph_i because of the effect of the negative charge at the free plus ends of the microtubules. this lowering of ph in the vicinity of negative charge distributions is a general result. intracellular ph in such limited volumes is often referred to as local ph. as one might expect from classical boltzmann statistical mechanics, the hydrogen ion concentration at a negatively charged surface can be shown to be the product of the bulk phase concentration and the boltzmann factor exp(−eψ/kt), where e is the electronic charge, ψ is the (negative) electric potential at the surface, k is boltzmann's constant and t is the absolute temperature [35]. for example, for typical mammalian cell membrane negative charge densities, and therefore typical negative cell membrane potentials, the local ph can be reduced 0.5 to 1.0 ph unit just outside the cell membrane. because of the negative charge at the plus ends of microtubules, a reduction of ph would be expected in the immediate vicinity of these free ends, making the local ph influencing microtubule dynamics considerably lower, and a lower bulk ph_i would be accompanied by an even lower local ph. a continuum electrostatics model of mitotic events also addresses the dynamics of nuclear envelope fragmentation and reassembly [36].
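as a quick numerical check of the boltzmann estimate just given, the local ph shift can be written as δph = eψ/(kt ln 10), so that roughly every −60 mv of surface potential lowers the local ph by about one unit at physiological temperature. the python sketch below evaluates this for a few surface potentials; the specific potential values are assumptions chosen only to reproduce the 0.5 to 1.0 ph unit range quoted in the text.

```python
import math

K_B = 1.380649e-23      # boltzmann constant, J/K
E_CH = 1.602176634e-19  # electronic charge, C

def local_ph_shift(psi_volts, temp_kelvin=310.0):
    """local ph minus bulk ph at a surface of potential psi (negative psi => lower local ph)."""
    # [h+]_surface = [h+]_bulk * exp(-e*psi/kT)  =>  delta_ph = e*psi / (kT * ln 10)
    return E_CH * psi_volts / (K_B * temp_kelvin * math.log(10.0))

for psi_mv in (-30.0, -60.0, -90.0):  # illustrative surface potentials
    print(f"psi = {psi_mv:5.0f} mV  ->  delta ph = {local_ph_shift(psi_mv * 1e-3):+.2f}")
```

with ψ between roughly −30 and −60 mv the shift is about 0.5 to 1.0 ph unit, matching the range quoted above for the region just outside the cell membrane.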
experimentally observed increases in whole cell sialic acid content [37] and intracellular ph during prophase [11,12], followed by an observed release of free calcium from nuclear envelope stores at the onset of nuclear envelope breakdown [38,39], could significantly enhance the manifestation of negative charge on the nuclear envelope, providing sufficient electrostatic energy for nuclear envelope fragmentation [36]. experimental observations regarding the mechanical properties of the plasma membrane show that electrostatic stress does manifest itself in ways consistent with this scenario [40]. since terminal sialic acids are attached to membrane proteins that are firmly anchored in the lipid bilayer, the observed disassembly of the nuclear envelope is consistent with electrostatic repulsion between membrane continuum charge clusters, which tear apart under the influence of increased electrostatic charge [36]. it is difficult to envision a purely biochemical process that would result in the nuclear envelope's breaking into fragments of many molecules each. models for nuclear envelope breakdown in the current literature do not address this. the observed lowering of both intracellular ph and whole cell sialic acid content during late anaphase and telophase for many cell types [11,12,37] is consistent with a decreased manifestation of net negative charge on membrane fragments at that time. these decreases could shift the balance of thermal energy versus electrostatic repulsive energy, allowing the closer approach of membrane fragments necessary for reassembly to occur in nascent daughter cells [36]. an increased probability for microtubule depolymerization, as compared to the prophase predominance of microtubule assembly, is consistent with the alternating poleward and antipoleward motions (with antipoleward motions more probable) of monovalently attached chromosomes during prometaphase. as discussed elsewhere [2,3], after a bivalent attachment to both poles, poleward forces toward both poles acting in conjunction with inverse square antipoleward forces exerted between negatively charged microtubule free plus ends and negatively charged chromosome arms could account for chromosome congression. the relative complexity of microtubule disassembly force generation at kinetochores and poles, coupled with inverse square antipoleward forces from microtubule assembly at chromosome arms, precludes an unequivocal conclusion regarding a possible continuing increase in the microtubule disassembly to assembly (disassembly/assembly) probability ratio during chromosome congression. however, metaphase chromosome midcell oscillations are indirect experimental evidence for a microtubule disassembly/assembly probability ratio approaching unity. at late metaphase, before anaphase-a, experiments reveal that the poleward motions of sister kinetochores stretch the intervening centromeric chromatin, producing high kinetochore tensions. it is reasonable to attribute these high tensions to a continuing microtubule disassembly/assembly probability ratio increase caused by a further lowering of ph_i. the resulting attendant increase in poleward electrostatic disassembly force on sister chromatids would lead to increased tension. a lower ph_i would also increase the expression of positive charge on sister kinetochores, with the possibility of further increasing the tension due to increased mutual repulsion.
thus, regarding post-attachment chromosome movements through metaphase, it seems reasonable to ascribe an increasing microtubule disassembly/assembly probability ratio, with attendant changes in microtubule dynamics and associated mitotic chromosome motions through metaphase, to an experimentally observed steadily decreasing ph_i. we may then envision the decrease in ph_i from a peak at prophase favoring microtubule assembly, declining through prometaphase as discussed above, and continuing to decline through metaphase when parity between microtubule assembly and disassembly leads to midcell chromatid pair oscillations, culminating in increased kinetochore disassembly tension close to anaphase-a, as the cell's master clock controlling microtubule dynamics, and consequently the events of mitosis. one might also be tempted to attribute the more complete dominance of microtubule disassembly, with an accompanying predominance of poleward electrostatic disassembly forces during anaphase-a, to a further continuation of a decreasing intracellular ph. however, as discussed elsewhere [3,8,21], any additional possible decreases in ph_i during anaphase-a may not be a major determinant of anaphase-a motion. following a monovalent attachment to one pole, chromosomes are observed to move at considerably slower speeds, a few μm per minute, in subsequent motions throughout prometaphase [41]. in particular, a period of slow motions toward and away from a pole will ensue, until close proximity of the negatively charged end of a microtubule from the opposite pole with the other (sister) kinetochore in the chromatid pair results in an attachment to both poles (a bivalent attachment) [2,3]. attachments of additional microtubules from both poles will follow. (there may have been additional attachments to the first pole before any attachment to the second.) after the sister kinetochore becomes attached to microtubules from the opposite pole, chromosomes perform a slow (1 to 2 μm per minute) congressional motion to the spindle equator, culminating in the oscillatory motion of chromatid pairs during metaphase. chromosome motion during anaphase has two major components, designated as anaphase-a and anaphase-b. anaphase-a is concerned with the poleward motion of chromosomes, accompanied by the shortening of kinetochore microtubules at kinetochores and/or spindle poles. the second component, anaphase-b, involves the separation of the poles. both components contribute to the increased separation of chromosomes during mitosis. molecular biology explanations of these motions require that specific molecules, and/or molecular geometries, for mitotic chromosome force generation be identified for each motion. as indicated above, electrostatic models within the molecular cell biology paradigm have recently been sought (or advanced) involving positively charged kinetochore molecules interacting with negative charge on microtubules.
as in the situation involving models that center partially or wholly on simulations, these molecular biology approaches are quite complex and primarily attempt to address specific mitotic motions, most notably poleward force generation at kinetochores. critical experimental observations such as the "slip-clutch" mechanism [42], observations of calcium ion concentration effects on anaphase-a motion, and polar generation of poleward force are not addressed. however, it is possible to account for the dynamics of post-attachment mitotic motions in terms of electrostatic interactions between experimentally known, stably bound continuum surface and volume electric charge distributions interacting over nanometer distances. this is the approach that i have taken in a series of papers [1-3,21] and book [8]. as mentioned above, charge distributions are known to exist at centrosomes, chromosome arms, and kinetochores. assumptions of negative charge at microtubule plus ends and positive charge at microtubule minus ends (the only assumptions) are sufficient to explain the dynamics, timing, and sequencing of post-attachment chromosome motions. these assumptions will now be discussed. excluding possible contributions from microtubule associated proteins, the evidence for a net negative charge at microtubule plus ends and a net positive charge at minus ends is as follows: (1) large scale computer calculations of tubulin dimer subunits indicate that 18 positively charged calcium ions are bound within one monomer, with an equal number of negative charges localized at the adjacent monomer [25,26]; (2) experiments reveal that microtubule plus ends terminate with a crown of β subunits, and minus ends terminate with α subunits [43]; (3) the lower local ph vicinal to a negatively charged centrosome matrix would cause a greater expression of positive charge at microtubule minus ends; (4) the higher ph vicinal to a positively charged kinetochore pole-facing "plate" would cause a greater expression of negative charge at microtubule plus ends; (5) negative charge on centrosome matrices will induce positive charge on microtubule minus ends, and positive charge at the pole-facing plates of kinetochores will induce negative charge on microtubule plus ends.
as discussed elsewhere [3,8,21], force generation from positive charge at the free minus ends of kinetochore microtubules may be responsible for the polar generation of poleward force. a calculation of the force per microtubule assuming positive charge at microtubule minus ends falls within the experimental range [3]. although a calculation of the induced positive charge on a microtubule minus end from negative charge on a centrosome matrix is difficult because of the complex geometry, a reciprocal calculation of the induced negative charge on a centrosome matrix from positive charge at the minus end of a microtubule is relatively straightforward, and agrees with experimental ranges for cellular charge densities and force per microtubule measurements [21]. similarly, net positive charge at kinetochore pole-facing surfaces would induce negative charge on the plus ends of kinetochore microtubules proximal to kinetochores, and reciprocal calculations at kinetochores similar to those at centrosomes (see the previous paragraph) are also in agreement with experimental ranges for cellular charge densities and force per microtubule measurements [21]. the above charge distributions at plus and minus microtubule free ends are also in accord with the common observation that the free ends of an aster/half-spindle's microtubules distal to centrosomes (the pinheads in a pincushion analogy) are not attracted to the negatively charged outer surface of the nuclear envelope. if this were not the case, the forming half-spindles would not be able to move freely in their migration to the poles of the cell. as mentioned above, critical experimental observations such as the "slip-clutch" mechanism and calcium ion concentration effects on anaphase-a motion have not been addressed by current models for mitotic chromosome motions. these experiments will now be reviewed along with their natural explanations within the context of a continuum electrostatics approach.
at the high kinetochore tensions prior to anaphase-a mentioned above, coupled microtubule plus ends often switch from a depolymerization state to a polymerization state of dynamic instability. this may be explained by kinetochore microtubule plus or minus free ends taking up the slack by polymerization to sustain attachment and resist further centromeric chromatin stretching. this is known as the "slip-clutch mechanism" [42]. the slip-clutch mechanism is addressed within the context of the present work as follows: (1) microtubule assembly at a kinetochore or pole is regarded as operating in passive response to a repulsive, robust inverse square electrostatic antipoleward microtubule assembly force acting between the plus ends of astral microtubules and chromosome arms [2,3] and/or an electrostatic microtubule disassembly force at a sister kinetochore or at poles [3]; (2) non-contact electrostatic forces acting over a range of protofilament free end distances (up to 4 nm, as discussed above) from bound positive charge both inside and near "surfaces" at kinetochores would be effective in maintaining coupling while larger protofilament gaps in the same or other microtubules are passively filled in; (3) the repulsive inverse square electrostatic assembly force acting at the sister chromatid's arms will provide a positive feedback mechanism to resist detachment. this explanation of the slip-clutch mechanism follows as a direct consequence of the present approach to chromosome motility with no additional assumptions. there appears to be an optimum calcium ion concentration for maximizing the speed of chromosome motions during anaphase-a. if the [ca2+] is increased to micromolar levels, anaphase-a chromosome motion is increased two-fold above the control rate; if the concentration is further increased slightly beyond the optimum, the chromosomes will slow down, and possibly stop [44]. it has long been recognized that one way elevated [ca2+] could increase the speed of chromosome motion during anaphase-a is by facilitating microtubule depolymerization [33,45-48], and it has been commonly believed that microtubule depolymerization, if not the motor for chromosome motion, is at least the rate-determining step [49-52]. however, the slowing or stopping of chromosome motion associated with moderate increases beyond the optimum [ca2+] is more difficult to interpret, since the microtubule network of the spindle is virtually intact and uncompromised; such disruption of the mitotic spindle requires much higher concentrations [44,53].
in terms of the present model, higher concentrations of doubly-charged calcium ions would shield the negative charge at the plus ends of kinetochore microtubules as well as the negative charge at the centrosome matrix, shutting down the poleward-directed nanoscale electrostatic disassembly force. an experimental test of nonspecific divalent cation effects on anaphase-a chromosome motion by substitution of mg2+ for ca2+ [44] does not offer a definitive test for the possibility of negative charge cancellation by positive ions. this is because high frequency sound absorption studies of substitution rate constants for water molecules in the inner hydration shell of various ions reveal that the inner hydration shell water substitution rate for mg2+ is more than three orders of magnitude slower than that for ca2+ [54], indicating that the positive charge of mg2+ is shielded much more effectively by water than is the case for ca2+. thus, the slowing or stopping of anaphase-a chromosome motion accompanying free calcium concentration increases above the optimum concentration for maximum anaphase-a chromosome speed, but well below concentration levels that compromise the mitotic apparatus, is completely consistent with an electrostatic disassembly motor for poleward chromosome motion. this experimental observation has not been addressed by any of the other current models for anaphase-a motion. it seems clear that cellular electrostatics involves more than the traditional thinking regarding counterion screening of electric fields and the resulting unimportance within cells of the second most powerful force in nature. the evidence suggests that the reality is otherwise, and that the resulting enhanced electrostatic interactions are more robust and act over greater distances than previously thought. one aspect of this is the ability of microtubules to extend the reach of electrostatic force over cellular distances; another lies in the reduced counterion screening and dielectric constant of the cytosol between charged protein surfaces. a high ph_i during prophase favors spindle assembly. this includes greater electrostatic attractive forces between tubulin dimers as well as increased repulsive electrostatic interactions driving the poleward movement of forming half-spindles. additionally, because of significantly reduced counterion screening and the low dielectric constant of layered water adhering to the charged free ends of tubulin dimers, the necessary attraction and alignment of tubulin during spindle self-assembly would be enhanced by the considerably increased range and strength of the electrostatic attraction between oppositely charged regions of tubulin dimers. similarly, the mutually repulsive electrostatic force between a continually changing subset of like-charged plus ends of interacting microtubules from opposite half-spindles in the growing mitotic spindle would be expected to be significantly increased in magnitude and range.
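the role of counterion screening invoked throughout this discussion can be made slightly more concrete with a standard debye length estimate. the sketch below computes the debye screening length for a 1:1 electrolyte at a given ionic strength; the ionic strength and dielectric constant values are illustrative assumptions only (the text argues that the effective dielectric constant between closely spaced charged protein surfaces is substantially lower than the bulk value used in the first case).

```python
import math

# physical constants (SI units)
K_B = 1.380649e-23       # boltzmann constant, J/K
E_CH = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
N_A = 6.02214076e23      # avogadro's number, 1/mol

def debye_length_nm(ionic_strength_mol_per_l, eps_r=80.0, temp_kelvin=310.0):
    """debye screening length in nm for a 1:1 electrolyte at the given ionic strength."""
    n = ionic_strength_mol_per_l * 1e3 * N_A   # number density of each ion species, 1/m^3
    lam = math.sqrt(EPS0 * eps_r * K_B * temp_kelvin / (2.0 * E_CH**2 * n))
    return lam * 1e9

# illustrative values: a cytosol-like ionic strength and a lowered dielectric constant
for ic, eps in ((0.15, 80.0), (0.15, 10.0), (0.01, 80.0)):
    print(f"I = {ic:5.2f} M, eps_r = {eps:4.0f}  ->  debye length ~ {debye_length_nm(ic, eps):.2f} nm")
```

at physiological ionic strength the unmodified debye length is below 1 nm, which is why the reduced counterion screening and lowered dielectric constant argued for above are needed if electrostatic interactions are to act over the several-nanometre distances invoked in this model.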
experimentally observed increases in whole cell sialic acid content and intracellular ph during prophase, followed by an observed release of free calcium from nuclear envelope and endoplasmic reticulum stores, will significantly enhance the expression of negative charge on sialic acid residues of the nuclear envelope, providing sufficient electrostatic energy for nuclear envelope breakdown. since terminal sialic acids are attached to membrane proteins that are firmly anchored in the lipid bilayer, the observed disassembly of the nuclear envelope into membrane fragments is consistent with electrostatic repulsion between membrane charge clusters that could tear apart under the influence of increased electrostatic charge. the observed lowering of both intracellular ph and whole cell sialic acid content during late anaphase and telophase is consistent with a decreased manifestation of net negative charge on membrane fragments. this decrease could shift the balance of thermal energy versus electrostatic repulsive energy in favor of thermal energy, allowing the closer approach of membrane fragments necessary for reassembly biochemistry to occur in nascent daughter cells. changes in microtubule dynamics are integral to changes in the motions of chromosomes during mitosis. these changes in microtubule dynamics can be attributed to an associated change in intracellular ph (ph_i) during mitosis. in particular, a decrease in ph_i from a peak during prophase through mitosis may act as a master clock controlling microtubule disassembly/assembly probability ratios by altering the electrostatic interactions of tubulin dimers. this, in turn, could determine the timing and dynamics of post-attachment mitotic chromosome motions through metaphase. force generation for the dynamics of post-attachment chromosome motions during prometaphase and metaphase can be explained by statistical fluctuations in nanoscale repulsive electrostatic microtubule antipoleward assembly forces acting between microtubules and chromosome arms, in conjunction with similar fluctuations in nanoscale attractive electrostatic microtubule poleward disassembly forces acting at kinetochores and spindle poles [2,3]. the different motions throughout prometaphase and metaphase may be understood in terms of an increase in the microtubule disassembly to assembly probability ratio due to a steadily decreasing ph_i [2,8]. thus it seems reasonable to assume that the shift from the dominance of microtubule growth during prophase, to a lesser extent during prometaphase, and to approximate parity between microtubule polymerization and depolymerization during metaphase chromosome oscillations can be attributed to the gradual downward shift in ph_i during mitosis that is observed in many cell types. evidence for a further continuing decrease in ph_i and an increasing microtubule disassembly to assembly probability ratio is seen in the increased kinetochore tension just prior to anaphase. this increased tension has a possible simple interpretation in terms of the greater magnitude of poleward electrostatic disassembly forces at kinetochores and poles relative to antipoleward assembly forces between the plus ends of microtubules and chromosome arms. additional continuing decreases in ph_i during anaphase-a and anaphase-b may not be the major determinant of anaphase motions [3,8,21].
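the "master clock" picture summarized above can be illustrated with a deliberately crude toy simulation: a microtubule whose length performs a biased random walk, with the disassembly/assembly probability ratio tied to a linearly falling intracellular ph. everything in the sketch below (the linear ph schedule, the mapping from ph to the probability ratio, the unit step size) is an assumption made purely for illustration and is not part of the model described in the text.

```python
import random

def simulate_mt_length(steps=600, ph_start=7.4, ph_end=6.9, seed=1):
    """toy biased random walk for a microtubule length under a falling ph_i."""
    random.seed(seed)
    length, history = 100.0, []
    for k in range(steps):
        ph = ph_start + (ph_end - ph_start) * k / (steps - 1)   # assumed linear ph_i decline
        # assumed mapping: the disassembly/assembly probability ratio rises as ph_i falls,
        # passing through 1 (parity) midway through the schedule
        ratio = 1.0 + 4.0 * ((ph_start + ph_end) / 2.0 - ph)
        p_assemble = 1.0 / (1.0 + ratio)
        length += 1.0 if random.random() < p_assemble else -1.0
        history.append((ph, length))
    return history

hist = simulate_mt_length()
for idx in (0, 300, 599):
    ph, length = hist[idx]
    print(f"step {idx:3d}: ph_i = {ph:.2f}, length = {length:.0f}")
```

the qualitative behaviour (net growth early in the schedule, parity in mid-schedule, net shortening late) mirrors the prophase-to-metaphase progression described above, but the numbers themselves carry no biological meaning.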
in light of the large body of experimental information regarding mitosis, the complexity and lack of unity of models for the various events and motions gives , at least to this observer , reason to believe that approaching mitosis primarily within the molecular biology paradigm is flawed .this paper reviews the merits of an approach based on continuum electrostatics .such an approach to mitotic motions based on stably bound charge distributions can be used to frame a minimal assumptions model that incorporates the force production , timing , and sequencing of post - attachment chromosome motions .[ 1 ] l.j .gagliardi , j. electrostat . 54 ( 2002 ) 219 .[ 2 ] l.j .gagliardi , phys .e 66 ( 2002 ) 011901 .[ 3 ] l.j .gagliardi , j. electrostat .63 ( 2005 ) 309 .[ 4 ] c. ciferri __ , cell 133 ( 2008 ) 427 . [ 5 ] g.j .guimaraes , y. dong , b.f .mcewen , j.g .deluca , current biol . 18 ( 2008 ) 1778 .[ 6 ] s.a .miller , m.l .johnson , p.t .stukenberg , curr .18 ( 2008 ) 1785 .[ 7 ] s. hormeo __ , biophys. j. 97 ( 2009 ) 1022 .[ 8 ] l.j .gagliardi , _ electrostatic considerations in mitosis_. iuniverse publishing co. , bloomington , in , 2009 .[ 9 ] g.b.benedek , f.m.h .villars , _ physics : with illustrative examples from medicine and biology : electricity and magnetism_. springer - verlag , 2000 , p. 403 . [ 10 ] b. alberts , d. bray , j. lewis , m. raff , m.k .roberts , j.d ._ molecular biology of the cell_. garland publishing co. , n.y . , 1994 ,[ 11 ] c. amirand _et al . _ ,cell 92 ( 2000 ) 409 .[ 12 ] r.a .steinhardt , m. morisawa , in : r. nuccitelli , d.w .deamer ( eds . ) , intracellular ph : its measurement , regulation , and utilization in cellular functions .alan r. liss , new york , 1982 , pp .361 - 374 .[ 13 ] g. schatten , t. bestor , r. balczon , eur . j. cell biol . 36 ( 1985 ) 116 .[ 14 ] m.w .kirschner , j. cell biol .86 ( 1980 ) 330 .[ 15 ] m. de brabander , g. geuens , r. nuydens , cold spring harbor symp .46 ( 1982 ) 227 .[ 16 ] w.j .deery , b.r .brinkley , j. cell biol .* 96*:1631 .[ 17 ] b. alberts , d. bray , j.lewis , m. raff , m.k .roberts , j.d .watson , _ molecular biology of the cell_. new york : garland publishing company , 1994 , p. 1041 .[ 18 ] d. jordan - lloyd , a. shore , _ the chemistry of proteins_. london : j. a. churchill publishing company , 1938 [ 19 ] l. pauling , j. am .67 ( 1945 ) 555 .[ 20 ] m.f .toney , j.n .howard , j. richer , g.l .borges , j.g .gordon , o.r .melroy , d.g .wiesler , d. yee , l. sorensen , nature 368 ( 1994 ) 444 .[ 21 ] l.j .gagliardi , j. electrostat .66 ( 2008 ) 147 .[ 22 ] h.c .joshi , m.j .palacios , l. mcnamara , d.w .cleveland , nature 356 ( 1992 ) 80 .[ 23 ] s.l .wolfe , _ molecular and cellular biology_. belmont , ca : wadsworth publishing company , p. 1012 .[ 24 ] r. heald , r. tournebize , t. blank , r. sandaltzopoulos , p. becker , a. hyman , e. karsenti , nature 382 ( 1996 ) 420 .[ 25 ] m.v .satari , j.a .tuszyski , r.b .zakula , phys .e , 48 ( 1993 ) 589 .[ 26 ] j.a .brown , j.a .tuszyski , phys .e 56 ( 1997 ) 5834 .[ 27 ] n.a .baker , d. sept , s. joseph , m.j .holst , j.a .mccammon , proc .98 ( 2001 ) 10037 .[ 28 ] j.a .tuszyski , j.a .brown , p. hawrylak , phil .lond . a 356 ( 1998 ) 1897 .[ 29 ] j.a .tuszyski , s. hameroff , m.v .satari , b. trpisov , m.l.a .nip , j. theor .( 1995 ) 371 . [ 30 ] r. stracke , k.j .bhm , l. wollweber , j.a .tuszyski , e. unger , bioch . and biophys( 2002 ) 602 . [ 31 ] d. 
sackett , ph - induced conformational changes in the carboxy - terminal tails of tubulin , presented at the banff workshop molecular biophysics of the cytoskeleton , banff , alberta , canada , august 25 - 30 , 1997 .[ 32 ] j.a .tuszyski , j.a .brown , e.j .carpenter , e. crawford .2002 . in : proceedings of the electrostatics society of america and institute of electrostatics - japan .morgan hill , ca : laplacian press , pp .[ 33 ] r.c .weisenberg , science 177 ( 1972 ) 1104 .[ 34 ] g.g .borisy , j.b .olmsted , science 177 ( 1972 ) 1196 .[ 35 ] g.s .hartley , j.w .roe , trans .faraday soc .35 ( 1940 ) 101 .[ 36 ] l.j .gagliardi , j. electrostat .64 ( 2006 ) 843 .[ 37 ] m.c .glick , e.w .gerner , and l. warren , j. cell physiol .77 ( 1971 ) 1 .[ 38 ] r.b .silver , l.a .king , and a.f .wise , biol .( 1998 ) 209 .[ 39 ] r.b .silver , biol .bull . 187 ( 1994 ) 235 .[ 40 ] l. weiss , j. theoret .18 ( 1968 ) 9 .[ 41 ] a. grancell , p.k .sorger , current biol . 8 ( 1998 ) r382 .[ 42 ] h. maiato , j. deluca , e.d .salmon , w.c .earnshaw , j. cell science 117 ( 2004 ) 5461 .[ 43 ] y.h .song , e. mandelkow , j. cell biol .128 ( 1995 ) 81 .[ 44 ] d.h .zhang , d.a .callaham , p.k .hepler , j. cell biol . 111( 1990 ) 171 .[ 45 ] e.d .salmon , r.r .segall , j. cell biol .86 ( 1980 ) 355 .[ 46 ] d.p .kiehart , j. cell biol .88 ( 1981 ) 604 .[ 47 ] w.z .cande , physiology of chromosome movement in lysed cell models , in : h.g .schweiger ( ed . ) , international cell biology , springer , berlin , 1981 , pp .382 - 391 .[ 48 ] j.b .olmsted , g.g .borisy , biochemistry 14 ( 1975 ) 2996 .[ 49 ] r.b .nicklas , chromosome movement : current models and experiments on living cells , in : s. inoue , r.e .stephens ( eds . ) , molecules and cell movement , raven press , new york , 1975 , pp .[ 50 ] r.b .nicklas , chromosomes and kinetochores do more in mitosis than previously thought , in : j.p .gustafson , r. appels , r.j .kaufman ( eds . ) , chromosome structure and function : the impact of new concepts , plenum , new york , 1987 , pp . 53 - 74 .[ 51 ] e.d .salmon , ann .253 ( 1975 ) 383 .[ 52 ] e.d .salmon , microtubule dynamics and chromosome movement , in : j.s .hyams , b.r .brinkley ( eds . ) , mitosis : molecules and mechanisms , academic press , san diego , 1989 , pp .119 - 181 . [ 53 ] s.l .wolfe , molecular and cellular biology , second ed . ,wadsworth , belmont , ca ., 1993 , p. 425 .[ 54 ] h. diebler , g. eigen , g. ilgenfritz , g. maass , r. winkler , pure appl .( 1969 ) 93 .
|
recent experiments revealing possible nanoscale electrostatic interactions in force generation at kinetochores for chromosome motions have prompted speculation regarding possible models for interactions between positively charged molecules in kinetochores and negative charge on c-termini near the plus ends of microtubules. a clear picture of how kinetochores establish and maintain a dynamic coupling to microtubules for force generation during the complex motions of mitosis remains elusive. the current paradigm of molecular cell biology requires that specific molecules, or molecular geometries, for force generation be identified. however, it is possible to account for mitotic motions within a classical electrostatics approach in terms of experimentally known cellular electric charge interacting over nanometer distances. these charges are modeled as bound surface and volume continuum charge distributions. electrostatic consequences of intracellular ph changes during mitosis may provide a master clock for the events of mitosis. keywords: electrostatics; mitosis; chromosome motility; intracellular ph
|
in this article , we study a general two - player nonzero - sum stochastic differential game with impulse controls . more specifically , after setting the general framework , we investigate the notion of nash equilibrium and identify the corresponding system of quasi - variational inequalities ( qvis ) .moreover , we propose within this setting a model for competition in retail electricity markets and give a detailed analysis of its properties in both one - player and two - player cases .regarding general nonzero - sum impulse games , we consider a problem where two players can affect a continuous - time stochastic process by discrete - time interventions which consist in shifting to a new state ( when none of the players intervenes , we assume to diffuse according to a standard sde ) .each intervention corresponds to a cost for the intervening player and to a gain for the opponent .the strategy of player is determined by a couple , where is a fixed subset of and is a continuous function : player intervenes if and only if the process exits from and , when this happens , she shifts the process from state to state .once the strategies , , and a starting point have been chosen , a couple of impulse controls is uniquely defined : is the -th intervention time of player and is the corresponding impulse. each player aims at maximizing her payoff , defined as follows : for every belonging to some fixed subset and every couple of strategies we set ,\end{gathered}\ ] ] where , and is the exit time of from .the couple is a nash equilibrium if and , for every couple of strategies .the game just described is connected to the following system of qvis , where with and are suitable intervention operators defined in section [ ssec : qvi ] : the main mathematical result of this paper is the verification theorem [ thm : verification ] : if two functions , with , are a solution to , have polynomial growth and satisfy the regularity condition where with and , then they coincide with the value functions of the game and a characterization of the nash strategy is possible .we stress here that even if stochastic differential games have been widely studied in the last decades , the case of nonzero - sum impulse games has never been considered , to the best of our knowledge , from a qvi perspective .indeed , related former works only address zero - sum stopping games , the corresponding nonzero - sum problems ( with only two , very recent , explicit examples in and ) and zero - sum impulse games .we notice that the qvi formulated in for zero - sum impulse games are obtained as a particular case of our framework .only the two papers deal with some nonzero - sum stochastic differential games with impulse controls using an approach based on backward stochastic differential equations and the maximum principle .the second contribution of our paper is an application of the general setting to competition in retail electricity markets .since electricity market deregulation started twenty years ago , electricity retail markets have been mainly studied from the point of view of the regulation : joskow and tirole study the effect of the lack of hourly meters in households on retail competition , while von der fehr and hansen analyse the switching process of consumers in the norwegian market . here, we are interested in the rationale behind the price policy of electricity retailers for which an illustration is given in figure [ fig : retail - uk ] in the case of the uk electricity markets. 
retailers tend to increase the household price when the wholesale price increases and to decrease the household price when the wholesale price decreases .since retailers change their price nearly at the same moment ( moments differ only by a few weeks ) , one can wonder if these changes are optimal or result in a non - competitive behaviour .this question is the reason why the british energy regulator launched an inquiry on energy retailers in 2014 . in this paper , we propose to model the competition between two electricity retailers within the general setting of nonzero - sum impulse games , where it is rational for the retailers to increase or decrease their retail prices at discrete moments depending on the evolution of the wholesale price and of the competitor s choice . in our model, we assume that retailers buy the energy on a single wholesale market without distinguishing the purchases on the forward market from those on the spot market .moreover , we suppose that retailers have the same sourcing cost ( the price of power on the wholesale market ) but may have different fixed cost ( i.e. different amount of commercials ) .we also suppose , for tractability reason , that the structure cost of each retailer is quadratic in her respective market share .finally , retailers sell electricity to their final consumers at a fixed price ( possibly different for each retailer ) .both retailers objective is to maximize their total expected discounted profits .their instantaneous profits are composed of three parts : sale revenue ( market share times retail price ) , sourcing cost ( market share times wholesale market price ) , and structure cost .the wholesale market price evolution is assumed to follow an arithmetic brownian motion for the sake of simplicity .this is also partly justified by the fact that negative spot prices for electricity are more and more frequent on various national european markets .a last important feature of our model is that retailers can not transfer continuously the variations of their sourcing cost to their clients .instead , they can only change their prices in discrete time .whenever a retailer changes her price , she faces a fixed cost .indeed , each time a retailer decides to change her price , she has to advertise it and to inform all her actual clients about that change .therefore , the problem naturally formulates as a nonzero - sum stochastic impulse control game . under the guidance provided by the verification theorem established in the general setting , we provide a detailed analysis of nash equilibria of the retail impulse game .we focus on nash equilibria where each retailer keeps her price constant as long as the spread between her price and the wholesale price belongs to some region in the plane ( called non - intervention or continuation region ) .we conjecture that the non - intervention region of retailer consists of a ribbon in the plane , which is delimited by two curves .when the difference between her retail price and the wholesale price hits the boundary of the non - intervention region , the optimal intervention policy consists in instantaneously changing the retail price in order to come back to the interior of this region . 
within this class of nash equilibria, we obtain a system of algebraic equations that the parameters characterizing the equilibrium have to satisfy. the outline of the paper is the following. section [sec:stochimpgame] rigorously formulates the general impulse stochastic games, defines nash equilibria, provides the associated system of qvis and the corresponding verification theorem. in section [sec:oneplayer] we consider the retail management problem in a simple one-player framework, while in section [sec:twoplayer] we study the two-player model. finally, section [sec:conclusion] concludes. in this section we consider a general class of two-player nonzero-sum stochastic differential games with impulse controls: after a rigorous formalization (see section [ssec:pb]), we define a suitable differential problem for the value functions of such games (see section [ssec:qvi]) and prove a verification theorem (see section [ssec:verif]). let ( , , , ) be a filtered probability space whose filtration satisfies the usual conditions of right-continuity and -completeness. let be a -dimensional -adapted brownian motion and let be an open subset of . for every and we denote by a solution to the problem with initial condition and where and are given continuous functions. we will later provide precise conditions ensuring that the process is well-defined. we consider two players, that will be indexed by i = 1, 2. equation models the underlying process when none of the players intervenes; conversely, if player intervenes with impulse , the process is shifted from its current state to a new state , where is a continuous function and is a fixed subset of , with . each intervention corresponds to a cost for the intervening player and to a gain for the opponent, both depending on the state and the impulse. the action of the players is modelled via discrete-time controls: an impulse control for player is a sequence where denotes the number of interventions of player , are non-decreasing stopping times (the intervention times) and are -valued -measurable random variables (the corresponding impulses). as usual with multiple-control games, we assume that the behaviour of the players, modelled by impulse controls, is driven by strategies, which are defined as follows. [defphi] a strategy for player is a pair , where is a fixed open subset of and is a continuous function from to . strategies determine the action of the players in the following sense. once the strategies and a starting point have been chosen, a pair of impulse controls, which we denote by , is uniquely defined by the following procedure:
- a player intervenes if and only if the process exits from the open set in her strategy, in which case the impulse is given by the function in her strategy evaluated at the current state;
- if both the players want to act, player 1 has the priority;
- the game ends when the process exits from the set where the game takes place.
in the following definition we provide a rigorous formalization of the controls associated to a pair of strategies and the corresponding controlled process, which we denote by . moreover, denotes a generic subset of . [controls] let and let be a strategy for player . let and consider the conventions and . for every we define, by induction, the stopping times \alpha^{a_1}_k, \alpha^{a_2}_k and \alpha^{s}_k (the first times after \widetilde\tau_{k-1} at which the controlled process exits from the sets a_1, a_2 and s, respectively) and then

\begin{aligned}
& \widetilde\tau_k = ( \alpha^{a_1}_k \land \alpha^{a_2}_k \land \alpha^{s}_k ) \, \mathbbm{1}_{\{ \widetilde\tau_{k-1} < \alpha^{s}_{k-1} \}} + \widetilde\tau_{k-1} \, \mathbbm{1}_{\{ \widetilde\tau_{k-1} = \alpha^{s}_{k-1} \}} , && \text{[intervention time]} \\
& m_k = \mathbbm{1}_{\{ \alpha^{a_1}_k \leq \alpha^{a_2}_k \}} + 2 \, \mathbbm{1}_{\{ \alpha^{a_2}_k < \alpha^{a_1}_k \}} , && \text{[index of the player intervening at $\widetilde\tau_k$]} \\
& \widetilde\delta_k = \xi_{m_k} \big( \widetilde x^{k-1}_{\widetilde\tau_k} \big) \, \mathbbm{1}_{\{ \widetilde\tau_k < \infty \}} , && \text{[impulse]} \\
& x_k = \gamma^{m_k} \big( \widetilde x^{k-1}_{\widetilde\tau_k} , \widetilde\delta_k \big) \, \mathbbm{1}_{\{ \widetilde\tau_k < \infty \}} , && \text{[starting point for the next step]} \\
& \widetilde x^k = \widetilde x^{k-1} \, \mathbbm{1}_{[0, \widetilde\tau_k[} + y^{\widetilde\tau_k , x_k} \, \mathbbm{1}_{[\widetilde\tau_k , \infty[} . && \text{[controlled process up to the $k$-th intervention]}
\end{aligned}

let be the index of the last significant intervention, and let be the number of interventions of player : for and , let (the index of the -th intervention of player ) and let . finally, the controls , , the controlled process and the exit time from are defined by . to shorten the notations, we will simply write and . notice that player 1 has priority in the case of contemporary intervention (i.e., if ). in the following lemma we give a rigorous formulation to the properties outlined in . [lemmaprocess] let and let be a strategy for player .
- the process admits the following representation (with the convention ):
- the process is right-continuous. more precisely, is continuous and satisfies equation in , whereas is discontinuous in , where we have
- the process never exits from the set .
we just prove the first property in , the other ones being immediate. let , with , and set , with as in definition [controls]. by , and definition [controls], we have where in the fifth equality we have used the continuity of the process in and in the next-to-last equality we exploited the fact that in .
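to make the intervention mechanics above concrete, the sketch below simulates the controlled process on a time grid for a one-dimensional state: each player is described by a continuation interval and an impulse rule, the player whose region is left first intervenes (player 1 having priority on ties), and the game stops when the state leaves the overall domain. the euler discretization, the specific intervals, the coefficients, and the simplification of returning the post-intervention state directly (merging the impulse function and the shift map into one step) are all illustrative assumptions, not part of the paper's setup.

```python
import math, random

def simulate_impulse_game(x0=0.0, horizon=5.0, dt=1e-3, seed=0,
                          domain=(-4.0, 4.0),                 # game ends if the state leaves this set
                          regions=((-1.0, 1.5), (-1.5, 1.0)), # continuation regions of players 1 and 2
                          impulses=(lambda x: 0.5, lambda x: -0.5),  # post-intervention states (assumed rules)
                          drift=0.0, vol=1.0):
    """illustrative time-discretization of the two-player impulse-intervention procedure."""
    random.seed(seed)
    x, t, history = x0, 0.0, []
    while t < horizon:
        # diffuse while nobody intervenes (euler step for dx = drift*dt + vol*dw)
        x += drift * dt + vol * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if not (domain[0] < x < domain[1]):   # exit from the domain: the game ends
            break
        wants = [not (lo < x < hi) for lo, hi in regions]
        if wants[0]:                          # player 1 has priority on simultaneous exits
            x = impulses[0](x); history.append((t, 1, x))
        elif wants[1]:
            x = impulses[1](x); history.append((t, 2, x))
    return history

for t, player, new_x in simulate_impulse_game()[:5]:
    print(f"t = {t:.3f}: player {player} intervenes, new state = {new_x:+.2f}")
```

the returned history plays the role of the pair of impulse controls generated by the strategies; with different continuation intervals and impulse rules the same loop produces the controls associated to any pair of strategies of this threshold type.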
for each , provided that the right-hand side exists and is finite, we set , where with and is the impulse control of player associated to the strategies . as usual in control theory, the subscript in the expectation denotes conditioning with respect to the available information (hence, it recalls the starting point). notice that in the summations above we do not consider stopping times which equal (since the game ends in , any intervention is meaningless). in order for in to be well defined, we now introduce the set of admissible strategies in . [admstrat] let and be a strategy for player . we use the notations of definition [controls] and we say that the pair is -admissible if:
1. for every , the process exists and is uniquely defined;
2. for , the following random variables are in :
3. for each , the random variable is in :
4. if for some and , then ;
5. if there exists for some , then .
we denote by the set of the -admissible pairs. thanks to the first and the second conditions in definition [admstrat], the controls and the payoffs are well-defined. the third condition will be used in the proof of the verification theorem [thm:verification]. as for the fourth and the fifth conditions, they prevent each player from exercising twice at the same time and from accumulating interventions before . we conclude the section with the definition of nash equilibria and value functions for our problem. [defnash] given , we say that is a nash equilibrium of the game if . finally, the value functions of the game are defined as follows: if and a nash equilibrium exists, we set for . we now introduce the differential problem satisfied by the value functions of our games: this will be the key point of the verification theorem in the next section. let us consider an impulse game as in section [ssec:pb]. assume that the corresponding value functions are defined for each and that for there exists a (unique) function from to such that for each . we define the four intervention operators by for and , with . notice that . the functions in and have an immediate and intuitive interpretation. let be the current state of the process; if player (resp. player ) intervenes with impulse , the present value of the game for player can be written as (resp. ): we have considered the value in the new state and the intervention cost (resp. gain). hence, in is the impulse that player would use in case she wants to intervene. similarly, (resp. ) represents the value of the game for player when player (resp. player ) takes the best immediate action and behaves optimally afterwards.
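the verbal description of the intervention operators can be turned into a small numerical routine: given a candidate value function for one player on a grid, the best immediate intervention at a state maximizes the value at the shifted state minus the cost of the shift. the sketch below assumes, purely for illustration, a one-dimensional state, a shift of the form "state plus impulse", a constant-plus-proportional intervention cost, and a finite set of admissible impulses; none of these specific choices is prescribed by the paper.

```python
import numpy as np

def intervention_operator(value_fn, x, impulses, shift, cost):
    """best immediate intervention for one player: sup over impulses of
    value(shifted state) - cost(state, impulse), returned with the maximizer."""
    candidates = [(value_fn(shift(x, d)) - cost(x, d), d) for d in impulses]
    return max(candidates)   # (operator value, optimal impulse)

# illustrative ingredients (assumptions, not the paper's specification)
value_fn = lambda x: -abs(x)             # toy value function, peaked at x = 0
shift = lambda x, d: x + d               # shifted state: current state plus impulse
cost = lambda x, d: 0.5 + 0.1 * abs(d)   # fixed cost plus a proportional part
impulses = np.linspace(-3.0, 3.0, 121)

for x in (-2.0, 0.0, 2.0):
    m_val, d_star = intervention_operator(value_fn, x, impulses, shift, cost)
    action = "intervene" if m_val > value_fn(x) else "do not intervene"
    print(f"x = {x:+.1f}: operator value = {m_val:+.3f}, impulse = {d_star:+.2f} -> {action}")
```

comparing the operator value with the current value reproduces the heuristic discussed in the text: intervention is only worthwhile when the best shifted value, net of the intervention cost, exceeds the value of doing nothing.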
notice that it is not always optimal to intervene , so , for each , and that player should intervene ( with impulse , as already seen ) if and only if .hence , we have an heuristic formulation for the nash equilibria , provided that an explicit expression for is available .the verification theorem will give a rigorous proof to this heuristic argument .we now characterize the value functions .assume ( weaker conditions will be given later ) and define where are as in , denotes the transpose of and are the gradient and the hessian matrix of , respectively .we are interested in the following quasi - variational inequalities ( qvis ) for , where and : [ pbnew ] notice that there is a small abuse of notation in , as is not defined in , so that means , for each .we now provide some intuition behind conditions - .first of all , the terminal condition is obvious .moreover , as we already noticed , is a standard condition in impulse control theory . for [ pbnew - dj ] , if player intervenes ( i.e. , ) , by the definition of nash equilibrium we expect that player does not lose anything : this is equivalent to . on the contrary , if player does not intervene ( i.e. , ) , then the problem for player becomes a classical one - player impulse control one , hence satisfies . in short, the latter condition says that , with equality in case of non - intervention ( i.e. , ) .the functions can be unbounded .indeed , this is the typical case when the penalties depend on the impulse : when the state diverges to infinity , one player has to pay a bigger and bigger cost to push the process back to the continuation region .this corresponds to a strictly decreasing value function ( whereas the value of the game is strictly increasing for the competitor , who gains from the opponent s intervention ) . as a comparison, we recall that in one - player impulse problems the value function is usually bounded from above .finally , we notice that the operator appears only in the region , so that needs to be of class only in such region ( indeed , this assumption can be slightly relaxed , as we will see ) .this represents a further difference with the one - player case , where the value function is asked to be twice differentiable almost everywhere in , see ( * ? ? ?6.2 ) . a verification theorem will be provided in the next section . here , as a preliminary check on the problem we propose ,we show that we are indeed generalizing the system of qvis provided in , where the zero - sum case is considered .we show that , if we assume then the problem in collapses into the one considered in .to shorten the equations , we assume ( this makes sense since in a finite - horizon problem is considered ) .first of all , we define for each . 
it is easy to see that , under the conditions in, we have so that problem writes [ pbtemp ] simple computations , reported below , show that problem is equivalent to [ pbcosso ] which is exactly the problem studied in , as anticipated .we conclude this section by proving the equivalence of and .problems and are equivalent ._ we prove that implies .the only property to be proved is ( [ pbcosso - qvi ] ) .we consider three cases .first , assume .since and , we have , which implies ( [ pbcosso - qvi ] ) since .then , assume .since and , we have , which implies ( [ pbcosso - qvi ] ) since .finally , assume .since and , we have , which implies ( [ pbcosso - qvi ] ) since ._ we prove that implies .the only properties to be proved are ( [ pbtemp - m ] ) , ( [ pbtemp - mh ] ) and ( [ pbtemp - h ] ) .we assume ( the case being immediate ) and consider three cases .first , assume .since , from ( [ pbcosso - qvi ] ) it follows that , which implies . then , assume .since for every , and since , from ( [ pbcosso - qvi ] ) it follows that .finally , assume . from ( [ pbcosso - qvi ] )it follows that , which implies since .we provide here the main mathematical contribution of this paper , which is a verification theorem for the problem formalized in section [ ssec : pb ] .[ thm : verification ] let all the notations and working assumptions in section [ ssec : pb ] be in force and let be a function from to , with .assume that holds and set , with as in .moreover , for assume that : * is a solution to - ; * and it has polynomial growth ; * is a lipschitz surface and has locally bounded derivatives near . finally , let and assume that , where with , the set is as above and the function is as in . then , basically , we are saying that the nash strategy is characterized as follows : player intervenes if and only if the controlled process exits from the region ( equivalently , if and only if , where is the current state ) .when this happens , his impulse is . in the case of such ( candidate ) optimal strategies, we notice that the properties in lemma [ lemmaprocess ] imply what follows ( the notation is heavy , but it will be crucial to understand the proof of the theorem ) : [ prop2 ] for every strategies such that , every and every , . by definition [ defnash ], we have to prove that for every and strategies such that and .we show the results for and , the arguments for and being symmetric ._ step 1 : ._ let be a strategy for player 1 such that . herewe will use the following shortened notation : thanks to the regularity assumptions and by standard approximation arguments , it is not restrictive to assume ( see ( * ? ? ?* thm . 3.1 ) ) . for each and , we set where is the exit time from the ball with radius . we apply it s formula to the function , integrate in the interval ] .let us now consider the second term : by ( [ pbnew - s ] ) and the definition of in , for every stopping time we have as for the third term , let us consider any stopping time . by ( [ prop2]f ) we have ; hence , the condition in ( [ pbnew - dj ] ) , the definition of in and the expression of in ( [ prop2-d2 ] ) imply that by and the estimates in - it follows that . \end{gathered}\ ] ] thanks to the conditions in , and the polynomial growth of , we can use the dominated convergence theorem and pass to the limit , first as and then as . 
in particular , for the fourth termwe notice that for suitable constants and ; the corresponding limit immediately follows by the continuity of in the case and by itself in the case ( as a direct consequence of , we have a.s . ) .hence , we finally get = j^1(x;{\varphi}_1,{\varphi}_2^*).\end{gathered}\ ] ] _ step 2 : . _we argue as in step 1 , but here all the inequalities are equalities by the properties of . when solving the qvi problem , one deals with functions which are piecewise defined , as it will be clear in the next sections .then , the regularity assumptions in the verification theorem correspond to suitable pasting conditions , leading to a system of algebraic equations. if the regularity conditions are too strong , the system has more equations than parameters , making the application of the theorem more difficult .hence , a crucial point when stating a verification theorem is to set regularity conditions which allow such a system to actually have a solution . in (* section 3.3 ) a simple example shows that the regularity conditions we impose lead to an algebraic system with as many equations as parameters , so that a solution exists , at least formally . moreover , we observe that , unlike one - player control impulse problems , in our verification theorem the candidates are not required to be twice differentiable everywhere . for example , consider the case of player 1 : as in the proof we always consider pairs of strategies in the form , by the controlled process never exits from , which is then the only region where the function needs to be ( almost everywhere ) twice differentiable in order to apply it s formula .we now address the optimization problem of energy retailers who wants to maximize their expected profits , by increasing or decreasing the price they charge their customers for the consumption of electricity . in section [ sec : oneplayer ] , as a warm - up , we consider a simpler but enlightening one - player version of the problem . in section [ sec : twoplayer ] we will turn to a two - player competitive market and we will focus on a nonzero - sum impulse game , that can be embedded in the setting presented in section [ sec : stochimpgame ] , so that results therein will serve us as a guide to perform our analysis .the problem we study in this section has a long tradition ( see ) and it is in particular very similar to the one in ( see also the references therein ) .nevertheless , we give all the mathematical details ( most of them in the appendix ) in order to keep this section self - contained .more precisely , the article solves an optimal control problem of an inventory where the state variable is a mean - reverting process , the running cost is quadratic in the state variable and the switching costs are piecewise linear in the impulse size . our problem could be seen as a limiting case of theirs when both the proportional switching costs and the mean - reverting part of the state variable tend to zero .we also notice that the running cost in our model is more general than in .[ [ formulation - of - the - problem . 
] ] formulation of the problem .+ + + + + + + + + + + + + + + + + + + + + + + + + + + let us consider a retailer who buys energy ( electricity , gas , gasoline ) on the wholesale market and resells it to final consumers .we address the problem of investigating the retailer s optimal strategy in setting the final price and we model it as an impulse stochastic control problem .as anticipated , the retailer buys the commodity in the wholesale market .we assume that the continuous - time price of the commodity is modelled by a brownian motion with drift : for , where and are fixed constants .the standard brownian motion is defined on a probability space , which is equipped with the natural filtration generated by itself and made -complete ( hence right - continuous ) .notice that the retailer has no control on the wholesale price . after buying the energy ,the retailer sells it to final consumers . according to the most common contracts in energy markets ,the retailer can change the price only after a written communication to all her customers .then , we model the final price by a piecewise - constant process .more precisely , we consider an initial price and a sequence of non - negative random times , which correspond to the retailer s interventions to adjust the price and move to a new state . if we denote by the corresponding impulses , i.e. , , we have for every .let us denote by the difference or spread between the final price and the wholesale price .in other words , represents the retailer s unitary income when selling energy ( we do not consider , for the moment , the operational costs she faces ) . by and , we have for every , where we have set .we remark that , when the player does not intervene , the process satisfies the following stochastic differential equation : we assume that the retailer s market share at time is a function of , which we denote by . in our model , we set for every , where is a fixed constant .in other words , the market share is a truncated linear function of with two thresholds : if ( in which case the final price of the retailer is lower than the wholesale price ) all the customers buy energy from the retailer , whereas if the retailer has lost all her customers . at each time , the retailer s income from selling the energy is given by , but she also has to pay an operational cost , which we assume to be a quadratic function of the market share .hence , the instantaneous payoff is given by where is the current state of the process and is a positive real constant .moreover , there is a constant penalty to be paid when the retailer intervenes to adjust .finally , we denote by be the discount rate . to sum up, we consider here the following stochastic impulse control problem .[ def : admissiblecontrols ] a control is a sequence , where , is a non - decreasing non - negative family of stopping times ( the intervention times ) and are real random variables ( the corresponding impulses ) such that is -measurable for all .we denote by the set of admissible controls , that is the set of controls such that < \infty.\ ] ] for each and , we denote by the process defined in . [ def : jandv ] the function ( value function ) is defined , for each , by where , for every ,\ ] ] and the function has been defined in .if there exists such that , we say that is an optimal control in .notice that the functional in is well - defined , as is bounded and holds . 
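to fix ideas, the sketch below codes one plausible reading of the ingredients just introduced: the spread follows a scaled brownian motion between interventions (zero drift for simplicity), the market share is the truncated linear function described above (equal to 1 at or below zero spread and to 0 at or above the threshold), and the instantaneous payoff is sale revenue minus a quadratic structure cost. the exact functional forms, in particular the factor 1/2 in the cost term and the parameter values, are assumptions consistent with the properties stated in the remarks that follow, not formulas taken verbatim from the text.

```python
import math, random

DELTA = 1.0   # spread at which the retailer has lost all customers (illustrative)
B = 0.5       # structure-cost coefficient (illustrative)

def market_share(x, delta=DELTA):
    """truncated linear market share: 1 for x <= 0, 0 for x >= delta, linear in between."""
    return min(max(1.0 - x / delta, 0.0), 1.0)

def running_payoff(x, b=B, delta=DELTA):
    """sale revenue minus quadratic structure cost (assumed form x*phi - (b/2)*phi**2)."""
    phi = market_share(x, delta)
    return x * phi - 0.5 * b * phi ** 2

def uncontrolled_spread(x0, sigma=0.2, dt=1e-3, n_steps=2000, seed=0):
    """spread between two consecutive interventions: scaled brownian motion, zero drift."""
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x -= sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

grid = [i * DELTA / 100 for i in range(101)]
x_best = max(grid, key=running_payoff)
print(f"payoff-maximizing spread ~ {x_best:.2f}, corresponding market share ~ {market_share(x_best):.2f}")
path = uncontrolled_spread(x_best)
print(f"spread after {len(path) - 1} uncontrolled steps: {path[-1]:.3f}")
```

with the illustrative parameters above, the payoff-maximizing spread sits strictly between zero and the threshold and the optimal share is below one half, in line with the qualitative remarks made below on the effect of the operational cost.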
to shorten the notations, we will often omit the dependence on the control and simply write .we now list some remarks about the payoff and the penalty of our problem : these properties will be useful for stating and proving our results . * an explicit expression for the running cost is for every , where we have set in particular , we remark that we have , for every . *the function in is a concave parabola : where is as in and the vertex is given by : from the retailer s point of view , equation says that is the state which maximizes the payoff , the optimal income being .notice that the optimal share is given by in particular , if the optimal share is .* moreover , we notice that x_z = \frac{b\delta}{2\delta+b} ] . in other words , if we want the income from the energy sale to be higher than the operational costs , we need the spread between the wholesale price and the final price to be greater than . * finally , if we consider as functions of , we notice that , \delta/4 ] , & & y_v(0 ) = \delta/4 , & & y_v(+\infty ) = 0 , & & y_v'<0 , \\ & x_z(b ) \in \,\ , ] 0 , \delta [ , & & x_z(0 ) = 0 , & & x_z(+\infty ) = \delta , & & x_z'>0 , \\ & \phi_v(b ) \in \ , ] 0,1/2 [ , & & \phi_v(0 ) = 1/2 , & & \phi_v(+\infty ) = 0 , & & \phi_v'<0 .\end{aligned}\ ] ] some intuitive properties of the model are formalized in : as the operational costs increase the optimal spread increases , the maximal instantaneous income decreases , the region where the payoff is positive gets smaller and the optimal share decreases . in particular, we remark that ,1/2[ ] : ,\delta[.\ ] ] as a consequence , when dealing with the continuation region , we will consider as the running cost of the problem the restriction of the function ,\delta [ } = f ] and it is included in , \delta[ , where the retailer intervenes , } \\\text{ , where the retailer does not intervene.}\end{gathered}\ ] ] then , the qvi problem suggests the following candidate for : {\underline x } , { \bar x}[ x \in { \mathbb{r}}\setminus ] { \underline x } , { \bar x}[ ] , where ) and where the function ( recall definition [ defmvonepl ] ) is given by heuristically , it is reasonable to assume that the function has a unique maximum point , which belongs to the continuation region {\underline x},{\bar x}[},\\ { \varphi}_{a_1,a_2}(x^ * ) - c , & \text{in } , \end{cases}\ ] ] where is as in and the five parameters satisfy and the following conditions : in order to have a well - posed definition , we first need to prove that a solution to actually exists .since the system can not be solved directly , we try to make some guesses to simplify it .consider the structure of the problem : the running cost is symmetric with respect to ( see the formulas ( [ deffparab ] ) and ( [ defvertexf ] ) ) , the penalty is constant , the uncontrolled process is a scaled brownian motion ( recall that ) .then , we expect the value function to be symmetric with respect to , which corresponds to the choice .the same argument suggests to set . finally ,as a symmetry point is always a local maximum or minimum point , we expect . in short ,our guess is with . in particular ,we now consider functions in the form where and the coefficients and have been defined in and . 
indeed, an easy check shows that is a local maximum for ( so that the first condition in is satisfied ) if and only if .then , under our guess , we can equivalently rewrite the system as with and .equivalently , we have to solve in order to simplify the notations , we operate a change of variable and set , so that we have a e^|y - a e^- |y - 2k_2|y=0 , [ sista ] + a e^|y + a e^- |y - k_2 |y^2 - 2a + c = 0 , [ sistb ] where and . finally , notice that the order condition now reads so , in order to prove that is well - defined it suffices to show that a solution to -- exists and is unique .the proof of the following proposition is in the appendix .[ res : gooddefonepl1st ] assume , with , where is a suitable function defined in . then , the function in definition [ def : candidateonepl1st ] is well - defined , namely there exists a solution to system , which is given by where is as in and is the unique solution to -- .we conclude this section with an application of the verification theorem in proposition [ prop : verificationonepl ] , which yields that the candidate defined in the previous section actually corresponds to the value function .moreover , we characterize the optimal price management policy : the retailer has to intervene if and only if the process hits or and , when this happens , she has to shift back to the state .the proof of the next result is postponed to the appendix .[ prop : checkverifonepl1st ] let hold and let be as in definition [ def : candidateonepl1st ] . for every ,an optimal control for the problem in definition [ def : jandv ] exists and is given by , where the variables are recursively defined by for , where we have set and .moreover , coincides with the value function : for every we have for more details and results on the one - player model including the nonzero drift case ( ) as well as the asymptotic analysis of the value function and the optimal strategy as we refer to .we now extend the one - player model in section [ sec : oneplayer ] to a competitive two - player energy market model , getting a nonzero - sum stochastic game with impulse controls , which is a special case of the general framework in section [ sec : stochimpgame ] . after setting the model in section [ ssec : twoplformulation ] , we provide in section [ ssec : twoplcandidate ] a system of equations to be solved in order to fully determine the value function .the one - player model in section [ sec : oneplayer ] has the advantage of being mathematically tractable .however , it does not fully reproduce the fierce competition which characterizes modern deregulated energy markets : the interaction between opposing retailers is only implicitly considered ( the player s market share decreases as her price rises ) . motivated by this fact , we now modify our model by introducing a second player .hence , we assume that the retail market is made up of two opponent players , indexed by . similarly to section [ sec : oneplayer ] , each retailer buys energy at the wholesale price , with as in , and resells it to her customers at a final price , with resembling : for each , where is the initial wholesale price , and are fixed constants , is a one - dimensional brownian motion , is the initial retail price and is the impulse control corresponding to the retailer s interventions on the final price , as in section [ sec : oneplayer ] .notice that the retailers buy the energy from the same provider and that they do not influence the wholesale price . 
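before proceeding with the two - player analysis , we pause to illustrate how the reduced one - player system -- can be solved in practice . as transcribed in ( [ sista ] ) and ( [ sistb ] ) , it is a pair of nonlinear equations in the two unknowns and , which a standard root finder handles directly . in the python sketch below the rescaled constants are treated as given inputs ( their expressions in terms of the model parameters are the ones defined above ) , and the numerical values used are placeholders for illustration only .

import numpy as np
from scipy.optimize import fsolve

def reduced_system(unknowns, k2, c_tilde):
    # system (sista)-(sistb) in the unknowns A and y_bar (y_bar > 0 sought)
    A, y = unknowns
    eq1 = A * np.exp(y) - A * np.exp(-y) - 2.0 * k2 * y                     # (sista)
    eq2 = A * np.exp(y) + A * np.exp(-y) - k2 * y ** 2 - 2.0 * A + c_tilde  # (sistb)
    return [eq1, eq2]

k2, c_tilde = 1.0, 0.2            # placeholder values of the rescaled constants
A, y_bar = fsolve(reduced_system, x0=[0.5, 1.0], args=(k2, c_tilde))
print("A =", A, " y_bar =", y_bar)
# the intervention thresholds of the band policy are then recovered by
# undoing the change of variables, y_bar being proportional to x_bar - x_star

once the thresholds are available , they can be fed to the monte carlo routine of the previous sketch as a consistency check .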
in order to have a realistic model ,the market share of player depends on the price she asks ( as in the one - player case ) and on the opponent s pricing choices as well .in particular , if the two retailers fix the same final price then both market shares are equal to , whereas a lower price with respect to the competitor should correspond to an increase in the number of customers .let , with , and let be a fixed constant . in our modelwe assume that the market share of retailer at time is , where for every .in other words , the market share of player is a truncated linear function of , with two thresholds : if the retailer has the monopoly of the market , whereas if the retailer has lost all his customers .notice that the market share function in is not the same as in the case of one player ( recall equation ) .this is clearly due to the presence of a second market actor , which leads to an expansion of the domain ( from to ) where takes values in ,1[ ] , where are suitable functions .hence , we guess that ( the intervention region of player 1 ) is given by {\underline x}_1(x_2),{\bar x}_1(x_2 ) [ \big\} ] , but we have to exclude from such region the points where player 1 already intervenes ( in case of contemporary intervention , player 1 has the priority ) .finally , the common continuation region is just the complement of such sets . in short, see also figure [ plane ] , we have -\infty,{\underline x}_1(x_2 ) ] \cup [ { \bar x}_1(x_2),+\infty [ \big\ } = : r , \\ \big \ { { \mathcal{m}}_2 \widetilde v_2 - \widetilde v_2 = 0\big\ } = \big\{(x_1,x_2 ) : x_1 \in ] { \underline x}_1(x_2),{\bar x}_1(x_2 ) [ , \,\ , x_2 \in ] -\infty,{\underline x}_2(x_1 ) ] \cup[ { \bar x}_2(x_1),+\infty [ \big\ } = : b , \\{ { \mathcal{m}}_1 \widetilde v_1 - \widetilde v_1 < 0 , { \mathcal{m}}_2 \widetilde v_2 - \widetilde v_2 < 0\big\ } = \big\{(x_1,x_2 ) : x_1 \in ] { \underline x}_1(x_2),{\bar x}_1(x_2 ) [ , \,\ , x_2 \in ] { \underline x}_2(x_1),{\bar x}_2(x_1)[\big\ } = : w.\end{gathered}\ ] ] we recall once more the interpretation : is the region where player 1 intervenes ( red area in the picture ) , is the region where player intervenes ( blue area in the picture ) , is the region where no one intervenes ( white area in the picture ) . bywe then get ( red ) , ( blue ) , ( white ) depending on possible players interventions . ] to go on , we need to estimate and the operators .let us start with .the only differential operator in is , which suggests the change of variable and , so that the pde becomes a second - order linear ode in the variable , which is easily solvable for each fixed .then , after reintroducing the original variables , we get the following solution to : where are suitable real functions , are the two roots of and ( the particular solution to the corresponding ode ) is we now estimate and , where the operators have been defined in .let and let us start from the functions in : the definition here reads heuristically , it is reasonable to assume that the function has a unique maximum point and that this point belongs to the continuation region ( where , by definition , ) ; we can argue similarly for .thus we get then , by the definition in we have for each .we finally get the following ( heuristic ) candidates for the value functions : notice that the derivative of ( that is in the region ) is [ [ conditions . 
] ] conditions .+ + + + + + + + + + + we now collect all the conditions the candidates have to satisfy .we just write the equations for , the ones for being symmetric .more in detail , we need to set the optimality condition for and to impose the regularity required in the assumptions of theorem [ thm : verification ] , that is ( is the continuation region of player ) let be the intersections of the four curves , as in figure [ plane ] . as for the condition , we have to set a -pasting in the boundaries between the three regions , that are the curved segments and the two vertical curves . as for the condition ( is the central horizontal strip ) , we have to add a -pasting in the segments and . as for the condition , it is satisfied by definition .* _ optimality of . _ as is by definition the maximizer of , for each we have the following first - order condition : * _ continuity ._ we first set the continuity on the curve ( left vertical curve in the picture ) .the function has two different expressions in the central vertical strip , one in the white region and one in the blue region , so that we need two separate continuity conditions , one in the segment and one outside such segment : , \\ & { \varphi}_1\big({x^*}_1(x_2),x_2\big )- c_1 = { \varphi}_1\big({\underline x}_1(x_2),{x^*}_2\big({\underline x}_1(x_2)\big)\big ) , & & x_2 \in { \mathbb{r}}\setminus \big[x_2^a , x_2^b\big ] .\end{aligned}\ ] ] similarly , for the continuity on the curve ( right vertical curve in the picture ) : , \\ & { \varphi}_1\big({x^*}_1(x_2),x_2\big ) - c_1 = { \varphi}_1\big({\bar x}_1(x_2),{x^*}_2\big({\bar x}_1(x_2)\big)\big ) , & & x_2 \in { \mathbb{r}}\setminus \big[x_2^d , x_2^c\big ] .\end{aligned}\ ] ] we now set the continuity on the segment , which belongs to the curve ( lower horizontal curve in the picture ) : ^a , x_1^d\big[.\ ] ] similarly , for the continuity on the segment , which belongs to the curve ( upper horizontal curve in the picture ) : ^b , x_1^c\big[.\ ] ] * _ differentiability . _ we now set a -pasting on the segment , which belongs to the curve ( left vertical curve in the picture ) .as is a two - dimensional function , we need to set one condition for each derivative ( for we use and notice that the first term is zero because of the optimality condition ) : , \\ & \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({\underline x}_1(x_2),x_2\big ) = \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({x^*}_1(x_2),x_2\big ) , & & x_2 \in \big[x_2^a , x_2^b\big ] .\end{aligned}\ ] ] similarly , for the -pasting on the segment , which belongs to the curve ( right vertical curve in the picture ) : , \\ & \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({\bar x}_1(x_2),x_2\big ) = \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({x^*}_1(x_2),x_2\big ) , & & x_2 \in \big[x_2^d , x_2^c\big ] . 
\end{aligned}\ ] ] we can finally collect all the conditions our candidate function must satisfy : , \\ { \varphi}_1\big({x^*}_1(x_2),x_2\big )= { \varphi}_1\big({\underline x}_1(x_2),{x^*}_2\big({\underline x}_1(x_2)\big)\big ) + c_1 , & x_2 \in { \mathbb{r}}\setminus \big[x_2^a , x_2^b\big ] , \\ { \varphi}_1\big({x^*}_1(x_2),x_2\big ) = { \varphi}_1\big({\barx}_1(x_2),x_2\big ) + c_1 , & x_2 \in \big[x_2^d , x_2^c\big ] , \\ { \varphi}_1\big({x^*}_1(x_2),x_2\big )= { \varphi}_1\big({\bar x}_1(x_2),{x^*}_2\big({\bar x}_1(x_2)\big)\big ) + c_1 , & x_2 \in { \mathbb{r}}\setminus \big[x_2^d , x_2^c\big ] , \\ { \varphi}_1\big(x_1,{x^*}_2(x_1)\big )= { \varphi}_1\big(x_1,{\underline x}_2(x_1)\big ) , & x_1 \in \big]x_1^a , x_1^d\big [ , \\ { \varphi}_1\big(x_1,{x^*}_2(x_1)\big )= { \varphi}_1\big(x_1,{\bar x}_2(x_1)\big ) , & x_1 \in \big]x_1^b , x_1^c\big [ , \\\big(\frac{\partial { \varphi}_1}{\partial x_1}\big ) \big({\underline x}_1(x_2),x_2\big ) = 0 , & x_2 \in \big[x_2^a , x_2^b\big ] , \\\big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({\underline x}_1(x_2),x_2\big ) = \big(\frac{\partial { \varphi}_1}{\partial x_2}\big )\big({x^*}_1(x_2),x_2\big ) , & x_2 \in \big[x_2^a , x_2^b\big ] , \\ \big(\frac{\partial { \varphi}_1}{\partial x_1}\big )\big({\bar x}_1(x_2),x_2\big ) = 0 , & x_2 \in \big[x_2^d , x_2^c\big ] , \\ \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({\bar x}_1(x_2),x_2\big ) = \big(\frac{\partial { \varphi}_1}{\partial x_2}\big ) \big({x^*}_1(x_2),x_2\big ) , & x_2 \in \big[x_2^d , x_2^c\big ] . \end{cases}\ ] ] then , we have to consider the 11 equations above , along with the corresponding ones for .therefore if a solution to such system exists , then we have a well - defined candidate and we can safely apply the verification theorem , as we did in the one - player case . this problem remains still open .in particular , concerning the above system for , we considered the three test cases when and are : constant , linear and quadratic functions of . in none of these casewe could find a satisfactory answer to our problem , which would most probably require the use of viscosity solutions as in in order to go beyond the case of smooth value functions .this is postponed to future research .we focus here on the case when one of the two players , say player , never changes her retail price ( in this sense she is a stubborn competitor ) .therefore her retail price is constant , i.e. for every .this can be artificially seen as a particular case of the two - player retail game of the previous section , supposing that player has an infinite intervention cost , . in other terms ,player 2 intervention cost is so high that it is never optimal for her to change the retail price .moreover , in order to base our intuition on the results we obtained in the one - player case , we assume that the wholesale price is driftless , i.e. . in this situation the objective functional of player ( recall equation ) is given by ,\end{aligned}\ ] ] for every initial state ( recall that ) and every strategy ( recall that player does not intervene ) .equivalently , choosing as state variables and , the above functional reads \end{aligned}\ ] ] for every initial state , where and , every strategy and where . +the problem is clearly simplified , since only one market actor is playing the game . nevertheless , the setting is still bi - dimensional , since the state variables are and .notice that the impulses of player modify only the state . 
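the reduced problem with a stubborn competitor lends itself to a quick numerical illustration . in the python sketch below , player 2 keeps her retail price fixed while player 1 follows a candidate band policy on her spread , and we track player 1 s market share along a simulated wholesale - price path . the parametrisation of the two - player market share ( truncated linear in the price difference , equal to 1/2 when the two prices coincide ) and all numerical values are assumptions made for illustration only .

import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (placeholders)
delta, sigma, dt, T = 1.0, 0.3, 1e-3, 10.0
p2_bar = 2.0                     # constant retail price of the stubborn player 2

def share1(p1, p2):
    # assumed market share of player 1: 1/2 when prices coincide,
    # monopoly / exit from the market at a price gap of -/+ delta
    return np.clip(0.5 - (p1 - p2) / (2.0 * delta), 0.0, 1.0)

# candidate band policy of player 1 on her spread x1 = p1 - s
x_lo, x_star, x_hi = 0.1, 0.5, 0.9

s, p1 = 1.0, 1.5
shares = []
for k in range(int(T / dt)):
    if p1 - s <= x_lo or p1 - s >= x_hi:     # player 1 resets her spread
        p1 = s + x_star
    shares.append(share1(p1, p2_bar))
    s += sigma * np.sqrt(dt) * rng.standard_normal()   # driftless wholesale price

print("average market share of player 1:", np.mean(shares))

adding a positive drift to the wholesale price in this sketch reproduces the squeeze on player 1 s market share that is discussed at the end of this section .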
from now onwe consider only the maximization problem of player . since there is no ambiguity about which player is optimizing her objective , we drop both the subscript and the superscript from the notation . for every , player value function is defined as the supremum of over all admissible strategies .analogously to the heuristics in the previous section , it is reasonable to assume that the continuation region is included in the strip and the candidate for the value function is ( recall that player never intervenes because she has infinite intervention cost ) : where and where solves with hence we expect to be of the form where and are suitable real functions , and is a particular solution of the ode ( [ odephi ] ) .we conjecture that the continuation region is \underline p(s ) , \overline p(s ) [ \ } ] , she intervenes to push her retail price towards a target price , where , for , is obtained as the maximizer of the function in ( [ f(p , s ) ] ) since , being , the optimizer of is the same as the maximiser of ( compare to proposition [ res : gooddefonepl1st ] ) .it is also reasonable to assume that this maximum point is unique and it belongs to the continuation region , so that we have a simple computation gives .\ ] ] essentially , each time her price falls outside the continuation region , she intervenes to push the price towards the target .moreover , intervention costs being fixed for player , we guess that and are equidistant from in as in the one - player case ( see proposition [ res : gooddefonepl1st ] ) , hence in the continuation region .now , notice that and that the point belongs to the boundary of the intervention region .moreover for any price the market share of player is zero , so that the set is contained in the intervention region . since and are equidistant from at all point in the continuation region , they have to be equidistant from at the point as well , which implies that at the point .this situation is summarized in figure [ figurec2infty ] , where and intersect at the point , the continuation region is in white ( w ) and the intervention area is in red ( r ) .( red ) and ( white ) depending on possible player interventions in the case of a stubborn competitor .] more precisely , we have : \underline p(s),\overline p(s ) [ , \,\ , s < s_a \big\ } = w , \\ \big \{ { \mathcal{m}}\widetilde v - \widetilde v = 0\big\ } = r = w^c,\end{gathered}\ ] ] where we set .so , the candidate function is ( notice that ) : the regularity conditions of the value function required by the verification theorem are : , so that it suffices to ask optimality of and a and a -pasting at the frontier of : for all we have as the points belong to the continuation region for each , we have moreover , as the point belongs to the intervention region , we have . since is continuous by definition , we deduce that is not continuous at , giving one more argument urging the use of viscosity solutions for a rigorous treatment of such models . from an economical point of view, this situation makes sense : by keeping her retail price constant when the sourcing cost increases , player 2 forces her opponent ( player 1 ) to increase unilaterally her price and thus to lose a bigger and bigger market share until she exits the market .this is a strategy that could be implemented by financially sound players ( i.e. 
able to endure financial losses due to a retail price lower than the sourcing cost ) to push their weak competitors out of the market .in this paper we consider a general two - player nonzero - sum impulse game , whose state variable follows a diffusive dynamics driven by a multi - dimensional brownian motion . after setting the problem , we provide a verification theorem giving sufficient conditions in order for the solutions of a suitable system of quasi - variational inequalities to coincide with the value functions of the two players . to the best of our knowledgethis result is new to the literature on impulse games and it constitutes the major mathematical contribution of the present paper .the general setting is motivated by a model of competition among retailers in electricity markets , which we also treat in both one - player and two - player cases .while in the one - player case we gave a full rigorous treatment of the impulse control problem , in the two - player case we provide a detailed heuristic study of the shape of the value functions and their optimal strategies .making the heuristics fully rigorous would most probably require the use of viscosity solutions , which looks far from being an easy extension of the methods employed in for zero - sum impulse games .this is left to future research .99 m. basei , _ topics in stochastic control and differential game theory , with application to mathematical finance_. ph.d .thesis in mathematics , university of padova ( 2016 ) .a. bensoussan , a. friedman , _ nonzero - sum stochastic differential games with stopping times and free boundary problems _ , trans .society 231 ( 1977 ) , no .2 , 275327 . a. bensoussan , j. l. lions , _contrle impulsionnel et inquations quasi variationnelles _ ,1 , dunod , paris , 1982 .a. cadenillas , p. lakner , p. pinedo , _ optimal control of a mean - reverting inventory _ , operations research 58(9 ) ( 2010 ) , 16971710 .d. chang , h. wang , z. wu , _ maximum principle for non - zero sum differential games of bsdes involving impulse controls _ , control conference ( ccc ) , 2013 32nd chinese , ieee , 15641569 .d. chang , z. wu , _stochastic maximum principle for non - zero sum differential games of fbsdes with impulse controls and its application to finance _, journal of industrial and management optimization , 11.1 ( 2015 ) , 2740 .n. chen , m. dai , x. wan , _ a nonzero - sum game approach to convertible bonds : tax benefit , bankruptcy cost , and early / late calls _, mathematical finance 23 ( 2013 ) , no .1 , 5793 .a. cosso , _ stochastic differential games involving impulse controls and double - obstacle quasi - variational inequalities _ , siam j. control optim .51 ( 2013 ) , no .3 , 21022131 . t. de angelis , g. ferrari , j. moriarty . _ nash equilibria of threshold type for two - player nonzero - sum games of stopping_. preprint ( 2015 ) , arxiv:1508.03989v1 [ math.pr ] .a. friedman , _ stochastic games and variational inequalities _rational mech .51 ( 1973 ) , no . 5 , 321346 .p. joskow , j. tirole , _ electricity retail competition _ , rand journal of economics , 37 ( 2006 ) , no .4 , 799815 . b.k .ksendal , a. 
sulem , _ applied stochastic control of jump diffusions , second edition _ , springer - verlag , berlin - heidelberg , 2007 .von der fehr , p.v .hansen , _ electricity retailing in norway _ , the energy journal 13 ( 2010 ) , no .1 , 2545 .in this appendix we have gathered all auxiliary results and proofs that have been used in the one - player section [ sec : oneplayer ] .proposition [ res : gooddefonepl1st ] follows from lemmas [ lem : exist ] and [ lem : ordercond ] ._ first step ._ let us start by equation .for a fixed , we are looking for the strictly positive zeros of the function defined by for each .the derivative is we need to consider two cases , according to the value of .let if we have in ,\infty[ ] and in \widetilde y , \infty[ ] if and only if , \bar a[ ] , we define where is well - defined by the first step . we are going to prove that this concludes the proof : indeed ,if holds , it follows that the equation , which is just a rewriting of , has exactly one solution , \bar a[ ] ( since we have by definition and in ^ * , { \bar x}[.}\ ] ] moreover , as by and } { \varphi}_a,}\ ] ] which concludes the proof . _ condition ( iii ) ._ we have to prove that for every we have in {\underline x } , { \bar x}[ ] , we already know by that . then , to conclude we have to prove that {\underline x } , { \bar x}[ ] . as a consequence, we can decompose each variable as a sum of suitable exit times from {\underline x } , { \bar x}[ ] .then , we have and for every , where the variables are independent and distributed as . as a consequence ,we have = { \mathbb{e}_{x}}\bigg[\sum_{k \geq 2 } e^{- \rho \big(\zeta^x + \sum_{l=1}^{k-1 } \zeta^{x^*}_l\big ) } \bigg ] = { \mathbb{e}_{x}}\bigg [ e^{-\rho \zeta^x}\sum_{k \geq 2 } \prod_{l=1,\dots , k-1 } e^{-\rho \zeta^{x^*}_l } \bigg].\ ] ] by the fubini - tonelli theorem and the independence of the variables : = { \mathbb{e}_{x}}\big [ e^{-\rho \zeta^x}\big]\sum_{k \geq 2 } \,\ , \prod_{l=1,\dots , k-1 } { \mathbb{e}_{x}}\big [ e^{-\rho \zeta^{x^*}_l } \big].\ ] ] as the variables are identically distributed with , we can conclude : = \sum_{k \geq 2 } { \mathbb{e}_{x}}\big [ e^{-\rho \zeta^{x^ * } } \big]^{k-1 } < \infty,\ ] ] which is a converging geometric series .
|
we study the notion of nash equilibrium in a general nonzero - sum impulse game for two players . the main mathematical contribution of the paper is a verification theorem which provides , under some regularity conditions , the system of quasi - variational inequalities identifying the value functions and the optimal strategies of the two players . as an application , we propose a model for the competition among retailers in electricity markets . we first consider a simplified one - player setting , where we obtain a quasi - explicit expression for the value function and the optimal control . then , we turn to the two - player case and provide a detailed heuristic analysis of the retail impulse game , conducted along the lines of the verification theorem obtained in the general setting . this allows us to identify reasonable candidates for the intervention and continuation regions of both players and for their strategies . + * keywords : * stochastic differential game , impulse control , nash equilibrium , quasi - variational inequality , retail electricity market .
|
in the past few decades , a series of attractive achievements in the quantum information field have been made . quantum computation and quantum communicationwill no longer be a dream . in quantum communication, there are some important protocols have been proposed , such as quantum teleportation , quantum key distribution , quantum state sharing , quantum secure direct communication , and so on .photon is the best candidate to carry and distribute quantum information , because it has fast transmission speed and is easy to control .unfortunately , the unavoidable absorption and scattering in a transmission quantum channel places a serious limitation on the length of the communication distances .the photon loss becomes one of the main obstacles in long - distance quantum communication .it not only decreases the efficiency of the communication , but also will make the communication insecure , because the detection loophole . during the past decades , people developed two powerful quantum technologies to resist the photon loss .the first quantum technology is the quantum repeaters . by dividing the whole channel into several segments ,they first generate the entanglement in each segment . finally , by entanglement swapping , they can set up the entanglement in the whole distance .the second quantum technology which will be detailed in this paper is the quantum state amplification . though the quantum repeater can extend the entanglement between the adjacent segments , they still require to distribute entanglement in each short - distance segment . in this way , during the transmission, the photons will lose with some probability . briefly speaking, the photon loss will cause the single photon degrade to a mixed state as , which means the photon may be completely lost in the probability of . in 2009 ,ralph and lund first proposed the concept of the noiseless linear amplification ( nla ) to distill the new mixed state with relatively high fidelity from the input mixed state with low fidelity . since then , various nla protocols have been proposed .current nla protocol can be divided into three groups .the first group focused on the single photon .the second group focused on the single - photon entanglement ( spe ) , for the spe is the simplest entanglement form , but it has important applications in cryptography , state engineering , tomography , entanglement purification , entanglement concentration , the third group focused on the continuous variables systems .for example , in 2012 , osorio _ et al ._ experimentally realized the heralded noiseless amplification for the single - photon state ( sps ) with the help of the single - photon source and the linear optics ._ demonstrated the heralded noiseless amplification of a photon polarization qubit .et al . _ also proposed an nla protocol for protecting the spe .so far , all the existing nla protocols for the sps and spe can only be performed for one time . that is the fidelity of the initial statecan be increased for one step . in the practical high noisy applications ,the photon loss is usually high . in this way , after performing the amplification protocol , the quality of the entanglement may not reach the standard for secure and highly efficient long - distance quantum communication . because in order to close the detection loophole , the fidelity of the state is the higher the better . in this way, we should seek for the efficient approach to realize the amplification . 
in this paper , based on the linear optics, we propose an efficient nla protocol for protecting both the sps and spe , respectively . in our protocol, the amplification is cascaded . that is the fidelity of the sps and specan be increased step by step .this paper is organized as follows : in sec .ii , we present the cascaded nla protocol for sps . in sec .iii , we present our cascaded amplification protocol for spe . in sec .iv , we present a discussion .finally , in sec .v , we present a conclusion .before we start to describe our protocol , we first introduce the basic principle of the nla protocol , based on the work of gisin _ _ et al.__ .it is composed of the variable fiber beam splitter ( vbs ) with the transmittance of and the 50:50 beam splitter ( bs ) .the schematic drawing is shown in fig .1 . the mixed state of the sps can be described as in ref. , an auxiliary single photon is required .the auxiliary photon passes through the vbs , which can generate a single - photon entangled state as where the coefficient is the success probability of the protocol is we denote the amplification factor , so that we can obtain in order to realize the amplification , it is required that .it can be calculated that we can obtain only if . in the section, we put forward an efficient cascaded amplification protocol for the sps with the help of the nla unit .we first describe the two - level cascaded nla protocol , as described in fig .2 . from fig.2 , we require two single photons and as auxiliary .after one of the single - photon detectors or registers one photon , the state in the spatial mode must be an amplified mixed state of the form in eq.([new1 ] ) . in this way, the new mixed state can be regarded as the initial state in the second amplification , with the help of another auxiliary single photon . in the second round, we still choose the case that only one of the single - photon detectors or register the photon .if the protocol is successful , we can obtain a new mixed state as with it is straightforward to extend this two - level cascaded nla protocol to the arbitrary level cascaded nla protocol , as described in fig .3 . in the protocol, we make the input photon pass through n nla units , successively . the output photon state from the previous nla unit enter the next nla unit as the input one .the transmittance of the vbs in each nla unit is required to meet . with the help ofn auxiliary photons , say a , a , a , the initial sps can be cascaded to amplify for n time . based on the description in sec .ii , after n nla unit , we can obtain the output new mixed photon state as the is the fidelity of the output mixed state after nla .we can calculate the fidelity of the mixed state after the photon pass through each nla unit as after nla , we define the total amplification factor g as under the case that the transmittance of each vbs meet , we can ensure arbitrary .therefore , increasing the number of nla unit ( ) can effectively increase the fidelity of the final output sps . especially ,if , we can make .meanwhile , the success probability to distill the new mixed state after each nla can be written as ,\nonumber\\ & \cdots\cdots&\nonumber\\ p_{n}&=&p_{n-1}[t+\eta_{n-1}-2\eta_{n-1}t].\label{p1}\end{aligned}\ ] ] obviously , as arbitrary , increasing the number of the nla unit will reduce the success probability . therefore , it is a trade - off between the fidelity and the success probability . 
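the recursions above are easily evaluated numerically . the python sketch below propagates the fidelity and the success probability through n nla units sharing a common vbs transmittance t . the success - probability update is the one given in eq . ( [ p1 ] ) ; the explicit fidelity update used here is the standard heralded - amplifier expression , chosen to be consistent with the properties stated in the text ( no change at t = 1/2 , amplification only for t < 1/2 , fixed points at 0 and 1 ) , and should be read as an illustrative reconstruction of eq . ( [ fidelity1 ] ) rather than a verbatim copy .

def cascade_sps(eta0, t, n):
    """propagate fidelity and success probability through n nla units.

    eta0 : initial fidelity of the single-photon mixed state
    t    : transmittance of each vbs (amplification requires t < 1/2)
    n    : number of cascaded nla units
    """
    eta, p = eta0, 1.0
    history = [(eta, p)]
    for _ in range(n):
        p *= t + eta - 2.0 * eta * t          # success probability, eq. (p1)
        # assumed heralded fidelity update, unchanged at t = 1/2
        eta = eta * (1.0 - t) / (eta * (1.0 - t) + (1.0 - eta) * t)
        history.append((eta, p))
    return history

# example: high photon loss, eta0 = 0.1, five units with t = 0.2
for k, (eta, p) in enumerate(cascade_sps(0.1, 0.2, 5)):
    print(f"n = {k}:  fidelity = {eta:.4f}  success probability = {p:.4e}")

for these illustrative numbers the fidelity grows towards 1 while the overall success probability shrinks geometrically , which is exactly the trade - off noted above .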
in order to obtain the sps with high fidelity , we need to consume large number of the input single photons .in 2012 , the setup of nla described in ref . was developed to amplify spe , which is shown in fig .2 . briefly speaking, a single - photon source s emits single - photon entangled state , which is in the spatial mode and .the form of spe can be written as for realizing the amplification , two auxiliary photons are required and both the two parties need to run the same operation as described above , simultaneously .after the amplification , we can obtain the new mixed state as with the success probability of the fidelity of the new mixed state is interestingly , it can be found that the form of the fidelity in eq .( [ f2 ] ) is the same as that in eq .( [ f1 ] ) .therefore , we can also obtain that when the transmittance of each vbs meets , the amplification factor . in the section , the nla unit will be used to realize the cascade amplification for the spe state . as shown in fig .5 , due to the environmental noise , alice and bob share a mixed state as eq .( [ entangle ] ) . for realizing the amplification , each of them needs to prepare n nla units , say nla , nla, nla , and nla , nla , nla , respectively .each of alice and bob makes the single photon in his / her hand pass through the n units , simultaneously .similarly , during the amplification process , each of them needs to introduce n auxiliary photons , say b , b , b , and b , b , b . after the n nla units , we can obtain the final output quantum state as where is the fidelity of the mixed state when both the input photons from alice and bob pass through n nla units .we can also calculate the fidelity and success possibility of the cascaded amplification protocol for the spe state .the fidelity of the mixed state when both the two parties make the input photon pass through arbitrary n nla units can be written as where the subscript `` 1 '' , `` 2 '' , `` n '' mean the number of the nla units adopted by each of the two parties .similarly , it can be found eq .( [ fidelity2 ] ) has the same form as eq .( [ fidelity1 ] ) . therefore , when both the two parties make their photon pass through n nla units , the total amplification factor is the same as eq .( [ factor ] ) . in this way , under the case that , we can effectively increase the amplification factor by increasing the number of the nla units . certainly , we can also calculate the success probability of the cascaded amplification protocol as ,\nonumber\\ & \cdots\cdots&\nonumber\\ p_{n}'&=&p_{n-1}'[(1 - 2\eta_{n-1}')t^{2}+\eta_{n-1 } ' t],\label{p2}\end{aligned}\ ] ] where the subscript `` 1 '' , `` 2 '' , `` n '' are the number of the nla units used in each of the two parties .similarly , increasing the fidelity will also sacrifice the success probability . for obtaining the spe with high fidelity , we still need to consume large amount of initial input state .so far , based on the work of refs. , we have fully described our cascaded nla protocol for both sps and spe .the nla unit which is composed of the vbs and bs , is the key element of the two protocols . in our protocol , we make the target photon pass through n nla units , successively . under the casethat the transmittance of the vbs ( ) meets , when the target photon pass through a nla unit , we can realize an amplification with the help of an auxiliary photon . 
in this way , by making the photon pass through n nla units successively , we can finally realize n cascaded amplification .+ ) of the distilled new mixed state as a function of the transmittance ( ) of the vbs , when n ( 2n ) nla units were used to cascade amplify the sps ( spe ) . for comparison , we make n=1 , 2 , 3 , 4 and 5 , respectively .meanwhile , we suppose that both the two protocols are operated under low initial fidelity ( ) , medium initial fidelity ( ) , and high initial fidelity ( ) ., width=566 ] according to eq .( [ fidelity1 ] ) and eq .( [ fidelity2 ] ) , the fidelity ( ) of the output mixed states largely depends on the initial fidelity ( ) of the input state , the transmittance ( ) of the vbs , and the number ( ) of the nla unit .6 shows as a function of . for comparison, we suppose the protocols are operated under low initial fidelity ( ) , medium initial fidelity ( ) and high initial fidelity ( ) , respectively , and make n=1 , 2 , 3 , 4 , and 5 . the fidelity curve of the protocols in ref . is the same as that for .it can be found that reduces with the growth of .the five curves in each figure interact at the point of . under the case that , , that is , the output mixed state is the same as the input state .actually , when , the vbs become the bs , the whole amplification process is converted to the standard teleportation process . on the other hand , increasing the number of the nla unit can effectively increase the fidelity of the output states , especially under low initial fidelity condition . in practical applications , the photon loss is usually high .7 shows the the fidelity altered with under practical high photon loss condition ( ) .it can be found that when , is only 0.5 , while can reach 0.996 .therefore , by selecting the suitable vbs and the suitable value of n , we can make . in this way, our protocols may provide an effectively way to close the detection loophole in qkd .certainly , we should point out that though the increases with , we can not reach .the limitation of .that is the is a fixed point for the iterative equations in both eq .( [ fidelity1 ] ) and eq .( [ fidelity2 ] ) . on the other hand , under high photon loss condition ( ) , the protocols in refs . can obtain relatively high fidelity only under the extreme condition that .for example ,when , is 0.69 under , and can reach 0.96 under . in current experimental conditions ,the vbs with is unavailable .however , with the growth of , the requirement for is largely reduced .for example , when , can reach 0.996 under .therefore , it is much easier for us to obtain high fidelity under practical experimental conditions . altered with the iteration number .here we let and the initial fidelity .,width=226 ] finally , we will discuss the experimental realization of our protocol .the vbs and bs are the key elements . reported the experimental results about the nla with the help of the vbs .the protocol can increase the probability of the single photon from a mixed state by adjusting the splitting ratio of vbs from 50:50 to 90:10 . based on the experiments , our requirement for be easily realized under current technology . in our protocol , the processing of the photons passing through the bs is essentially the hong - ou - mandel ( hom ) interference , so that the two photons should be indistinguishable in every degree of freedom . also measured the hom interference on each bs .their experimental results for each bs are 93.4 .9% and 92.1 5.7% , respectively . 
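before turning to the experimental realization , the observation that increasing the number of nla units relaxes the requirement on the vbs transmittance can be made concrete with a small parameter scan . the python sketch below ( same assumed fidelity update as before , illustrative numbers only ) reports , for a given initial fidelity , the smallest number of cascaded units needed to reach a target fidelity together with the corresponding success probability ; the same scan can be repeated with the two - sided success probability of eq . ( [ p2 ] ) for the entangled case .

def units_needed(eta0, t, target, n_max=50):
    """smallest number of cascaded nla units reaching the target fidelity."""
    eta, p = eta0, 1.0
    for n in range(1, n_max + 1):
        p *= t + eta - 2.0 * eta * t                                   # eq. (p1)
        eta = eta * (1.0 - t) / (eta * (1.0 - t) + (1.0 - eta) * t)    # assumed update
        if eta >= target:
            return n, eta, p
    return None, eta, p

for t in (0.05, 0.1, 0.2, 0.3, 0.4):
    n, eta, p = units_needed(eta0=0.1, t=t, target=0.99)
    print(f"t = {t}: n = {n}, fidelity = {eta:.4f}, success probability = {p:.2e}")

the scan shows the pattern described above : the closer t is to 1/2 , the more levels ( and hence the more auxiliary photons and input states ) are required to reach the same fidelity .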
in fig .8 , we design the possible experimental realization of the two - level amplification for the sps . based on ref . , with the help of spontaneous parametric down - conversion ( spdc ) source , we make a pump pulse of ultraviolet light pass through a beta barium borate ( bbo ) crystal and produce correlated pairs of photons into the modes a and b .then it is reflected on the mirror and traverses the crystal for a second time , and produces correlated pairs of photons into the modes a and b .the hamiltonian can be approximately described as , b , a , and b mode .the photon in the a mode is the target photon , which pass through the vbs to generate the input mixed state .the photons in a and b modes are used as the auxiliary photons ., width=377 ] in the experiment , we select the item with the help of the coincidence measurement , which generates four photons in the modes a , b , c , and d , simultaneously . under this case, we make the photon in a mode pass through vbs with the transmittance of t to generate a mixed state in the a .if the sps is reflected to the a mode , it means that the photon is lost .the photon in the mode b can be used to judge the single - photon in a mode . by changing t ,we can obtain the different mixed state .the photon in both a and b mode are used as the auxiliary photons . after making it pass through vbs with the transmittance , we make the photons in the a and b modes pass through the bs and detect the photon in the d and d modes . in this way, the first level of amplification can be realized .next , the photon in the a mode is used as the second auxiliary photon .similarly , we make it pass through vbs with the transmittance , and then make the photons in the a and b modes pass through the bs .we can finally realize the second amplification . in current technology , with the help of cascaded spdc sources , the generation of eight - photon entanglement has been realized .such cascaded spdc sources can be used to implement the experiment for multi - level cascaded nla protocol .in conclusion , we put forward an efficient cascaded amplification nla protocol for both the sps and spe , respectively . in our protocol , we make target photon pass through several nla units , successively . with the help of some auxiliary single photons, we can realize the cascaded amplification for both the sps and spe .this protocol is based on the linear optics , which is extremely suitable in current technology . in the discussion, we also design a possible realization with current spdc source .the most advantage of this protocol is that the fidelity can be iterated to obtain a higher value .it provides us that this protocol is extremely useful in a large noisy channel , which may be used to close the detection loophole in current long - distance quantum communication .this work is supported by the national natural science foundation of china under grant nos .11474168 and 61401222 , the qing lan project in jiangsu province , and the priority academic program development of jiangsu higher education institutions .
|
photon loss is one of the main obstacles in current long - distance quantum communication . the approach of noiseless linear amplification ( nla ) is one of the most powerful ways to distill the single - photon state ( sps ) from a mixed state comprising both the sps and the vacuum state . however , existing nla protocols can only perform the amplification once , so the fidelity of the sps can not be increased any further . in this paper , we put forward an efficient cascaded nla protocol for the sps and for single - photon entanglement ( spe ) , respectively , with the help of some auxiliary single photons . by repeating this protocol several times , the fidelity of the sps and the spe can approach 100% , which may make this protocol extremely useful for closing the detection loophole in quantum key distribution . moreover , this protocol is based on linear optics , which makes it feasible with current technology .
|
the networks of interacting genes and proteins that are responsible for regulation , signalling and response , and which , ultimately , orchestrate cell function , are under the effect of noise .this randomness materialises in the form of fluctuations of the number of molecules of the species involved , subsequently leading to fluctuations in their activity . besides external perturbations, biochemical reactions can be intrinsically noisy , especially when the number of molecules is very low .far from necessarily being a mere disturbance , fluctuations are an essential component of the dynamics of cellular regulatory systems which , in many instances , are exploited to improve cell function . for example, randomness has been shown to enhance the ability of cells to adapt and increase their fitness in random or variable environments .random noise also serves the purpose of assisting cell populations to sustain phenotypic variation by enabling cells to explore the phase space .one of the mechanisms that allows noise - induced phenotypic variability relays on multi - stability .the basis of this mechanism was first proposed by kauffman , who associated phenotypes or differentiated states to the stable attractors of the dynamical systems associated to gene and protein interaction networks . in the presence of noise, the corresponding phase space generates an epigenetic landscape , where cells exposed to the same environment and signalling cues coexist in different cellular phenotypes .multi - stability is also an essential element in the control of cell response and function via signalling pathways . in particular , bi - stability as a means to generate reliable switching behaviour is widely utilised in numerous pathways such as the apoptosis , cell survival , differentiation , and cell - cycle progression pathways .for example , bi - stability is used to regulate such critical cell functions such as the transition from quiescence to proliferation through bistable behaviour associated with the rb - e2f switch within the regulatory machinery of the mammalian cell - cycle . a common theme which appears when trying to model cell regulatory systems is separation of time scales , i.e. the presence of multiple processes evolving on widely diverse time scales .when noise is ignored and systems are treated in terms of deterministic mean - field descriptions , such separation of time scales and the associated slow - fast dynamics are often exploited for several forms of model reduction , of which one of the most common is the so - called quasi - steady state approximation ( qssa ) .this approximation is ubiquitously used whenever regulatory processes involve enzyme catalysis , which is a central regulation mechanism in cell function . in this paper, we investigate the effects of intrinsic noise on the bi - stability of two particular systems , namely , an enzyme - catalysed system of mutual inhibition and a gene regulatory circuit with self - activation . the mean - field limit of both these systems has been shown to exhibit bi - stability . the aim of this paper is to analyse how noise alters the mean - field behaviour associated to these systems when they operate under quasi - steady state conditions .we note that this work does not concern the subject of noise - induced bifurcations .such phenomenon has been studied in many situations , including biological systems .an example which is closely related to the systems we analyse here is the so - called enzymatic futile cycles .samoilov et al . 
have shown that noise associated to the number of enzymes induce bistability . in the absence of this source of noise ,i.e. in the mean - field limit , the system does not exhibit bistable behaviour .the treatment of this phenomena would require to go to higher orders in the wkb expansion , which we do not explore here .the issue of separation of time scales in stochastic models of enzyme catalysis has been addressed using a number of different approaches .several such analysis have been carried out in which the qssa is directly applied to the master equation by setting the fast reactions in partial equilibrium ( i.e. the probability distribution corresponding to the fast variables remains unchanged ) , and letting the rest of the system to evolve according to a reduced stochastic dynamic .other approaches have been proposed such as the qssa to the exact fokker - planck equation that can be derived from the poisson representation of the chemical master equation .approaches based on enumeration techniques have also been formulated .furthermore , thomas et al. have recently formulated a rigorous method to eliminate fast stochastic variables in monostable systems using projector operators within the linear noise approximation .methods for model reduction based on perturbation analysis have been developed in .additionally , driven by the need of more efficient numerical methods , there has been much activity regarding the development of numerical methods for stochastic systems with multiple time - scales .several of these methods are variations of the stochastic simulation algorithm or the -leap method where the existence of fast and slow variables is exploited to enhance their performance with respect to the standard algorithms .another family of such numerical methods is that of the so - called hybrid methods , where classical deterministic rate equations or stochastic langevin equations for the fast variables are combined with the classical stochastic simulation algorithm for the slow variables .other related methods were studied in . here , we advance the formalism developed in , in which a method based on the semi - classical approximation of the chemical master equation allows to evaluate the effects of intrinsic random noise under quasi - steady conditions . in our analysis of the michaelis - menten model of enzyme catalysis in , we showed that the semi - classical quasi - steady state approximation reveals that the velocity of the enzymatic reaction is modified with respect to the mean - field estimate by a quantity which is proportional to the total number of molecules of the ( conserved ) enzyme . in this paper , we extend this formalism to show that , associated to each conserved molecular species , the associated ( constant ) number of molecules is a bifurcation parameter which can drive the system into bi - stability beyond the predictions of the mean - field theory .we then proceed to test our theoretical results by means of direct numerical simulation of the chemical master equation using the stochastic simulation algorithm .we should note the hamiltonian formalism derived from the semi - classical approximation is formulated on a continuum of particles , which requires the number of particles to be large enough .this must hold true for all the species in our model , both fast and slow . 
since this separation between fast and slow species is based on their relative abundance , one must be careful that the scaling assumptions are consistent , particularly in the case of the model of self - activating gene regulatory circuit where the number of binding sites is typically small .this assumption , however , has been used in previous studies .also we show that our simulation results of the full stochastic processes agree with our analysis and , therefore , our re - scaled equations are able to predict the behaviour of the system .we note that the mean - field limit , which is conventionally obtained by ignoring noise in the limit of large particle numbers , is obtained by setting the momenta in our phase - space formalism to .the approximation we develop in this paper falls within the general framework of the optimal fluctuation path theory .this framework is a particular case of the large deviation theory which allows us to study rare events ( i.e. events whose frequency is exponentially small with system size ) . within these frameworkwe will show that , upon carrying out the qssa , the only source of noise in the system is associated to the random initial conditions of the species whose numbers are conserved .we therefore predict that a population of cells , each having a random number of conserved molecules , will have a bimodal distribution .this paper is organised as follows .section 2 is devoted to a detailed exposition of the semi - classical quasi - steady state approximation for stochastic systems . in sections 3 and 4, we apply this formalism to analyse the behaviour of a bistable enzyme - catalysed system and a gene regulatory circuit of auto - activation , respectively .we will show that our semi - classical quasi - steady state theory allows us to study the effect of intrinsic noise on the behaviour of these systems beyond the predictions of their mean - field descriptions .we also verify our theoretical predictions by means of direct stochastic simulations . finally in section 5 ,we summarise our results and discuss their relevance .our aim in this paper is to formulate a stochastic generalisation of the quasi - steady state approximation for enzyme - catalysed reactions and simple circuits of gene regulation and use such approximation to determine if the presence of noise has effects on the behaviour of the system beyond the predictions of the corresponding mean - field models .specifically , we analyse stochastic systems for which the mean - field models predicts bi - stability and investigate how such behaviour is affected by stochastic effects . our analysis is carried out in the context of markovian models of the corresponding reaction mechanisms formulated in terms of the so - called chemical master equation ( cme ) .two example of such stochastic systems , a bistable enzyme - catalysed system and a gene regulatory circuit of auto - activation , are formulated and analysed in detail in sections [ sec : enzyme ] and [ sec : gene ] , respectively .following , we formulate the qss approximation for the asymptotic solution of the cme obtained by means of large deviations / wkb approximations .the cme is given : where is the transition rate corresponding to reaction channel and is a vector whose entries denote the change in the number of molecules of each molecular species when reaction channel fires up , i.e. . 
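since the theoretical predictions are checked below against direct simulation of eq . ( [ eq : cme ] ) with the stochastic simulation algorithm , we include a minimal python sketch of the algorithm for a generic network specified by its transition rates and stoichiometric vectors . the two - species birth - death system used to exercise the routine is a placeholder and is not one of the models analysed in sections 3 and 4 .

import numpy as np

def gillespie(x0, propensities, stoich, t_end, rng):
    """exact stochastic simulation of a chemical master equation.

    propensities : callable returning the vector of transition rates w_j(x)
    stoich       : array with one stoichiometric vector r_j per reaction channel
    """
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_end:
        w = propensities(x)
        w_total = w.sum()
        if w_total == 0.0:
            break
        t += rng.exponential(1.0 / w_total)        # waiting time to the next event
        j = rng.choice(len(w), p=w / w_total)      # which reaction channel fires
        x = x + stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# placeholder example: production and degradation of two independent species
stoich = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
def props(x):
    k1, k2, k3, k4 = 10.0, 0.1, 5.0, 0.1
    return np.array([k1, k2 * x[0], k3, k4 * x[1]])

rng = np.random.default_rng(2)
t, s = gillespie([0, 0], props, stoich, t_end=100.0, rng=rng)
print("final state:", s[-1])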
an alternative way to analyse the dynamics of continuous - time markov processes on a discrete space of states is to derive an equation for the generating function , of the corresponding probabilistic density : where is the solution of the master equation ( [ eq : cme ] ) . satisfies a partial differential equation ( pde ) which can be derived from the master equation .this pde is the basic element of the so - called momentum representation of the master equation .although closed , analytic solutions are rarely available , the pde for the generating function admits a perturbative solution , which is commonly obtained by means of the wkb method .more specifically , the ( linear ) pde that governs the evolution of the generating function can be written as : where the operator is determined by the reaction rates of the master equation ( [ eq : cme ] ) .furthermore , the solution to this equation must satisfy the normalisation condition for all .this pde , or , equivalently , the operator , are obtained by multiplying both sides of the master equation ( [ eq : cme ] ) by and summing up over all the possible values of from the mathematical point of view , eq .( [ eq : charfuncpde ] ) is a schrdinger - like equation and , therefore , there is a plethora of methods at our disposal in order to analyse it .in particular , when the fluctuations are ( assumed to be ) small , it is common to resort to wkb methods .this approach is based on the wkb - like ansatz that . by substituting this ansatz in eq .( [ eq : charfuncpde ] ) we obtain the following hamilton - jacobi equation for the function : instead of directly tackling the explicit solution of eq .( [ eq : hamjac ] ) , we will use the so - called semi - classical approximation .we use the feynman path - integral representation which yields a solution to eq .( [ eq : charfuncpde ] ) of the type : where indicates integration over the space of all possible trajectories and is given by : where the position operators in the momentum representation have been defined as with the commutation relation =s_{0,i}\delta_{i , j} ] quantitative comparison between our asymptotic analysis and the simulation results follows the same procedure as in section [ sec : enzyme ] , i.e. we look at how the variance aforementioned behaviour regarding unbounded increase of fluctuations close to a bifurcation is used to locate the critical the variance , with changes as the control parameter varies : the maximum of as a function of the control parameter corresponds to the critical value . according to fig .[ fig : bifdiagselfk3](b ) , the critical value of , , is approximately given by .our asymptotic analysis ( see fig . [fig : bifdiagselfk3](b ) ) predicts that .by means of the semi - classical quasi - steady state approximation , section [ sec : theory ] , we have analysed stochastic effects affecting the onset of bi - stability in cell regulatory systems .our theory shows that there exists a conserved momentum coordinate associated to each conserved chemical species . in the case of the enzyme - catalysed bistable system , section [ sec : enzyme ] , there are three such conserved momenta , associated to each of the conserved chemical species , i.e. cdh1 and its activating and inhibiting enzymes .for the self - activation gene regulatory network , we have one conserved momentum , corresponding to conservation of the number of binding sites of the gene s promoter region . 
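the procedure just described for locating the critical value of the control parameter ( sweep the parameter , estimate the stationary variance of the relevant species from long ssa runs , and take the maximum of the variance ) can be summarised in a short python sketch . it operates on trajectories such as those produced by the gillespie routine sketched earlier ; the burn - in , the parameter grid and the species index are placeholders to be adapted to the model at hand .

import numpy as np

def time_weighted_variance(t, x, burn_in):
    """stationary variance of a piecewise-constant ssa trajectory x(t)."""
    keep = t > burn_in                          # discard the initial transient
    dt = np.diff(t, append=t[-1])[keep]         # holding time of each visited state
    x = x[keep].astype(float)
    mean = np.sum(x * dt) / np.sum(dt)
    return np.sum((x - mean) ** 2 * dt) / np.sum(dt)

def locate_critical_value(param_grid, simulate, burn_in=500.0):
    """simulate(p) must return an ssa trajectory (t, x) of the species of
    interest at parameter value p, e.g. via the gillespie routine above."""
    variances = np.array([time_weighted_variance(*simulate(p), burn_in=burn_in)
                          for p in param_grid])
    return param_grid[int(np.argmax(variances))], variances

the estimated critical value is the grid point at which the variance peaks , mirroring the comparison between the simulated and the predicted critical values reported in fig . [ fig : bifdiagselfk3](b ) .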
according to the scqssa analysis of , the maximum rate achieved by an enzymatic reaction , , predicted by the mean - field theory renormalised by a factor which equals the value of the ( constant ) momentum coordinate associated to the conserved enzyme : where is the maximum rate predicted by the scqssa .similarly , we have shown that the mean - field maximum activation rate associated to the auto - activation gene regulatory model , , is renormalised in the presence of noise by a factor equal to the conserved momentum coordinate corresponding to the number of binding sites in the gene promoter , , i.e. , with being the scqssa maximum activation rate . as a consequence of this parameter renormalisation , we have shown that variation in the value of the conserved momenta can trigger bifurcations leading to the onset of bistable behaviour beyond the predictions of the mean - field limit , i.e. for values of parameters where the mean - field limit predicts the system to be mono - stable , the scqssa predicts bi - stability , and vice versa ( see figs .[ fig : bifantn ] and [ fig : bifdiagselfact ] ) .furthermore , we have established that the value of the constant momenta is actually determined by the probability distribution of the associated conserved chemical species , and , ultimately , by the number of molecules of these species ( see eqs .( [ eq : laplaceapprox ] ) and ( [ eq : selfact - p2])-([eq : selfact - p2-poisson ] ) ) .therefore , our theory establishes that the numbers of molecules of the conserved species are order parameters whose variation should trigger ( or cancel ) bistable behaviour in the associated systems .this prediction is fully confirmed by direct numerical simulation using the stochastic simulation algorithm ( see figs .[ fig : ratios ] , [ fig : p1 ] , and [ fig : histselfact ] ) .quantitative comparison between the predictions of our asymptotic analysis and the simulation results ( see fig .[ fig : sigma2enzyme ] and [ fig : bifdiagselfk3 ] ) shows that our theoretical approach slightly underestimates the critical value for the bistable enzyme - regulated system .the theoretical prediction for the self - activating gene regulatory network appears to slightly overestimate the critical value .our results allow us to propose a means of controlling cell function .for example , regarding the enzyme - catalysed bistable model analysed in section [ sec : enzyme ] , varying the number of molecules of the three conserved chemical species ( cdh1 and the associated activating and inhibiting enzymes ) enables us to lock the system into either of the g or the s - g-m stable fixed points or to drive the system into its bistable regime where random fluctuations will trigger switching between these two states .this could be accomplished by ectopically increasing the synthesis of the corresponding molecule or by targeting the enzymes with enzyme - targeted drugs .similarly , the dynamics of the self - activating gene regulatory system could be driven into or out of its bistable regime by supplying an inhibitor that irreversibly binds to the promoter region , thus decreasing the effective number of binding sites .this result allows us to explore strategies , for example , in the field of combination therapies in cancer treatment .cellular quiescence is a major factor in resistance to unspecific therapies , such as chemo- and radio - therapy , which target proliferating cells .bi - stability is central to control cell - cycle progression and to regulate the exit from quiescence , 
with enzyme catalysis ( usually accounted for by ( mean - field ) michaelis - menten , quasi - steady state dynamics ) being ubiquitously involved .our findings will allow us to formulate combination strategies in which chemo- or radio - therapy are combined with a strategy aimed at driving cancer cells into proliferation or quiescence depending on the phase of the treatment cycle .evaluation of the viability and efficiency of such combination requires the formulation of multi - scale models whose analysis is beyond this scope of this paper , and it is therefore postponed for future work .our approach differs from previous work , such as dykman et al. in a significant aspect , namely , whilst their aim is to estimate the rate of noise - induced transition between metastable states in systems exhibiting multi - stability , the purpose of our analysis is to ascertain whether noise can alter the multi - stability status of the system .dykman et al. do not address such issue .( [ eq : hjqss - x1])-([eq : hjqss - p4 ] ) and ( [ eq : qss - sa - q1])-([eq : qss - sa - q2 ] ) are derived from a semi - classical approximation of the master equation ( or its equivalent description in terms or the generating function pde ) .this approximation yields a set hamilton equations ( eqs .( 8)-(9 ) ) whose solutions are the optimal fluctuation paths and , as such , they describe fluctuation - induced phenomena which can not be accounted for by the mean - field approximation .one of the best known examples of this is exit problems from meta - stable states in noisy systems ( e.g. extinctions ) , where the semi - classical approximation provides the optimal escape path from which information such as mean - first passage time or waiting time for extinction can be obtained ( see , for example , references ) .furthermore , eqs .( [ eq : hjqss - x1])-([eq : hjqss - p4 ] ) and ( [ eq : qss - sa - q1])-([eq : qss - sa - q2 ] ) are derived from the general hamilton equations , eqs .( 8)-(9 ) , by means of an approximation based on separation of time scales , not on any mean - field assumption .a closely related subject to that analysed in this paper is that of noise - induced bifurcations . such phenomenon has been studied in biological systems where the mean - field limit does not predict bistability , such as the so - called enzymatic futile cycles where noise associated to the number of enzymes induce bistability . in the absence of this source of noise, the system does not exhibit bistable behaviour .we have not dealt with such noise - induced phenomena in the present paper , in the sense that all the systems analysed in this paper are such that their mean - field limit exhibits bistability .we leave the interesting issue of whether our scqssa framework can be used to analyse noise - induced bifurcation phenomena for future research .r.c . and t.a .acknowledge the spanish ministry for science and innovation ( micinn ) for funding mtm2011 - 29342 and generalitat de catalunya for funding under grant 2014sgr1307 .r.c . 
acknowledges agaur-generalitat de catalunya for funding under its doctoral scholarship programme. thanks the wellcome trust for financial support under grant 098325.
|
we analyse the effect of intrinsic fluctuations on the properties of bistable stochastic systems with time scale separation operating under1 quasi - steady state conditions . we first formulate a stochastic generalisation of the quasi - steady state approximation based on the semi - classical approximation of the partial differential equation for the generating function associated with the chemical master equation . such approximation proceeds by optimising an action functional whose associated set of euler - lagrange ( hamilton ) equations provide the most likely fluctuation path . we show that , under appropriate conditions granting time scale separation , the hamiltonian can be re - scaled so that the set of hamilton equations splits up into slow and fast variables , whereby the quasi - steady state approximation can be applied . we analyse two particular examples of systems whose mean - field limit has been shown to exhibit bi - stability : an enzyme - catalysed system of two mutually - inhibitory proteins and a gene regulatory circuit with self - activation . our theory establishes that the number of molecules of the conserved species are order parameters whose variation regulates bistable behaviour in the associated systems beyond the predictions of the mean - field theory . this prediction is fully confirmed by direct numerical simulations using the stochastic simulation algorithm . this result allows us to propose strategies whereby , by varying the number of molecules of the three conserved chemical species , cell properties associated to bistable behaviour ( phenotype , cell - cycle status , etc . ) can be controlled .
|
there are many problems in computational physics that involve solving partial differential equations ( pdes ) in complex geometries .examples include fluid flows in complicated systems , vein networks in plant leaves , and tumours in human bodies .standard solution methods for pdes in complex domains typically involve triangulation and unstructured grids .this rules out coarse - scale discretizations and thus efficient geometric multi - level solutions .also , mesh generation for three - dimensional complex geometries remains a challenge , in particular if we allow the geometry to evolve with time . in the past several years, there has been much effort put into the development of numerical methods for solving partial differential equations in complex domains .however , most of these methods typically require tools not frequently available in standard finite element and finite difference software packages . examples of such approaches include the extended and composite finite element methods ( e.g. , ) , immersed interface methods ( e.g. , ) , virtual node methods with embedded boundary conditions ( e.g. , ) , matched interface and boundary methods ( e.g. , ) , modified finite volume / embedded boundary / cut - cell methods / ghost - fluid methods ( e.g. , ) . in another approach , known as the fictitious domain method ( e.g. , ) ,the original system is either augmented with equations for lagrange multipliers to enforce the boundary conditions , or the penalty method is used to enforce the boundary conditions weakly .see also for a review of numerical methods for solving the poisson equation , the diffusion equation and the stefan problem on irregular domains .an alternate approach for simulating pdes in complex domains , which does not require any modification of standard finite element or finite difference software , is the diffuse - domain method . in this method ,the domain is represented implicitly by a phase - field function , which is an approximation of the characteristic function of the domain .the domain boundary is replaced by a narrow diffuse interface layer such that the phase - field function rapidly transitions from one inside the domain to zero in the exterior of the domain .the boundary of the domain can thus be represented as an isosurface of the phase - field function .the pde is then reformulated on a larger , regular domain with additional source terms that approximate the boundary conditions .although uniform grids can be used , local grid refinement near domain boundaries improves efficiency and enables the use of smaller interface thicknesses than are achievable using uniform grids .a related approach involves the level - set method to describe the implicitly embedded surface and to obtain the appropriate surface operators ( e.g. , ) .the diffuse - domain method ( ddm ) was introduced by kockelkoren et al. to study diffusion inside a cell with zero neumann boundary conditions at the cell boundary ( a similar approach was also used in using spectral methods ) .the ddm was later used to simulate electrical waves in the heart and membrane - bound turing patterns .more recently , diffuse - interface methods have been developed for solving pdes on stationary and evolving surfaces .diffuse - domain methods for solving pdes in complex evolving domains with dirichlet , neumann and robin boundary conditions were developed by li et al. and by teigen et al . who modelled bulk - surface coupling .the ddm was also used by aland et al . 
to simulate incompressible two - phase flows in complex domains in 2d and 3d , and by teigen et al . to study two - phase flows with soluble surfactants .li et al . showed that in the ddm there exist several approximations to the physical boundary conditions that converge asymptotically to the correct sharp - interface problem .li et al .presented some numerical convergence results for a few selected problems and observed that the choice of boundary condition can significantly affect the accuracy of the ddm .however , li et al .did not perform a quantitative comparison between the different boundary - condition approximations , nor did they estimate convergence rates .further , li et al . did not address the source of the different levels of accuracy they observed for the different boundary - condition approximations . in the context of dirichlet boundary conditions ,franz et al. recently presented a rigorous error analysis of the ddm for a reaction - diffusion equation and found that the method converges only with first - order accuracy in the interface thickness parameter , which they confirmed numerically .similar results were obtained numerically by reuter et al . who reformulated the ddm using an integral equation solver .reuter et al . demonstrated that their generalized ddm , with appropriate choices of approximate surface delta functions , converges with first - order accuracy to solutions of the poisson equation with dirichlet boundary conditions .here , we focus on neumann and robin boundary conditions and we present a matched asymptotic analysis of general diffuse - domain methods in a fixed complex geometry , focusing on the poisson equation for robin boundary conditions and a steady reaction - diffusion equation for neumann boundary conditions. however , our approach applies to transient problems and more general equations in the same way as shown in .our analysis shows that for certain choices of the boundary condition approximations , the ddm is second - order accurate in , which in practice is proportional to the smallest mesh size .however , for other choices the ddm is only first - order accurate .this helps to explain why the choice of boundary condition approximation is important for rapid global convergence and high accuracy .further , inspired by the work of karma and rappel and almgren , who incorporated second - order corrections in their phase field models of crystal growth and by the work of folch et al. who added second - order corrections in phase - field models of advection , we also suggest correction terms that may be added to yield a more accurate version of the diffuse - domain method .simple modifications of first - order boundary condition approximations are proposed to achieve asymptotically second - order accurate schemes .our analytic results are confirmed numerically for selected test problems .the outline of the paper is as follows . in [ sec : dda ] we introduce and present an analysis of general diffuse - domain methods . in [ sec : discretization ] the numerical methods are described , and in [ sec : results ] the test cases are introduced and numerical results are presented and discussed .we finally give some concluding remarks in [ sec : conclusion ] .the main idea of the ddm is to extend pdes that are defined inside complex and possibly time - dependent domains into larger , regular domains . as a model problem ,consider the poisson equation in a domain , with neumann or robin boundary conditions . as shown in li et al . 
, the results for the poisson equation can be used directly to obtain diffuse-domain methods for more general second-order partial differential equations in evolving domains. the ddm equation is defined in a larger, regular domain as see [ fig : ddadomain ]. here approximates the characteristic function of , and bc is chosen to approximate the physical boundary condition, cf. . this typically involves diffuse-interface approximations of the surface delta function. a standard approximation of the characteristic function is the phase-field function , here is the interface thickness and is the signed-distance function with respect to , which is taken to be negative inside .

[ fig : ddadomain : sketch of the original domain embedded in the larger, regular computational domain. ]

as li et al. described, there are a number of different choices for bc in [ ddm ]. for example, in the neumann case, where on , one may take : in the robin case, where on , one may use analogous approximations : note that the terms and approximate the surface delta function. following li et al. we assume that is extended constant in the normal direction off and that is smooth up to and is extended into the exterior of constant in the normal direction. we next perform an asymptotic analysis to estimate the rate of convergence of the corresponding approximations. to show asymptotic convergence, we need to consider the expansions of the diffuse-domain variables in powers of the interface thickness in regions close to and far from the interface. these are called inner and outer expansions, respectively. the two expansions are then matched in a region where both are valid, see [ fig : regions ], which provides the boundary conditions for the outer variables. we refer the reader to and for more details and discussion of the general procedure.

[ fig : regions : schematic of the inner region, the outer region and the overlapping region in which the two expansions are matched. ]

the outer expansion for the variable is simply . the outer expansion of an equation is then found by inserting the expanded variables into the equation. the inner expansion is found by introducing a local coordinate system near the interface, where is a parametrization of the interface, is the interface normal vector that points out of , is the stretched variable and is the signed distance from the point to . in the local coordinate system, the derivatives become where is the curvature of the interface. note that . the inner variable is now given by and the inner expansion is . to obtain the matching conditions, we assume that there is a region of overlap where both the expansions are valid. in this region, the solutions have to match. in particular, if we evaluate the outer expansion in the inner coordinates, this must match the limits of the inner solutions away from the interface, that is insert the expansions into [ eq : outer, eq : inner ] to get the terms on the left-hand side can be expanded as a taylor series, where and . now we end up with the matching equation which must hold when the interface width is decreased, that is . in the matching region it is required that . under this condition, if we let , we get the following asymptotic matching conditions : and as , where the quantities on the right-hand side are the limits from the interior ( ) and exterior ( ) of . here means that the expressions approach equality when . that is, is defined such that if some function , then we have .

now we are ready to consider the poisson equation with robin boundary conditions, where . consider a general ddm approximation, where represents the bc approximation in the ddm. the scaling factor is taken for later convenience. if we assume that is local to the interface ( e.g., vanishes to all orders in away from ) and that is independent of ( e.g., is smooth in a neighbourhood of and is extended constant in the normal direction out of ), which is the case for the approximations bc1 and bc2 given in [ eq : bc_robin ], then the outer solution to this equation when satisfies now, if satisfies [ eq : poiss_robin ] and then the outer expansion and the ddm is asymptotically first-order accurate. however, if , then and the ddm is asymptotically second-order accurate.
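to fix ideas, we give a minimal sketch of this construction in our own notation, which need not coincide with that of [ ddm ] or [ eq : bc_robin ]; in particular, the exact form of the robin condition and the sign conventions below are illustrative. with the phase-field function chosen, for instance, as
\[
\phi(\mathbf{x})=\frac{1}{2}\left[1-\tanh\!\left(\frac{3\,r(\mathbf{x})}{\epsilon}\right)\right],
\]
where $r$ is the signed distance to $\partial\Omega$ ( negative inside $\Omega$ ) and $\epsilon$ is the interface thickness, a robin problem of the type $\nabla^{2}u=f$ in $\Omega$ with $\partial_{n}u+ku=g$ on $\partial\Omega$ is replaced on the larger domain by an equation of the form
\[
\nabla\cdot(\phi\nabla u)+\mathrm{BC}=\phi f,
\]
in which the boundary term uses a diffuse approximation of the surface delta function, e.g. $\mathrm{BC}\approx|\nabla\phi|\,(g-ku)$ for a bc1-type choice or $\mathrm{BC}\approx\epsilon|\nabla\phi|^{2}\,(g-ku)$ for a bc2-type choice, so that $|\nabla\phi|$ and $\epsilon|\nabla\phi|^{2}$ play the role of the surface delta function mentioned above.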
determining which of these is the case requires matching the outer solutions to the solutions of the inner equations .before we analyse the inner expansions , we develop a higher - order matching condition based on [ eq : match2,eq : match3 ] that matches a robin boundary condition for .first we take the derivative of [ eq : match3 ] with respect to and subtract times [ eq : match2 ] , which gives move the terms that make up a robin condition for to the left - hand side , and move the rest to the right - hand side , that is the laplacian can be decomposed into normal and tangential components as which can be shown by writing the gradient vector as .we can therefore write where we have assumed that satisfies the system , as demonstrated below .if we insert this into the matching condition , we get as .now consider the inner expansion of [ eq : general_dda ] , expand , , and in powers of , to get and then collect the leading order terms .note that because is smooth up to and extended constantly outside we have that is independent of .the lowest power of gives suppose that , which is the case as we show below for bc1 and bc2 , then we obtain . by the matching condition , this gives , where is the limiting value of .the next order terms give integrating from to in and using the matching condition , we get to obtain a robin boundary condition for , we need that now consider the zeroth order terms , if we subtract from both sides of [ eq : ddarobin0 ] , we get where we have taken into account the cancellation of terms and used the fact that and are independent of .the latter holds when is extended as a constant in the normal direction off , e.g. , and is independent of and .next , we integrate and use the matching condition on the left - hand side , if the right - hand side of [ order 1 bc ] vanishes , then we obtain from which we can conclude that the outer solution since is harmonic : .next we analyse the boundary condition approximations bc1 and bc2 .the bc1 approximation corresponds to since in the outer region ( interior part of ) , we conclude that vanishes in the outer region . in the inner region, we have where we have used that and is independent of and .since , we conclude from our analysis in [ sec : dda_robin_inner ] that and hence .the next orders of [ expansion of bc1 ] give a direct calculation then shows that as desired .thus , the leading order outer solution satisfies the problem ( [ eq : poiss_robin ] ) . to continue , we must first consider [ eq : first_bc ] , from which we get where is the limiting value of the outer solution ( e.g. , see [ eq : match2 ] ) . combining this with [ order 1 bc ] , we obtain further , it follows from the definition of the phase - field function that is an odd function .therefore the integral on the right - hand side of [ order 1 bc 1 ] is equal to zero .thus and so by our arguments below [ eq : dda_outer ] , the ddm with bc1 is second - order accurate in .when the bc2 approximation is used , we obtain accordingly , in the inner region , we obtain since , we get as desired . from [ eq : first_bc ] we have using that , this gives where combining [ u1 inner solve b2,u1 inner solve b2 a , order 1 bc ] we get direct calculations show that and using these in [ order 1 bc 2 ], we get this shows that the ddm with bc2 is only first - order accurate because the solution of is in general not equal to , e.g. , . 
in order to modify bc2 to achieve second - order accuracy ,we introduce such that that is , perturbs only the higher order terms in the inner expansion and is chosen to cancel the term on the right - hand side of [ order 1 bc 2 final ] in order to achieve , which in turn implies that and the new formulation is second - order accurate .the correction does not affect the or orders in the system .thus , and are unchanged from the previous subsection .now becomes so we wish to determine such that two simple ways of achieving this are to take putting everything together , we can obtain bc2 m , a second - order version of bc2 , using the resulting ddm is an elliptic system since , as required for the robin boundary condition . in each instance , this is guaranteed if the interface thickness is sufficiently small . thus far , we have taken advantage of integration to achieve second - order accuracy .alternatively , one may try to add correction terms to directly obtain second - order boundary conditions without relying on integration .for example , from [ order 1 bc ] to achieve second - order accuracy we may take where is a functional of .this provides another prescription of how to obtain a second - order accurate boundary condition , which could in principle lead to faster asymptotic convergence since it directly cancels a term in the inner expansion of the asymptotic matching . as an illustration ,let us use bc1 as a starting point even though this boundary condition is already second - order accurate . through the prescription in [ local correction ]above , we derive another second - order accurate boundary condition . to see this ,write then from [ local correction ] we get this can be achieved by taking where is the signed distance to as defined earlier .note that we can also achieve second - order accuracy by taking instead where we use the fact that the integral involving vanishes in [ order 1 bc ] .we refer to these choices , which are by no means exhaustive , as we remark , however , that this prescription may not always lead to an optimal numerical method .for example , when using bc1m2 , the system is guaranteed to be elliptic when for , which puts an effective restriction on the interface thickness depending on the values of and .when bc1m1 is used , the situation is more delicate since ellipticity can not be guaranteed when due to the term .recall that outside the original domain and so this issue is associated with the extending the modified boundary condition outside . in future work, we plan to consider different extensions that automatically guarantee ellipticity . to summarize, we have shown that the ddm is a second - order accurate approximation of the system when bc1 , bc2 m , and bc1 m are used .when bc2 is used , the ddm is only first - order accurate .since the poisson equation with neumann boundary conditions does not have a unique solution , we instead consider the steady reaction - diffusion equation with neumann boundary conditions , again we consider a general ddm approximation , under the same conditions on as in the previous section , the outer solution now satisfies as in the robin case , if satisfies [ eq : poiss_neumann ] and then the ddm is first - order accurate .however , if , then the ddm is second - order accurate . 
to construct the boundary condition for , we follow the approach from the robin case and combine [ eq : match3,laplace decomp ] to get assuming that satisfies the system as demonstrated below , and to get as .the inner expansion of [ eq : reacdiff_general_dda ] is analogous to the robin case derived in [ sec : dda_robin_inner ] .as before , if then and , the limiting value of the outer solution . still holds at the next order and so to get the desired boundary condition for , we need analogously to [ eq : ddarobin0 ] the next order equation is subtracting from [ eq : o1 ] we get where we have used as justified below . integrating , we obtain as in the robin case , if the right - hand side of [ order 1 bc neumann ] vanishes then we may conclude that since satisfies with zero neumann boundary conditions .we next analyse the boundary conditions bc1 and bc2 . when the bc1 approximation is used , we obtain and accordingly , we find that and [ integral constraint neumann ] holds .thus satisfies the system as claimed above . at the next order , from [ eq : first_bc , psi expansion for bc1 neumann ]we obtain thus , combining [ hat u1 neumann , order 1 bc neumann ] we get from which we conclude that and the ddm with bc1 is second - order accurate .when the bc2 approximation is used , we obtain and analogously to the case when bc1 is used , , [ integral constraint neumann ] holds , and satisfies the system . at the next order , from [ eq : first_bc , psi expansion for bc2 neumann ]we obtain combining [ hat u1 neumann bc2,order 1 bc neumann ] we get from which we conclude that and the ddm with bc2 is second - order accurate for the neumann problem as well , which is different from the robin case .analogously to the robin case , to achieve second - order accuracy we may also take following the same reasoning , alternative boundary conditions analogous to those in [ bc1 m versions ] may be derived note that as in the robin case , when bc1m1 is used ellipticity can not be guaranteed when due to the term . to summarize, we have shown that the ddm is a second - order accurate approximation of the system when bc1 , bc2 , and bc1 m are used .the equations are discretized on a uniform grid with the second - order central - difference scheme .the discrete system is solved using a multigrid method , where a red - black gauss - seidel type iterative method is used to relax the solutions ( see ) .the equations are solved in two - dimensions in a domain ^ 2 ] .again let the analytic solution in be so that , , and . in this casethe curvature is zero almost everywhere . to initialize the square domain , the signed - distance function is defined as the phase - field functionis then calculated directly from the signed - distance function in [ eq : characteristic ] .again let be the circle centred at with radius , but now consider the case where the analytic solution is which corresponds to , and note that in the ddm equations , is extrapolated constantly in the normal direction off of the boundary . 
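as an illustration of the smoother referred to above, a single red-black gauss-seidel sweep for the constant-coefficient model problem $\nabla^{2}u=f$ on a uniform grid with spacing $h$ can be written as follows. this is a simplified sketch with array names of our own: the variable-coefficient ddm operator requires a correspondingly modified stencil.

....
/* one red-black gauss-seidel sweep for the 5-point discretisation of
 * laplace(u) = f on an n-by-n grid with spacing h; boundary values are
 * assumed to be stored in the boundary entries of u.  sketch only: the
 * actual ddm operator has variable coefficients through the phase field. */
void rb_gauss_seidel_sweep(int n, double h, double **u, double **f) {
    for (int color = 0; color < 2; color++) {
        for (int i = 1; i < n - 1; i++) {
            for (int j = 1; j < n - 1; j++) {
                if ((i + j) % 2 != color) continue;
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                  u[i][j-1] + u[i][j+1] - h * h * f[i][j]);
            }
        }
    }
}
....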
for the final neumann case we again let ^ 2 ] , and we consider a case that corresponds to the neumann case 4 where the analytic solution is where . this corresponds to the boundary function and the surface laplacian of the analytic function along the boundary are where along the bottom and top boundaries, and along the left and right boundaries.

the convergence results calculated with the norm are presented in [ fig : robin_cases-plot1, fig : robin_cases-plot2, fig : robin_cases-plot3 ] and [ tab : robin_error_per_eps ]. shows the results for case 2 where the norm is used. again the results indicate that bc1m1 performs better than bc1, although both methods are second-order accurate, as predicted by our analysis. the results also show that bc1 gives better results than bc2, which is approximately first-order accurate, as also predicted by theory. further, as in the neumann case, bc2 is seen to require very fine grids to converge. for small , the requirement exceeds our finest grid. the modified bc2m1 and bc2m2 schemes are also tested. the results with bc2m2 are almost indistinguishable from the results with bc2m1, so only the latter results are shown in the following figures. all results are listed in [ tab : robin_error_per_eps ] and [ tab : robin_error_per_eps_infty ]. the bc2m schemes are shown to perform better than the bc2 scheme, but they also require very fine grids to converge. further, the orders of accuracy of bc2m1 and bc2m2 seem to deteriorate somewhat at the smallest values of . as discussed in [ sec : dda_robin_bc2m ], this could be due to the influence of higher order terms in the expansion, or the amplification of error, and is under study.

[ fig : robin_cases-plot1, fig : robin_cases-plot2, fig : robin_cases-plot3 : log-log convergence plots of the error versus the interface thickness ( from 0.8 down to 0.025 ) for the bc1, bc1m1, bc2 and bc2m1 approximations. ]

[ tab : robin_error_per_eps and tab : robin_error_per_eps_infty : errors and estimated convergence orders for bc1, bc1m1, bc2, bc2m1 and bc2m2 at interface thicknesses from 0.8 down to 0.025; the trends they show are summarised in the text above. ]

shows a plot of the solutions of case 1 with at . the plot shows the solutions with bc1 ( black dashed ), bc1m1 ( black dotted ), bc2 ( blue dashed ) and bc2m1 ( blue dotted ). we see that the solutions with the modified schemes bc1m1 and bc2m1 perform better than the corresponding schemes with bc1 and bc2.
[ figure : the solutions with bc2m1 and bc2m2 are indistinguishable, so only bc2m1 is shown. ( a ) shows the full slice, where the domain boundary is depicted as thin vertical lines at . ( b ) a zoom-in that shows the solutions near the left boundary. ]

we have performed a matched asymptotic analysis of the ddm for the poisson equation with robin boundary conditions and for a steady reaction-diffusion equation with neumann boundary conditions. our analysis shows that for certain choices of the boundary condition approximations, the ddm is second-order accurate in the interface thickness. however, for other choices the ddm is only first-order accurate. this is confirmed numerically and helps to explain why the choice of boundary-condition approximation is important for rapid global convergence and high accuracy. in particular, the boundary condition bc1, which arises from representing the surface delta function as , is seen to give rise to a second-order approximation for both the neumann and robin boundary conditions and thus is perhaps the most reliable choice. the boundary condition bc2, which arises from approximating the surface delta function as , yields a second-order accurate approximation for the neumann problem, but only first-order accuracy for the robin problem. in addition, bc2 requires very fine meshes to converge. our analysis also suggests correction terms that may be added to yield a more accurate diffuse-domain method. we have presented several techniques for obtaining second-order boundary conditions and performed numerical simulations that confirm the predicted accuracy, although the order of accuracy may deteriorate at the smallest values of , possibly due to amplification errors associated with conditioning of the system or the influence of higher order terms in the asymptotic expansion. this is currently under study. further, the correction terms do not improve the mesh requirements for convergence. a common feature of the correction terms is that the interface thickness must be sufficiently small in order for the ddm to remain an elliptic equation. in addition, one choice of boundary condition involves the use of the surface laplacian of the solution, which could in principle lead to faster asymptotic convergence since it directly cancels terms in the inner expansion of the asymptotic matching. however, the extension of this term outside the domain of interest can cause the loss of ellipticity of the ddm. as such, this is an intriguing but not a practical scheme. nevertheless, as a proof of principle, we still considered the effect of this term by using the surface laplacian of the analytic solution in the ddm. we found that this choice gave the smallest errors in nearly all the cases considered.
by incorporating different extensions of the boundary conditions in the exterior of the domain that automatically guarantee ellipticity, we aim to make this method practical .this is the subject of future investigations .we plan to extend our analysis to the dirichlet problem where the boundary condition approximations considered by li et al . seem only to yield first - order accuracy .our asymptotic analysis thus has the potential to identify correction terms that can be used to generate second - order accurate diffuse - domain methods for the dirichlet problem .kyl acknowledges support from the fulbright foundation for a visiting researcher grant to fund a stay at the university of california , irvine .kyl also acknowledges support from statoil and gdf suez , and the research council of norway ( 193062/s60 ) for the research project enabling low emission lng systems .jl acknowledges support from the national science foundation , division of mathematical sciences , and the national institute of health through grant p50gm76516 for a center of excellence in systems biology at the university of california , irvine .the authors gratefully thank bernhard mller ( ntnu ) and svend tollak munkejord ( sintef energy research ) for helpful discussions and for feedback on the manuscript .the authors also wish to thank the anonymous reviewers for comments that greatly improved the manuscript . , _ an efficient second - order accurate cut - cell method for solving the variable coefficient poisson equation with jump conditions on irregular domains _ , int .fluids , 52 ( 2006 ) , pp . 723 - 748 . , _ solving partial differential equations on irregular domains with moving interfaces , with applications to superconformal electrodeposition in semiconductor manufacturing _ , j. comput . phys ., 227 ( 2008 ) , pp .6411 - 6447 . , _ a diffuse - interface approach for modeling transport , diffusion and adsorption / desorption of material quantities on a deformable interface _ , communications in mathematical sciences , 7 ( 2009 ) , pp. 10091037 . ,_ a second - order sharp numerical method for solving the linear elasticity equations on irregular domains and adaptive grids application to shape optimization _ , j. comput .phys . , 233 ( 2013 ) ,430 - 448 .
|
in recent work , li et al . ( comm . math . sci . , 7:81 - 107 , 2009 ) developed a diffuse - domain method ( ddm ) for solving partial differential equations in complex , dynamic geometries with dirichlet , neumann , and robin boundary conditions . the diffuse - domain method uses an implicit representation of the geometry where the sharp boundary is replaced by a diffuse layer with thickness that is typically proportional to the minimum grid size . the original equations are reformulated on a larger regular domain and the boundary conditions are incorporated via singular source terms . the resulting equations can be solved with standard finite difference and finite element software packages . here , we present a matched asymptotic analysis of general diffuse - domain methods for neumann and robin boundary conditions . our analysis shows that for certain choices of the boundary condition approximations , the ddm is second - order accurate in . however , for other choices the ddm is only first - order accurate . this helps to explain why the choice of boundary - condition approximation is important for rapid global convergence and high accuracy . our analysis also suggests correction terms that may be added to yield more accurate diffuse - domain methods . simple modifications of first - order boundary condition approximations are proposed to achieve asymptotically second - order accurate schemes . our analytic results are confirmed numerically in the and norms for selected test problems . karl yngve lervg and john lowengrub numerical solution of partial differential equations , phase - field approximation , implicit geometry representation , matched asymptotic analysis .
|
graph processing continues to increase in popularity with the emergence of applications such as social network mining , real - time network traffic monitoring , etc . due to their data - intensive nature, the performance and dependability of such applications depends upon how well the choice of runtime data structure matches the input data characteristics and availability of memory ( low memory can prevent the applications from completing ) .programmers often choose specific , fixed data structures when developing graph applications .the memory used by the data structure can be greatly influenced by the input data characteristics .thus , it is possible that the characteristics of data may not match the choice of the data structure .this is particularly problematic when the application is expected to encounter a wide range of input data characteristics , and these characteristics may change during the course of execution .for example , matrices can be represented in the compressed column storage ( ccs ) format , appropriate for sparse matrices , or the array representation , appropriate for dense matrices .an application , e.g. , matrix multiplication , programmed to use the sparse ccs format , could take longer to complete when presented with a dense input . similarly ,evolving graphs , where nodes or edges are added during execution , are another example of changes in input data characteristics .the data structure selection based on input pre - analysis will fail under such scenario .therefore , in our approach , _ adaptive applications tailor the choice of data structure to match input data characteristics at runtime ._ since real - world applications often do not run in isolation , they share the available memory resources with other applications . there could be times where the application experiences a resource crunch , caused by other running programs . in this scenariothe performance of the application may be degraded , or the application may even be prematurely terminated . therefore , in our approach , _ adaptive applications tailor the choice of data structure to match availability of memory at runtime ._ it is well known that for data - intensive applications , the choice of data structure is critical to memory usage and execution time .there has been previous work on data structure identification , as well as data structure prediction and selection . while these prior approaches help in data structure selection , none of them support switching from one data structure to another as the application executes .there has also been work on dynamically adapting the representation of individual data items for impacting memory usage and performance employing data compression or replacing float data with int data .these techniques are orthogonal to our work that switches between alternate high level data structures .other approaches dynamically switch between implementations .elastin allows a program to switch between versions using dynamic software update techniques ; however , it does not consider switching between alternate high level data structures .k42 operating system supports hot - swapping classes as a mechanism for performing dynamic updates .scenario based optimization , a binary level online optimization technique dynamically changes the course of execution through a route meant for a particular runtime scenario as predefined by developer .wang et al . 
proposed dynamic resource management techniques based on user - specific , application - specific and hardware - specific management policies . in contrast , our objective is to simultaneously support alternate data structures and switch between them . in this paperwe consider several widely - used graph applications and study how data structure representations impact execution time and memory consumption on a range of input graphs ( section [ sec_motivation ] ) .the input graphs consist of both real - world graphs such as wikipedia metadata , gnutella network topology ( from the snap library ) , and synthetic graphs . based upon the observations from our study , we design a concrete adaptation system that supports switching between alternate representations of the data in memory ( section [ sec_approach ] ) .we demonstrate that the cost of performing the runtime adaptations is quite small in comparison to the benefits of adaptation ( section [ sec_eval ] ) .moreover , the lightweight monitoring we employ to detect adaptation opportunities imposes acceptable overhead even when no adaptations are triggered at runtime .thus , our adaptive versions have nearly the same performance as the most appropriate non - adaptive versions for various input characteristics .we compare our approach with related work in section [ sec_rel ] , and in section [ sec_conc ] we conclude .in this section we study the execution time and memory usage behavior of a set of graph applications . the goal of this study is two fold .first , we want to quantify how input data characteristics and the choice of data structures used to represent the graphs impact memory usage and execution time .second , we would like to develop a simple characterization of program behavior that can be used to guide data structure selection at runtime .we considered six graph algorithms : muliple source shortest path ( mssp ) finds the shortest path from all the nodes to every other node ; betweenness centrality ( bc ) computes the importance of a node in a network ; breadth first search ( bfs ) traverses the graph with each node as root per iteration ; boruvka s algorithm ( mst - b ) and kruskal s algorithm ( mst - k ) , finds the minimum spanning tree ; preflow push ( pp ) , finds out the maximum flow in a network starting with each individual node as source .the core data structure used in these applications is a graph .we consider two different representations of graphs : adjacency list ( ` adjlist ` ) ; and adjacency matrix ( ` adjmat ` ) .when the graph is sparse , it is expected that ` adjlist ` will use less memory than ` adjmat ` . on the other hand , for highly dense graphs ` adjmat ` may use less memory than ` adjlist ` .determining whether a pair of nodes is connected by an edge can be done in constant time using ` adjmat ` while it may require searching through a list with ` adjlist ` .thus , the runtime memory usage and execution time depend upon the sparsity , or conversely the density , of the input graph . the input graphs with relevant properties and densities were generated to study program behavior . 
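to illustrate the kind of input characterisation used in this study, the following sketch computes a graph density and compares it against an application-specific threshold such as those summarised in table [ dsranges ]. the helper names and the directed-graph density formula are our own, not taken from the benchmark sources.

....
/* sketch: choose an initial representation from the input density.
 * density is taken here as the fraction of possible directed edges
 * present; thresholds are application-specific (cf. table [dsranges]). */
typedef enum { ADJ_LIST, ADJ_MATRIX } repr_t;

double graph_density(long nodes, long edges) {
    if (nodes < 2) return 0.0;
    return (double)edges / ((double)nodes * (double)(nodes - 1));
}

repr_t choose_representation(long nodes, long edges, double density_threshold) {
    double d = graph_density(nodes, edges);
    return (d < density_threshold) ? ADJ_LIST : ADJ_MATRIX;
}
....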
to observe the trade - offs of using the alternative representations of graphs, we executed each of the programs using the two representations .the programs were run on inputs consisting of randomly - generated graphs with varying density which is computed as , where and are number of nodes and edges in the graph .the inputs were selected such that the trade - offs could be exposed easily .the results of these executions are summarized as follows : we present the relative memory usage and execution time of program versions in table [ tbl : perfranges ] . in particular, we computed the ratios of memory usages and execution times for ` adjlist ` and ` adjmat ` versions across all graph densities considered .the minimum and maximum values of observed ratios is given in table [ tbl : perfranges ] .as we can see , in terms of both memory usage and execution time , the relative performances vary a great deal . moreover, neither representation gives the best memory usage or execution time performance across all graph densities .hence , it is crucial to select the data structure at runtime , based upon the input data characteristics . for the purpose of runtime data structure selection ,we characterize the behavior of each application as shown in table [ dsranges ] .note that graph densities are divided into three subranges . in the first range ( e.g. , for mssp )the ` adjlist ` is both more memory- and time - efficient than ` adjmat ` . in the second range( e.g. , ) ` adjlist ` is more memory - efficient while ` adjmat ` is more time - efficient .thus , the selection can be made at runtime based upon memory availability . finally , in the third range ( e.g. , for mssp ) `adjmat ` is both more memory and time efficient than ` adjlist ` .[ sec_approach ] [ fig : framework ] we now present our approach for building adaptive applications ; an overview is shown in figure [ fig : framework ] . the starting point is the _ annotated source code _ : in the source code , programmers add _ annotations _ to identify the alternative data structures , e.g. , ds and ds , and functions operating on them .the compiler takes heed of these annotations and generates the _ source code with transition logic _ , that is capable of dynamically switching among alternative data structure representations .the transitions are allowed at selected program points where the processing of an input item has just completed and that of another item is about to begin .lastly , the _ adaptation module _ consists of the runtime monitors for tracking input data characteristics and memory usage as well as the code that implements the transition policy that triggers the switch from one data structure representation to another . the adaptation can be triggered by a mismatch between the input data characteristics and the data structure currently in use . to discover this mismatch the characterization of application behavior as performed in the previous section is used .the adaptation can also be triggered by the system during high memory usage . 
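a minimal sketch of the corresponding selection logic, assuming a transition policy of the kind illustrated later ( time-optimal ranges, memory-optimal ranges and a memory threshold ) and using illustrative names, is given below.

....
/* sketch of the adaptation decision: prefer the time-optimal structure
 * unless free memory drops below the programmer-defined threshold, in
 * which case fall back to the memory-optimal one.  names are ours. */
typedef struct {
    double time_split;       /* density below which ds1 is faster          */
    double mem_split;        /* density below which ds1 uses less memory   */
    long   mem_threshold_mb; /* programmer-defined free-memory threshold   */
} policy_t;

/* returns 1 for ds1 (e.g. adjacency list), 2 for ds2 (adjacency matrix) */
int select_ds(double density, long free_mem_mb, const policy_t *p) {
    if (free_mem_mb < p->mem_threshold_mb)          /* system-triggered   */
        return (density < p->mem_split) ? 1 : 2;    /* memory-optimal     */
    return (density < p->time_split) ? 1 : 2;       /* program-triggered  */
}
....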
to enable adaptation , the programmer implements the alternate data structures .in addition , a compute - intensive function during whose execution adaptation may be performed , must be coded as follows .first , it should contain a variable that tracks the progress in terms of processing steps defined as either the amount of input processed or results produced .second , it should be written so that it can commence execution from any point between two processing steps .the latter is needed because we allow execution to switch from one data representation to another at these points .we used a set of pragmas in our approach to identify alternate data structure representations , enable generation of code that transfers code from one representation to another , and identify program points where transitions may be performed .first , the programmer identifies the data structure to the compiler .the programmer annotates the alternate representation of data structures in multiple files with ` # pragma adp(<src_filename > , " data1_def " ) ` . `< src_filename > ` s presence clearly differentiates the alternate representation of the data structure in multiple files .if there are multiple data structures with alternate representations in different files , then they could be annotated with a different index , e.g. , ` # pragma adp(<src_filename > , " data2_def " ) ` .second , the programmer uses several pragmas to identify the key methods ( insert , delete , traverse , and fetch ) that manage data stored in the data structure .another pragma allows access to the initialization parameters which must be migrated from one data structure to another .all of this information is used to generate the code for data and function migration when we switch between data structures .the adaptation module decides whether or not to switch between data structures based upon the input from runtime monitors and the transition policy . since the adaptation could be program - triggered or system - triggered , there are two kinds of monitors which are required by the adaptation module .the input data monitor captures input data characteristics and the memory monitor reports the available system memory .the transition policy defines which data structure representation is better for what range of input data characteristics in terms of execution time and memory consumption .its specification consist of three parts , as illustrated below : .... / * execution time * / ds1 [ 0,9 ) ds2 [ 9,100 ] /*memory*/ ds1 [ 0,25 ) ds2 [ 25,100 ] /*threshold*/ memory 100 .... the first part indicates the ranges for which a particular data structure representation is best in terms of execution time : under ` execution time ` in the figure , the input data property for which ` adjlist ` ( ds1 ) is better is denoted by directives ` ds1 ` , which means that ` adjlist ` is favorable in terms of execution time if the input data property or density of the graph ( in case of mssp ) is in between 0% and 9% .the second part consists of the ranges of the input data property for which a particular data structure representation is better in terms of memory . 
according to the figure , under ` memory ` , we see that ` adjlist ` ( ds1 ) is better when the density of the input graph is between 0% and 25% while ` adjmatrix ` ( ds2 ) is better when the density of the graph is between 26% and 100% .the third part is the threshold for memory , defined by the programmer to notify the system that if the available memory is below this threshold then , regardless of input data characteristics always use the representation requiring least memory ; in the figure ( under ` threshold ` ) the threshold is set to 100 mb . .... datamigrationds1ds2(void * ds1 , void * ds2 ) { initializationparameters * ip ; ip = getinitializationparameter(ds1 ) ; initializeds2(&ds2,ip ) ; transferdatads1ds2(&ds1,&ds2 ) deleteds1(&ds1 ) ; } transferdatads1ds2(void * * ds1 , void * * ds2 ) { i = 0 ; void * datavalue ; for(i = 0;i < * * ds1->maxdata;i++ ) { datavalue = fetchdatads1(i,*ds1 ) ; if(datavalue ! = null ) { insertdatads2(*ds2 , datavalue , i);deletedatads1(i,*ds1 ) ; } } } .... [ fig : src_datamigration ] .... # pragma adp("ds1 " , " ds1_op1 " ) void computemssp_ds1 ( void * graph , void * rs , int * progress ) ; ... computemssp_ds1(graph , rs , progress ) ; ... .... & .... //#pragma adp("ds1 " , " ds1_op1 " ) void computemssp_ds1 ( void * graph , void * rs , int * progress ) ; ... callop1(graph , rs , progress , startds ) ; ... .... + the data structure transition logic is inserted into the source files by the compiler , guided by the pragmas .this transition logic carries out on - the - fly transitions from one data structure representation to another whenever required . to accomplish the transition ,the in - memory data must be transformed from one representation to another , along with the functions operating on them .the transition logic handles this by function migration and in - memory data methods contained in the logic .when the code for transition logic is inserted , appropriate header files are also inserted such that source code after modification compiles and links properly .to avoid recomputation of already - computed results , the result transfer logic ( injected into code along with the transition logic ) will transfer the already - computed results from one representation to the other representation ..... void callop1(void * ds , void * rs , int progress , currentds ) { extern int changereq ; void * newds ; void * newrs ; while(progress < 100 ) { if(changereq = = 1 ) { switch(currentds ) { case 1 : currentds = 2 ; datamigrationds1ds2(ds , newds ) ; resultmigrationrs1rs2(rs , newrs ) ; ds = newds ; newds = null ; rs = newrs ; newrs = null ; computemsspds2(ds , rs , progress ) ; break ; case 2 : currentds = 1 ; datamigrationds2ds1(ds , newds ) ; resultmigrationrs2rs1(rs , newrs ) ; ds = newds ; newds = null ; rs = newrs ; newrs = null ; computemsspds1(ds , rs , progress ) ; break ; } } else { switch(currentds ) { case 1 : computemsspds1(ds , rs , progress ) ; break ; case 2 : computemsspds2(ds , rs , progress ) ; break ; } } } } .... an example data migration function is shown in figure [ fig : src_datamigration ] .the code in the figure transfers the data from the data structure representation ds1 to another representation ds2 .it begins with initialization of the ds2 data structure representation .the initialization parameters are fetched from ds1 and they consist of standard parameters that are invariant in both ds1 and ds2 .for example , in the mssp benchmark the invariant data is the number of nodes . 
in the pp benchmarkthe invariant data consists of number of nodes , the height , capacity and flow of each node .the ` transferdata ` function is generated from ` traversedata ` function of ds1 as provided by the developer .this function traverses through the data by reading each data value , migrating it to ds2 representation using ` insertdatads2 ` and also deleting that data from ds1 using ` deletedatads1 ` thus releasing memory .the ` deleteds1 ` clears memory which contains the data regarding the initialization parameters .the transition between implementations , i.e. , switching from one set of functions operating on representation ds1 to functions operating on representation ds2 must be carefully orchestrated .the developer denotes an operation with a directive such as ` # pragma adp("ds1","data1_op1 " ) ` , which informs the compiler that the function is compute - intensive , as shown in figure [ fig : code1 ] .any call to that function is replaced by our customized method , which checks and executes operations with the suitable data structure . in this example ` computemssp_ds1 ` is replaced by ` callop1 ` .the additional parameter , ` startds ` , denotes the type of the current data structure representation in memory .the other three parameters are the data structure , a progress gauge , and the result set for storing the result .for example in the case of mssp , a method that finds mssp has the signature ` void computemssp_ds1(void * graph , void * rs , int * progress ) ` .the first parameter is the input graph and the second parameter ` rs ` stands for the result set and its declaration must be annotated by the programmer with ` # pragma adp("ds1 " , " data1_res1 " ) ` .the last parameter identifies the progress , which is the iteration number of the outer most long running loop .for example , if the method is called with a progress value 10 , then the execution is started from progress value 10 and continuously updated with the loop iteration number ..... void computemssp_ds1 ( void * graph , void * rs , int * progress ) { ... # pragma adp("ds1 " , " ds1_op1_safe " ) ... } .... & .... void computemssp_ds1 ( void * graph , void * rs , int * progress ) { ... //#pragma adp("ds1 " , " ds1_op1_safe " ) if(checkchangestatus()==1 ) { * progress = curprogress ; return ; } } .... + the detailed function selection and migration activity is shown in figure [ fig : src_fnmigration]for mssp benchmark .an external variable ` changereq ` , set by the adaptation module , is checked ( line 4 ) . if a transition has been requested , then first the data is migrated from one data structure representation to another ( lines 6 and 12 ) .next , if needed , the result is migrated from one representation to another ( lines 7 and 13 ) .finally , the corresponding mssp function for that data structure is called ( lines 9 and 15 ) and the operation is resumed from the progress point . if there is a change request from the adaptation module , then operation is paused and it returns back to ` callop1 ` .this process continues until the mssp computation completes .the question arises where ongoing mssp computations should be interrupted to check if the adaptation module has requested a change or not . 
to solve this problem , we rely on the programmers to use the directive ` # pragma adp("ds1 " , " ds1_op1_safe " ) ` to indicate the safe transition points in ` operation1 ` as shown in figure [ fig : code2 ]this directive notifies our framework that , if the operation is paused and the transformation is performed at that point , then there is minimal recomputation of result .this is typically the end of an iteration in long - running loops . since the programmer is well aware of the long running loops in the compute - intensive function , it is best to have the programmer mark the points appropriate for the insertion of adaptation module interrupts .the directive is replaced by an interrupt which checks if there is a change required and thus returns back to ` callop1 ` .[ pt - table ][ sec_eval ] in this section we evaluate the performance of adaptive versions of graph algorithms and compare them with corresponding non - adaptive versions of the applications .the goals of these experiments are as follows .first , we evaluate the efficiency of our approach by measuring its benefits and overhead .second , we consider the benefits of adaptation under two scenarios : adaptation triggered by the input characteristic , i.e. , graph density ; and system triggered adaptation .all experiments were run on a 24-core machine ( 4 six - core amd opteron 8431 processors ) with 32 gb ram .the system ran ubuntu 10.04 , linux kernel version 2.6.32 - 21-server .the sources were compiled with gcc 4.4.3 . _ real world data - sets : _ we evaluate our system on some of the real - world graphs from the snap graph library .the first graph , wiki - vote , contains the who - votes - for - whom graph in wikipedia administrator elections .this graph has 7,115 nodes and 103,689 edges .the second graph , p2p - gnutella , is a snapshot of gnutella , a decentralized peer to peer file sharing network from august 9 , 2002 .this graph has 8,114 nodes representing hosts and 26,013 edges representing the connections between these hosts . for experiments , in cases where a more dense graph was needed , we added edges in both the graphs to raise the required density .the programmers need to add annotations to transform off - the - shelf applications to adaptive ones .in addition to this , programmers also need to modify the compute - intensive methods so they can be executed in incrementalized fashion .the number of pragmas added and the number of additional lines of code added to modify the methods are shown in table [ tbl_preffort ] .as we can see , these numbers are fairly modest . .programming effort . [ cols="<,^,^,^,^,^,^,^",options="header " , ] the additional execution time taken by the adaptive version over the non - adaptive ` adjlist ` version can be divided into three categories : time spent on converting from one data structure representation to another ; time spent on runtime monitoring and transition logic to trigger adaptation ; and the time lost due to running the application in suboptimal mode , i.e. , with the ` admat ` data structure .the breakdown of the extra execution time into the three categories is shown in table [ tbl_breakdown ] . 
as we can see, the majority of the time is spent on runtime monitoring and transition logic .the next significant component is the time spent due to running the program in the suboptimal configuration before the transition occurs .note that the time spent on converting one data structure into another ( column 2 ) is the least .an intuitive way to visualize adaptation is to plot how the memory used by applications varies before , during , and after adaptation . in figure[ pt - mem ] we show how memory ( -axis ) varies over time ( -axis ) when starting the application in the ` adjmat ` representation and then through adaptation , the application transitions to ` adjlist ` . the charts point out several aspects .first , since we are using sparse graphs , as expected , the memory used is reduced significantly ( tens of megabytes ) when we switch from the ` adjmat ` to ` adjlist ` representation .second , the switch from one data structure to the other takes place fairly early in the execution of the program .third , the time to perform adaptation and the extra memory used during adaptation are very low . in figure [ overall ]we show the execution time of the adaptive version for varying input densities over the range where we expect the adaptive application to switch from the ` adjlist ` to the ` adjmat ` representation . for these experiments ,we have used graph size of 4000 nodes and varied densities .the execution times of the non - adaptive versions that use fixed representations ( ` adjlist ` and ` adjmat ` ) are also shown .as we can see , the performance of the adaptive application is very close to the best of the two non - adaptive versions .[ st - wiki ] in this section we study the second scenario , i.e. , when the adaptation is triggered by the system .the graph used for these experiments was p2p - gnutella at 20% density .however , we select ` adjmat ` as the initial data structure representation so that no adaptation was triggered due to the mismatch between the data structure and graph density .instead we provided the program with a system trigger that forces the program to reduce its memory consumption .this causes adaptation to be triggered , and the program to switch from ` adjmat ` to ` adjlist ` representation to save memory .as expected , the execution takes longer .since the conversion from one representation to another can be triggered at any time during a program s execution , in this study we present data for different trigger points after 25% , 50% , and 75% of total processing .we controlled the trigger point by tracking the amount of processing that has been completed .the results are presented in figure [ st - table ] .the execution times of the following versions are presented : non - adaptive version in the ` adjmat ` representation ( leftmost bar ) ; three adaptive versions with different trigger points ( middle three bars ) ; and non - adaptive ` adjlist ` ( rightmost bar ) .all times are normalized with respect to the time for non - adaptive ` adjlist ` .as we can see , the execution time of the adaptive version is always greater than the non - adaptive ` adjmat ` version and less than the non - adaptive ` adjlist ` version . in other words ,if large amounts of memory are available for longer duration , the adaptive version yields greater reduction in execution time over the non - adaptive ` adjlist ` version . 
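the trigger points used in these system-triggered experiments can be emulated with a small amount of bookkeeping on the progress gauge; the sketch below assumes a monitoring hook called whenever progress is updated, and the 25/50/75% values are the trigger points used above .
....
/* sketch: firing system triggers at fixed fractions of the total work,
   as in the 25%, 50% and 75% trigger-point experiments.  progress is the
   same gauge maintained by the compute-intensive function; the arrays and
   the hook name are illustrative.                                        */

extern int changereq;

static const int trigger_pct[] = { 25, 50, 75 };
static int       fired[]       = {  0,  0,  0 };

void check_triggers(int progress /* 0..100 */)
{
    for (int k = 0; k < 3; k++) {
        if (!fired[k] && progress >= trigger_pct[k]) {
            fired[k] = 1;
            changereq = 1;   /* ask the running operation to adapt */
        }
    }
}
....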
to study the behavior of our approach when there are multiple transitions , we ran experiments on wiki - vote at 10% density in the following scenario .for each benchmark , the execution was started with ` adjmat ` and then switched to ` adjlist ` and vice versa after 20 % , 40% , 60% and 80% .we controlled the triggers for memory changes from the system by tracking the amount of processing that has been completed .we present the results in figure [ st - wiki ] .we can clearly see that , during a resource crunch when available memory decreases , our applications adapt to decrease their memory requirements accordingly , hence running slower ; after the resource crunch is over , our applications re - assume the uncompressed representation and their performance increases .first , our approach is only useful when the alternative data structures offer a significant trade - off between memory usage and execution time .for example , for the _ agglometric clustering _benchmark , when we tried using two alternate data structures of kd - tree and r - tree , we observed no significant trade - off between memory usage and execution time .since there is a need to bulk load the data , the kd - tree always outperforms the r - tree .second , our approach is only useful when the application is sufficiently compute and data intensive to justify the cost of runtime monitoring and transition logic .for example , in the case of the max cardinality bipartite matching benchmark , although the trade - off exists , the benchmark is not sufficiently compute - intensive to justify the adaptation cost .[ sec_rel ] there is a large body of work on program transformations applied at compile - time or runtime to enhance program performance , which also influences resource usage .some of these techniques can be used to support adaptation .contexterlang supports the construction of self - adaptive software using different call back modules .compiler - enabled adaptation techniques include altering of the contentiousness of an application , which enables co - location of applications without interfering with their performance ; data spreading migrates the application across multiple cores ; adaptive loop transformation allows a program to execute in more than one way during execution based on runtime information .multiple applications that are running on multicore systems can significantly impact each other s performance as they must share hardware resources ( e.g. 
, last level cache , access paths to memory ) .the impact of interference on program performance can be predicted and estimated , and contention management techniques guided by last level shared cache usage and lock contention have been developed .huang et al .proposed self adaptive containers where they provide the developer with a container library which adjusts the underlying data structure associated with the container to meet service level objectives ( slo ) ; adaptation occurs during slo violations .similarly , coco allows adaptation by switching between java collections during execution depending on the size of collection .these methods are orthogonal to our approach as they do not have scope for user - defined data structures , and the space - time tradeoff is not taken into consideration .[ sec_conc ] graph applications have resource requirements that vary greatly across runs due to differences in graph characteristics ; moreover , the required memory might not be available due to pressure from co - located applications .we have observed that data structure choice is crucial for allowing the application to get the best out of available resources .we propose an approach that uses programming and runtime support to allow graph applications to be transformed into adaptive applications by choosing the most appropriate data structure .experiments with graph - manipulating applications which adapt by switching between data structure representations show that our approach is easy to use on off - the - shelf applications , is effective at performing adaptations , and imposes very little overhead .this work was supported in part by nsf grants ccf-0963996 and ccf-1149632 .this research was sponsored by the army research laboratory and was accomplished under cooperative agreement number w911nf-13 - 2 - 0045 ( arl cyber security cra ) .the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s .government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation here on .
|
graph processing is used extensively in areas from social network mining to web indexing . we demonstrate that the performance and dependability of such applications critically hinge on the graph data structure used , because a fixed , compile-time choice of data structure can lead to poor performance or to applications that are unable to complete . to address this problem , we introduce an approach that helps programmers transform regular , off-the-shelf graph applications into adaptive , more dependable applications in which adaptations are performed via runtime selection from alternate data structure representations . using our approach , applications dynamically adapt to the input graph's characteristics and to changes in available memory so that they continue to run when faced with adverse conditions such as low memory . experiments with graph algorithms on real-world ( e.g. , wikipedia metadata , gnutella topology ) and synthetic graph datasets show that our adaptive applications run to completion with lower execution time and/or memory utilization in comparison to their non-adaptive versions . keywords : runtime data structure selection , space-time trade-off
|
during the last three decades continuous efforts have been devoted to extend applicability of the density functional theory ( dft) to large - scale systems , which leads to realization of more realistic simulations being close to experimental conditions .in fact , lots of large - scale dft calculations have already contributed for comprehensive understanding of a vast range of materials, although widely used functionals such as local density approximation ( lda) and generalized gradient approximation ( gga) have limitation in describing strong correlation in transition oxides and van der waals interaction in biological systems .the efficient methods developed so far within the conventional dft can be classified into two categories in terms of the computational complexity , while the other methods , which deviate from the classification , have been also proposed. the first category consists of o( ) methods , where is the number of basis functions , as typified by the householder - qr method, the conjugate gradient method, and the pulay method, which have currently become standard methods .the methods can be regarded as numerically exact methods , and the computational cost scales as o( ) even if only valence states are calculated because of the orthonormalization process . on the other hand ,the second category involves approximate o( ) methods such as the density matrix method, the orbital minimization method, and the krylov subspace method of which computational cost is proportional to the number of basis functions .the linear - scaling of the computational effort in the o( ) methods can be achieved by introducing various approximations like the truncation of the density matrix or wannier functions in real space .although the o( ) methods have been proven to be very efficient , the applications must be performed with careful consideration due to the introduction of the approximations , which might be one of reasons that the o( ) methods have not been widely used compared to the o( ) methods . from the above reason one may think of whether a numerically exact but low - order scaling method can be developed by utilizing the resultant sparse structure of the hamiltonian and overlap matrices expressed by localized basis functions .recently , a direction towards the development of o( ) methods has been suggested by lin et al . , in which diagonal elements of the density matrix is computed by a contour integration of the green function calculated by making full use of the sparse structure of the matrix. also, an efficient scheme has been presented by li et al . to calculate diagonal elements of the green function for electronic transport calculations, which is based on the algorithm by takahashi etal. and erisman and tinney. however , except for the two methods mentioned above the development of numerically exact o( ) methods , which are positioned in between the o( ) and o( ) methods , has been rarely explored yet for large - scale dft calculations . in this paperwe present a numerically exact but low - order scaling method for large - scale dft calculations of insulators and metals using localized basis functions such as pseudo - atomic orbital ( pao), finite element ( fe), and wavelet basis functions. the computational effort of the method scales as o( ) , o( ) , and o( ) for one , two , and three dimensional ( 1d , 2d , and 3d ) systems , respectively . in spite of the low - order scaling ,the method is a numerically exact alternative to the conventional o( ) methods . 
the key idea of the method is to directly compute selected elements of the density matrix by a contour integration of the green function evaluated with a set of recurrence formulas .it is shown that a contour integration method based on a continued fraction representation of the fermi - dirac function can be successfully employed for the purpose , and that the number of poles used in the contour integration does not depend on the size of the system .we also derive a set of recurrence formulas based on the nested dissection of the sparse matrix and a block factorization using the schur complement to calculate selected elements of the green function .the computational complexity is governed by the calculation of the green function .in addition to the low - order scaling , the method can be particularly advantageous to the massively parallel computation because of the well separated data structure .this paper is organized as follows : in sec .ii the theory of the proposed method is presented together with detailed analysis of the computational complexity . in sec .iii several numerical calculations are shown to illustrate practical aspects of the method within a model hamiltonian and dft calculations using the pao basis functions . in sec .iv we summarize the theory and applicability of the numerically exact but low - order scaling method .let us assume that the kohn - sham ( ks ) orbital is expressed by a linear combination of localized basis functions such as pao, fe, and wavelet basis functions as : where is the number of basis functions . throughout the paper , we consider the spin restricted and -independent ks orbitals for simplicity of notation .however , the generalization of our discussion for these cases is straightforward . by introducing lda or gga for the exchange - correlation functional ,the ks equation is written in a sparse matrix form : where is the eigenvalue of state , a vector consisting of coefficients , and and are the hamiltonian and overlap matrices , respectively .due to both the locality of basis functions and lda or gga for the exchange - correlation functional , both the matrices possess the same sparse structure .it is also noted that the charge density can be calculated by the density matrix : by remembering that is localized in real space , one may notice that the product is non - zero only if they are closely located each other .thus , the number of elements in the density matrix required to calculate the charge density scales as o( ) . 
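in practice the selected elements are simply the positions where the overlap matrix is non-zero; with the matrices stored in a compressed sparse row ( csr ) format, the list of required ( i , j ) pairs can be read off directly, as in the sketch below . the csr layout and the helper names are assumptions about the implementation, not something fixed by the method .
....
/* sketch: enumerate the density-matrix elements that are actually needed,
   i.e. the non-zero pattern of the overlap matrix s stored in csr form.
   because each localized basis function overlaps only a bounded number of
   others, the count grows linearly with the number of basis functions.   */

typedef struct {
    int  n;        /* number of basis functions                */
    int *rowptr;   /* size n+1                                 */
    int *colidx;   /* size rowptr[n]; columns j with s_ij != 0 */
} csr_pattern;

long count_selected_elements(const csr_pattern *s)
{
    long nsel = 0;
    for (int i = 0; i < s->n; i++)
        for (int k = s->rowptr[i]; k < s->rowptr[i + 1]; k++) {
            int j = s->colidx[k];
            /* rho_ij (and the energy density matrix for forces)
               must be computed for exactly these index pairs    */
            (void)j;
            nsel++;
        }
    return nsel;   /* linear in n for localized basis functions */
}
....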
as well as the calculation of the charge density ,the total energy is computed by only the corresponding elements of the density matrix within the conventional dft as : & = & { \rm tr}(\rho h_{\rm kin } ) + \int d{\bf r}n({\bf r } ) v_{\rm ext}({\bf r } ) \\ & & + \int\int d{\bf r } d{\bf r } ' \frac{n({\bf r})n({\bf r } ' ) } { \vert { \bf r}-{\bf r}'\vert } + e_{\rm xc}[n],\end{aligned}\ ] ] where is the matrix for the kinetic operator , an external potential , and an exchange - correlation functional .since the matrix possesses the same sparse structure as that of , one may find an alternative way that the selected elements of the density matrix , corresponding to the non - zero products , are directly computed without evaluating the ks orbitals .the alternative way enables us to avoid an orthogonalization process such as gram - schmidt method for the ks orbitals , of which computational effort scales as o( ) even if only the occupied states are taken into account .the direct evaluation of the selected elements in the density matrix is the starting point of the method proposed in the paper .the density matrix can be calculated through the green function as follows : where the factor 2 is due to the spin degeneracy , the fermi - dirac function , chemical potential , electronic temperature , the boltzmann factor , and a positive infinitesimal .also the matrix expression of the green function is given by where is a complex number .therefore , from eqs .( 5 ) and ( 6 ) , our problem is cast to two issues : ( i ) how the integration of the green function can be efficiently performed , and ( ii ) how the selected elements of the green function in the matrix form can be efficiently evaluated . in the subsequent subsectionswe discuss the two issues in detail .we perform the integration of the green function , eq .( 5 ) , by a contour integration method using a continued fraction representation of the fermi - dirac function. in the contour integration the fermi - dirac function is expressed by where with , and are poles of the continued fraction representation and the associated residues , respectively .the representation of the fermi - dirac function is derived from a hypergeometric function , and can be regarded as a pad approximant when terminated at the finite continued fraction .the poles and residues can be easily obtained by solving an eigenvalue problem as shown in ref .[ ] . by making use of the expression of eq .( 7 ) for eq .( 5 ) and considering the contour integration , one obtain the following expression for the integration of eq .( 5 ) : where , and is the zeroth order moment of the green function which can be computed by with a large real number . the structure of the poles distribution , that all the poles are located on the imaginary axis like the matsubara pole , but the density of the poles becomes smaller as the poles go away from the real axis , has been found to be very effective for the efficient integration of eq .it has been shown that only the use of the 100 poles at 600 k gives numerically exact results within double precision. thus , the contour integration method can be regarded as a numerically exact method even if the summation is terminated at a practically modest number of poles .moreover , it should be noted that the number of poles to achieve convergence is independent of the size of system . 
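for reference, one explicit continued-fraction form consistent with eq . ( 7 ) can be obtained from the identity and lambert's continued fraction for the hyperbolic tangent; whether this coincides symbol for symbol with the cited derivation is not guaranteed, but it conveys the structure of the representation . writing the fermi-dirac function in terms of a reduced variable ,
\[
f(x) \;=\; \frac{1}{1+e^{x}}
\;=\;\frac{1}{2}\;-\;\frac{1}{2}\tanh\!\frac{x}{2}
\;=\;\frac{1}{2}\;-\;
\cfrac{x/4}{\,1+\cfrac{(x/2)^{2}}{\,3+\cfrac{(x/2)^{2}}{\,5+\cfrac{(x/2)^{2}}{\,7+\cdots}}}}\,.
\]
truncating the fraction at a finite depth yields the pad - type rational approximant mentioned above, whose poles all lie on the imaginary axis of , consistent with the pole structure described in the text .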
giving the green function in the lehmann representation , eq .( 8) can be rewritten by although the expression in the second line is obtained by just exchanging the order of the two summations , the expression clearly shows that the number of poles for convergence does not depend on the size of system if the spectrum radius is independent of the size of system .since the independence of the spectrum radius can be found in general cases , it can be concluded that the computational effort is determined by that for the calculation of the green function .the energy density matrix , which is needed to calculate forces on atoms within non - orthogonal localized basis functions , can also be calculated by the contour integration method as follows : with defined by where and are the the zeroth and first order moments of the green function , and can be computed by solving the following simultaneous linear equation : the equation is derived by terminating the summation over the order of the moments in the moment representation of the green function . by letting and be and , respectively , and are explicitly given by where should be a large real number , and is used in this study so that the higher order terms can be negligible in terminating the summation in the moment representation of the green function . inserting eqs .( 13 ) and ( 14 ) into eq .( 10 ) , we obtain the following expression which is suitable for the efficient implementation in terms of memory consumption : with and defined by one may notice that the number of poles for convergence does not depend on the size of system even for the calculation of the energy density matrix because of the same reason as for the density matrix .it is found from the above discussion that the computational effort to compute the density matrix is governed by that for the calculation of the green function , consisting of an inversion of the sparse matrix of which computational effort by conventional schemes such as the gauss elimination or lu factorization based methods scales as o( ) .thus , the development of an efficient method of inverting a sparse matrix is crucial for efficiency of the proposed method . herewe present an efficient low - order scaling method , based on a nested dissection approach, of computing only selected elements in the inverse of a sparse matrix .the low - order scaling method proposed here consists of two steps : ( 1 ) _ nested dissection _ : by noting that a matrix is sparse , a structured matrix is constructed by a nested dissection approach . in practice ,just reordering the column and row indices of the matrix yields the structured matrix .inverse by recurrence formulas _ : by recursively applying a block factorization to the structured matrix , a set of recurrence formulas is derived . using the recurrence formulas ,only the selected elements of the inverse matrix are directly computed . 
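putting this subsection together, the whole calculation reduces to a single loop over poles in which selected elements of the green function are evaluated and accumulated; in the sketch below the pole energies, residues, overall prefactor and the zeroth-moment correction are hidden behind helper names, since those quantities follow eqs . ( 7 )-( 16 ) and are not reproduced here .
....
#include <complex.h>
#include <stdlib.h>

/* placeholders for quantities defined by eqs. (7)-(16) and for the
   selected-element green-function solver of the next subsection       */
double complex pole_energy(int p, double mu, double beta);
double complex pole_residue(int p);
double         overall_prefactor(double beta);
void           solve_selected_green(double complex z,
                                    double complex *g_sel, int nsel);
void           add_mu0_term(double *rho, int nsel);

/* schematic driver for eq. (8): accumulate residue-weighted green
   functions over the poles and take the imaginary part at the end      */
void density_matrix_selected(int npoles, int nsel, double mu, double beta,
                             double *rho /* size nsel */)
{
    double complex *acc = calloc(nsel, sizeof *acc);
    double complex *g_p = malloc(nsel * sizeof *g_p);

    for (int p = 0; p < npoles; p++) {
        solve_selected_green(pole_energy(p, mu, beta), g_p, nsel);
        for (int k = 0; k < nsel; k++)
            acc[k] += pole_residue(p) * g_p[k];
    }
    for (int k = 0; k < nsel; k++)
        rho[k] = overall_prefactor(beta) * cimag(acc[k]);

    add_mu0_term(rho, nsel);   /* contribution of the zeroth moment */

    free(acc); free(g_p);
}
....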
the computational effort to calculate the selected elements in the inverse matrix using the steps ( i ) and ( ii ) scales as o( ) ,o( ) , and o( ) for 1d , 2d , and 3d systems , respectively , as shown later .first , we discuss the nested dissection of a sparse matrix , and then derive a set of recurrence formulas of calculating the selected elements of the inverse matrix .-valent nntb and its corresponding matrix , ( b ) the renumbering for atoms by the first step in the nested dissection and its corresponding matrix , ( c ) the renumbering for atoms by the second step in the nested dissection and its corresponding matrix , ( d ) the binary tree structure representing hierarchical interactions between domains in the structured matrix by the numbering shown in fig ., width=302 ] as an example the right panel of fig .1(c ) shows a structured matrix obtained by the nested dissection approach for a finite chain model consisting of ten atoms , where we consider a -valent nearest neighbor tight binding ( nntb ) model .when one assigns the number to the ten atoms as shown in the left panel of fig .1(a ) , then is a tridiagonal matrix , of which diagonal and off - diagonal terms are assumed to be and , respectively , as shown in the right panel of fig .1(a ) . as the first step to generate the structured matrix shown in the right panel of fig .1(c ) , we make a _ dissection _ of the system into the left and right _ domains_ by renumbering for the ten atoms , and obtain a dissected matrix shown in the right panel of fig .the left and right domains interact with each other through only a _ separator _ consisting of an atom 10 . as the second step we apply a similar dissection for each domain generated by the first step , andarrive at a _nested_-_dissected _ matrix given by the right panel of fig .the subdomains , which consist of atoms 1 and 2 and atoms 3 and 4 , respectively , in the left domain interact with each other through only a separator consisting of an atom 5 .the similar structure is also found in the right domain consisting of atoms 6 , 7 , 9 , and 8 .it is worth mentioning that the resultant nested structure of the sparse matrix can be mapped to a binary tree structure which indicates hierarchical interactions between ( sub)domains as shown in fig .by applying the above procedure to a sparse matrix , one can convert any sparse matrix into a nested and dissected matrix in general .however in practice there is no obvious way to perform the nested dissection for general sparse matrices , while a lot of efficient and effective methods have been already developed for the purpose. here we propose a rather simple but effective way for the nested dissection by taking account of a fact that the basis function we are interested in is localized in real space , and that the sparse structure of the resultant matrix is very closely related to the position of basis functions in real space .the method bisects a system into two domains interacting through only a separator , and recursively applies to the resultant subdomains , leading to a binary tree structure for the interaction .our algorithm for the nested dissection of a general sparse matrix is summarized as follows : \(i ) _ ordering_. let us assume that there are basis functions in a domain we are interested in .we order the basis functions in the domain by using the fractional coordinate for the central position of localized basis functions along -axis , where , and 3 . 
as a result of the ordering , each basis functioncan be specified by the _ ordering number _ , which runs from 1 to in the domain of the central unit cell .the ordering number in the periodic cells specified by , where , is given by , where is the corresponding ordering number in the central cell . in isolated systems, one can use the cartesian coordinate instead of the fractional coordinate without losing any generality .\(ii ) _ screening of basis functions with a long tail_. the basis functions with a long tail tend to make an efficient dissection difficult .the sparse structure formed by the other basis functions with a short tail is latescent due to the existence of the basis functions with a long tail .thus , we classify the basis functions with a long tail in the domain as members in the separator before performing the dissection process . by the screening of the basis functions with a long tail , it is possible to expose concealed sparse structure when atomic basis functions with a variety of tails are used , while a systematic basis set such as the fe basis functions may not require the screening .\(iii ) _ finding of a starting nucleus_. among the localized basis functions in the domain , we search a basis function which has the smallest number of non - zero overlap with the other basis functions . once we find the basis function , we set it as a starting _ nucleus _ of a subdomain .\(iv ) _ growth of the nucleus_. staring from a subdomain given by the procedure ( iii ) , we grow the subdomain by increasing the size of nucleus step by step .the growth of the nucleus can be easily performed by managing the minimum and maximum ordering numbers , and , which ranges from 1 to .we define the subdomain by basis functions with the successive ordering numbers between the minimum and maximum ordering numbers and . at each step in the growth of the subdomain, we search two basis functions which have the minimum ordering number and maximum ordering number among basis functions overlapping with the subdomain defined at the growth step . in the periodic boundary condition , can be smaller than zero , and can be larger than the number of basis functions .then , the number of basis functions in the subdomain , the separator , and the other subdomain can be calculated by , , and , respectively , at each growth step . by the growth processone can minimize being a measure for quality of the dissection , where the measure takes equal bisection size of the subdomains and minimization of the size of the separator into account .also , if is larger than , then this situation implies that the proper dissection can be difficult along the axis .\(v ) _ dissection_. by applying the above procedures ( i)-(iv ) to each -axis , where , and 3 , and we can find an axis which gives the minimum .then , the dissection along the axis is performed by renumbering for basis functions in the domain , and two subdomains and one separator are obtained .evidently , the same procedures can be applied to each subdomain , and recursively continued until the size of domain reaches the threshold . as a result of the recursive dissection , we obtain a structured matrix by the nested dissection . as an illustrationwe apply the method for the nested dissection to the finite chain molecule shown in fig . 
1 .we first set all the system as _domain _ , and start to apply the series of procedures to the domain .the procedure ( i ) is trivial for the case , and we obtain the numbering of atoms and the corresponding matrix shown in fig .also it is noted that the screening of the basis functions with a long tail is unnecessary , and that we only have to search the chain direction . in the procedure ( iii ) , atoms 1 and 10 in fig .1(a ) satisfy the condition .choosing the atom 1 as a starting nucleus of the domain , and we gradually increase the size of the domain according to the procedure ( iv ) .then , it is found that the division shown in fig .1(b ) gives the minimum .renumbering for the basis functions based on the analysis yields the dissected matrix shown in the right panel of fig .1(b ) . by applying the similar procedures to the left and right subdomains, one will immediately find the result of fig .-valent nntb , of which unit cell contains 1024 atoms with periodic boundary condition .the right blue and red circles correspond to atoms in two domains and a separator , respectively , at the first step in the nested dissection .( b ) the square lattice model at the final step in the nested dissection .the separator at the innermost and the outermost levels are labeled as separators 0 and 5 , respectively , and the separators at each level are constructed by atoms with a same color . , width=332 ] in addition to the finite chain molecule , as an example of more general cases , the above algorithm for the nested dissection is applied to a -valent nntb square lattice model of which unit cell contains 1024 atoms with periodic boundary condition . at the first step in the nested dissection , the separator is found to be red atoms as shown in fig .2(a ) . due to the periodic boundary condition , the separator consists of two _lines_. at the final step , the system is dissected by the recursive algorithm as shown in fig.2 ( b ) .the separator at the innermost and the outermost levels are labeled as separators 0 and 5 , respectively , and each subdomain at the innermost level includes 9 atoms .as demonstrated for the square lattice model , the algorithm can be applied for systems with any dimensionality , and provides a well structured matrix for our purpose in a single framework .we directly compute the selected elements of the inverse matrix using a set of recurrence formulas which can be derived by recursively applying a block factorization to the structured matrix obtained by the nested dissection method as shown below . to derive the recurrence formulas ,we first introduce the block factorization for a symmetric square matrix : where and are diagonal block matrices , and and an off - diagonal block matrix and its transposition , and is given by also the schur complement of the block element is defined by then , it is verified that the inverse matrix of is given by we now consider calculating the selected elements of the inverse of the structured matrix given in fig .1(c ) using eq .( 21 ) , and rewrite the matrix in fig .1(c ) in a block form as follows : where and correspond to and , respectively , and the other block elements can be deduced .also the blank indicates a block zero element . 
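for reference, the block factorization and block inverse expressed by eqs . ( 17 )-( 21 ) take the following generic form for a symmetric two-by-two block matrix, written here in neutral symbols , , which need not coincide with the paper's own labels :
\[
\begin{pmatrix} a & b^{\mathsf T}\\ b & c \end{pmatrix}
=
\begin{pmatrix} i & 0\\ l & i \end{pmatrix}
\begin{pmatrix} a & 0\\ 0 & s \end{pmatrix}
\begin{pmatrix} i & l^{\mathsf T}\\ 0 & i \end{pmatrix},
\qquad l = b\,a^{-1},
\qquad s = c - b\,a^{-1}b^{\mathsf T},
\]
\[
\begin{pmatrix} a & b^{\mathsf T}\\ b & c \end{pmatrix}^{-1}
=
\begin{pmatrix} a^{-1} + l^{\mathsf T}s^{-1}l & -\,l^{\mathsf T}s^{-1}\\ -\,s^{-1}l & s^{-1} \end{pmatrix},
\]
where denotes the identity block and the schur complement; substituting a separator block for and a subdomain block for recovers the structure exploited below .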
using eq .( 20 ) the schur complement of is given by where is calculated by eq .( 19 ) and can be transformed using eq .( 21 ) to a recurrence formula as follows : with the definitions : )^{t},\\ v_{1,0,1}^{t } & = & a_{0,1}^{-1}(b_{1,0}[b_{0,1}])^{t},\end{aligned}\ ] ] and )^{t } \right).\\\end{aligned}\ ] ] in eqs .( 25 ) , ( 26 ) , and ( 27 ) , we used a bra - ket notation [ ] which stands for a part of the block element .for example , $ ] means a part of which has the same columns as those of .it is noted that one can obtain a similar expression for as well as eq .( 24 ) for . to address a more general case wherethe dissection for the sparse matrix is further nested , we suppose that the matrix has the same inner structure as then one may notice the recursive structure in eq .( 24 ) , and can derive the following set of recurrence relations for general cases : )^t \right ) , \label{eqn : e20}\\\\ \nonumber & & v_{p , m+1,n}^{t } = \left ( \begin{array}{c } v_{p , m,2n}^{t } \\v_{p , m,2n+1}^{t } \\ 0 \end{array } \right ) + \left ( \begin{array}{c } l_{m,2n}^{t } \\l_{m,2n+1}^{t } \\-i \end{array } \right ) q_{p , m+1,n}^{t}.\\ \label{eqn : e21}\end{aligned}\ ] ] equation ( [ eqn : e21 ] ) is the central recurrence formula coupled with eq .( [ eqn : e20 ] ) , where the initial block elements are given by )^{t}. \label{eqn : e22}\end{aligned}\ ] ] also and can be calculated by a set of eqs .( [ eqn : e20])-(32 ) enables us to calculate all the inverses of the schur complements and . in the recurrence equations eqs .( [ eqn : e20 ] ) and ( [ eqn : e21 ] ) , three indices of , , and are involved , and they run as follows : the index denotes the level of hierarchy in the nested dissection and the innermost and outermost levels are set to 0 and , respectively .then , it is noted that the total system is divided into domains at the innermost level . as well as the index is also related to the level of hierarchy in the nested dissection , and runs from 0 to .the index is a rather intermediate one , being dependent on .the indices in eq .( 30 ) is dependent on and and they run as follows : since the set of the recurrence formulas eqs .( [ eqn : e20])-(32 ) proceed according to eqs .(33)-(35 ) , the development of recurrence can be illustrated as in fig .the recurrence starts from eq .( 30 ) with , and eqs .( 31 ) and ( 32 ) follow. then , is incremented by one , and climbs up to 1 .the increment of and the climbing of are repeated until and . at for each , and evaluated by eqs .( 31 ) and ( 32 ) , and the inverse of is calculated by a conventional method such as lu factorization , which are used in the next recurrence for the higher level of hierarchy . the numbers in the right hand side of fig . 3give the multiplicity for similar calculations by eq .( [ eqn : e21 ] ) coming from the index at each , since runs from 0 to as given in eq . ( 35 ) .the computational complexity can be estimated by fig . 3 , and we will discuss its details later . )-(32 ) , which implies that the recurrence starts from and ends at .the number in the right hand side is the multiplicity for similar calculations by eq .( [ eqn : e21 ] ) due to the index at each . , width=321 ] we are now ready to calculate the selected elements of the green function using the inverses of the schur complements and calculated by the recurrence formulas of eqs .( [ eqn : e20])-(32 ) . 
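viewed from the dissection tree of fig . 1(d), the sweep of eqs . ( 30 )-( 32 ) proceeds bottom-up : leaf domains are factorized directly and each separator combines the contributions of its two children . the sketch below records only this control flow; the index bookkeeping of eqs . ( 33 )-( 35 ) and the actual update formulas are hidden behind helper names, so it is a simplification rather than a faithful transcription .
....
/* sketch: bottom-up sweep over the binary dissection tree.  factor_leaf()
   performs the direct (e.g. lu) factorization of an innermost domain, and
   combine_children() stands for the q/v updates that yield the inverse of
   the parent separator's schur complement, used at the next level up.    */

struct tnode {
    struct tnode *left, *right;   /* null for innermost domains        */
    /* per-node storage for the factorized schur complement, q, v, ... */
};

void factor_leaf(struct tnode *nd);
void combine_children(struct tnode *nd);

void sweep(struct tnode *nd)
{
    if (nd->left == NULL && nd->right == NULL) {
        factor_leaf(nd);
        return;
    }
    sweep(nd->left);
    sweep(nd->right);
    combine_children(nd);   /* recurrences (30)-(32) at this level */
}
....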
by noting that eq( 21 ) has a recursive structure and the matrix is structured by the nested dissection , one can derive the following recurrence formula : where the recurrence formula eq .( 38 ) starts with , adds contributions at for every , and at last yields the inverse of the matrix as . since the calculation of each element for the inverse of can be independently performed , only the selected elements can be computed without calculating all the elements .the selected elements to be calculated are elements in the block matrices , , and , each of which corresponds to a non - zero overlap matrix as discussed before .thus , we can easily compute only the selected elements using a table function which stores the position for the non - zero elements in the block matrices , , and .a simple but nontrivial example is given in appendix a to illustrate how the inverse of matrix is computed by the recurrence formulas , and also a similar way is presented to calculate a few eigenstates around a selected energy in appendix b , while the proposed method can calculate the total energy of system without calculating the eigenstates . as well as the conventional dft calculations , in the proposed method the chemical potential has to be adjusted so that the number of electrons can be conserved .however , there is no simpler way to know the number of electrons under a certain chemical potential before the contour integration by eq .( 8) with the chemical potential .thus , we search the chemical potential by iterative methods for the charge conservation .since the contour integration is the time - consuming step in the method , a smaller number of the iterative step directly leads to the faster calculation .therefore , we develop a careful combination of several iterative methods to minimize the number of the iterative step for sufficient convergence . in general ,the procedure for searching the chemical potential can be performed by a sequence ( 1)-(2 ) or ( 5)-(1)-(3)-(1)-(4)-(1)-(4)-(1) in terms of the following procedures .as shown later , the procedure enables us to obtain the chemical potential conserving the number of electrons within electron / system by less than 5 iterations on an average .\(1 ) _ calculation of the difference in the total number of electrons_. the difference in the total number of electrons is defined with calculated using eq .( 8) at a chemical potential by where is the number of electrons that the system should possess for the charge conservation .if is zero , the chemical potential is the desired one of the system .\(2 ) _ using the retarded green function_. 
if the difference is large enough so that the interpolation schemes ( 3 ) and ( 4 ) can fail to guess a good chemical potential , the next trial chemical potential is estimated by using the retarded green function .when the chemical potential of is considered , the correction to estimated by the retarded green function is given by where and are defined by with a small number ( 0.01 ev in this study ) and the integration in eq .( 41 ) is numerically evaluated by a simple quadrature scheme such as trapezoidal rule with a similar number of points as for that of poles in eq .( 8) , and the integration range can be determined by considering the surviving range of .the search of is performed by a bisection method until , where is a criterion for the convergence , and electron / system is used in this study .it should be noted that the evaluation of green function being the time - consuming part can be performed before the bisection method and a set of is stored for computational efficiency .\(3 ) _ linear interpolation / extrapolation method_. in searching the chemical potential , if two previous results ( ) and ( ) are available , a trial chemical potential is estimated by a linear interpolation / extrapolation method as : \(4 ) _ muller method_ . in searching the chemical potential ,if tree previous results ( ) , ( ) , and ( ) are available , they can be fitted to a quadratic equation : where , , and are found by solving a simultaneous linear equation of in size. then , giving is a solution of eq .( 45 ) , and given by the selection of sign is unique because of the condition that the gradient at the solution must be positive , and the branching is taken into account to avoid the round - off error . as the iteration proceeds in search of the chemical potential, we have a situation that the number of available previous results is more than three . for the case , it is important to select three chemical potentials having smaller _ and _ the different sign of among three chemical potentials , since the guess of can be performed as the interpolation rather than the extrapolation .\(5 ) _ extrapolation of chemical potential for the second step_. during the self - consistent field ( scf ) iteration , the chemical potential obtained at the last scf step is used as the initial guess in the current scf step .in addition , we estimate the second trial chemical potential by fitting results , , and , where the subscript and the superscript in and mean the iteration step in search of the chemical potential and the scf step , respectively , at three previous scf steps to the following equation : where , , and are found by solving a simultaneous linear equation of in size .then , the chemical potential giving can be estimated by solving eq . ( 47 ) with respect to as follows : it is found from numerical calculations that eq .( 48 ) provides a very accurate guess in most cases as the scf calculation converges . .some of and in eq .( 50 ) for a finite 1d chain , a finite 2d square lattice , and a finite 3d cubic lattice described by the -valent nntb model .they depends on or for the 2d and 3d systems in a rather complicated way , while for all the cases .the unit for each case is given in parenthesis . [ cols="<,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] e.r .davidson , in methods in computational molecular physics , vol .113 of nato advanced study institute , series c : mathematical and physical sciences , edited by g.h.f .diercksen and s. wilson ( plenum , new york , 1983 ) , p. 
95in a precise sense , we use _ domain _ to mean a system that we are now trying to bisect into two _subdomains_. in the recursive dissection , we obtain two subdomains after the dissection of the domain .once we move to each subdomain to perform the next dissection , then the subdomain is called domain in the precise sense . in the textboth the terms are distinguished for cases which may cause confusion .although the coefficients , , and can be analytically evaluated , the round - off error in the analytically evaluated solution is nonnegligible as converges to zero . in order to avoid the numerical problem ,we refine , , and by minimizing a function with the analytically evaluated coefficients as initial values .we find that the coefficients refined by the minimization are much more accurate than the analytic ones for serious cases , and that the refinement is quite effective to avoid the numerical instability .it is noted that the recurrence formula derived by lin et al . allows us to compute selected elements in o operations for 3d systems, which is superior to our recurrence formulas .however , the size of separators in their way for the nested dissection can be larger than that of our separators especially for the case that basis functions overlap with a number of other basis functions like in the pao basis functions , which leads to a large prefactor for the computational cost in spite of the lower scaling .also , the comparison with the algorithm by takahashi et al. and erisman and tinney will be in a future work .
|
an efficient low-order scaling method is presented for large-scale electronic structure calculations based on the density functional theory using localized basis functions , which directly computes selected elements of the density matrix by a contour integration of the green function evaluated with a nested dissection approach for the resultant sparse matrices . the computational effort of the method scales as o( ) , o( ) , and o( ) for one- , two- , and three-dimensional systems , respectively , where is the number of basis functions . unlike the approximate o( ) methods developed so far , the approach is a numerically exact alternative to conventional o( ) diagonalization schemes in spite of the low-order scaling , and is applicable not only to insulating but also to metallic systems in a single framework . it is also demonstrated that the nested algorithm and the well-separated data structure are suitable for massively parallel computation , which enables us to extend the applicability of density functional calculations to large-scale systems together with the low-order scaling .
|
the most familiar formulation of supervised classification associates single feature - vectors with single labels , hence it is called single - instance single - label ( sisl ) . for example , svm and logistic regression are sisl classifiers .one common setup involving sisl classifiers is to use a segmentation algorithm to extract `` syllables '' or calls of bird sound from a recording , each of which is described by a feature vector .a sisl classifier is trained on a collection of syllables paired with species labels , then predicts the species for a new syllable .many of the audio recordings used in sisl experiments are collected with a directional microphone aimed by a person at the bird of interest .this method produces recordings where the targeted bird is louder than other sound sources in the environment .audio data collected by unattended microphones for the purpose of acoustic monitoring , and audio collected with mobile devices for citizen science are less ideal ; it is common to have multiple simultaneously vocalizing bird species , in addition to other sources of noise such as non - bird species , wind , rain , streams , and motor vehicles .few works have addressed these complexities in real - world data .there are two kinds of structure in bird sound data that can be exploited through alternative frameworks for supervised classification .first , bird sound is naturally decomposed into a collection of parts , e.g. , syllables , which motivates a multi - instance learning ( mil ) approach .second , multi - label classification ( mlc ) is a natural fit for bird sound because an audio recording can be associated with a set of species ( and other sounds ) that are present .multi - instance multi - label learning ( miml ) combines both ideas .miml has previously been used for classification of bird sound recordings containing multiple simultaneously vocalizing species . however , prior work on miml for bird sound has focussed more on the multi - instance structure of the sound , and less on structure in the species / label sets .the mlc framework has not been directly applied to bird sound ( although some miml algorithms which have been applied to bird sound can be considered a reduction to mlc , e.g. , miml - knn and miml - rbf ) .ensemble of classifier chains ( ecc ) is an algorithm for mlc which has recently been applied to species distribution modeling , where the goal is to predict the set of bird species present at a site from a feature vector describing physical and biological properties of the site .yu et al . suggested that ecc achieves better performance in this domain than binary relevance because it can exploit correlations in the label sets . considering this observation, we hypothesize that ecc can exploit the same structure while predicting sets of bird species from an acoustic feature vector instead of environmental covariates .we formulate the classification problem similarly to .the training data consists of audio recordings paired with a set of species that are present .the goal is to predict the set of species in a new recording which is not part of the training data . to apply mlc ,it is necessary to represent each audio recording with a fixed - length feature vector .we apply a 2d time - frequency supervised segmentation algorithm similar to , then compute the same features as in to describe each segment. then we use a clustered codebook to obtain a histogram - of - segments for each recording . 
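the histogram-of-segments representation just described can be summarized in a few lines : each segment's feature vector is assigned to its nearest codeword and the counts form the fixed-length descriptor of the recording . the euclidean assignment and the normalization step in the sketch below are assumptions about details not spelled out in the text .
....
#include <float.h>

/* sketch: turn a variable number of segment feature vectors into a
   fixed-length histogram over a clustered codebook                     */
void histogram_of_segments(const double *segs,     /* nseg x dim, row-major */
                           int nseg, int dim,
                           const double *codebook, /* ncode x dim           */
                           int ncode,
                           double *hist)           /* ncode, output         */
{
    for (int c = 0; c < ncode; c++) hist[c] = 0.0;

    for (int s = 0; s < nseg; s++) {
        int best = 0; double bestd = DBL_MAX;
        for (int c = 0; c < ncode; c++) {
            double d = 0.0;
            for (int k = 0; k < dim; k++) {
                double diff = segs[s*dim + k] - codebook[c*dim + k];
                d += diff * diff;
            }
            if (d < bestd) { bestd = d; best = c; }
        }
        hist[best] += 1.0;          /* count of segments per codeword  */
    }

    if (nseg > 0)                   /* optional normalization (assumed) */
        for (int c = 0; c < ncode; c++) hist[c] /= (double)nseg;
}
....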
Prior work has used histograms to represent variable-length sequences of syllables, and histograms of frame-level features (spectrum and MFCC) to represent an audio recording with a single species of bird.

We compare ECC, binary relevance (BR), and results from prior work on two real-world datasets of birdsong with multiple simultaneously vocalizing species. The first dataset was collected with unattended omnidirectional microphones in the H. J. A. (HJA) experimental research forest, and has previously been used in several classification experiments. The second dataset is new, and consists of recordings of birds made with an iPhone in a residential neighborhood (collected and labeled by the authors). The new iPhone birdsong dataset presents the same multi-species issues as the HJA birdsong dataset, but is arguably more challenging because there are more and louder sources of background noise and non-bird classes (especially motor vehicles and insects). Results are analyzed in terms of standard multi-label error measures: Hamming loss, set 0/1 loss, rank loss, one-error, and coverage. ECC achieves better results than BR in the majority of comparisons, and ECC with no parameter tuning is better than one and worse than two of the MIML algorithms (which have an unfair advantage of using post-hoc parameter tuning).

In MLC, the training dataset is $\{(x_i, Y_i)\}_{i=1}^{n}$, where $x_i$ is a feature vector and $Y_i$ is a subset of the possible class labels $\{1, \dots, C\}$. The goal is to learn a classifier which predicts a label set from a given feature vector. It is common to implement and evaluate multi-label classifiers based on a score function $f_j(x)$ for each class $j$, which represents the predicted confidence that label $j$ is in the set. The set predictor is defined in terms of the score functions. The MLC framework maps to acoustic species classification as follows: each audio recording is associated with a feature vector, and the set of species audible in the recording is the label set. MIML is a related framework where the training data consists of bags-of-instances paired with label sets. We will use MIML as an intermediate representation of audio recordings of bird sound, and solve the problem by a reduction from MIML to MLC.

Binary relevance is one of the simplest algorithms for MLC. It is a reduction to SISL in which binary prediction of each label is treated as a completely separate and independent problem. To refer to a bit in the binary representation of a label set, let $Y[j] \in \{0,1\}$ indicate whether label $j$ is in the set $Y$; binary relevance trains one binary classifier per label to predict $Y[j]$ from $x$. A classifier chain also trains one binary classifier per label, but the labels are ordered by a random permutation $\pi_l$, and the classifier for the label in position $j$ of the chain receives the earlier labels in the chain as additional input features, so correlations among labels can be exploited. An ensemble of classifier chains combines $L$ chains built from different random permutations; when chain $l$ outputs a probability $p_{lj}$ for the label in position $j$, the chain predictions are aggregated as $\mathrm{score}[\pi_l(j)] = \mathrm{score}[\pi_l(j)] + p_{lj}$. We implement ECC with a random forest (RF) as the base SISL classifier, hence we call the proposed classifier ECC-RF.
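A minimal sketch of this reduction is given below, assuming a binary label matrix and scikit-learn random forests. The number of chains and trees, the 0.5 cutoff used to extend the feature vector along a chain at prediction time, and the averaging of chain probabilities are assumptions for illustration; they are not necessarily the settings used to produce the reported results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_ecc_rf(X, Y, n_chains=10, n_trees=100, seed=0):
    # X: (n, d) histogram-of-segments features; Y: (n, C) binary label matrix.
    rng = np.random.RandomState(seed)
    n, C = Y.shape
    chains = []
    for _ in range(n_chains):
        perm = rng.permutation(C)              # random label order pi_l for this chain
        models, X_aug = [], X.copy()
        for j in perm:
            rf = RandomForestClassifier(n_estimators=n_trees,
                                        random_state=rng.randint(1 << 30))
            rf.fit(X_aug, Y[:, j])
            models.append(rf)
            # later links in the chain see the true values of earlier labels as features
            X_aug = np.hstack([X_aug, Y[:, j:j + 1]])
        chains.append((perm, models))
    return chains

def ecc_rf_scores(chains, x, C):
    # aggregate per-chain probabilities: score[pi_l(j)] = score[pi_l(j)] + p_lj
    score = np.zeros(C)
    for perm, models in chains:
        x_aug = np.asarray(x, dtype=float)
        for j, rf in zip(perm, models):
            p = rf.predict_proba(x_aug.reshape(1, -1))[0]
            p1 = p[1] if len(p) > 1 else float(rf.classes_[0])   # estimated P(label j present)
            score[j] += p1
            x_aug = np.append(x_aug, 1.0 if p1 >= 0.5 else 0.0)  # predicted bit feeds the chain
    return score / len(chains)                 # averaged probabilities used as class scores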
Because RF outputs a probability, the ensemble can be viewed as an instance of the ensemble of probabilistic classifier chains (EPCC) algorithm. Therefore it is reasonable to aggregate probabilities from each SISL classifier rather than 0/1 votes. The aggregated probabilities are used as the score functions for each class. Algorithm [alg:eccrf_classify] gives the pseudocode we use to generate a class-score vector with ECC-RF, given an input feature vector $x$.

Sometimes class scores are sufficient, for example to rank species from most likely to least likely to be present. However, it is often desirable to obtain a specific predicted label set. A label set can be obtained by comparing each score to a threshold. The simplest method is to use a single threshold for all classes. We instead select a separate threshold for each class, which is calibrated using out-of-bag (OOB) estimation (for both BR and ECC-RF). Consider one of the binary RFs in BR or ECC-RF, and let its OOB estimate for instance $i$ and class $j$ in the training dataset be $s_i^j$ (for BR this is the OOB probability from the single RF for class $j$; for ECC-RF it is the aggregated OOB score). For each class, we select a threshold to minimize the 0/1 error on that class, comparing ground-truth labels for class $j$ with the thresholded OOB estimates. The threshold used in BR for class $j$ is $t_j = \arg\min_t \sum_{i=1}^{n} \mathbb{1}[\,[s_i^j \geq t] \neq y_i^j\,]$, and the same procedure is applied to ECC-RF by defining $s_i^j$ from the ensemble's aggregated OOB scores. A minimal code sketch of this calibration step is given after the dataset descriptions below.

Two real-world birdsong datasets are used in our experiments.

HJA birdsong. The HJA birdsong dataset consists of 548 ten-second audio recordings collected in the H. J. A. experimental research forest, using Songmeter SM1 recording devices. There are 13 species in this dataset, with between 1 and 5 species per recording (2.144 on average). The most common sources of noise in this dataset include streams and wind. Further details of this dataset are available in prior work, which used 5-fold cross-validation; we use the same 5-fold partitions, so the results are comparable.

iPhone birdsong. We collected 150 five-second audio recordings of bird sound with an iPhone 4G in a residential neighborhood. 54 of the recordings were collected during the dawn chorus on a single day, and the rest were collected at different times of day over several months in 2012–13. We filtered the original 150 recordings down to 91 which are better suited for a cross-validated species classification experiment. There were 32 recordings with bird species we were unable to identify, and many more with non-bird sounds. We removed all recordings containing unknown bird species, amphibians, human voice, dogs barking, and the iPhone vibrating due to receiving a message. Finally, we removed all recordings containing a species which appears only once in the dataset (cross-validation is not reasonable in this case). The filtered subset of 91 recordings contains 14 species. Many of these recordings still contain motor vehicle noise, loud insects, and "click noises" which appear as vertical lines in the spectrogram. Table [tbl:iphone_dataset] lists the number of recordings containing each species in the iPhone birdsong dataset. Note that the dataset is highly unbalanced. Because this is a smaller dataset, we use 10-fold cross-validation instead of 5-fold.
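Returning to the threshold calibration described above, the following sketch selects a per-class threshold $t_j$ from OOB probability estimates by minimizing the 0/1 error. The candidate-threshold grid and the helper names are illustrative assumptions rather than the exact implementation.

import numpy as np

def select_thresholds(oob_scores, Y, candidates=None):
    # oob_scores: (n, C) OOB estimates s_i^j; Y: (n, C) binary ground-truth labels.
    n, C = Y.shape
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 101)   # assumed grid of candidate thresholds
    thresholds = np.zeros(C)
    for j in range(C):
        errs = [np.mean((oob_scores[:, j] >= t).astype(int) != Y[:, j]) for t in candidates]
        thresholds[j] = candidates[int(np.argmin(errs))]   # minimize 0/1 error for class j
    return thresholds

def predict_label_set(scores, thresholds):
    # scores: (C,) aggregated class scores for one recording.
    return {j for j in range(len(scores)) if scores[j] >= thresholds[j]}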
Table [tbl:results] lists results. Because RF and ECC are randomized, we run 10 trials, and report results averaged over all trials and folds of cross-validation. Following recommendations from prior work, we summarize results for multiple classifiers on multiple datasets by win-loss counts (and do not discard any result as "insignificant"). However, unlike the scenario considered there, we compare MLC classifiers rather than SISL classifiers, so there are multiple performance measures. Because there are only a few datasets and more performance measures, we aggregate win/loss counts over all measures. Comparing BR and ECC-RF on two datasets with five different performance measures gives 10 comparisons between the two algorithms. Over both datasets, the win-loss count for ECC-RF vs. BR is 7-3. On the iPhone dataset, the result is less decisive; the count for ECC-RF vs. BR is 3-2. On the HJA birdsong dataset, the count for ECC-RF vs. BR is 4-1. Overall, these results suggest there is an advantage to using ECC-RF over BR for multi-label classification of bird species sets, given the histogram-of-segments representation.

Next we consider the win-loss counts on the HJA birdsong dataset for ECC-RF vs. MIMLSVM, MIMLRBF, and MIML-kNN. The counts are 5-0, 1-4, and 0-5, respectively, i.e., MIMLSVM is worse than ECC-RF in all comparisons, but MIMLRBF and MIML-kNN are better than ECC-RF. However, this is not an entirely fair comparison due to post-hoc parameter selection in the MIML experiments. We suggest that the performance advantage of MIMLRBF and MIML-kNN over ECC-RF may be attributed to better representation of the multi-instance structure in the data (compared to our histogram-of-segments representation). Based on the comparisons between ECC and BR, better modeling of structure in the label set is beneficial when the same features and base SISL classifier are used.

We focused on learning to predict species label sets. Another interesting problem is to train on recordings with multiple labels, but classify segments with a single label. Such an approach reduces the labeling effort required to train SISL segment/syllable classifiers such as those in prior work. This problem is naturally formulated in the framework of MIML instance annotation. A related formulation is to associate each segment with a set of candidate labels, only one of which is correct. This formulation is called ambiguous label classification, or superset label learning.
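For reference, the multi-label error measures used in our comparisons (Hamming loss, set 0/1 loss, rank loss, one-error, and coverage) can be computed from class scores and predicted label sets roughly as follows. The code follows common formulations of these measures; the function and variable names are ours and are not part of the original implementation.

import numpy as np

def multilabel_measures(scores, pred_sets, true_sets, C):
    # scores: (n, C) class-score matrix; pred_sets/true_sets: lists of label sets (Python sets).
    n = len(true_sets)
    hamming = np.mean([len(pred_sets[i] ^ true_sets[i]) / C for i in range(n)])
    zero_one = np.mean([pred_sets[i] != true_sets[i] for i in range(n)])
    rank_loss, one_error, coverage = [], [], []
    for i in range(n):
        pos = true_sets[i]
        neg = set(range(C)) - pos
        order = np.argsort(-scores[i])                     # labels ranked best first
        rank = {int(l): r for r, l in enumerate(order)}    # 0-based rank of each label
        if pos and neg:
            bad = sum(1 for p in pos for q in neg if scores[i][q] >= scores[i][p])
            rank_loss.append(bad / (len(pos) * len(neg)))
        one_error.append(0.0 if int(order[0]) in pos else 1.0)
        if pos:
            coverage.append(max(rank[l] for l in pos))     # depth needed to cover all true labels
    return hamming, zero_one, np.mean(rank_loss), np.mean(one_error), np.mean(coverage)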
Arthur, David and Vassilvitskii, Sergei. k-means++: the advantages of careful seeding. In Proc. 18th ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035. Society for Industrial and Applied Mathematics, 2007.
Brandes, T. Scott. Feature vector selection and use with hidden Markov models to identify frequency-modulated bioacoustic signals amidst noise. Audio, Speech, and Language Processing, IEEE Transactions on, 16(6): 1173–1180, 2008.
Briggs, F., Fern, X.Z., and Raich, R. Rank-loss support instance machines for MIML instance annotation. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 534–542. ACM, 2012.
Briggs, F., Lakshminarayanan, B., Neal, L., Fern, X.Z., Raich, R., Hadley, S.J.K., Hadley, A.S., and Betts, M.G. Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach. The Journal of the Acoustical Society of America, 131: 4640, 2012.
Briggs, Forrest, Raich, Raviv, and Fern, Xiaoli Z. Audio classification of bird species: a statistical manifold approach. In Data Mining, 2009. Ninth IEEE International Conference on, pp. 51–60. IEEE, 2009.
Damoulas, Theodoros, Henry, Samuel, Farnsworth, Andrew, Lanzone, Michael, and Gomes, Carla. Bayesian classification of flight calls with a novel dynamic time warping kernel. In Machine Learning and Applications (ICMLA), 2010 Ninth International Conference on. IEEE, 2010.
Dembczynski, Krzysztof, Cheng, Weiwei, and Hüllermeier, Eyke. Bayes optimal multilabel classification via probabilistic classifier chains. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 279–286, 2010.
Somervuo, Panu and Harma, Aki. Bird song recognition based on syllable pair histograms. In Acoustics, Speech, and Signal Processing, 2004. Proceedings (ICASSP '04). IEEE International Conference on, volume 5. IEEE, 2004.
Yu, Jun, Wong, Weng-Keen, Dietterich, Tom, Jones, Julia, Betts, Matthew, Frey, Sarah, Shirley, Susan, Miller, Jeffery, and White, Matt. Multi-label classification for species distribution modeling. In Proc. ICML 2011 Workshop on Machine Learning for Global Challenges, 2011.
Zhang, Min-Ling. A k-nearest neighbor based multi-instance multi-label learning algorithm. In Tools with Artificial Intelligence (ICTAI), 2010 22nd IEEE International Conference on, volume 2, pp. 207–212. IEEE, 2010.
|
Bird sound data collected with unattended microphones for automatic surveys, or with mobile devices for citizen science, typically contain multiple simultaneously vocalizing birds of different species. However, few works have considered the multi-label structure in birdsong. We propose to use an ensemble of classifier chains combined with a histogram-of-segments representation for multi-label classification of birdsong. The proposed method is compared with binary relevance and three multi-instance multi-label learning (MIML) algorithms from prior work (which focus more on structure in the sound, and less on structure in the label sets). Experiments are conducted on two real-world birdsong datasets, and show that the proposed method usually outperforms binary relevance (using the same features and base classifier), and is better in some cases and worse in others compared to the MIML algorithms.
|