the brownian bridge ( bb ) is a one - dimensional brownian motion , , which is conditioned to return to the starting point . without loss of generalityone can postulate that the bb starts and returns to the origin ( see fig .[ fig1 ] ) : .bbs admit numerous interpretations .for instance , a bb can be regarded as a stationary -dimensional edwards - wilkinson interface in a box with periodic boundary conditions ( see , e.g. , ) .bbs naturally arise in the analysis of convex hulls of planar brownian motions and of dephasing due to electron - electron interactions in quasi- wires , they have been used to model a random potential in studies of diffusion in presence of a strong periodic disorder and are also relevant for diffusion in disordered non - periodic potentials as they are related to the statistics of transients .bbs appear in mathematical statistics , e.g. , in kolmogorov - smirnov tests of the difference between the empirical distributions calculated from a sample and the true distributions governing the sample process ( see also for the applications in mathematical finance ) .bbs are often used in computer science , e.g. , in the analysis of the maximal size reached by a dynamic data structure over a long period of time . in ecology , bbs have been used for an analysis of animal home ranges and migration routes , as well as for estimating the influence of resource selection on movement .extremal value statistics of the bbs , e.g. , statistics of a maximum , a minimum , or a range on the entire time interval ] and on the entire interval ] .this pdf is given by ( see , e.g. , refs . ) another quantity is , the pdf that the bm does not reach a fixed level within the time interval ] which is achieved on a subinterval ] , i.e. , while in the opposite limit we have which is a standard expression for the moments of the global maximum of a brownian bridge . the moments as functions of are plotted in fig .[ fig3 ] . fromwe derive the distribution of the gap between and . rescaling the gap and the gap distribution we get \ , e^{-g^2}\ , { \rm erfc}\left(\sqrt{\dfrac{z}{1-z}}\,g\right)\,.\end{aligned}\ ] ] this distribution is depicted in fig .[ fig4 ] . for ,the distribution is unimodal with maximum at . for ,the distribution is bimodal in addition to the maximum at ( due to the delta - peak ) there is a second maximum which moves away from the origin as . 0.45 vs. .the dashed curves ( top to bottom ) correspond to .the solid curves ( top to bottom ) correspond to .panel ( b ) : the variance of the gap , eq . , as a function of ., title="fig : " ] 0.45 vs. .the dashed curves ( top to bottom ) correspond to .the solid curves ( top to bottom ) correspond to .panel ( b ) : the variance of the gap , eq ., as a function of ., title="fig : " ] the moments of the gap are found from to give where are the associated legendre functions of the first kind . for even latter are polynomials , so that the moments of the even order are polynomials of .for instance the moments of odd order contain an additional inverse trigonometric function : using these explicit results one can compute cumulants .for instance , the variance reads in fig .[ fig5 ] we plot vs for several integer values of , while fig .[ fig8 ] presents the variance of the gap vs .let us determine the pearson correlation coefficient of partial and global maxima .by definition we have already computed all terms in eq .apart from the cross - moment of two maxima .this cross - moment can be determined from to give leading to with defined in eq . . 
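The closed-form results above can be checked by direct simulation. The sketch below is a minimal Monte Carlo illustration (not taken from the paper): it builds Brownian bridges on [0, 1] via the standard construction B(t) = W(t) - t W(1), records the partial maximum over [0, z] and the global maximum over [0, 1], and estimates the mean gap and the Pearson correlation coefficient between the two maxima, up to discretization error. The function name and the discretization parameters are arbitrary choices.

```python
import numpy as np

def simulate_bridge_maxima(z=0.5, n_steps=1_000, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the partial maximum m(z) = max_{[0,z]} B(t)
    and the global maximum M = max_{[0,1]} B(t) of a standard Brownian
    bridge on the unit time interval."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    t = np.linspace(0.0, 1.0, n_steps + 1)
    k = int(z * n_steps)  # index of the end of the subinterval [0, z]

    # Brownian motion paths: cumulative sums of Gaussian increments.
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_steps))
    w = np.concatenate([np.zeros((n_samples, 1)), np.cumsum(dw, axis=1)], axis=1)

    # Bridge construction: B(t) = W(t) - t * W(1), so that B(0) = B(1) = 0.
    b = w - t * w[:, [-1]]

    m_partial = b[:, :k + 1].max(axis=1)   # partial maximum on [0, z]
    m_global = b.max(axis=1)               # global maximum on [0, 1]
    gap = m_global - m_partial             # gap G = M - m(z), always >= 0

    pearson = np.corrcoef(m_partial, m_global)[0, 1]
    return m_partial.mean(), m_global.mean(), gap.mean(), pearson

if __name__ == "__main__":
    mp, mg, g, rho = simulate_bridge_maxima(z=0.5)
    # As z -> 1 the partial and global maxima coincide and the gap vanishes;
    # <M> should approach sqrt(pi/8) ~ 0.627 for a standard unit-time bridge.
    print(f"<m(z)> = {mp:.4f}, <M> = {mg:.4f}, <gap> = {g:.4f}, rho = {rho:.4f}")
```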
in fig .[ fig10 ] we plot the pearson s coefficient as a function of .the pearson coefficient approaches unity , , when , i.e. .indeed , and are almost completely correlated in this region .the more precise asymptotic behavior is conversely , when implying that and become uncorrelated .more precisely , one gets implying that correlations vanish slowly , .we have determined the joint statistics and temporal correlations between a partial and global extremes of one - dimensional brownian bridges . we have calculated the joint probability distribution function of two maxima , the pdf of the partial maximum and the pdf of the gap .we also derived exact expressions for the moments and with arbitrary and computed the pearson correlation coefficient quantifying the correlations between and .our results for the one - dimensional brownian bridges can be generalized to the general bessel process the radius of -dimensional brownian motion , with the bridge constraint .the calculations are very similar , one should use explicit expressions for obtained in .chicheportiche r and bouchaud j - p , _ some applications of first - passage ideas to finance _, in _ first - passage phenomena and their applications _ , r. metzler , g. oshanin and s. redner , eds . , ( world scientific publishers , singapore , 2014 )
we analyze the joint distribution and temporal correlations between the partial maximum, achieved by a one - dimensional brownian bridge on a subinterval of its time span, and the global maximum, achieved on the entire interval . we determine three probability distribution functions : the joint distribution of both maxima ; the distribution of the partial maximum ; and the distribution of the gap between the two maxima . we present exact results for the moments of these distributions and quantify the temporal correlations between the partial and global maxima by calculating the pearson correlation coefficient .
the majority of exoplanets discovered to date have been detected indirectly , by looking for effects these planets have on their host stars .directly imaging exoplanets will provide a great deal of additional information unobtainable by most indirect detection methods and make discoveries expanding the population of known exoplanets .while direct imaging of exoplanets has been demonstrated with ground based instruments , these have all been very young , very large , and self - luminous planets on long - period orbits .imaging of smaller and more earth - like planets will likely require space observatories such as the wide - field infrared survey telescope - astrophysics focused telescope assets ( wfirst - afta ) .such observatories are major undertakings requiring extensive planning and design .building confidence in a mission concept s ability to achieve its science goals is always desirable .unfortunately , accurately modeling the science yield of an exoplanet imager can be almost as complicated as designing the mission . while each component of the system is modeled in great detail as it proceeds through its design iterations ,fitting these models together is very challenging . making statementsabout expected science returns over the course of the whole mission requires a large number of often unstated assumptions when such results are presented .this makes it challenging to compare science simulation results and also to systematically test the effects of changing just one part of the mission or instrument design from different groups .we seek to address this problem with the introduction of a new modular , open source mission simulation tool called exosims ( exoplanet open - source imaging mission simulator ) .this software is specifically designed to allow for systematic exploration of exoplanet imaging mission science yields .the software framework makes it simple to change the modeling of just one aspect of the instrument , observatory , or overall mission design . at the same time , this framework allows for rapid prototyping of completely new mission concepts by reusing pieces of previously implemented models from other mission simulations .modeling the science yield of an exoplanet imager is primarily difficult because it is completely conditional on the true distributions of planet orbital and physical parameters , of which we so far have only partial estimates .this makes the mission model an inherently probabilistic one , which reports posterior distributions of outcomes conditioned on some selected priors . since the introduction of observational completeness by robert brown , it is common to approach exoplanet mission modeling with monte carlo methods .various groups have pursued such modeling , often focusing on specific aspects of the overall mission or observation modeling .a second challenge is correctly including all of the dynamic and stochastic aspects of such a mission .given a spacecraft orbit , a target list , and the constraints of the imaging instrument , we can always predict when targets will be observable . incorporating this knowledge into a simulation , however ,can be challenging if a single calculated value represents the predictions , i.e. 
, the number of planets discovered .similarly , while it is simple to write down the probability of detecting a planet upon the first observation of a star , it is more challenging to do the same for a second observation an arbitrary amount of time later , without resorting to numerical simulation .exosims deals with these challenges by explicitly simulating every aspect of the mission and producing a complete timeline of simulated observations including the specific targets observed at specific times in the mission and recording the simulated outcomes of these observations . while one such simulation does not answer the question of expected mission science yield , an ensemble of many thousands of such simulations gives the data for the posterior distributions of science yield metrics .exosims is designed to generate these ensembles and provide the tools to analyze them , while allowing the user to model any aspect of the mission as detailed as desired . in [ sec : exosims ] we provide an overview of the software framework and some details on its component parts . as the software is intended to be highly reconfigurable , we focus on the operational aspects of the code rather than implementation details .we use the coronagraphic instrument currently being developed for wfirst - afta as a motivating example for specific implementations of the code . in [ sec : wfirst ] we present mission simulation results for various iterations of the wfirst - afta coronagraph designs using components that are being adapted to build the final implementation of exosims .exosims is currently being developed as part of a wfirst preparatory science investigation , with initial implementation targeted at wfirst - afta .this development includes the definition of a strict interface control , along with corresponding prototypes and class definitions for each of the modules described below .the interface control document and as - built documentation are both available for public review and comment at .initial code release is targeted for fall 2015 , with an alpha release in february of 2016 and continued updates through 2017 .future development of exosims is intended to be a community - driven project , and all software related to the base module definitions and simulation execution will be made publicly available alongside the interface control documentation to allow mission planners and instrument designers to quickly write their own modules and drop them directly into the code without additional modifications made elsewhere .we fully expect that exosims will be highly useful for ensuring the achievement of the wfirst - afta science goals , and will be of use to the design and planning of future exoplanet imaging missions .exosims builds upon previous frameworks described in ref . 
and ref ., but will be significantly more flexible than these earlier efforts , allowing for seamless integration of independent software modules , each of which performs its own well - defined tasks , into a unified mission simulation .this will allow the wider exoplanet community to quickly test the effects of changing a single set of assumptions ( for example , the specific model of planet spectra , or a set of mission operating rules ) on the overall science yield of the mission , by only updating one part of the simulation code rather than rewriting the entire simulation framework .the terminology used to describe the software implementation is loosely based on the object - oriented framework upon which exosims is built .the term module can refer to either the object class prototype representing the abstracted functionality of one piece of the software , or to an implementation of this object class which inherits the attributes of the prototype , or to an instance of this object class .thus , when we speak of input / output definitions of modules , we are referring to the class prototype .when we discuss implemented modules , we mean the inherited class definition . finally , when we speak of passing modules ( or their outputs ) , we mean the instantiation of the inherited object class being used in a given simulation .relying on strict inheritance for all implemented module classes provides an automated error and consistency - checking mechanism , as we can always compare the outputs of a given object instance to the outputs of the prototype .this means that it is trivial to pre - check whether a given module implementation will work with the larger framework , and thus allows for the flexibility and adaptability described above .flowchart of mission simulation .each box represents a component software module which interacts with other modules as indicated by the arrows .the simulation modules ( those that are not classified as input modules ) pass all input modules along with their own output .thus , the survey ensemble module has access to all of the input modules and all of the upstream simulation modules . ][ fig : codeflow ] shows the relationships of the component software modules classified as either input modules or simulation modules .the input modules contain specific mission design parameters .the simulation modules take the information contained in the input modules and perform mission simulation tasks .any module may perform any number or kind of calculations using any or all of the input parameters provided .they are only constrained by their input and output specification , which is designed to be as flexible as possible , while limiting unnecessary data passing to speed up execution .the specific mission design under investigation determines the functionality of each of the input modules , but the inputs and outputs of each are always the same ( in terms of data type and what the variables represent ) .these modules encode and/or generate all of the information necessary to perform mission simulations . 
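As an illustration of the inheritance-based consistency checking described above, the following sketch uses hypothetical class and attribute names (not the actual EXOSIMS prototypes): a prototype class declares the outputs every implementation must provide, and any inherited module implementation can be pre-checked against that declaration before being passed into a simulation.

```python
class OpticalSystemPrototype:
    """Prototype: declares the outputs every implementation must provide."""
    required_outputs = ("IWA", "OWA", "contrast", "throughput")

    def __init__(self, **spec):
        self.spec = spec

    def validate(self):
        # Consistency check: every declared output must exist on the instance.
        missing = [name for name in self.required_outputs if not hasattr(self, name)]
        if missing:
            raise TypeError(f"{type(self).__name__} is missing outputs: {missing}")


class SimpleCoronagraph(OpticalSystemPrototype):
    """An implemented module: inherits the prototype and fills in its outputs."""
    def __init__(self, IWA, OWA, contrast, throughput, **spec):
        super().__init__(**spec)
        self.IWA, self.OWA = IWA, OWA          # working angles (arcsec)
        self.contrast, self.throughput = contrast, throughput


# The instantiated, inherited class is what gets passed between modules;
# validate() can be called before a simulation to pre-check compatibility.
optics = SimpleCoronagraph(IWA=0.1, OWA=1.0, contrast=1e-9, throughput=0.05)
optics.validate()
```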
herewe briefly describe the functionality and major tasks for each of the input modules .the optical system description module contains all of the necessary information to describe the effects of the telescope and starlight suppression system on the target star and planet wavefronts .this requires encoding the design of both the telescope optics and the specific starlight suppression system , whether it be an internal coronagraph or an external occulter .the encoding can be achieved by specifying point spread functions ( psf ) for on- and off - axis sources , along with ( potentially angular separation - dependent ) contrast and throughput definitions . at the opposite level of complexity, the encoded portions of this module may be a description of all of the optical elements between the telescope aperture and the imaging detector , along with a method of propagating an input wavefront to the final image plane .intermediate implementations can include partial propagations , or collections of static psfs representing the contributions of various system elements .the encoding of the optical train will allow for the extraction of specific bulk parameters including the instrument inner working angle ( iwa ) , outer working angle ( owa ) , and mean and max contrast and throughput .if the starlight suppression system includes active wavefront control , i.e. , via one or more deformable mirrors ( dm ) , then this module must also encode information about the sensing and control mechanisms .again , this can be achieved by simply encoding a static targeted dm shape , or by dynamically calculating dm settings for specific targets via simulated phase retrieval .as wavefront control residuals may be a significant source of error in the final contrast budget , it is vitally important to include the effects of this part of the optical train. the optical system description can optionally include stochastic and systematic wavefront - error generating components .again , there is a wide range of possible encodings and complexities. they could be gaussian errors on the contrast curves sampled during survey simulation to add a random element to the achieved contrast on each target .alternatively , in cases where an active wavefront control system is modeled , stochastic wavefront errors could be introduced by simulating the measurement noise on the wavefront sensor ( either again as drawn from pre - determined distributions , or additively from various detector and astrophysical noise sources ) .systematic errors , such as mis - calibration of deformable mirrors , closed - loop control delays , and non - common path errors , may be included to investigate their effects on contrast or optical system overhead . in cases where the optical system is represented by collections of static psfs, these effects must be included in the diffractive modeling that takes place before executing the simulation . for external occulters ,we draw on the large body of work on the effects of occulter shape and positioning errors on the achieved contrast , as in ref . 
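A minimal sketch of the simplest optical-system encoding discussed above, with placeholder numbers rather than any real WFIRST-AFTA design: static contrast and throughput curves tabulated against angular separation, from which bulk parameters such as the IWA, OWA, and mean contrast and throughput are extracted.

```python
import numpy as np

class TabulatedOpticalSystem:
    """Toy optical system description: static contrast/throughput curves
    sampled on a grid of angular separations (all values are placeholders)."""

    def __init__(self, sep_arcsec, contrast, throughput):
        self.sep = np.asarray(sep_arcsec, dtype=float)
        self._contrast = np.asarray(contrast, dtype=float)
        self._throughput = np.asarray(throughput, dtype=float)
        # Bulk parameters extracted from the curves.
        self.IWA, self.OWA = self.sep[0], self.sep[-1]
        self.mean_contrast = self._contrast.mean()
        self.mean_throughput = self._throughput.mean()

    def contrast(self, sep):
        """Contrast at a given working angle (linear interpolation)."""
        return np.interp(sep, self.sep, self._contrast)

    def throughput(self, sep):
        """Core throughput at a given working angle."""
        return np.interp(sep, self.sep, self._throughput)


# Placeholder curves: contrast degrades toward the IWA, throughput is flat.
seps = np.linspace(0.1, 1.0, 10)                       # arcsec
optics = TabulatedOpticalSystem(seps,
                                contrast=1e-9 * (1 + 5 * np.exp(-5 * seps)),
                                throughput=0.05 * np.ones_like(seps))
print(optics.IWA, optics.OWA, optics.contrast(0.25), optics.mean_throughput)
```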
.finally , the optical system description must also include a description of the science instrument or instruments .the baseline instrument is assumed to be an imaging spectrometer , but pure imagers and spectrometers are also supported .each instrument encoding must provide its spatial and wavelength coverage and sampling .detector details such as read noise , dark current , and quantum efficiency must be provided , along with more specific quantities such as clock induced charge for electron multiplying ccds .optionally , this portion of the module may include descriptions of specific readout modes , i.e. , in cases where fowler sampling or other noise - reducing techniques are employed . in cases where multiple science instruments are defined ,they are given enumerated indices in the specification , and the survey simulation module must be implemented so that a particular instrument index is used for a specific task , i.e. , detection vs. characterization .the overhead time of the optical system must also be provided and is split into two parameters .the first is an integration time multiplier for detection and characterization modes , which represents the individual number of exposures that need to be taken to cover the full field of view , full spectral band , and all polarization states in cases where the instrument splits polarizations . for detection modes , we will typically wish to cover the full field of view , while possibly only covering a small bandpass and only one polarization , whereas for characterizations , we will typically want all polarizations and spectral bands , while focusing on only one part of the field of view .the second overhead parameter gives a value for how long it will take to reach the instrument s designed contrast on a given target .this overhead is separate from the one specified in the observatory definition , which represents the observatory settling time and may be a function of orbital position , whereas the contrast floor overhead may depend on target brightness .if this value is constant , as in the case of an observing strategy where a bright target is used to generate the high contrast regions , or zero , as in the case of an occulter , then it can be folded in with the observatory overhead .the star catalog module includes detailed information about potential target stars drawn from general databases such as simbad , mission catalogs such as hipparcos , or from existing curated lists specifically designed for exoplanet imaging missions .information to be stored , or accessed by this module will include target positions and proper motions at the reference epoch ( see [ sec : time ] ) , catalog identifiers ( for later cross - referencing ) , bolometric luminosities , stellar masses , and magnitudes in standard observing bands .where direct measurements of any value are not available , values are synthesized from ancillary data and empirical relationships , such as color relationships and mass - luminosity relations .this module will not provide any functionality for picking the specific targets to be observed in any one simulation , nor even for culling targets from the input lists where no observations of a planet could take place .this is done in the target list module as it requires interactions with the planetary population module ( to determine the population of interest ) , the optical system description module ( to define the capabilities of the instrument ) , and observatory definition module ( to determine if the view of the target is 
unobstructed ) .the planet population description module encodes the density functions of all required planetary parameters , both physical and orbital .these include semi - major axis , eccentricity , orbital orientation , and planetary radius and mass .certain parameter models may be empirically derived while others may come from analyses of observational surveys such as the keck planet search , kepler , and ground - based imaging surveys including the gemini planet imager exoplanet survey .this module also encodes the limits on all parameters to be used for sampling the distributions and determining derived cutoff values such as the maximum target distance for a given instrument s iwa .the planet population description module does not model the physics of planetary orbits or the amount of light reflected or emitted by a given planet , but rather only encodes the statistics of planetary occurrence and properties . as this encodingis based on density functions , it fully supports modeling ` toy ' universes where all parameters are fixed , in which case all of the distributions become delta functions .we can equally use this encoding to generate simulated universes containing only ` earth - twins ' to compare with previous studies as in ref . or ref . .alternatively , the distributions can be selected to mirror , as closely as possible , the known distributions of planetary parameters . asthis knowledge is limited to specific orbital or mass / radius scales , this process invariably involves some extrapolation .the observatory definition module contains all of the information specific to the space - based observatory not included in the optical system description module .the module has three main tasks : orbit , duty cycle , and keepout definition , which are implemented as functions within the module .the inputs and outputs for these functions are represented schematically in fig .[ fig : observatory ] .depiction of observatory definition module including inputs , tasks , and outputs.,scaledwidth=100.0% ] the observatory orbit plays a key role in determining which of the target stars may be observed for planet finding at a specific time during the mission lifetime .the observatory definition module s orbit function takes the current mission time as input and outputs the observatory s position vector .the position vector is standardized throughout the modules to be referenced to a heliocentric equatorial frame at the j2000 epoch .the observatory s position vector is used in the keepout definition task and target list module to determine which of the stars from the star catalog may be targeted for observation at the current mission time .the duty cycle determines when during the mission timeline the observatory is allowed to perform planet - finding operations .the duty cycle function takes the current mission time as input and outputs the next available time when exoplanet observations may begin or resume , along with the duration of the observational period .the outputs of this task are used in the survey simulation module to determine when and how long exoplanet finding and characterization observations occur .the specific implementation of the duty cycle function can have significant effects on the science yield of the mission .for example , if the observing program is pre - determined , such that exoplanet observations can only occur at specific times and last for specific durations , this significantly limits the observatory s ability to respond dynamically to simulated events , such 
as the discovery of an exoplanet candidate .this can potentially represent a sub - optimal utilization of mission time , as it may prove to be more efficient to immediately spectrally characterize good planetary candidates rather than attempting to re - observe them at a later epoch .it also limits the degree to which followup observations can be scheduled to match the predicted orbit of the planet .alternatively , the duty cycle function can be implemented to give the exoplanet observations the highest priority , such that all observations can be scheduled to attempt to maximize dynamic completeness or some other metric of interest .the keepout definition determines which target stars are observable at a specific time during the mission simulation and which are unobservable due to bright objects within the field of view such as the sun , moon , and solar system planets .the keepout volume is determined by the specific design of the observatory and , in certain cases , by the starlight suppression system .for example , in the case of external occulters , the sun can not be within the 180 annulus immediately behind the telescope ( with respect to the line of sight ) as it would be reflected by the starshade into the telescope .the keepout definition function takes the current mission time and star catalog module output as inputs and outputs a list of the target stars which are observable at the current time .it constructs position vectors of the target stars and bright objects which may interfere with observations with respect to the observatory .these position vectors are used to determine if bright objects are in the field of view for each of the potential stars under exoplanet finding observation .if there are no bright objects obstructing the view of the target star , it becomes a candidate for observation in the survey simulation module .the observatory definition also includes the target transition time , which encodes the amount of overhead associated with transitioning to a new target before the next observation can begin . for missions with external occulters ,this time includes both the transit time between targets as well as the time required to perform the fine alignment at the end of the transit . for internal coronagraphs, this includes the settling time of the telescope to reach the bus stability levels required by the active wavefront control system. these may all be functions of the orbital position of the telescope , and may be implemented to take into account thermal effects when considering observatories on geocentric orbits .this overhead calculation does not include any additional time required to reach the instrument s contrast floor , which may be a function of target brightness , and is encoded separately in the optical system description .in addition to these functions , the observatory definition can also encode finite resources that are used by the observatory throughout the mission .the most important of these is the fuel used for stationkeeping and repointing , especially in the case of occulters which must move significant distances between observations .we could also consider the use of other volatiles such as cryogens for cooled instruments , which tend to deplete solely as a function of mission time .this module also allows for detailed investigations of the effects of orbital design on the science yield , e.g. 
, comparing the baseline geosynchronous 28.5 inclined orbit for wfirst - afta with an alternative l2 halo orbit also proposed for other exoplanet imaging mission concepts .the planet physical model module contains models of the light emitted or reflected by planets in the wavelength bands under investigation by the current mission simulation .it uses physical quantities sampled from the distributions defined in the planet population , including planetary mass , radius , and albedo , along with the physical parameters of the host star stored in the target list module , to generate synthetic spectra or band photometry , as appropriate .the planet physical model is explicitly defined separately from the population statistics to enable studies of specific planet types under varying assumptions of orbital or physical parameter distributions , i.e. , evaluating the science yield related to earth - like planets under different definitions of the habitable zone .the specific implementation of this module can vary greatly , and can be based on any of the many available planetary albedo , spectra and phase curve models .the time module is responsible for keeping track of the current mission time .it encodes only the mission start time , the mission duration , and the current time within a simulation .all functions in all modules requiring knowledge of the current time call functions or access parameters implemented within the time module .internal encoding of time is implemented as the time from mission start ( measured in days ) .the time module also provides functionality for converting between this time measure and standard measures such as julian day number and utc time .the rules module contains additional constraints placed on the mission design not contained in other modules .these constraints are passed into the survey simulation module to control the simulation .for example , a constraint in the rules module could include prioritization of revisits to stars with detected exoplanets for characterization when possible .this rule would force the survey simulation module to simulate observations for target stars with detected exoplanets when the observatory module determines those stars are observable .the rules module also encodes the calculation of integration time for an observation .this can be based on achieving a pre - determined signal to noise ( snr ) metric ( with various possible definitions ) , or via a probabilistic description as in ref . .this requires also defining a model for the background contribution due to all astronomical sources and especially due to zodiacal and exozodiacal light .the integration time calculation can have significant effects on science yield integrating to the same snr on every target may represent a suboptimal use of mission time , as could integrating to achieve the minimum possible contrast on very dim targets . changing the implementation of the rules moduleallows exploration of these tradeoffs directly .the post - processing module encodes the effects of post - processing on the data gathered in a simulated observation , and the effects on the final contrast of the simulation . 
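To make the integration-time discussion concrete, here is a sketch of a generic shot-noise-limited calculation; it is not the actual expression used in the rules module (which follows the cited references). The planet and background count rates are assumed inputs, and the time to reach a target SNR follows from SNR = C_p t / sqrt((C_p + C_b) t).

```python
def integration_time(snr_target, planet_rate, background_rate):
    """Time (s) needed to reach a target SNR for a Poisson-limited observation.

    Generic shot-noise model: SNR = C_p * t / sqrt((C_p + C_b) * t), where
    C_p is the planet count rate and C_b the total background count rate
    (zodiacal and exozodiacal light, residual starlight, and detector noise
    lumped together). Solving for t gives t = SNR^2 * (C_p + C_b) / C_p^2.
    """
    return snr_target**2 * (planet_rate + background_rate) / planet_rate**2

# Example with placeholder count rates (counts/s); these are not mission values.
t_detect = integration_time(snr_target=5.0, planet_rate=0.01, background_rate=0.05)
print(f"integration time: {t_detect / 3600.0:.1f} hours")
```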
in the simplest implementation, the post - processing module does nothing and simply assumes that the attained contrast is some constant value below the instrument s designed contrast that post - processing has the effect of uniformly removing background noise by a pre - determined factor .a more complete implementation actually models the specific effects of a selected post - processing technique such as loci or klip on both the background and planet signal via either processing of simulated images consistent with an observation s parameters , or by some statistical description .the post - processing module is also responsible for determining whether a planet detection has occurred for a given observation , returning one of four possible states true positive ( real detection ) , false positive ( false alarm ) , true negative ( no detection when no planet is present ) and false negative ( missed detection ) .these can be generated based solely on statistical modeling as in ref . , or can again be generated by actually processing simulated images .the simulation modules include target list , simulated universe , survey simulation and survey ensemble .these modules perform tasks which require inputs from one or more input modules as well as calling function implementations in other simulation modules .the target list module takes in information from the optical system description , star catalog , planet population description , and observatory definition input modules and generates the input target list for the simulated survey .this list can either contain all of the targets where a planet with specified parameter ranges could be observed , or can contain a list of pre - determined targets such as in the case of a mission which only seeks to observe stars where planets are known to exist from previous surveys .the final target list encodes all of the same information as is provided by the star catalog module .the simulated universe module takes as input the outputs of the target list simulation module to create a synthetic universe composed of only those systems in the target list . for each target ,a planetary system is generated based on the statistics encoded in the planet population description module , so that the overall planet occurrence and multiplicity rates are consistent with the provided distribution functions .physical parameters for each planet are similarly sampled from the input density functions .this universe is encoded as a list where each entry corresponds to one element of the target list , and where the list entries are arrays of planet physical parameters . 
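A toy version of this simulated-universe encoding, with an assumed occurrence rate and illustrative parameter ranges that are not drawn from any published distribution: each target receives an array of sampled planet parameters, and systems that draw zero planets receive an empty array.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_simulated_universe(n_targets, eta=0.4):
    """Toy simulated-universe encoding: one list entry per target star, each
    entry an array of per-planet parameters (sma in AU, radius in Earth radii).
    eta is an assumed mean planet occurrence rate per star."""
    universe = []
    for _ in range(n_targets):
        n_planets = rng.poisson(eta)           # planet multiplicity for this star
        if n_planets == 0:
            universe.append(np.empty((0, 2)))  # empty system -> empty array
            continue
        sma = np.exp(rng.uniform(np.log(0.5), np.log(10.0), n_planets))   # log-uniform
        radius = np.exp(rng.uniform(np.log(0.5), np.log(11.0), n_planets))
        universe.append(np.column_stack([sma, radius]))
    return universe

universe = build_simulated_universe(n_targets=5)
for i, system in enumerate(universe):
    print(f"target {i}: {len(system)} planet(s)")
```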
in cases of empty planetary systems, the corresponding list entry contains a null array .the simulated universe module also takes as input the planetary physical model module instance , so that it can return the specific spectra due to every simulated planet at an arbitrary observation time throughout the mission simulation .the survey simulation module takes as input the output of the simulated universe simulation module and the time , rules , and post - processing input modules .this is the module that performs a specific simulation based on all of the input parameters and models .this module returns the mission timeline - an ordered list of simulated observations of various targets on the target list along with their outcomes .the output also includes an encoding of the final state of the simulated universe ( so that a subsequent simulation can start from where a previous simulation left off ) and the final state of the observatory definition ( so that post - simulation analysis can determine the percentage of volatiles expended , and other engineering metrics ) . the survey ensemble module s only task is to run multiple simulations .while the implementation of this module is not at all dependent on a particular mission design , it can vary to take advantage of available parallel - processing resources . as the generation of a survey ensemble is an embarrassingly parallel task every survey simulation is fully independent and can be run as a completely separate process significant gains in execution time can be achieved with parallelization .the baseline implementation of this module contains a simple looping function that executes the desired number of simulations sequentially , as well as a locally parallelized version based on ipython parallel .while the development of exosims is ongoing , we have already produced simulation results with the functionality out of which the baseline exosims implementation is being built . in this section , we present the results of some mission simulations for wfirst - afta using optical models of coronagraph designs generated at jpl during the coronagraph downselect process in 2013 , as well as post - downselect optical models of the hybrid lyot coronagraph ( hlc) generated in 2014 .it is important to emphasize that the instrument designs and mission yields shown here are not representative of the final coronagraphic instrument or its projected performance .all of the design specifics assumed in these simulations are still evolving in response to ongoing engineering modeling of the observatory as a whole and to best meet the mission science requirements .these simulations are instead presented in order to highlight the flexibility of the exosims approach to mission modeling , and to present two important use cases . in [ sec : predown ] we present mission yield comparisons for different instrument designs while all other variables ( observatory , star catalog , planet models , etc . )are kept constant .the results from these simulations are most useful for direct comparisons between different instruments and to highlight particular strengths and weaknesses in specific designs .ideally , they can be used to guide ongoing instrument development and improve the final design science yield . 
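Because each simulation in an ensemble is fully independent, the parallelization can be sketched with nothing more than the standard library. The snippet below is illustrative and is not the EXOSIMS implementation (which provides a sequential loop and an IPython-parallel backend); run_one_simulation is a hypothetical stand-in for a full survey simulation.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_one_simulation(seed):
    """Stand-in for a full survey simulation: returns a dict of yield metrics.
    A real simulation would build the target list, simulated universe, and
    observation timeline; here we just draw a placeholder detection count."""
    rng = random.Random(seed)
    return {"seed": seed, "unique_detections": rng.randint(0, 12)}

def run_ensemble(n_sims, parallel=True):
    seeds = range(n_sims)
    if not parallel:                      # simple sequential loop
        return [run_one_simulation(s) for s in seeds]
    with ProcessPoolExecutor() as pool:   # embarrassingly parallel: one process per sim
        return list(pool.map(run_one_simulation, seeds))

if __name__ == "__main__":
    ensemble = run_ensemble(100)
    mean_det = sum(r["unique_detections"] for r in ensemble) / len(ensemble)
    print(f"{len(ensemble)} simulations, mean unique detections = {mean_det:.2f}")
```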
in [ sec : hlcparams ] we investigate a single coronagraph design operating under varying assumptions on observatory stability and post - processing capabilities .these simulations highlight how exosims can be used to evaluate a more mature instrument design to ensure good results under a variety of operating parameters .this section also demonstrates how to incorporate the effects of different assumptions in the pre - simulation optical system diffractive modeling .in addition to the hlc , the first set of optical models includes models for a shaped pupil coronagraph ( spc) and a phase - induced amplitude apodization complex mask coronagraph ( piaa - cmc ) . in the downselect process , the spc and hlc were selected for further development with piaa - cmc as backup .it should be noted that the hlc optical models in the first and second set of simulations shown here represent different iterations on the coronagraph design , and thus represent different instruments .the optical system description is implemented as a static point spread function , throughput curve , and contrast curve based on the jpl optical models .other values describing the detector , science instrument and the rest of the optical train were chosen to match ref . as closely as possible .the integration times in the rules module are determined via modified equations based on ref . to achieve a specified false positive and negative rate , which are encoded as constant in the post - processing module .spectral characterization times are based on pre - selected snr values ( as in ref . ) and match the calculations in ref . .the star catalog is based on a curated database originally developed by margaret turnbull , with updates to stellar data , where available , taken from current values from the simbad astronomical database .target selection is performed with a detection integration time cutoff of 30 days and a minimum completeness cutoff of 2.75% .revisits are permitted at the discretion of the automated scheduler , and one full spectrum is attempted for each target ( spectra are not repeated if the full band is captured on the first attempt ) .the total integration time allotted is one year , spaced over six years of mission time with the coronagraph getting top priority on revisit observations . as a demonstration of exosims ability to compare different instrument designs for a single mission concept, we compare mission simulation results based on optical models of the pre - downselect spc , hlc and piaa - cmc designs . 
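Before turning to the design comparison, the constant false-positive and false-negative rates mentioned above can be illustrated with a small sketch of the four detection outcomes defined in the post-processing module; the probabilities used here are placeholders, not mission values.

```python
import random
from collections import Counter

rng = random.Random(2)

def detection_outcome(planet_present, p_false_alarm=0.01, p_missed_detection=0.05):
    """Classify a single observation into one of the four outcomes tracked by
    the post-processing module, given assumed constant false-alarm and
    missed-detection probabilities (placeholder values)."""
    if planet_present:
        return "false negative" if rng.random() < p_missed_detection else "true positive"
    return "false positive" if rng.random() < p_false_alarm else "true negative"

# Tally outcomes over many simulated observations, half with a planet present.
counts = Counter(detection_outcome(planet_present=(i % 2 == 0)) for i in range(10_000))
print(counts)
```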
as all of these represent preliminary designs that have since been significantly improved upon , and as our primary purpose here is to demonstrate the simulations utility , we will refer to the three coronagraphs simply as c1 , c2 , and c3 ( in no particular order ) .table [ tbl : corons ] lists some of the parameters of the three coronagraphs including their inner and outer working angles , their minimum and mean contrasts , and maximum and mean throughputs .each design has significantly different operating characteristics in its region of high contrast ( or ` dark hole ' ) .c3 provides the best overall minimum contrast and iwa , but has a more modest mean contrast , whereas c2 has the most stable , and lowest mean contrast over its entire dark hole , at the expense of a larger inner working angle .c1 has the smallest angular extent for its dark hole , but maintains reasonably high throughput throughout .c2 has a constant , and very low throughput , while c3 has the highest throughput over its entire operating region . finally ,while c1 and c3 cover the full field of view with their dark holes , c2 only creates high contrast regions in 1/3 of the field of view , and so requires three integrations to cover the full field .we consider five specific metrics for evaluating these coronagraph designs : 1 .unique planet detections , defined as the total number of individual planets observed at least once .all detections , defined as the total number of planet observations throughout the mission ( including repeat observations of the same planets ) .3 . total visits , defined as the total number of observations . 4 .unique targets , defined as the number of target stars observed throughout the mission .5 . full spectral characterizations , defined as the total number of spectral characterizations covering the entire 400 to 800 nm band . this does not include characterizations where the inner or outer working angle prevent full coverage of the whole band .this number will always be smaller than the number of unique detections based on the mission rules used here . while it is possible to use exosims results to calculate many other values ,these metrics represent a very good indicator of overall mission performance . as it is impossibleto jointly maximize all five in particular , getting more full spectra or additional repeat detections is a direct trade - off to finding additional , new planets these values together describe the pareto front of the mission phase space . at the same time , these metrics serve as proxies for other quantities of interest . for example , taken together , all detections and unique detections indicate a mission s ability to confirm it s own detections during the course of the primary mission , as well as for possible orbit fitting to detected planets .the number of unique targets , compared with the input target list , determines whether a mission is operating in a ` target - poor ' or ` execution time - poor ' regime .the latter can be addressed simply by increasing the mission lifetime , whereas the former can only be changed with an instrument redesign .finally , comparing the numbers of unique detections and full spectra indicates whether an instrument design has sufficient capabilities to fully characterize the planets that it can detect . 
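The metrics above can be computed directly from an ordered list of simulated observations. The sketch below assumes a hypothetical observation record (keys 'target', 'planets', 'spectrum'), which is not the EXOSIMS output format, and counts the five quantities for a toy three-visit timeline.

```python
def survey_metrics(observations, full_band=(400, 800)):
    """Compute the five yield metrics from a list of simulated observations.

    Each observation is a dict with hypothetical keys: 'target' (star id),
    'planets' (ids of planets detected in that visit), and 'spectrum'
    (wavelength range characterized in nm, or None)."""
    unique_planets, unique_targets = set(), set()
    all_detections, full_spectra = 0, 0
    for obs in observations:
        unique_targets.add(obs["target"])
        all_detections += len(obs["planets"])
        unique_planets.update(obs["planets"])
        spec = obs["spectrum"]
        if spec is not None and spec[0] <= full_band[0] and spec[1] >= full_band[1]:
            full_spectra += 1
    return {
        "unique detections": len(unique_planets),
        "all detections": all_detections,
        "total visits": len(observations),
        "unique targets": len(unique_targets),
        "full spectral characterizations": full_spectra,
    }

# Tiny worked example: three visits, one repeat detection, one full spectrum.
obs = [
    {"target": "HIP 1", "planets": ["HIP 1 b"], "spectrum": None},
    {"target": "HIP 1", "planets": ["HIP 1 b"], "spectrum": (400, 800)},
    {"target": "HIP 2", "planets": [], "spectrum": None},
]
print(survey_metrics(obs))
```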
for each of the coronagraphs we run 5000 full mission simulations , keeping all modules except for the optical description and post - processing constant .in addition to the parameters and implementations listed above , our post - processing module implementation assumes a static factor of either 10 or 30 in terms of contrast improvement due to post - processing .that is , results marked 10x assume that the achieved contrast on an observation is a factor of 10 below the design contrast at the equivalent angular separation .all together , we generated 30,000 discrete mission simulations , in six ensembles .mean values and standard deviations for our five metrics of interest for each ensemble are tabulated in table [ tbl : res2 ] , with the full probability density functions ( pdfs ) shown in figs .[ fig : audets ] - [ fig : spectra ] . assuming either a factor of 10 or 30 in post - processing contrast gains . of particular importance here is the probability of zero detections all of the designs at 10x suppression , and c1 in particular , have a significant ( ) chance of never seeing a planet . [fig : audets],scaledwidth=65.0% ] .note that values of 15 or more typically represent a small number of easily detectable planet that are re - observed many times .re - observations of a single target were capped at four successful detections in all simulations .[ fig : adets],scaledwidth=65.0% ] .[fig : avisits],scaledwidth=65.0% ] . while all three instruments have fairly narrow distributions of this parameter , only c2 with 10x post - processing gainsis completely target limited.[fig : auvisits],scaledwidth=65.0% ] .c3 does comparatively well in this metric due to its lower iwa and high throughput .[ fig : spectra],scaledwidth=65.0% ] from the tabulated values , we see that the three coronagraphs have fairly similar performance in terms of number of planets found and spectrally characterized .overall , c2 is most successful at detecting planets , due primarily to the stability of its contrast over the full dark hole .because of the very low overall throughput , this does not translate into more spectral characterizations than the other two designs .c1 and c2 benefit more from the change from 10x to 30x contrast improvement due to post - processing than does c3 , which already has the deepest overall contrast , but whose contrast varies significantly over the dark hole .the largest differences in the metrics are the total number of observations .these illustrate the direct trade - off between acquiring spectra , which take a very long time , and doing additional integrations on other targets . in cases such as c2 with only 10x contrast improvement ,the spectral characterization times are typically so long that most targets do not stay out of the observatory s keepouts and so the mission scheduling logic chooses to do more observations rather than wasting time on impossible spectral integrations . 
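For a given ensemble, the tabulated means and standard deviations, together with the zero-detection probability discussed next, reduce to a short summary over the per-simulation yields. The sketch below uses a placeholder Poisson-distributed ensemble rather than actual simulation output.

```python
import numpy as np

def summarize_ensemble(unique_detections):
    """Ensemble summary used in the text: mean, standard deviation, and the
    probability of a complete-failure outcome (zero unique detections)."""
    counts = np.asarray(unique_detections)
    return {
        "mean": counts.mean(),
        "std": counts.std(ddof=1),
        "P(zero detections)": np.mean(counts == 0),
    }

# Placeholder ensemble of per-simulation yields, not actual simulation output.
rng = np.random.default_rng(3)
fake_ensemble = rng.poisson(3.5, size=5000)
print(summarize_ensemble(fake_ensemble))
```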
turning to the figures of the full distributions for these metrics, we see that despite having similar mean values for unique planet detections , the full distributions of detections are quite different , leading to varying probabilities of zero detections .as this represents a major mission failure mode , it is very important to track this value , as it may outweigh the benefits of a given design .c1 with only 10x contrast gain does particularly poorly in this respect , with over 15% of cases resulting in no planets found .however , when a 30x gain is assumed , c1 and c2 end up having the lowest zero detection probabilities .we again see that the effects of even this simple post - processing assumption are not uniform over all designs .this is due to the complicated interactions between each instrument s contrast curve and the assumed distributions of planetary parameters .in essence , if our priors were different ( leading to different completeness values for our targets ) then we would expect different relative gains for the same post - processing assumptions .this is always a pitfall of these simulations and must always be kept in mind when analyzing the results .it should also be noted that there have been multiple iterations of all these coronagraph designs since downselect , resulting in significantly lower probabilities of zero detections , as seen in the next section .another interesting feature are the very long right - hand tails of the all detections and total visits distributions .these do not actually represent outliers in terms of highly successful missions , but rather typically imply the existence of one or a small number of very easy to detect planets .the logic of the scheduler allows the mission to keep returning to these targets for followup observations when it has failed to detect any other planets around the other targets in its list .this situation arises when the design of the instrument and assumptions on planet distributions leave a mission target limited .the distributions of unique targets show this limitation , with very narrow density functions for the actual number of targets observed for each instrument .in particular , fig .[ fig : auvisits ] makes it clear that c2 with 10x post - processing gains runs out of available targets . in order to combat this, the scheduler code prevents revisits to a given target after four successful detections of a planet around it .finally , turning to fig .[ fig : spectra ] we see that all three designs , regardless of post - processing assumptions have greater than 10% probabilities of zero full spectral characterizations .c1 with 10x post - processing gains fares most poorly with zero full spectra achieved in over one third of all cases . .the black dashed line represents the density function used in generating the planetary radii for the simulated planets in all simulations , while the other lines represent the distributions of planetary radii of the planets detected by each of the coronagraphs .the input distribution is based on the kepler results reported in ref . 
.[ fig : radiusdists],scaledwidth=65.0% ] .the input mass distribution is derived from sampling the radius distribution shown in fig .[ fig : radiusdists ] and converting to mass via an assumed density function .[ fig : massdists],scaledwidth=65.0% ] analysis of the survey ensembles also allows us to measure the biasing effects of the mission on the planet parameters of interest .as we know the input distributions of the simulation , we can think of these as priors , and of the distribution of the ` observed ' planets as the posteriors . figs .[ fig : radiusdists ] and [ fig : massdists ] show the distributions of planetary mass and radius used in the simulations , respectively , along with the output distributions from the various coronagraph designs .the output distributions are calculated by taking the results of all of the simulations in each ensemble together , as the number of planets detected in each individual simulation is too small to produce an accurate distribution . the input mass distribution shown hereis derived from the kepler radius distribution as reported in ref . and is calculated by assuming that this distribution is the same for all orbital periods and via an assumed density function .the frequency spike seen at around 20 earth masses is due to a poor overlap in the density functions used in this part of the phase space .this results in an equivalent spike in the posterior distributions , which slightly biases the results .all of the instruments have fairly similar selection biases , although c1 and c3 , which have smaller inner working angles and higher throughputs , detect more lower mass / radius planets .the effects of the instruments are readily apparent in all cases : lower radius planets , which are predicted to occur more frequently than larger radius ones , are detected at much lower rates . in this section we present the results of survey ensemble analyses for a single instrument a post - downselect hlc design again assuming either 10x or 30x post - processing gains , and assuming either 0.4 , 0.8 , or 1.6 milliarcseconds of telescope jitter .the jitter of the actual observatory will be a function of the final bus design and the operation of the reaction wheels , and its precise value is not yet known , which makes it important to evaluate how different levels of jitter may effect the achieved contrast and overall science yield .the jitter is built directly into the optical system model encoded in the optical system description module ( see krist et al ., this volume , for details ) , while the post - processing is treated as in the previous section . , scaledwidth=65.0% ] .the trend here closely follows the one observed in the results for the unique detections metric.[fig : adets2],scaledwidth=65.0% ] . here , the post - processing improvement factor makes more of a difference than in the previous two figures , as more time must be devoted to spectral characterizations , limiting how much time is available for further observations.[fig : auvisits2],scaledwidth=65.0% ] .the trend here tracks closely to the one observed in the total visits metric , and shows that this coronagraph design is not target limited in any of the studied cases.[fig : avisits2],scaledwidth=65.0% ] . 
in the worst case, there is an of not getting any spectra .only the case of 0.4 mas jitter with 30x post - processing gain has no simulations in its ensemble with zero full spectra achieved .[ fig : spectra2],scaledwidth=65.0% ] as in the previous section , we run ensembles of 5000 simulations for each of the six cases considered , keeping all modules except for the optical description and post - processing constant .the mean and of the five metrics of interest described in [ sec : predown ] are tabulated in table [ tbl : res3 ] , and the full pdfs for all metrics are shown in figs .[ fig : audets2 ] - [ fig : spectra2 ] .one important observation made immediately obvious by these results is the relatively large effect of increased jitter versus the gains due to post - processing .tripling the assumed gain factor of post - processing on the final achieved contrast has a significantly smaller effect on the number of detections , gaining only one unique detection , on average , as compared with halving the amount of telescope jitter , which increases the number of unique detections by over 30% , on average .this shows us that the telescope jitter may be an effect that fundamentally can not be corrected after the fact , and therefore needs to be tightly controlled , with well defined requirements set during mission design .much of the current development effort for the project is focused on low - order wavefront sensing and control to mitigate these effects .we can also see significant improvements in the coronagraph design since the versions evaluated in [ sec : predown ] , as the probability of zero planet detections is less than 2% in the case of the highest jitter level , and is well below 1% for all other cases .in fact , for both the 0.4 mas jitter ensembles , no simulations had zero detections , indicating a very low probability of complete mission failure for this coronagraph at these operating conditions .similar to the results of the previous section , the trend in the number of total visits does not simply follow those seen in the unique and total detection metrics , but is a function of both the number of detections and how much time is spent on spectral characterizations .we can see how the cases with the highest jitter and lowest post - processing gains are pushed towards larger numbers of observations , and unique targets , as they are able to achieve fewer full spectral characterizations , leaving them with additional mission time to search for new candidates .this is equally reflected in fig .[ fig : spectra2 ] where , despite the good performance seen in fig .[ fig : audets2 ] , all jitter levels have over 5% chance of zero full spectra at the 10x post - processing gain level , and only the 0.4 mas case at 30x gain has no instances of zero full spectra in its ensemble of results .these metrics , taken together , clearly show that further optimization is possible via modification of mission rules , which were kept constant in all these ensembles .for example , the low numbers of spectral characterizations at higher jitter levels suggest that it may be worthwhile to attempt shallower integrations in order to be able to make more total observations and potentially find a larger number of bright planets .this would bias the final survey results towards larger planets , but would increase the probability of spectrally characterizing at least some of the planets discovered .alternatively , this may point to the desirability of investigating whether full spectral characterizations 
can be achieved for a small number of targets over the course of multiple independent observations .we have presented the design details of exosims a modular , open source software framework for the simulation of exoplanet imaging missions with instrumentation on space observatories .we have also motivated the development and baseline implementation of the component parts of this software for the wfirst - afta coronagraph , and presented initial results of mission simulations for various iterations of the wfirst - afta coronagraph design .these simulations allow us to compare completely different instruments in the form of early competing coronagraph designs for wfirst - afta .the same tools also allow us to evaluate the effects of different operating assumptions , demonstrated here by comparing different assumed post - processing capabilities and telescope stability values for a single coronagraph design .as both the tools and the coronagraph and mission design continue to mature we expect the predictions presented here to evolve as well , but certain trends have emerged that we expect to persist .we have identified the portions of design space and telescope stability ranges that lead to significant probabilities of zero detections , and we expect instrument designs and observatory specifications to move away from these .we have also identified a mean number of new planetary detections , for our particular assumed prior distributions of planetary parameters , that are consistent with the science definition team s mission goals for this instrument . as we continue to both develop the software and to improve our specific modeling of wfirst - afta we expect that these and future simulations will prove helpful in guiding the final form of the mission , and will lay the groundwork for analysis of future exoplanet imagers .this material is based upon work supported by the national aeronautics and space administration under grant no .nnx14ad99 g issued through the goddard space flight center .exosims is being developed at cornell university with support by nasa grant no .nnx15aj67 g .this research has made use of the simbad database , operated at cds , strasbourg , france .the authors would like to thank rhonda morgan for many useful discussions and suggestions , as well as our reviewers wes traub and laurent pueyo , who have significantly improved this work through their comments .m. c. turnbull , t. glassman , a. roberge , w. cash , c. noecker , a. lo , b. mason , p. oakley , and j. bally , `` the search for habitable worlds : 1 .the viability of a starshade mission , '' _ publications of the astronomical society of the pacific _ * 124*(915 ) , 418447 ( 2012 ) . k. l. cahoy , a. d. marinan , b. novak , c. kerr , t. nguyen , m. webber , g. falkenburg , and a. barg , `` wavefront control in space with mems deformable mirrors for exoplanet direct imaging , '' _ journal of micro / nanolithography , mems , and moems _ * 13*(1 ) , 011105011105 ( 2014 ). s. b. shaklan , m. c. noecker , a. s. lo , t. glassman , p. j. dumont , e. o. jordan , n. j. kasdin , j. w. c. cash , e. j. cady , and p. r. lawson , `` error budgeting and tolerancing of starshades for exoplanet detection , '' in _ proceedings of spie _ , * 7731 * ( 2010 ) .m. wenger , f. ochsenbein , d. egret , p. dubois , f. bonnarel , s. borde , f. genova , g. jasniewicz , s. lalo , s. lesteven , _et al . 
_ ,`` the simbad astronomical database - the cds reference database for astronomical objects , '' _ astronomy and astrophysics supplement series _ * 143*(1 ) , 922 ( 2000 ) .m. a. perryman , l. lindegren , j. kovalevsky , e. hoeg , u. bastian , p. bernacca , m. crz , f. donati , m. grenon , m. grewing , __ , `` the hipparcos catalogue , '' _ astronomy and astrophysics _ * 323 * , l49l52 ( 1997 ) . t. j. henry , `` the mass - luminosity relation from end to end , '' in _ spectroscopically and spatially resolving the components of the close binary stars , proceedings of the workshop held 20 - 24 october 2003 in dubrovnik , croatia _ , r. w. hilditch , h. hensberge , and k. pavlovski , eds ., * 318 * , asp conference series , san francisco ( 2004 ). a. cumming , r. p. butler , g. w. marcy , s. s. vogt , j. t. wright , and d. a. fischer , `` the keck planet search : detectability and the minimum mass and orbital period distribution of extrasolar planets , '' _ publications of the astronomical society of the pacific _ * 120*(867 ) , 531554 ( 2008 ) .a. w. howard , g. w. marcy , j. a. johnson , d. a. fischer , j. t. wright , h. isaacson , j. a. valenti , j. anderson , d. n. lin , and s. ida , `` the occurrence and mass distribution of close - in super - earths , neptunes , and jupiters , '' _ science _ * 330*(6004 ) , 653655 ( 2010 ) .n. m. batalha , j. f. rowe , s. t. bryson , t. barclay , c. j. burke , d. a. caldwell , j. l. christiansen , f. mullally , s. e. thompson , t. m. brown , _ et al ._ , `` planetary candidates observed by kepleranalysis of the first 16 months of data , '' _ the astrophysical journal supplement series _ * 204*(2 ) , 24 ( 2013 ) .f. fressin , g. torres , d. charbonneau , s. t. bryson , j. christiansen , c. d. dressing , j. m. jenkins , l. m. walkowicz , and n. m. batalha , `` the false positive rate of kepler and the occurrence of planets , '' _ the astrophysical journal _ * 766*(2 ) , 81 ( 2013 ) .j. mcbride , j. r. graham , b. macintosh , s. v. beckwith , c. marois , l. a. poyneer , and s. j. wiktorowicz , `` experimental design for the gemini planet imager , '' _ publications of the astronomical society of the pacific _ * 123*(904 ) , 692708 ( 2011 ) .b. macintosh , j. r. graham , p. ingraham , q. konopacky , c. marois , m. perrin , l. poyneer , b. bauman , t. barman , a. s. burrows , _et al . _ , `` first light of the gemini planet imager , '' _ proceedings of the national academy of sciences _ * 111*(35 ) , 1266112666 ( 2014 ) .d. spergel , n. gehrels , j. breckinridge , m. donahue , a. dressler , b. gaudi , t. greene , o. guyon , c. hirata , j. kalirai , __ , `` wide - field infrared survey telescope - astrophysics focused telescope assets wfirst - afta final report , '' _ arxiv preprint arxiv:1305.5422 _ ( 2013 ) .d. savransky , d. n. spergel , n. j. kasdin , e. j. cady , p. d. lisman , s. h. pravdo , s. b. shaklan , and y. fujii , `` occulting ozone observatory science overview , '' in _ proc .spie _ , * 7731 * , 77312h ( 2010 ) .j. b. pollack , k. rages , k. h. baines , j. t. bergstralh , d. wenkert , and g. e. danielson , `` estimates of the bolometric albedos and radiation balance of uranus and neptune , '' _ icarus _ * 65*(2 ) , 442466 ( 1986 ) .m. s. marley , c. gelino , d. stephens , j. i. lunine , and r. freedman , `` reflected spectra and albedos of extrasolar giant planets .i. clear and cloudy atmospheres , '' _ the astrophysical journal _ * 513*(2 ) , 879 ( 1999 ) .j. j. fortney , m. s. marley , d. saumon , and k. 
lodders , `` synthetic spectra and colors of young giant planet atmospheres : effects of initial conditions and atmospheric metallicity , '' _ the astrophysical journal _* 683*(2 ) , 1104 ( 2008 ) .a. burrows , m. marley , w. b. hubbard , j. i. lunine , t. guillot , d. saumon , r. freedman , d. sudarsky , and c. sharp , `` a nongray theory of extrasolar giant planets and brown dwarfs , '' _ the astrophysical journal _ * 491 * , 856+ ( 1997 ) .d. lafrenire , c. marois , r. doyon , d. nadeau , and e. artigau , `` a new algorithm for point - spread function subtraction in high - contrast imaging : a demonstration with angular differential imaging , '' _ the astrophysical journal _* 660*(1 ) , 770780 ( 2007 ) .r. soummer , l. pueyo , and j. larkin , `` detection and characterization of exoplanets and disks using projections on karhunen - loeve eigenimages , '' _ the astrophysical journal letters _ * 755*(2 ) , l28 ( 2012 ) .j. trauger , d. moody , b. gordon , j. krist , and d. mawet , `` complex apodization lyot coronagraphy for the direct imaging of exoplanet systems : design , fabrication , and laboratory demonstration , '' in _ spie astronomical telescopes+ instrumentation _ , 84424q84424q , international society for optics and photonics ( 2012 ) .n. zimmerman , a. eldorado riggs , n. j. kasdin , a. carlotti , and r. j. vanderbei , `` a shaped pupil lyot coronagraph for wfirst - afta , '' in _ american astronomical society meeting abstracts _, * 225 * ( 2015 ) .e. sidick , b. kern , r. belikov , a. kuhnert , and s. shaklan , `` simulated contrast performance of phase induced amplitude apodization ( piaa ) coronagraph testbed , '' in _ spie astronomical telescopes+ instrumentation _ , 91430w91430w , international society for optics and photonics ( 2014 ) .w. a. traub , r. belikov , o. guyon , n. j. kasdin , j. krist , b. macintosh , b. mennesson , d. savransky , m. shao , e. serabyn , and j. trauger , `` science yield estimation for afta coronagraphs , '' in _ proc .spie _ , _ spie astronomical telescopes+ instrumentation _, 91430n91430n , international society for optics and photonics ( 2014 ) .f. fressin , g. torres , d. charbonneau , s. t. bryson , j. christiansen , c. d. dressing , j. m. jenkins , l. m. walkowicz , and n. m. batalha , `` the false positive rate of kepler and the occurrence of planets , '' _ the astrophysical journal _ * 766*(2 ) , 81 ( 2013 ) .i. poberezhskiy , f. zhao , x. an , k. balasubramanian , r. belikov , e. cady , r. demers , r. diaz , q. gong , b. gordon , _et al . _ ,`` technology development towards wfirst - afta coronagraph , '' in _ spie astronomical telescopes+ instrumentation _, 91430p91430p , international society for optics and photonics ( 2014 ) .* dmitry savransky * is an assistant professor in the sibley school of mechanical and aerospace engineering at cornell university .he received his phd from princeton university in 2011 followed by a postdoctoral position at lawrence livermore national laboratory where he assisted in the integration and commissioning of the gemini planet imager .his research interests include optimal control of optical system , simulation of space missions , and image post - processing techniques . * daniel garrett * is a phd student in the sibley school of mechanical and aerospace engineering at cornell university .his research interests include dynamics and control theory , planetary science , and space exploration .
We present and discuss the design details of an extensible, modular, open-source software framework called EXOSIMS, which creates end-to-end simulations of space-based exoplanet imaging missions. We motivate the development and baseline implementation of the component parts of this software with models of the WFIRST-AFTA coronagraph, and present initial results of mission simulations for various iterations of the WFIRST-AFTA coronagraph design. We present and discuss two sets of simulations: the first compares the science yield of completely different instruments in the form of early competing coronagraph designs for WFIRST-AFTA. The second set of simulations evaluates the effects of different operating assumptions, specifically the assumed post-processing capabilities and telescope vibration levels. We discuss how these results can guide further instrument development and the expected evolution of science yields.
linear regression is perhaps the most widely used example of parameter fitting throughout the sciences . yet , the traditional ordinary least - squares ( or weighted least - squares ) approach to regression neglects some features that are practically ubiquitous in astrophysical data , namely the existence of measurement errors , often correlated with one another , on _ all _ quantities of interest , and the presence of residual , intrinsic scatter ( i.e. physical scatter , not the result of measurement errors ) about the best fit . takes on this problem ( see that work for a more extensive overview of the prior literature ) by devising an efficient algorithm for simultaneously constraining the parameters of a linear model and the intrinsic scatter in the presence of such heteroscedastic and correlated measurement errors .in addition , the approach corrects a bias that exists when the underlying distribution of covariates in a regression is assumed to be uniform , by modeling this distribution as a flexible mixture of gaussian ( normal ) distributions and marginalizing over it .the model is considerably more complex , in terms of the number of free parameters , than traditional regression .nevertheless , it can be efficiently constrained using a fully conjugate gibbs sampler , as described in that work .briefly , the approach takes advantage of the fact that , for a suitable model , the fully conditional posterior of certain parameters ( or blocks of parameters ) may be expressible as a known distribution which can be sampled from directly using standard numerical techniques .if all model parameters can be sampled this way , then a gibbs sampler , which simply cycles through the list of parameters , updating or block - updating them in turn , can move efficiently through the parameter space . by repeatedly gibbs sampling ,a markov chain that converges to the joint posterior distribution of all model parameters is generated ( see , e.g. , for theoretical background ) .the individual pieces ( e.g. , the model distributions of measurement error , intrinsic scatter , and the covariate prior distribution ) of the model are conjugate , making it suitable for this type of efficient gibbs sampling .this is a key advantage in terms of making the resulting algorithm widely accessible to the community , since conjugate gibbs samplers , unlike more general and powerful markov chain monte carlo samplers , require no a priori tuning by the user .while argue against the assumption of a uniform prior for covariates , it should be noted that the alternative of a gaussian mixture model ( or the dirichlet process generalization introduced below ) is not necessarily applicable in every situation either .when a well motivated physical model of the distribution of covariates exists , it may well be preferable to use it , even at the expense of computational efficiency . in the general case, we can hope that a flexible parametrization like the gaussian mixture is adequate , although it is always worth checking a posteriori that the model distribution of covariates provides a good description of the data . 
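To illustrate the conjugate-Gibbs strategy described above, the following minimal Python sketch samples a toy linear regression with Gaussian scatter and no measurement errors, which is far simpler than the model considered in this work: with a flat prior on the coefficients and a 1/sigma^2 prior on the variance, both full conditional posteriors are standard distributions that can be drawn directly, so the sampler simply alternates between them. The function names, priors, and synthetic data are illustrative assumptions and are not part of the algorithm being extended here.

    import numpy as np

    def gibbs_linear_regression(x, y, n_steps=2000, seed=0):
        """Toy conjugate Gibbs sampler for y = a + b*x + N(0, sigma2).

        With a flat prior on (a, b) and p(sigma2) proportional to 1/sigma2,
        the full conditionals are standard:
          (a, b) | sigma2, data  ~  Normal, centred on the least-squares solution
          sigma2 | (a, b), data  ~  inverse-gamma (drawn here via a Gamma variate)
        """
        rng = np.random.default_rng(seed)
        X = np.column_stack([np.ones_like(x), x])        # design matrix
        XtX_inv = np.linalg.inv(X.T @ X)
        beta_hat = XtX_inv @ X.T @ y                     # ordinary least squares
        n = len(y)

        beta = beta_hat.copy()
        samples = np.empty((n_steps, 3))
        for i in range(n_steps):
            # 1) sigma2 | beta: inverse-gamma(n/2, S/2), with S the residual sum of squares
            resid = y - X @ beta
            sigma2 = (resid @ resid) / (2.0 * rng.gamma(n / 2.0))
            # 2) beta | sigma2: multivariate normal around the least-squares solution
            beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
            samples[i] = [beta[0], beta[1], sigma2]
        return samples

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x = rng.normal(size=200)
        y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=200)
        chain = gibbs_linear_regression(x, y)
        print(chain[1000:].mean(axis=0))   # roughly [1.0, 2.0, 0.25]

Because each update is an exact draw from a known distribution, no proposal tuning is required; this is the property that the generalizations described below are designed to preserve.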
and discuss real applications in which a gaussian distribution of covariates turns out to be adequate , despite the underlying physics being non - gaussian .this work describes two useful generalizations to the algorithm .first , the number of response variables is allowed to be greater than one .second , the prior distribution of covariates may be modeled using a dirichlet process rather than as a mixture of gaussians with a fixed number of components .a dirichlet process describes a probability distribution over the space of probability distributions , and ( in contrast to the many parameters required to specify a large mixing model ) is described only by a concentration parameter and a base distribution . for the choice of a gaussian base distribution , used here , the dirichlet process can be thought of as a gaussian mixture in which the number of mixture components is learned from the data and marginalized over as the fit progresses ( see more discussion , in a different astrophysical context , by ) .this makes it a very general and powerful alternative to the standard fixed - size gaussian mixture , as well as one that requires even less tuning by the user , since the number of mixture components need not be specified .crucially , both of these generalizations preserve the conjugacy of the model , so that posterior samples can still be easily obtained by gibbs sampling . of course ,( or this paper ) does not provide the only implementation of conjugate gibbs sampling , nor is that approach the only one possible for linear regression in the bayesian context .indeed , there exist more general statistical packages capable of identifying conjugate sampling strategies ( where possible ) based on an abstract model definition ( e.g. , bugs , jags , and stan ) .the use of more general markov chain sampling techniques naturally allow for more general ( non - conjugate ) models and/or parametrizations ( e.g. , ) . nevertheless , there is something appealing in the relative simplicity of implementation and use of the conjugate gibbs approach , particularly as it applies so readily to the commonly used linear model with gaussian scatter .section [ sec : model ] describes the model employed in this work in more detail , and introduces notation .section [ sec : sampler ] outlines the changes to the sampling algorithm needed to accomodate the generalizations above .since this work is intended to extend that of , i confine this discussion only to steps which differ from the that algorithm , and do not review the gibbs sampling procedure in its entirety. 
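For readers unfamiliar with the construction, the following short sketch illustrates how a Dirichlet process with a Gaussian base distribution behaves as a Gaussian mixture whose number of components is not fixed in advance: cluster labels are drawn from the Chinese-restaurant-process representation, and cluster means from the base distribution. The concentration parameter, base-distribution parameters, and function names are illustrative choices only and are not taken from LRGS.

    import numpy as np

    def crp_assignments(n, alpha, rng):
        """Draw cluster labels for n points from the Chinese restaurant process
        implied by a Dirichlet process with concentration parameter alpha."""
        labels = np.zeros(n, dtype=int)
        counts = [1]                           # the first point opens cluster 0
        for i in range(1, n):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()               # existing cluster: prob ~ size; new cluster: prob ~ alpha
            k = rng.choice(len(probs), p=probs)
            if k == len(counts):
                counts.append(1)               # open a new cluster
            else:
                counts[k] += 1
            labels[i] = k
        return labels

    def draw_dp_mixture(n, alpha, mu0=0.0, tau=3.0, sigma=0.5, seed=0):
        """Sample data from a DP mixture: cluster means from the Gaussian base
        distribution N(mu0, tau^2), data points from N(mean, sigma^2)."""
        rng = np.random.default_rng(seed)
        labels = crp_assignments(n, alpha, rng)
        means = rng.normal(mu0, tau, size=labels.max() + 1)
        return labels, means[labels] + rng.normal(0.0, sigma, size=n)

    if __name__ == "__main__":
        labels, x = draw_dp_mixture(n=500, alpha=1.0)
        print("clusters used:", labels.max() + 1)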
however , the level of detail is intentionally high ; between this document and , it should be straightforward for the interested reader to create his or her own implementation of the entire algorithm .section [ sec : examples ] provides some example analyses , including one with real astrophysical data , and discusses some practical aspects of the approach .the complete algorithm described here ( with both gaussian mixture and dirichlet process models ) has been implemented in the r language .the package is named linear regression by gibbs sampling ( lrgs ) , the better to sow confusion among extragalactic astronomers .the code can be obtained from github or the comprehensive r archive network .here i review the model described by , introducing the generalization to multiple response variables ( section [ sec : mvmodel ] ) and the use of the dirichlet process to describe the prior distribution of the covariates ( section [ sec : dproc ] ) .the notation used here is summarized in table [ tab : notation ] ; it differs slightly from that of , as noted . in this document, denotes a stochastic relationship in which a random variable is drawn from the probability distribution , and boldface distinguishes vector- or matrix - valued variables . [ cols="<,^,<,^",options="header " , ] [ tab : toy ] suppose we had a physical basis for a 3-component model ( or suspected 3 components , by inspection ) , but wanted to allow for the possibility of more or less structure than a strict gaussian mixture provides .the dirichlet process supplies a way to do this . for a given and ,the distribution of is known , , where is an unsigned stirling number of the first kind .] so in principle a prior expectation for the number of clusters , say , can be roughly translated into a gamma prior on . herei instead adopt an uninformative prior on , and compare the results to those of a gaussian mixture model with . using the dirichlet process model , results from a chain of 1000 gibbs samples ( discarding the first 10 )are shown as shaded histograms in figure [ fig : toydata ] .statistic is for every parameter .the autocorrelation length is also very short , steps for every parameter . ]the results are consistent with the input model values ( vertical , dashed lines ) for the parameters of interest ( , and ) .the latent parameters describing the base distribution of the dirichlet process are also consistent with the toy model , although they are poorly constrained .the right panel of figure [ fig : toydata ] shows the cluster assignments for a sample with ( the median of the chain ) ; the clustered nature of the data is recognized , although the number of clusters tends to exceed the number of components in the input model ., we see that 2 of the 6 clusters are populated by single data points that are not outliers .the reverse is not true , and here it is interesting that the dirichlet process can not fit the data using fewer than clusters ( figure [ fig : toyres ] ) . 
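Since the distribution of the number of occupied clusters for given alpha and n is known, a rough prior expectation for the number of clusters can be translated into a value of, or a prior on, alpha. A minimal sketch using the standard identity E[K | alpha, n] = sum over i from 1 to n of alpha/(alpha + i - 1), together with a numerical root-find, is given below; the helper names and the use of SciPy are illustrative assumptions and not part of LRGS.

    import numpy as np
    from scipy.optimize import brentq

    def expected_clusters(alpha, n):
        """E[K | alpha, n] for a Dirichlet process with concentration alpha."""
        i = np.arange(1, n + 1)
        return np.sum(alpha / (alpha + i - 1.0))

    def alpha_for_expected_clusters(k_target, n):
        """Numerically invert E[K | alpha, n] = k_target for alpha."""
        return brentq(lambda a: expected_clusters(a, n) - k_target, 1e-6, 1e6)

    if __name__ == "__main__":
        n = 500
        for k in (2, 3, 10):
            a = alpha_for_expected_clusters(k, n)
            print(f"target E[K] = {k:2d}  ->  alpha = {a:.3f}"
                  f"  (check: E[K] = {expected_clusters(a, n):.2f})")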
]an equivalent analysis using a mixture of 3 gaussians rather than a dirichlet process model produces very similar constraints on the parameters of interest ( hatched histograms in figure [ fig : toyres ] ) .+ + as a real - life astrophysical example , i consider the scaling relations of dynamically relaxed galaxy clusters , using measurements presented by .note that there are a number of subtleties in the interpretation of these results that will be discussed elsewhere ; here the problem is considered only as an application of the method presented in this work .briefly , the data set comprises x - ray measurements of 40 massive , relaxed clusters .the x - ray observables are total mass , ; gas mass , ; average gas temperature , ; and luminosity , .in addition , spectroscopically measured redshifts are available for each cluster .a simple model of cluster formation by spherical collapse under gravity , neglecting gas physics , predicts self - similar power - law scaling relations among these quantities : to be measured in a soft x - ray band , in practice 0.12.4kev . since the emissivity in this band is weakly dependent on temperature for hot clusters such as those in the data set , the resulting scaling relation has a shallower dependence on mass than the more familiar bolometric luminosity mass relation , ^{4/3}$ ] .the exponents in the scaling of equation [ eq : selfsim ] are specific to the chosen energy band . ]^{2/3 } , \nonumber\\ l & \propto & e(z)^{1.92}\,m^{0.92 } , \nonumber\end{aligned}\ ] ] where is the normalized hubble parameter at the cluster s redshift .the aim of this analysis is to test whether the power - law slopes above are accurate , and to characterize the joint intrinsic scatter of , and at fixed and .taking the logarithm of these physical quantities , and assuming log - normal measurement errors and intrinsic scatter , this becomes a linear regression with and . for brevity , and neglecting units , and ; i also approximately center the covariates for convenience .figure [ fig : cldata ] shows summary plots of these data . although measurement errors are shown as orthogonal bars for clarity , the analysis will use a full covariance matrix accounting for the interdependence of the x - ray measurements ( this covariance is illustrated for one cluster in the figure ) .+ because the redshifts are measured with very small uncertainties , this problem is not well suited to the dirichlet process prior ; intuitively , the number of clusters in the dirichlet process must approach the number of data points because the data are strongly inconsistent with one another ( i.e. are not exchangeable ) .instead , i use a gaussian mixture prior with , and verify that in practice the results are not sensitive to ( the parameters of interest differ negligibly from an analysis with ) . marginalized 2-dimensional constraints on the power - law slopes of each scaling relation are shown in the top row of figure [ fig : clres ] ( 68.3 and 95.4 per cent confidence ) . on inspection, only the luminosity scaling relation appears to be in any tension with the expectation in equation [ eq : selfsim ] , having a preference for a weaker dependence on and a stronger dependence on .these conclusions are in good agreement with a variety of earlier work ( e.g. 
; see also the review of ) .the posterior distributions of the elements of the multi - dimensional intrinsic covariance matrix are shown in the bottom row of figure [ fig : clres ] , after transforming to marginal scatter ( square root of the diagonal ) and correlation coefficients ( for the off - diagonal elements ) .the intrinsic scatters of and at fixed and are in good agreement with other measurements in the literature ( see , and references therein ) ; the scatter of is lower than the per cent typically found , likely because this analysis uses a special set of morphologically similar clusters rather than a more representative sample .the correlation coefficients are particularly challenging to measure , and the constraints are relatively poor .nevertheless , the ability to efficiently place constraints on the full intrinsic covariance matrix is an important feature of this analysis . within uncertainties ,these results agree well with the few previous contraints on these correlation coefficients in the literature .the best - fitting intrinsic covariance matrix is illustrated visually in figure [ fig : clresid ] , which compares it to the residuals of with respect to the best - fitting values of and the best - fitting scaling relations .i have generalized the bayesian linear regression method described by to the case of multiple response variables , and included a dirichlet process model of the distribution of covariates ( equivalent to a gaussian mixture whose complexity is learned from the data ) .the algorithm described here is implemented independently of the linmix_err idl code of as an r package called lrgs , which is publicly available .two examples , respectively using a toy data set and real astrophysical data , are presented .a number of further generalizations are possible . in principle, significant complexity can be added to the model of the intrinsic scatter in the form of a gaussian mixture or dirichlet process model ( with a gaussian base distribution ) while maintaining conjugacy of the conditional posteriors , and thereby the efficiency of the gibbs sampler .the case censored data ( upper limits on some measured responses ) is discussed by .this situation , or , more generally , non - gaussian measurement errors , can be handled by rejection sampling ( at the expense of efficiency ) but is not yet implemented in lrgs .also of interest is the case of truncated data , where the selection of the data set depends on one of the response variables , and the data are consequently an incomplete and biased subset of a larger population .this case can in principle be handled by modeling the selection function and imputing the missing data .lrgs is shared publicly on github , and i hope that users who want more functionality will be interested in helping develop the code further .the addition of dirichlet process modeling to this work was inspired by extensive discussions with michael schneider and phil marshall .anja von der linden did some very helpful beta testing .i acknowledge support from the national science foundation under grant ast-1140019 .
An earlier work described an efficient algorithm (hereafter the original algorithm), using Gibbs sampling, for performing linear regression in the fairly general case where non-zero measurement errors exist for both the covariates and response variables, where these measurements may be correlated (for the same data point), where the response variable is affected by intrinsic scatter in addition to measurement error, and where the prior distribution of covariates is modeled by a flexible mixture of Gaussians rather than assumed to be uniform. Here I extend the algorithm in two ways. First, the procedure is generalized to the case of multiple response variables. Second, I describe how to model the prior distribution of covariates using a Dirichlet process, which can be thought of as a Gaussian mixture where the number of mixture components is learned from the data. I present an example of multivariate regression using the extended algorithm, namely fitting scaling relations of the gas mass, temperature, and luminosity of dynamically relaxed galaxy clusters as a function of their mass and redshift. An implementation of the Gibbs sampler in the R language, called LRGS, is provided. Keywords: methods: data analysis; X-rays: galaxies: clusters
experiments with cold trapped atoms rely on precise control of laser light amplitude and frequency . depending on the details of the experiments ,the duration of the laser pulse can range from a few seconds to a few micro - seconds . to meet these timing requirements ,switching of laser light is usually achieved by a combination of acousto - optical modulators ( aoms ) for fast switching and precise frequency tuning , and mechanical shutters to eliminate any leakage of the laser light .since aoms require radio - frequency ( rf ) signals ( typically with frequency from 0 to 500 mhz ) to operate , rf generators are indispensable in every modern atomic physics laboratory .while there are many commercially available rf generators in the market , they are not optimized to control multiple aoms in atomic physics experiments .for example , most rf generators requires a few milli - seconds to reprogram the frequency and/or the amplitude .some devices have a frequency - shift - key ( fsk ) functionality , but that only allows users to quickly alternate between two fixed frequencies . to overcome this limitation ,multiple rf generators combined with rf switches can be used . as the number of laser sources increases with the complexity of the experiment ,the space these devices take up in the laboratory can be significant . moreover ,incautious wiring among these devices can lead to unwanted electromagnetic interferences between rf sources and other electronics .hence , a compact multi - channel rf source with robust frequency and amplitude control is desirable since it simplifies the experimental setup . in this paper, we report on our development of a compact fpga - based multi - channel rf - generator and pulse sequencer .the system contains a multi - channel ttl digital pulse sequencer with a timing resolution of 40 ns and 16 channels of direct - digital synthesized ( dds ) rf generator with a frequency tuning resolution of better than 1 mhz , which is especially beneficial in operating optical atomic clocks .each dds channel is capable of amplitude switching with a rise - time of 60 ns and consecutively switching rf frequency within 1.0 .additionally , users can independently program amplitude and frequency ramps of each dds channel independently , making the unit suitable for a wide range of experiments .the unit also has a built - in frequency counter for counting electrical pulses from photo - multiplier tube ( pmt ) , which is widely used to detect fluorescence from trapped atoms or ions .the counter is capable of time - tagging the arrival of the photons at the pmt referenced to the timing of the pulse sequence with a resolution of 10 ns .only a single usb cable is required between the pulse sequencer and a control computer .the system ( not including power supplies and 2 ghz reference clocks ) takes up a volume of which fits in a 3u 19-inch standard rack .the block diagram of a typical setup of a complete system is shown in fig . [fig : block_diagram ] .the core of the system is an fpga module xem6010 from opal kelly ( ok fpga ) , which connects to a user - interface computer via a single usb cable for data - transfer .the ok fpga unit controls the timing of the pulse sequence using its on - board oscillator ( or an externally referenced clock via one of the ttl input channel ) and outputs the digital ttl signals to control devices such as mechanical shutters or relays . for counting electrical pulses from a pmt ,the ok unit receives a ttl signal from one of its input . 
to generate rf signals, the ok unit receives data ( containing frequency , amplitude and phase for each dds channel ) from a computer and distribute the data to all 16 dds boards . to increase flexibility, each dds board also has a cyclone iv ( altera ) fpga to store rf signal data and settings .each fpga on the dds board then programs ad9915 ( analog devices ) dds chip with desired frequency , phase and amplitude .each dds board requires a 2 ghz reference signal to operate properly .we use pci connectors to attach each dds board to the main pcb ( see fig . [fig : dds_pcb_duo ] ) for ease of installing and removing each individual board .the pci bus contains a 16-bit bus and a few auxiliary signal lines for data - transfer between each dds board and the main ok fpga .additionally , each dds boards receives power directly from the pci bus to simplify electrical connections .we can see in fig .[ fig : dds_pcb_duo ] that the only connections are the 2 ghz input reference clock and an output rf signal for each dds board .the main functionality of the main ok fpga is to generate multi - channel ( at least 32 channels ) digital ttl pulses with timing programmable by the user .we define the data structure of the pulse sequence by the initial states of each ttl channel and the time in which each channel changes its state . in this way , memoryused to store a pulse - sequence is determined by the complexity of the pulse sequence and not by the length of the pulse sequence ( see ref .all the pulse sequence data is transferred into the memory of the ok fpga before the starting of the pulse sequence . during the executing of the pulse sequence ,the internal counter of the ok fpga steps through the data stored in the memory .then the ttl outputs change their states accordingly .hence , there is no data transfer between the fpga and the computer when the pulse sequence is running , eliminating any potential time delay during the pulse sequence .( the timing of the pulse sequence is determined faithfully by the on - board ( or externally referenced ) clock . ) for our current design , the ttl pulses generator has a timing resolution of 40 ns .the switching time of the ttl signal is approximately 6 ns .a separated counter in the ok fpga is dedicated to counting electrical pulses from a pmt , which is widely used to detect fluorescence from trapped atoms or ions .the ok fpga is able to time - tag the arrival of the pmt signal ( with a timing resolution of 10 ns ) relative to the start of the pulse sequence .the time - tagged data is stored temporarily in the internal memory of the fpga which can be read out collectively later to reduce overheads in data - transfer .this is beneficial in running an experiment that requires long measurement time to build up statistics .this feature is demonstrated in ref . where fluorescence from trapped ions is collected during many experimental runs before the all the time - tag data is read by the computer at the end of the pulse sequence .another functionality of the ok fpga is to distribute rf signal data to all the dds boards ( described in the next section ) via a 16-bit differential bus .single - ended signals from the ok fpga are converted to differential signals using max3030 and max3094 ( maxim ) data converters to reject common - mode noise induced along the signal path .each dds board consists of a cyclone iv fpga and a dds chip ( ad9915 from analog devices ) . the block diagram is shown in fig .[ fig : dds_pcb_duo ] . 
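The event-based storage scheme described above, in which only the initial channel states and the times at which the output word changes are stored, can be sketched on the host side as follows. The class and method names are illustrative and do not correspond to the actual data format streamed to the FPGA memory over the FrontPanel interface; the 40 ns tick matches the timing resolution quoted above.

    TICK_NS = 40  # timing resolution of the sequencer

    class PulseSequence:
        """Host-side sketch of a sparse pulse-sequence representation: only the
        initial channel states and the times at which the output word changes
        are stored, so memory scales with the number of edges, not with the
        total duration of the sequence."""

        def __init__(self, n_channels=32):
            self.n_channels = n_channels
            self.initial = 0                 # bit i = initial state of TTL channel i
            self.edges = {}                  # tick -> output word after that tick

        def set_channel(self, channel, t_ns, state):
            """Request that `channel` take value `state` at time t_ns, quantized to
            the 40 ns grid.  In this simplified sketch, edges must be added in
            chronological order."""
            tick = round(t_ns / TICK_NS)
            word = self._word_before(tick)
            word = word | (1 << channel) if state else word & ~(1 << channel)
            self.edges[tick] = word

        def _word_before(self, tick):
            prior = [t for t in self.edges if t <= tick]
            return self.edges[max(prior)] if prior else self.initial

        def compile(self):
            """Return the (tick, word) list in the order it would be written to memory."""
            return sorted(self.edges.items())

    if __name__ == "__main__":
        seq = PulseSequence()
        seq.set_channel(0, 0, 1)        # open a shutter at t = 0
        seq.set_channel(5, 2000, 1)     # trigger a DDS update at t = 2 us
        seq.set_channel(0, 500000, 0)   # close the shutter at t = 0.5 ms
        print(seq.compile())

Only three (time, word) pairs are stored for this example, regardless of how long the 0.5 ms shutter window is, which is the property that ties memory usage to the complexity rather than the duration of the sequence.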
before the starting of a pulse sequence, the fpga receives data from the main ok fpga and store it in a built - in memory in the fpga .each memory address in the fpga is a 128-bit wide word used to store data for frequency , amplitude , phase , frequency ramping rate and amplitude ramping rate ( see table [ table : bit ] ) . during the operation of the pulse sequence ,the fpga on the dds board waits for a digital ttl pulse from the main ok fpga to step to the next memory address .the fpga then programs the dds chip with the dds data from this memory block .programming of frequency and phase is performed directly to the dds chip via a 16-bit bus between the fpga and the dds chip ( see table [ table : bit ] ) . for amplitude tuning ,we control the variable - gain amplifier ( vga ) adl5330 ( analog devices ) which provides 60 db dynamic range of amplitude tuning .the control voltage ( 0 to 1.2 v ) for the vga is generated from a high - speed ad9744 ( analog devices ) 14-bit digital - to - analog converter chip .this independent control over the frequency and amplitude allows us to perform frequency and amplitude ramping separately , which is impossible with the built - in digital ramp functionality of the dds chip .we incorporate directly in the fpga two independent counters with programmable counting rates .this allows us to generate more complex ramping patterns for other applications .the layout of the printed circuit - board ( pcb ) of the dds board is shown in fig .[ fig : dds_pcb_duo ] .the pcb is a 4-layer board design with two inner planes used for ground and power .the analog part ( left side ) and the digital part ( right side ) have separated sets of voltage regulators to reduce electrical interference .there is also an auxiliary usb port for the purpose of using the dds board as a stand - alone rf source .since the communication between the pc and the pulse - sequencer unit is done through the main ok fpga module via a single usb cable , all data transfer is handled by the frontpanel api provided by opal kelly . for us, we use the provided api in python programming language .however , our experimental control software framework ( labrad ) allows an interface between various programming language , including labview .the details of the experimental control software is beyond the scope of this paper and we refer interested readers to ref . for full descriptions of the software .however , we would like to point out that to program a new pulse sequence to the ok fpga , we do not have to recompile the hardware description code for the fpga .the pulse sequence is written to the ok fpga directly in python data structure .an application software for controlling the pulse sequencer unit can be downloaded at a git repository given in ref . which also includes design files for the pcb of the dds board and vhdl source codes for all the fpgas used in our setup .an ability to change the amplitude and/or the frequency of the rf signal ( that drives the aoms ) rapidly is crucial in atomic physics experiments , especially in the case where the laser pulse duration is in the time scale of a few micro - seconds .[ fig : trace_data ] shows the measured signal from one of the dds channels where we set the main ok fpga to trigger the dds board at ( shown in trace [ fig : trace_data]d ) to switch both the amplitude ( from low to high ) and the frequency ( trace [ fig : trace_data]a and b ) . 
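A sketch of how the per-event DDS settings might be packed into a single 128-bit memory word is shown below. The field widths are assumptions chosen only for illustration; the actual bit allocation is defined by the table referenced in the text, and the packing on the real hardware is handled by the FPGA firmware rather than host-side Python.

    # Field widths below are illustrative assumptions; the frequency, phase,
    # amplitude, and the two ramp-rate settings must share one 128-bit word.
    FIELDS = [                 # (name, width in bits)
        ("frequency", 32),     # frequency tuning word
        ("phase", 16),         # phase offset word
        ("amplitude", 14),     # DAC code controlling the variable-gain amplifier
        ("freq_ramp", 32),     # frequency ramp-rate counter setting
        ("amp_ramp", 32),      # amplitude ramp-rate counter setting
    ]
    assert sum(width for _, width in FIELDS) <= 128

    def pack_word(**values):
        """Pack the named fields into a single 128-bit integer, LSB first."""
        word, shift = 0, 0
        for name, width in FIELDS:
            v = values.get(name, 0)
            if v >= 1 << width:
                raise ValueError(f"{name} does not fit in {width} bits")
            word |= v << shift
            shift += width
        return word

    def unpack_word(word):
        """Inverse of pack_word, e.g. for checking what the FPGA would read back."""
        out, shift = {}, 0
        for name, width in FIELDS:
            out[name] = (word >> shift) & ((1 << width) - 1)
            shift += width
        return out

    if __name__ == "__main__":
        w = pack_word(frequency=0x0ABCDEF0, phase=0x8000, amplitude=0x1FFF)
        print(hex(w), unpack_word(w))

Packing all per-event settings into one word means that a single memory-address advance on the trigger pulse gives the on-board FPGA everything it needs for the next update.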
to understand the amplitude / frequency switching behaviour of the dds channel as shown in trace [ fig : trace_data]a , b and c , we describe here the protocol used for frequency / amplitude programming the dds chip via the on - board fpga ( dds fpga ) .once the dds board receives a ttl trigger from the ok fpga , the dds fpga advances its memory address to the next one and checks if there is a change in the amplitude and/or the frequency .if there is only a change in the amplitude , the dds fpga sends data to the ad9744 dac chip to update the control voltage to the vga .( no data is sent to the dds chip . )this task takes approximately 350 ns after triggering which is determined by the time to program the dac chip and the response of the vga .this is shown in trace [ fig : trace_data]b .however , if the amplitude changes to / from a completely off state , then in addition to the dac chip , the dds fpga has to also program the dds chip to turn off / on the rf output signal .this is shown in trace [ fig : trace_data]c , where the rf amplitude is switched from a completely off state .this task takes approximately 200 ns longer to complete compared to trace [ fig : trace_data]b because of the additional time it takes to program the dds chip .if there is a change in the frequency , then the dds fpga has to program the dds chip with new frequency data .this task takes approximately 1.0 after triggering . in trace [fig : trace_data]a we change both the amplitude and the frequency .we can see that the amplitude switching is faster ( and identical to trace [ fig : trace_data]b ) but the frequency switching takes longer to complete .it is important to note that delay in programming is well - defined in terms of a number of clock cycles .the delay can be compensated in the control software .for example , if we want the frequency to switching exactly at = 0.0 , then we can trigger the dds board at = -1.0 to obtain the desired timing .we also demonstrate fast frequency modulation by alternating one of the dds channel between two fixed frequency , as shown in fig .[ fig : freq_sw ] . in cold atom precision spectroscopy experiments ,we often implement a ramsey - type interferometric scheme . in this case , the phase of the laser light is directly controlled by the phase of the rf signal driving the aom .the ad9915 dds chip is capable of arbitrarily changing the phase of the rf signal by changing the phase offset in the internal phase accumulator . in fig .[ fig : phase_data ] , the main ok fpga signals the dds channel b to change the phase by 180 in three successive events given by digital pulses shown in trace [ fig : phase_data]c . by comparing to a reference signal of dds channel a, we can see that there is a delay of approximately 500 ns in phase switching .since the delay is well - defined by a number of clock cycles the dds fpga takes to program the dds chip , we can compensate this delay in a control software .we successfully implemented the phase control capability in the work performed in ref . . in this work, a signal measured from trapped ions is used to feedback to the phase of the clock laser in a ramsey - type interferometric scheme . 
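Because each programming delay corresponds to a fixed number of clock cycles, the control software can pre-compensate it by triggering the DDS board early, as in the 1.0 microsecond example above. A minimal host-side sketch follows; the latency values are the ones quoted in the text, while the rounding to the 40 ns sequencer grid and the function name are illustrative assumptions.

    # Programming latencies quoted in the text (well defined in clock cycles,
    # so they can simply be subtracted from the requested switching time).
    LATENCY_NS = {
        "amplitude": 350,         # DAC update plus VGA response
        "amplitude_on_off": 550,  # also has to reprogram the DDS output enable
        "frequency": 1000,        # new frequency tuning word
        "phase": 500,             # new phase offset word
    }

    def trigger_time(desired_switch_ns, update_type, tick_ns=40):
        """Time at which the sequencer should trigger the DDS board so that the
        output actually switches at `desired_switch_ns`, rounded to the 40 ns grid."""
        t = desired_switch_ns - LATENCY_NS[update_type]
        return round(t / tick_ns) * tick_ns

    if __name__ == "__main__":
        # To switch frequency exactly at t = 0, trigger the board about 1 us early.
        print(trigger_time(0, "frequency"))        # -1000
        print(trigger_time(5000, "amplitude"))     # 4640 (350 ns, rounded to the grid)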
in some scenarios ,fast amplitude and frequency switching of the rf signal driving aoms are not desirable .for example , we might change the frequency of the laser light stabilized to a frequency comb by changing the reference frequency of the beat signal between the two .if the change in the reference frequency is too sudden , the stabilization circuit might not be able to follow . in this case , a slow change in the rf frequency is more desirable . for each dds channel , we implement a ramping capability for both frequency and amplitude by means of additional counters in the fpga in each dds board .[ fig : amp_ramp_data ] shows a case where one of the dds channel is configure to ramp down the amplitude ( trace b ) compared to a sudden switching ( trace a ) . in this case , the ramping rate is set to 20 db / ms .[ fig : freq_ramp_data ] shows a capability of frequency ramping ( trace b ) compared to a fixed rf frequency in trace a. in this case the ramping rate is set to be 7 mhz / ms .we note that the dds channel is capable of ramping both the amplitude and frequency simultaneously since they are two dedicated counters in the fpga in each dds board .we note that both frequency and amplitude ramping can be implemented at the same time. recent work on optical atomic clocks achieve the frequency precision in the mhz scale routinely .it is then desirable to have rf sources that are capable of fine frequency tuning reaching the level below 1 mhz without sacrificing fast switching capability shown in the previous sections . in fig .[ fig : fine_freq_matome ] , we set one of the dds channel to output frequencies of 0 , 50 , 10 and 0 offset from 15.225 354 543 00 mhz . to be able to resolve small frequency changes , we average the frequency readouts measured using a frequency counter ( agilent 53230a ) for 5 minutes for each frequency setting . in this measurement, the 2 ghz reference clock for the dds is referenced to the frequency counter .* phase noise : * we measured the phase noise of the dds output and compared to the phase noise of the 2 ghz reference clock in fig . [fig : phase_noise_data ] .we found that the noise relative to the reference clock is similar to the specification of the ad9915 chip .* cross - talk : * we tested for cross - talk between two adjacent dds channels by setting the rf power of one channel to maximum and looking for pick - up rf signal at the other channel ( also set at maximum rf power ) . at the noise floor of dbc of our spectrum analyzer , we did not see any rf pick - up in the adjacent dds channel .* output power : * the output power as a function of the dds frequency is shown in fig .[ fig : output_power ] .the roll - off at low and high frequency is due to a finite bandwidth of the tc1 - 1 - 1t+ ( mini - circuits ) transformer and adl5330 variable gain amplifier on the dds board .data shown in fig . [ fig : output_power ] is taken without any on - board low pass filter . * power consumption and temperature : * for each dds channel , the required current for the power supplies are approximately 400 , 600 and 150 ma for 5v ( digital ) , 5v ( analog ) and 8v ( analog ) , respectively . 
Without active air-flow cooling and at a room temperature of 25 C, the DDS chips heat up to approximately 45 C during normal operation. We have presented a pulse-sequencer and RF-generator unit suitable for experiments in atomic physics where the amplitude and frequency of laser light are controlled by AOMs and mechanical shutters. Sub-mHz frequency tuning of the RF generators makes the system suitable for optical atomic clocks, where mHz frequency resolution of the laser light is routinely achieved. Additionally, the timing of the pulse sequence, ranging from sub-microseconds to seconds, together with the frequency- and amplitude-ramping functionality, makes the pulse sequencer applicable to a wide variety of experiments with trapped atoms and ions. T. P. would like to thank H. Häffner and M. Ramm for support and assistance during the development of the pulse sequencer system in Berkeley. This work is supported by RIKEN's Foreign Postdoctoral Researcher program. [Figure caption (oscilloscope traces): a triggering pulse is sent to the DDS board from the main OK FPGA to update the DDS configuration. Trace A shows a case when both the amplitude and the frequency are changed at the same time (the slight change in the RF amplitude after frequency switching is due to the response of the measuring oscilloscope). Traces B and C show the case when the DDS switches the amplitude from completely off (C) compared with from a low but non-zero amplitude (B). Trace D is one of the TTL outputs, shown for timing reference.] The system presented in this paper is a major upgrade (especially the DDS board and PCI connectors) over the first version of a complete system developed in Berkeley in 2009-2014 (see ref. for a full description of the system), which has been successful in running various ion-trapping experiments. The Berkeley DDS board (with an AD9910 Analog Devices chip) was partly based on a prototype board from Rainer Blatt's group (University of Innsbruck, Austria). A triggering TTL pulse from the main OK FPGA is sampled by a clock on each DDS board. Hence, the jitter in the triggering timing for each DDS board is less than one period of the clock on the DDS board, which is on the order of 10 ns for our system. This jitter can be reduced further by referencing the clock used by the OK FPGA to the same one as the DDS boards.
We present a compact FPGA-based pulse sequencer and radio-frequency (RF) generator suitable for experiments with cold trapped ions and atoms. The unit is capable of outputting a pulse sequence on at least 32 TTL channels with a timing resolution of 40 ns and contains a built-in 100 MHz frequency counter for counting electrical pulses from a photo-multiplier tube (PMT). There are 16 independent direct-digital-synthesizer (DDS) RF sources with fast amplitude switching (rise time of 60 ns) and sub-mHz frequency tuning from 0 to 800 MHz.
the variability hypothesis ( vh ) , also known as the greater male variability hypothesis , asserts that males are more likely than females to vary from the norm in both physical and mental traits " .the origin of the hypothesis is often traced back to johann meckel in the early eighteenth century , and it was used by charles darwin to help explain extreme male secondary sex characteristics in many species . in the first edition of _ man and woman _( london , 1894 ) , havelock ellis devoted an entire chapter to the variability hypothesis , and asserted that both retardation and genius are more frequent among males than females " .it should be emphasized that the vh says nothing about differences in _ means _ between males and females , and even for some physical attributes where the average values are significantly different , such as height , the variance in human males has been found to be significantly greater than that in females ( e.g. , ) . not surprisingly , all ten of the ten tallest humans are male , but five of the shortest ten humans are also male ( e.g. , , ) .the variability hypothesis proved highly controversial during the last century ( see below ) ; nevertheless , the past fifteen years have seen a resurgence of research on this topic .although some of these more recent studies found inconsistent support for the greater male variability hypothesis " , and thatgreater male variability with respect to mathematics is not ubiquitous " , many more ( e.g. , , , , , , , , , , ) have found vh to be valid in different contexts .for example , arden and plomin found greater variance [ in intelligence ] among boys at every age except age two " , and machin and pekkarinen found greater male variance in both mathematics and reading among boys and girls in thirty - five of the forty countries that participated in the 2003 programme for international student assessment .et al _ found that males are more variable on most measures of quantitative and visuospatial ability , which necessarily results in more males at both high- and low - ability extremes " , and he _ et al _ reported that the results of their studies in mainland china supported the hypothesis that boys have greater variability than girls in creativity test performance " .baye and monseur s studies of gender differences in variance and at the extreme tails of the score distribution in reading , mathematics , and science concluded the greater male variability hypothesis is confirmed " , and strand _et al s _ studies of verbal reasoning , non - verbal reasoning , and quantitative reasoning among boys and girls in the u.k . 
, reported for all three tests there were substantial sex differences in the standard deviation of scores , with greater variance among boys boys were over represented relative to girls at both the top and the bottom extremes for all tests " .et al _ reviewed the history of the hypothesis that general intelligence is more biologically variable in males than in females and presented data which in many ways are the most complete that have ever been compiled , [ that ] substantially support " the vh .even among studies supporting the validity of vh , there has been no clear or compelling explanation offered for _ why _ there might be gender differences in variability .et al _ report , the causes remain unexplained " , and halpern _ et al _ concluded the reasons why males are often more variable remain elusive " .some researchers found that the underlying reason for the vh is probably not due to the educational system ; e.g. , arden and plomin found that differences in variance emerge early even before pre - school suggesting that they are not determined by educational influences " .other researchers specifically mention the possibility of biological factors as a likely underlying explanation for the vh : johnson _ et al _ , after providing data that substantially support the vh in general intelligence , concluded that these differences in variability possibly have roots in biological differences " ( among other factors ) ; and ju _ et al _ , in finding that gender differences in variability in creativity are consistent across urban and rural samples , conjectured that it is more likely related to biological / evolutionary factors " .our goal here is neither to challenge nor to confirm the vh , but rather to propose a theory based exactly on such biological / evolutionary mechanisms that might help explain how one gender of a species might tend to evolve with greater variability than the other gender .note that the precise formal definitions and assumptions here are clearly not applicable in real - life scenarios , and that the contribution here is thus also merely a general theory based on unproved and unprovable hypotheses .this theory is independent of species , and although it may raise red flags for some when applied to _ homo sapiens _ , we share the viewpoints of erikson _ et al _ that the variability hypothesis is not only of mere historical interest but also has current relevance for clinical practice " , and of ju _ et al _ that such a study enriches the discourse on the greater male variability hypothesis " .strenuous objections have been raised over the years about the implications of vh for certain attributes . shortly after the groundbreaking work of ellis in support of the vh , notable opposition appeared .psychologist leta hollingworth , the former doctoral student of nationally known and respected educational psychologist e. l. 
thorndike , himself a prominent advocate of the vh , attacked the vh on several statistical and sociological grounds , and according to , was one of the most influential critics of the vh .the heated debates continued throughout the twentieth century , at the end of which psychologist stefanie shields asserted for some scientists , the idea that females would be unlikely to excel suggested that it was unnecessary to provide them with opportunities to do so " and concluded , early feminist researchers were largely responsible for its eventual decline " .since then the heated conflicts have continued unabated , in spite of its alleged eventual decline " .one of the most widely reported incidents concerned the remarks that harvard president larry summers made at a national bureau of economic research conference on diversifying the science and engineering workforce in january 2005 .he prefaced his remarks by saying that he would confine himself to the issue of women s representation in tenured positions in science and engineering at top universities and research institutions " and then observed it does appear that on many , many different human attributes height , weight , propensity for criminality , overall iq , mathematical ability , scientific ability - there is relatively clear evidence that whatever the difference in means - which can be debated - there is a difference in the standard deviation , and variability of a male and a female population " . in other words , summers was simply reminding his audience of the gist of the vh , that for many attributes , males are overrepresented at both the high _ and _ the low ends of the distribution .his statement caused a firestorm when it was completely misrepresented by both scholars and the press .yale computer scientist david gelernter , writing in the _ los angeles times _ , stated that summers had suggested that , on average , maybe women are less good than men at science , which might explain why fewer females than males are science professors at harvard [ and that ] summers made a statement about averages , not individuals " .these statements were completely false , as science writer dana mackenzie pointed out in the _ swarthmore college bulletin _ , writing well , no , he did nt .but in the public debate , that is how his statement was interpreted " .summers had made no such statement about _ averages _ , but merely had suggested that the _ standard deviation _ for males appeared to be greater than that of females .that is , the tails of the male distribution curves are heavier at _ both _ ends . not that there are no females in the top echelon , but simply that the relative frequencies of males are greater than those of females , at both ends - in ellis s terms , more retardation , more genius .the resulting uproar , however , precipitated a no - confidence vote by the harvard faculty , and prompted summers s resignation as president .an even thornier issue is the question of what , if anything , should be done if there is in fact some validity to the vh . for some venues with objective performance measures ,the solution is simply to have separate women - only competitions , as is done in the olympics and the world chess championships . 
in other less clear areas ,the approach to the problem , if it is indeed a problem , has also raised controversial remedies even among the proponents of the vh .for example , in 2013 educators he _et al _ suggested that separate distribution curves and norms should be developed for boys and girls , and different cut - off points should be used for identifying and selecting students for special or gifted programs ." there seems to be no end to the controversies , and the authors leave the socio - political debates to others .the basic idea in this article is very simple . + selectivity theory : _ in a population with two sexes a and b , both of which are needed for reproduction , suppose that sex a is relatively * selective * , i.e. , will mate only with a top tier ( less than half ) of b candidates . then among subpopulations of b with comparable average attributes , those with * greater variability * will tend to prevail over those with lesser variability .conversely , if a is relatively * non - selective * , accepting all but a bottom fraction ( less than half ) of the opposite sex , subpopulations of b with * lesser variability * will tend to prevail over those with comparable means and greater variability ._ + note that this theory makes no assumptions about differences in means between the sexes , nor does it presume that one sex is selective and the other non - selective . if both sexes happen to be selective , for instance , then the best evolutionary strategy for each is to tend to greater variability .the next example illustrates the underlying ideas of this selectivity theory of gender differences in variability through an elementary hypothetical scenario where the subpopulations have distinct fitness levels , and one is trivially more variable than the other .the two subsequent sections provide detailed models with perhaps more realistic assumptions of fitness levels distributed normally or exponentially ; the first uses a probabilistic analysis and the second a standard system of coupled ordinary differential equations as are common in population studies . to quantify acceptability " by the opposite sex , it will be assumed here and throughout that each individual in each sex has a real number that reflects its attractiveness to the opposite sex , with a larger number being preferable to a smaller one .biologists often use the word _ fitness _ for this concept , and in some sense this numerical value describes how good its particular genotype is at leaving offspring .fitness is a relative thing , and in this simplified model , this single number represents an individual s level of reproductive success relative to some baseline level .[ ex2.3 ] sex consists of two subpopulations and .half of the fitness values of are uniformly distributed between 1 and 2 and the other half are uniformly distributed between 3 and 4 , while all of the fitness values of are uniformly distributed between 2 and 3 . thus is more variable than , and they both have the same average fitness .suppose first that and are of equal size .then one quarter of sex ( the lower half of ) has fitness values between 1 and 2 , half of ( all of ) has fitness between 2 and 3 , and one quarter of ( the upper half of ) has values between 3 and 4. if sex is relatively selective and will mate only with the top quarter of sex , then all of the next generation will be offspring of the more variable subpopulation . 
on the other hand , if sex is relatively non - selective and will mate with any but the lower quarter of , then all of the less variable will mate , but only half of the more variable will mate . similar conclusions follow if the initial subpopulations are not of equal size . for example , suppose that one third of sex is the more variable and two thirds is the less variable .then if sex only mates with the top quarter of , a short calculation shows that two thirds of the next generation will be offspring of and one third will be offspring of , thereby reversing their proportions toward the more variable subpopulation .if sex will mate with any but the lower quarter of , then only two ninths of the next generation will be offspring of and seven ninths will be offsprings of , thereby increasing the proportion of the less variable subpopulation of sex .note the asymmetry here in the mating probabilities ; some intuition behind why this occurs may perhaps be gained from the observation that the upper tier of the more variable population will always be able to mate , whether the opposite sex is selective or non - selective . to facilitate more mathematically precise notions of these fitness and selectivity ideas , several definitions and notation will be introduced .the fitness of individuals within sexes varies , and its ( normalized ) distribution is a probability distribution .a key characteristic of the distribution is the proportion of individuals whose fitness exceeds each given threshold . to fix notation using standard statistical terminology , for a ( borel ) probability measure on the real line , let denote the _ survival function _ for ( i.e., ] for all , so since , the continuity of and implies the existence of so that \mbox { and for all } t>0.\ ] ] thus by , so , which implies that as , completing the proof of ( i ) .the proof of ( ii ) is analogous .[ ex4.2 ] let the survival functions and for subpopulations and be given by note that , , i.e. , subpopulation is more variable than . _ special case 1 . _sex is selective and only accepts the top quarter of individuals in sex , i.e. , . in example[ ex4.2 ] , and the blue curve is the density of the less variable subpopulation . note that the fitness values of both drop off exponentially fast from the mean in both directions.,scaledwidth=75.0% ] using , and , and noting that for yields the following coupled system of ordinary differential equations : figure [ fig4 ] illustrates a numerical solution of with the initial condition . and and of the ratio satisfying ( [ eq9 ] ) . , title="fig:",scaledwidth=45.0% ] and and of the ratio satisfying ( [ eq9 ] ) . , title="fig:",scaledwidth=45.0%] _ special case 2 ._ sex is non - selective and accepts the top three - quarters of individuals in sex , i.e. , . using , , and again , and noting that for yields the following system : see figure [ fig5 ] for a numerical solution with the same initial condition . and and of the ratio satisfying ( [ eq10 ] ) . , title="fig:",scaledwidth=45.0% ] and and of the ratio satisfying ( [ eq10 ] ) ., title="fig:",scaledwidth=45.0% ] once more , note that the selectivity case is more extreme than the non - selectivity case . 
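The fractions quoted in Example 2.3 can be verified with a short Monte Carlo sketch: sampling equal-sized subpopulations with the stated fitness distributions and applying the two acceptance rules shows that a selective opposite sex mates essentially only with the more variable subpopulation, whereas a non-selective one draws only about one third of its mates from the more variable subpopulation. The sample size, random seed, and function name below are arbitrary choices made for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000                       # individuals per subpopulation (equal sizes)

    # Subpopulation s1 (more variable): half uniform on [1, 2], half uniform on [3, 4].
    s1 = np.where(rng.random(n) < 0.5, rng.uniform(1, 2, n), rng.uniform(3, 4, n))
    # Subpopulation s2 (less variable, same mean): uniform on [2, 3].
    s2 = rng.uniform(2, 3, n)

    fitness = np.concatenate([s1, s2])
    from_s1 = np.concatenate([np.ones(n, bool), np.zeros(n, bool)])

    def fraction_of_mates_from_s1(cut, keep_top):
        """Fraction of accepted mates coming from s1 when the opposite sex keeps only
        the top `cut` of candidates (keep_top=True) or rejects only the bottom `cut`
        of candidates (keep_top=False)."""
        threshold = np.quantile(fitness, 1 - cut if keep_top else cut)
        accepted = fitness >= threshold
        return from_s1[accepted].mean()

    print("selective, top quarter only:      ", fraction_of_mates_from_s1(0.25, True))   # ~1.0
    print("non-selective, bottom quarter out:", fraction_of_mates_from_s1(0.25, False))  # ~1/3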
also note that the birth process model above implicitly includes simple birth _ and _ death processes , via the simple observation that a population growing , for example , at a rate of eight per cent and dying at a rate of three percent , can be viewed as a pure birth process growing at a rate of five per cent .the selectivity theory of differences in gender variability introduced here explains how current greater or lesser variability could depend on the past selectivity factor of the opposite sex , and as such pertains equally to either sex . if both sexes began with comparable mid - range variability , for example , and if our female ancestors were generally selective _ or _ male ancestors were generally non - selective , or both , this would have led to relatively greater male variability , i.e. , the vh .thus if there were a biological reason for either or both of such gender patterns in selectivity to have occurred over time , the above selectivity theory would predict a species whose males now generally exhibit more variability than its females .humans have been hunter - gatherers for all but 600 of their 10,000-generation history , and if during much of that time males were relatively non - selective _ or _ females were selective , the selectivity theory could help explain any perceived evidence of vh in humans today .why might one gender have been more selective than another ? a basic cross - species pattern is that the sex with the slower potential rate of reproduction ( typically females , because of gestation time ) invests more in parenting , [ and ] is selective in mate choices " and the sex with the faster potential rate of reproduction ( typically males ) invests less in parenting , [ and ] is less selective in mate choices " .the bottom line is simply that in our model a sex that has experienced relatively intense vetting by the opposite sex will have tended toward greater variability , and a sex that has experienced relatively little vetting by the opposite sex will have tended toward lesser variability , independent of the means or variances of the other sex .thus , if this selectivity theory has validity , gender differences in variability are time dependent whenever the two sexes tendencies in selectivity are evolving .if gender differences in selectivity have been decreasing and are now less significant in humans than they were in prehistoric times , as appears might be the case , this theory would also predict that the gender difference in variability has also been decreasing in modern times , i.e. , the vh has been slowly disappearing . whether or not that is the case , of course , is far beyond the scope of this article , and is a possible topic for future research .the authors are indebted to professors ron fox , erika rogers , and marjorie senechal for very helpful comments , and the second author is grateful for support from the national science foundation .durnin , j. and womersley , v. a comparison of the skinfold method with extent of ` overweight ' and various weight - height relationships in the assessment of obesity ._ british journal of nutrition * 38*_:27184 ( 1977 ) .eriksson , m. , marschik , p. , tulviste , t. , almgren , m. , pereira , m. , wehberg , s. , marjanovic - umek , l. , gayraud , f. , kovacevic , m. , and gallego , c. differences between girls and boys in emerging language skills : evidence from 10 language communities ._ british journal of developmental psychology * 30*_:326343 ( 2012 ) .geary , d. 
An evolutionary perspective on sex differences in mathematics and the sciences. In S. J. Ceci and W. M. Williams (eds.), _Why aren't more women in science?_ Washington, DC: American Psychological Association, 173-188 (2007). He, W.-J., Wong, W.-C., Li, Y., and Xu, H. A study of the greater male variability hypothesis in creative thinking in mainland China: male superiority exists. _Personality and Individual Differences *55*_:882-886 (2013). Lieberman, D. Our hunter-gatherer bodies. _New York Times_, May 13, 2011, http://www.nytimes.com/roomfordebate/2011/05/12/do-we-want-to-be-supersize-humans/we-still-have-the-bodies-of-hunter-gatherers (accessed February 11, 2017). Mackenzie, D. What Larry Summers said and didn't say. _Swarthmore College Bulletin_, January 2009, http://bulletin.swarthmore.edu/bulletin-issue-archive/archive_p=145.html (accessed March 16, 2017). Summers, L. Remarks at the NBER Conference on Diversifying the Science & Engineering Workforce, Cambridge, MA, January 14, 2005, http://web.archive.org/web/20080130023006/http://www.president.harvard.edu/speeches/2005/nber.html (accessed March 16, 2017). Top masters in health care posting, http://www.topmastersinhealthcare.com/10-tallest-people-in-history/, http://www.masters-in-health-administration.com/10-shortest-people-in-the-world/, http://www.masters-in-health-administration.com/author/cbarker/ (accessed February 27, 2017).
A selectivity theory is proposed to help explain how one gender of a species might tend to evolve with greater variability than the other gender. Briefly, the theory says that if one sex is relatively selective, then more variable subpopulations of the opposite sex will tend to prevail over those with lesser variability; conversely, if one sex is relatively non-selective, then less variable subpopulations of the opposite sex will tend to prevail over those with greater variability. The theory makes no assumptions about differences in means between the sexes, nor does it presume that one sex is selective and the other non-selective. Two mathematical models are presented: a probabilistic analysis using normally distributed fitness values, and a deterministic analysis using a standard system of coupled ordinary differential equations with exponentially distributed fitness levels. The theory is applied to the classical greater male variability hypothesis.
sandpile model ( ) was introduced by bak , tang and weisenfeld as a paradigm to describe the self - organized criticality ( soc ) phenomenon in physics and has a variety of applications in physics , mathematics , economics , theoretical computer science .the simplest model is that the system starts from a single column configuration , then at each step , one column gives one grain to its right neighbor if it has more than at least two grains comparing to its right neighbor .it was proved that this model converges to only one configuration at which the evolution rule can not be applied at any column ( this configuration is called _ fixed point _ ) .furthermore , all _ reachable configurations _( which are obtained from the initial configuration by applying several times of the evolution rule ) are also well characterized and its configuration space is a lattice .the system has been modified and generalized in several aspects to satisfy each particular purpose . in the context of chip firing games , cellular automata and informatics systems , the model with parallel update scheme ( _ i.e. _ at each step, all applicable rules are applied in parallel ) received great attention . in , durand - lose showed that the transient time to reach a fixed point is linear in the total number of grains when the parallel updated scheme is used , whereas it is when the sequential one is used . to make it closer to the real physical phenomenon , formenti _ et al . _ and phan , generalized so that grains are allowed to fall on both sides ( left and right ) .this generalized model is called _ symmetric sandpile model _ and denoted by .the model has no unique fixed point any more . while formenti __ investigated the model by considering its configurations without caring its positions ( that is , they identify all configurations which are up - to a translation on a line ) , phan investigated the model in addition to its positions and showed the furthest position ( comparing to the position at which the initial column is situated ) .the authors characterized reachable configurations in ( resp .forms of reachable configurations in ) starting from a single column configuration . furthermore , they showed that the number of fixed - point forms of the model is exactly ] .moreover , if is a fixed point of then is of height either ] .in this section we introduce another generalization of the sandpile model . in this model, we inherit the rule of the symmetric sandpile model and implement them in parallel .first we give precisely its definition . like the other generalizations of the sandpile model represented in the previous section, we always start with the single - column configuration . _the parallel symmetric sandpile model _ is a system defined by the following _ rule _ : * at each step , all collapsible columns collapse ; * for columns which are collapsible on both sides , it must choose exactly one direction to collapse ., width=188 ] we denote by ( resp . ) the configuration space of the parallel symmetric sandpile model starting with ( resp .any single column configuration ) .* remark : * 1 . 
unlike which is deterministic ,the is non - deterministic since although columns collapse at the same time at each step , there may have two directions ( must choose one ) for one column collapsing ; 2 .since an evolution step by rule can be considered as a combination of some evolution steps by rule , each configuration of is a configuration .furthermore , the configuration space of is a subspace of that of and the set of fixed points of is a subset of that of .we notice that the set of fixed points of is a proper subset of that of .actually , has fixed points : , , , ( see figure [ sspm6 ] ) , but has only fixed points : and ( see figure [ psspm6 ] ) .however , one can observe that has only fixed - point forms as , which raises a question about the correlation of fixed - point forms of the two models .the main result of this paper is to state that the set of fixed - point forms of and that of are the same .moreover , we can show an explicit evolution by rule to reach any given fixed - point form of .[ t : psspm1 ] the set of fixed - point forms of is equal to that of . consequently , there is ] ) and from step ^ 2 + 1 ] recall that if is the height of a fixed point of then ] .hence , . from the way we constructed above, it takes transitions of applying pseudo - alternating procedure and alternating procedure ; then it takes at most transitions of applying the final procedure to reach a fixed point of .therefore , .we proved that beginning with a singleton column of sand grains , the sequential model and the parallel model produce the same fixed - point forms . to tackle the problem , for each fixed - point form of , we construct an explicit way of evolution to obtain this fixed - point form . every configuration in this wayhas a `` smooth '' form , even it can be characterized by a formula on the time of the evolution , whereas it is difficult to capture the forms of general reachable configurations . actually , the problem of finding a shortest way to reach a given fixed - point form of is interesting to explore . the way we constructed is not always a shortest way although it reveals many interesting properties to be possibly a shortest way .in fact , the difference between the length of our constructed way and the one of the shortest ways is at most $ ] .we do not know so far an explicit formula of the length or the behavior of such shortest ways . during the evolution we constructed, the original column is always a highest column , and it never receives any grain from its neighborhoods. it would be interesting to investigate the problem in which the positions of the fixed points are considered . in this problem ,the fixed points of are not the same as those of .all the fixed points of might be fully characterized by the furthest fixed points ( the maximum and minimum fixed points with respect to the lexicographic order ) . a possible way to obtain the right - furthest fixed pointis that at a current configuration , each column always collapses on the right if it is possible . by doing the experiments on computer ,it is surprising that when for some , the furthest fixed point has a nice pyramid - shape which has no plateau and the right - most grain is at distance .for example , let .then the furthest fixed point is illustrated figure [ fig : image1 ] .it is reasonable to come up with the following conjecture
This paper presents a generalization of the sandpile model, called the parallel symmetric sandpile model, which inherits the rules of the symmetric sandpile model and applies them in parallel. We prove that although the parallel model produces far fewer fixed points than the sequential model, the fixed-point forms of the two models are the same. Moreover, our proof is constructive and yields a nearly shortest evolution to reach a given fixed-point form.
in cologne , we teach a course on theoretical physics ii ( electrodynamics ) to students of physics in their fourth semester . for several years, we have been using for that purpose the calculus of exterior differential forms , see , because we believe that this is the appropriate formalism : it is based on objects which possess a clear operational interpretation , it elucidates the fundamental structure of maxwell s equations and their mutual interrelationship , and it invites a 4-dimensional representation appropriate for special _ and _ general relativity theory ( i.e. , including gravity , see ) .our experimental colleagues are somewhat skeptical ; and not only them .therefore we were invited to give , within 90 minutes , a sort of popular survey of electrodynamics in exterior calculus to the members of one of our experimental institutes ( group of h. micklitz ) .the present article is a worked - out version of this talk .we believe that it could also be useful for other universities .subsequent to the talk we had given , we found the highly interesting and historically oriented article of roche on and , the intensity vectors of magnetism " . therein , the corresponding work of bamberg and sternberg , bopp , ingarden and jamiokowski , kovetz , post , sommerfeld , and truesdell and toupin , to drop just a few names , was neglected yielding a picture of and which looks to us as being not up of date ; one should also compare in this context the letter of chambers and the book of roche , in particular its chapter 9 .below we will suggest answers to some of roche s questions .moreover , `` ... any system that gives and different units , when they are related through a relativistic transformation , is on the far side of sanity '' is an apodictic statement of fitch . in the sequel , we will prove that we _ are _ on the far side of sanity : the _ absolute _ dimension of turns out to be _ magnetic flux / time _ and that of _ magnetic flux , _ see sec .[ strengths ] . according to the audience we want to address, we will skip all mathematical details and take recourse to plausibility considerations . in order to make the paper self - contained , we present though a brief summary of exterior calculus in the appendix . a good reference to the mathematics underlying our presentation is the book of frankel , see also and . for the experimental side of our subject we refer to bergmann - schaefer .complementary to , one can define an operation which decreases the rank of a form by 1 .this is the _ interior product _ of a vector with a -form .given the vector with the components , the interior product with the coframe 1-form yields , which is a sort of a projection along .by linearity , the interior product of with a -form is defined as described in table iii . the _ hodge dual operator _ maps -forms into -forms .its introduction necessarily requires the _ metric _ which assigns a real number to every two vectors and . in local coordinates ,the components of the metric tensor are determined as the values of the scalar product of the basis vectors , .this matrix is _ positive definite_. the metric introduces a natural volume 3-form which underlies the definition of the hodge operator .the general expression is displayed in table iii .explicitly the hodge dual of the coframe 1-form reads , for example : , where is inverse to .the notions of the _ odd _ and _ even _ exterior forms are closely related to the orientation of the manifold . 
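Before taking up odd and even forms, a quick numerical aside: in flat Euclidean 3-space with an orthonormal frame (so the metric is the identity), the interior product of a vector with a 1-form is just a contraction, and the Hodge dual of a 1-form is a contraction with the Levi-Civita symbol. The small sketch below illustrates both; the component conventions are choices of this illustration.

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k] in three dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

v = np.array([1.0, 2.0, 3.0])    # vector components v^i
a = np.array([0.5, -1.0, 2.0])   # 1-form components a_i

# Interior product: the 0-form (number) i_v a = v^i a_i.
print("i_v a =", np.einsum("i,i->", v, a))

# Hodge dual of the 1-form a: the antisymmetric 2-form (*a)_{jk} = eps_{ijk} a_i.
star_a = np.einsum("ijk,i->jk", eps, a)
print("*a =\n", star_a)

# Dualizing twice returns the original 1-form, (**a)_i = (1/2) eps_{ijk} (*a)_{jk},
# as expected for a positive definite metric in three dimensions.
print("**a =", 0.5 * np.einsum("ijk,jk->i", eps, star_a))
```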
In simple terms, these two types of forms are distinguished by their different behavior with respect to a reflection (i.e., a change of orientation): an even (odd) form does not change (changes) sign under a reflection transformation. These properties of odd and even forms are crucial in integration theory, see, e.g., . For a p-form an _integral_ over a p-dimensional subspace is defined. For example, a 1-form can be integrated over a curve, a 2-form over a 2-surface, and a volume 3-form over the whole 3-dimensional space. We will not enter into the details here, limiting ourselves to the formulation of Stokes's theorem, which occupies a central place in integration theory: $\int_{C} d\omega = \oint_{\partial C} \omega$, where $\omega$ is an arbitrary p-form and $C$ is an arbitrary (p+1)-dimensional (hyper)surface with boundary $\partial C$. Obukhov and F. W. Hehl, _Space-time metric from linear electrodynamics_, _Phys. Lett._ *B458* (1999) 466-470. Hehl and Yu. N. Obukhov, _How does the electromagnetic field couple to gravity, in particular to metric, nonmetricity, torsion, and curvature?_ Preprint IASSNS-HEP-99-116, Institute for Advanced Study, Princeton, see also http://arxiv.org/abs/gr-qc/0001010.
The axiomatic structure of electromagnetic theory is outlined. We base classical electrodynamics on (1) electric charge conservation, (2) the Lorentz force, (3) magnetic flux conservation, and (4) the Maxwell-Lorentz spacetime relations. This yields the Maxwell equations. The consequences are drawn, inter alia, for the interpretation and the dimensions of the electric and magnetic fields.
the network coding approach introduced in generalizes routing by allowing intermediate nodes to forward packets that are coded combinations of all received data packets .this yields many benefits that are by now well documented in the literature .one fundamental open problem is to characterize the capacity region and the classes of codes that achieve capacity .the single session multicast problem is well understood . in this case , the capacity region is characterized by max - flow / min - cut bounds and linear network codes maximize throughput .significant complications arise in more general scenarios , involving more than one session .linear network codes are not sufficient for the multi - source problem .furthermore , a computable characterization of the capacity region is still unknown .one approach is to bound the capacity region by the intersection of a set of hyperplanes ( specified by the network topology and sink demands ) and the set of entropy functions ( inner bound ) , or its closure ( outer bound ) .an exact expression for the capacity region does exist , again in terms of .unfortunately , this expression , or even the bounds can not be computed in practice , due to the lack of an explicit characterization of the set of entropy functions for more than three random variables .in fact , it is now known that can not be described as the intersection of finitely many half - spaces .the difficulties arising from the structure of are not simply an artifact of the way the capacity region and bounds are written .in fact it has been shown that the problem of determining the capacity region for multi - source network coding is completely equivalent to characterization of . one way to resolvethis difficulty is via relaxation of the bound , replacing the set of entropy functions with the set of polymatroids ( which has a finite characterization ) . in practicehowever , the number of variables and constraints increase exponentially with the number of links in the network , and this prevents practical computation for any meaningful case of interest . in this paper, we provide an easily computable relaxation of the lp bound .the main idea is to find sets of edges which are determined by the source constraints and sink demands such that the total capacity of these sets bounds the total throughput .the resulting bound is tighter than the network sharing bound and the bounds based on information dominance .section [ sec : background ] provides some background on pseudo - variables and pseudo entropy functions ( which generalize entropy functions ) .these pseudo variables are used to describe a family of linear programming bounds on the capacity region for network coding . in section[ sec : fdg ] we give an abstract definition of a functional dependence graph , which expresses a set of local dependencies between pseudo variables ( in fact a set of constraints on the pseudo entropy ) .our definition extends that introduced by kramer to accommodate cycles .this section also provides the main technical ingredients for our new bound .in particular , we describe a test for functional dependence , and give a basic result relating local and global dependence .the main result is presented in section [ sec : bound ] ._ notation _ : sets will be denoted with calligraphic typeface , e.g. . set complement is denoted by the superscript ( where the universal set will be clear from context ) . 
set subscripts identify the set of objects indexed by the subscript : .the power set is the collection of all subsets of . where no confusion will arise , set union will be denoted by juxtaposition , , and singletons will be written without braces .we give a brief revision of the concept of pseudo - variables , introduced in .let be a finite set , and let be a ground set associated with a real - valued function defined on subsets of , with .we refer to the elements of as _ pseudo - variables _ and the function as a _ pseudo - entropy _ function .pseudo - variables and pseudo - entropy generalize the familiar concepts of random variables and entropy .pseudo - variables do not necessarily take values , and there may be no associated joint probability distribution . a pseudo - entropy function may assign values to subsets of in a way that is not consistent with any distribution on a set of random variables .a pseudo - entropy function can be viewed as a point in a dimensional euclidean space , where each coordinate of the space is indexed by a subset of .a function is called _ polymatroidal _ if it satisfies the polymatroid axioms . it is called _ ingletonian _ if it satisfies ingleton s inequalities ( note that ingletonian are also polymatroids ) .a function is _ entropic _ if it corresponds to a valid assignment of joint entropies on random variables , i.e. there exists a joint distribution on discrete finite random variables with . finally , is _ almost entropic _ if there exists a sequence of entropic functions such that .let respectively denote the sets of all entropic , almost entropic , ingletonian and polymatroidal , functions . both and are polyhedra .they can be expressed as the intersection of a finite number of half - spaces in . in particular , every satisfies , - , which can be expressed minimally in terms of linear inequalities involving variables .each satisfies an additional linear inequalities .[ def : function ] let be subsets of a set of pseudo - variables with pseudo - entropy .define a pseudo - variable is said to be a _ function _ of a set of pseudo - variables if .[ def : independent ] two subsets of pseudo - variables and are called _ independent _ if , denoted by . let the directed acyclic graph serve as a simplified model of a communication network with error - free point - to - point communication links .edges have capacity . for edges , write as shorthand for . similarly , for an edge and a node , the notations and respectively denote and .let be an index set for a number of multicast sessions , and let be the set of source variables .these sources are available at the nodes identified by the mapping . each source may be demanded by multiple sink nodes , identified by the mapping .each edge carries a variable which is a function of incident edge variables and source variables .given a network , with sessions , source locations and sink demands , and a subset of pseudo - entropy functions on pseudo - variables , let be the set of source rate tuples for which there exists a satisfying it is known that and are inner and outer bounds for the set of achievable rates ( i.e. 
rates for which there exist network codes with arbitrarily small probability of decoding error ) .it is known that is an outer bound for the set of achievable rates .similarly , is an outer bound for the set of rates achievable with linear network codes .clearly .the sum - rate bounds induced by and can in principle be computed using linear programming , since they may be reformulated as where is either or , and are the subsets of pseudo - entropy functions satisfying the so - labeled constraints above .clearly the constraint set is linear .one practical difficulty with computation of is the number of variables and the number of constraints due to ( or ) , both of which increase exponentially with .the aim of this paper is to find a simpler outer bound .one approach is to use the functional dependence structure induced by the network topology to eliminate variables or constraints from .here we will take a related approach , that directly delivers an easily computable bound .[ def : fdg ] let be a set of pseudo - variables with pseudo - entropy function . a directed graph with called a _ functional dependence graph _ for if and only if for all with an identification of and node , this definition requires that each pseudo - variable is a function ( in the sense of definition [ def : function ] ) of the pseudo - variables associated with its parent nodes . to this end , define where it does not cause confusion , we will abuse notation and identify pseudo - variables and nodes in the fdg , e.g. will be written . definition [ def : fdg ] is more general than the functional dependence graph of ( * ? ? ?* chapter 2 ) .firstly , in our definition there is no distinction between source and non - source random variables .the graph simply characterizes functional dependence between variables .in fact , our definition admits cyclic directed graphs , and there may be no nodes with in - degree zero ( which are source nodes in ) .we also do not require independence between sources ( when they exist ) , which is implied by the acyclic constraint in .our definition of an fdg admits pseduo - entropy functions with _ additional _ functional dependence relationships that are not represented by the graph .it only specifies a certain set of conditional pseudo - entropies which must be zero .finally , our definition holds for a wide class of objects , namely pseudo - variables , rather than just random variables .clearly a functional dependence graph in the sense of satisfies the conditions of definition [ def : fdg ] , but the converse is not true .henceforth when we refer to a functional dependence graph ( fdg ) , we mean in the sense of definition [ def : fdg ] .furthermore , an fdg is _ acyclic _ if has no directed cycles .a graph will be called _ cyclic _ if every node is a member of a directed cycle. definition [ def : fdg ] specifies an fdg in terms of local dependence structure .given such local dependence constraints , it is of great interest to determine all implied functional dependence relations . in other words , we wish to find all sets and such that .[ def : fd ] for disjoint sets we say determines in the directed graph , denoted , if there are no elements of remaining after the following procedure : remove all edges outgoing from nodes in and subsequently remove all nodes and edges with no incoming edges and nodes respectively . 
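Definition [def:fd] translates directly into a small graph routine. The sketch below assumes the FDG is given as a dictionary mapping each node to the set of nodes it points to; the representation and the toy example at the end are choices of this illustration, not the paper's figure.

```python
def determines(edges, A, B):
    """Graphical test of Definition [def:fd]: does A determine B?

    `edges` maps each node of the FDG to the set of its out-neighbours;
    `A` and `B` are disjoint sets of nodes.
    """
    A, B = set(A), set(B)
    nodes = set(edges) | {v for vs in edges.values() for v in vs} | A | B
    # Step 1: remove all edges outgoing from nodes in A.
    g = {u: (set() if u in A else set(edges.get(u, ()))) for u in nodes}
    # Step 2: repeatedly remove nodes (and with them their outgoing edges)
    # that have no incoming edge from a node still present.
    while True:
        has_incoming = set().union(*g.values()) if g else set()
        removable = [u for u in g if u not in has_incoming]
        if not removable:
            break
        for u in removable:
            del g[u]
    # A determines B iff no element of B survives the deletion procedure.
    return not (B & set(g))

# Toy example: source s, edge variable x = f(s), and a sink that must decode
# s from x, giving the two-node cyclic FDG  s -> x -> s.
fdg = {"s": {"x"}, "x": {"s"}}
print(determines(fdg, {"s"}, {"x"}))   # True
print(determines(fdg, {"x"}, {"s"}))   # True
```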
for a givenset , let be the set of nodes deleted by the procedure of definition [ def : fd ] .clearly is the largest set of nodes with .[ lem : grandparent ] let be a functional dependence graph for a polymatroidal pseudo - entropy function . for any with by hypothesis , for any .furthermore , note that for any , conditioning can not increase pseudo - entropy and hence for any . now using this property , and the chain rule we emphasize that in the proof of lemma [ lem : grandparent ] we have only used the submodular property of polymatroids , together with the hypothesized local dependence structure specified by the fdg .clearly the lemma is recursive in nature .for example , it is valid for and so on .the implication of the lemma is that a pseudo - variable in an fdg is a function of for any with .let be a functional dependence graph on the pseudo - variables with polymatroidal pseudo - entropy function .then for disjoint subsets , let in the fdg . then , by definition [ def : fd ] there must exist directed paths from some nodes in to a every node in , and there must not exist any directed path intersecting that does not also intersect .recursively invoking lemma [ lem : grandparent ] , the theorem is proved .definition [ def : fd ] describes an efficient graphical procedure to find implied functional dependencies for pseudo - variables with local dependence specified by a functional dependence graph .it captures the essence of the chain rule for pseudo - entropies and the fact that pseudo - entropy is non - increasing with respect to conditioning ( [ poly : submod ] ) , which are the main arguments necessary for manual proof of functional dependencies . one application of definition [ def : fd ] is to find a reduction of a given set , i.e. to find a disjoint partition of into and with , which implies . on the other hand ,it also tells which sets are _a set of nodes in a functional dependence graph is _ irreducible _ if there is no with . clearly , every singleton is irreducible .in addition , in an acyclic fdg , irreducible sets are basic entropy sets in the sense of .in fact , irreducible sets generalize the idea of basic entropy sets to the more general ( and possibly cyclic ) functional dependence graphs on pseudo - variables . in an acyclic graph ,let denote the set of ancestral nodes , i.e. for every node , there is a directed path from to some . of particular interestare the maximal irreducible sets : an irreducible set is _ maximal _ in an acyclic fdg if , and no proper subset of has the same property .note that for acyclic graphs , every subset of a maximal irreducible set is irreducible .conversely , every irreducible set is a subset of some maximal irreducible set .irreducible sets can be augmented in the following way .[ lem : augment ] let in an acyclic fdg .let .then is irreducible for every .this suggests a process of recursive augmentation to find all maximal irreducible sets in an acyclic fdg ( a similar process of augmentation was used in ) .let be a topologically sorted to then ( * ? ? ?* proposition 11.5 ) . ]acyclic functional dependence graph .its maximal irreducible sets can be found recursively via in algorithm [ alg : augment ] .in fact , finds all maximal irreducible sets containing . output output in cyclic graphs , the notion of a maximal irreducible set is modified as follows : an irreducible set is _ maximal _ in a cyclic fdg if , and no proper subset of has the same property . 
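As a complement, for small graphs the irreducible and maximal irreducible sets can also be found by brute force directly from the definitions, reusing the `determines` routine sketched earlier. The maximality condition used below, namely that the set determines every other node of the graph, is the natural reading of the definition and is an assumption of this sketch; the recursion of Algorithms [alg:augment] and [alg:ams] is what one would use for anything larger.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_irreducible(edges, B):
    """B is irreducible if no proper nonempty subset A of B determines B \\ A."""
    B = set(B)
    return not any(determines(edges, set(A), B - set(A))
                   for A in subsets(B) if 0 < len(A) < len(B))

def maximal_irreducible_sets(edges):
    """Brute-force search (exponential, for toy FDGs only): irreducible sets
    that determine every remaining node and have no proper subset with the
    same property."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    candidates = [set(B) for B in subsets(nodes)
                  if B
                  and determines(edges, set(B), nodes - set(B))
                  and is_irreducible(edges, B)]
    return [B for B in candidates if not any(C < B for C in candidates)]
```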
for cyclic graphs ,every subset of a maximal irreducible set is irreducible .in contrast to acyclic graphs , the converse is not true .in fact there can be irreducible sets that are not maximal , and are not subsets of any maximal irreducible set .it is easy to show that [ lem : equal ] all maximal irreducible sets have the same pseudo - entropy .this fact will be used in development of our capacity bound for network coding in section [ sec : bound ] below .we are interested in finding every maximal irreducible set for cyclic graphs .this may be accomplished recursively via in algorithm [ alg : ams ] .note that in contrast to algorithm [ alg : augment ] , finds all maximal irreducible sets that _ do not _ contain any node in . output output figure [ butterfly ] shows the well - known butterfly network and figure [ mul ] shows the corresponding functional dependence graph .nodes are labeled with node numbers and pseudo - variables ( the sources variables are and .the are the edge variables , carried on links with capacity ) .edges in the fdg represent the functional dependency due to encoding and decoding requirements .the maximal irreducible sets of the cyclic fdg shown in figure [ mul ] are now give an easily computable outer bound for the total capacity of a network coding system .[ thm : mainresult ] let be given network coding constraint sets .let be a functional dependence graph ] on the ( source and edge ) pseudo - variables with pseudo - entropy function .let be the collection of all maximal irreducible sets not containing source variables .then let , then maximal irreducible sets which do not contain source variables are information blockers " from sources to corresponding sinks .they can be interpreted as information theoretic cuts in in the network .note that an improved bound can in principle be obtained by using additional properties of ( rather than just subadditivity ) .similarly , bounds for linear network codes could be obtained by using . for single source multicast networks ,theorem [ thm : mainresult ] becomes the max - flow bound ( * ? ? ?* theorem 11.3 ) and hence is tight .the functional dependence bound for the butterfly network of figure [ mul ] is to the best of our knowledge , theorem [ thm : mainresult ] is the tightest bound expression for general multi - source multi - sink network coding ( apart from the computationally infeasible lp bound ) . other bounds like the network sharing bound and bounds based on information dominance use certain functional dependencies as their main ingredient .in contrast , theorem [ thm : mainresult ] uses all the functional dependencies due to network encoding and decoding constraints .explicit characterization and computation of the multi - source network coding capacity region requires determination of the set of all entropic vectors , which is known to be an extremely hard problem .the best known outer bound can in principle be computed using a linear programming approach . 
In practice this is infeasible due to the exponential growth in the number of constraints and variables with network size. We gave an abstract definition of a functional dependence graph, which extends previous notions to accommodate not only cyclic graphs but also more abstract notions of dependence. In particular, we considered polymatroidal pseudo-entropy functions and demonstrated an efficient and systematic method for finding all functional dependencies implied by a given set of local dependencies. This led to our main result, a new and easily computable outer bound based on a characterization of all functional dependencies in the network. We also showed that the proposed bound is tighter than some known bounds.
Explicit characterization and computation of the multi-source network coding capacity region (or even of bounds on it) is a long-standing open problem. In fact, finding the capacity region requires determination of the set of all entropic vectors, which is known to be an extremely hard problem. On the other hand, calculating the explicitly known linear programming bound is very hard in practice due to an exponential growth in complexity as a function of network size. We give a new, easily computable outer bound based on a characterization of all functional dependencies in networks. We also show that the proposed bound is tighter than some known bounds.
many high energy astrophysical phenomena , including accretion flows , jet flows , gamma - ray bursts , and pulsar winds involve relativistic flows . in powerful extragalactic radio sources , for example , ejections from galactic nucleiproduce intrinsic beam lorentz factors of usually more than and apparently up to , which are required to explain the apparent superluminal motions observed in extragalactic radio sources associated with active galactic nuclei ( e.g. , * ? ? ?* ) . in the expansion of many relativistic jets the internal thermal energy of a gas is converted into bulk kinetic energy so as to reach a high lorentz factor in a short distance. then this kinetic energy is dissipated by shock interactions , mostly by terminal shock complexes , and partially by internal shocks within the jets as they propagate over long distances ( e.g. , * ? ? ?since relativistic flows are inherently nonlinear and complex , in addition to possessing large lorentz factors , numerical simulations have been performed to investigate such relativistic flows , for example , in the propagation of relativistic extragalactic jets ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?many explicit finite difference schemes originally applied to classical hydrodynamics have been employed to treat special relativistic hydrodynamics numerically .these schemes to solve the relativistic hydrodynamic equations are based on either exact or approximate solutions to the local riemann problem .some of these schemes adopt local characteristic decomposition of the jacobian matrix to build numerical fluxes .it is often difficult to build characteristic decomposition in some regimes , especially in ultrarelativistic limits , due to the degeneracy of the characteristic information .thus the use of an alternative scheme becomes sensible when the characteristic decomposition is unknown . the hll scheme proposed by for classical hydrodynamicsis based on an approximate riemann solver that does not require full , and numerically expensive , characteristic decomposition .this feature of the hll scheme makes its use very attractive , especially in multidimensions , where computational efficiency and robustness is extremely important .this scheme was applied first to relativistic hydrodynamics by in one dimension and by in multidimensions .it is worth stressing that many treatments of relativistic astrophysical problems have assumed a ideal gas equation of state with a constant polytropic index , but this is a reasonable approximation only if the gas is either strictly subrelativistic or ultrarelativistic .however , when the gas is semirelativistic or when the gas has two components , e.g. , nonrelativistic protons and relativistic electrons , this assumption is no longer correct .this was shown for the relativistic perfect gas law by , where the exact form of an equation of state relating thermodynamic quantities of specific enthalpy and temperature is completely described in terms of modified bessel functions . since the correct equation of state for the relativistic perfect gas has been recognized as being important , several investigations with a more general equation of statehave been reported in numerical relativistic hydrodynamics . 
described an upwind numerical code for special relativistic hydrodynamics with the synge equation of state for multi - component relativistic gas .more recently , used , in their upwind relativistic hydrodynamic code , a simple equation of state that closely approximates the synge equation of state for a single - component relativistic gas .several numerical simulations in the context of relativistic extragalactic jets make use of the general equation of state to account for transitions from nonrelativistic to relativistic temperature . used the synge equation of state for different compositions , including pure leptonic and baryonic plasmas , to investigate the influence of the composition of relativistic extragalactic jets on their long - term evolution . similarly, studied the relativistic extragalactic jet deceleration through density discontinuities by using the synge - like equation of state with a variable polytropic index . in this workwe propose an analytical form of equation of state for the multi - component relativistic perfect gas that is consistent with the synge equation of state in the relativistic regime .this proposed equation of state is suitable for a numerical code from the computational point of view , unlike the synge equation of state , which involves the computation of bessel functions .we then build a multidimensional relativistic hydrodynamics code based on the hll scheme using the proposed equation of state , and demonstrate the accuracy and robustness of this code by presenting several test problems and numerical simulations .in particular , we plan to use this code to simulate relativistic extragalactic jets that are probably composed of a mixture of relativistic particles of different masses .numerical simulations of relativistic jets of different compositions are challenging , but are made tractable by using the proposed equation of state that accounts for different compositions of relativistic gas .this paper is organized as follows . in 2 we present the equations of motion and the synge equation of state , and we describe the proposed general equation of state for the relativistic gas . in 3 we describe a relativistic hydrodynamics code based on the hll scheme , incorporating this proposed equation of state . in 4 and 5 we present numerical tests and simulations with the code to demonstrate the performance of the code .a conclusion is given in 6 .the motion of relativistic gas is described by a system of conservation equations .the equations in special relativistic hydrodynamics are written in a covariant form as here , is the covariant derivative with respect to spacetime coordinates ] is the normalized ( ) four - velocity vector , where is the lorentz factor , and is the metric tensor in minkowski space .the speed of light is set to unity ( ) in this work .greek indices ( e.g. , , ) denote the spacetime components while latin indices ( e.g. , , ) indicate the spatial components . the rest mass density , specific enthalpy , and pressure in the local rest frame are denoted by , , and , respectively . 
in cartesiancoordinates these relativistic hydrodynamic equations can be written in conservative form as where is the state vector of conservative variables and , , and are respectively the flux vectors in the , , and -directions , defined by ,~~ \mbox{\boldmath}_x = \left[\matrix{d v_x\cr m_x v_x+p\cr m_y v_x\cr m_z v_x\cr\left(e+p\right)v_x}\right].\ ] ] the flux vectors and are given by properly permuting indices .the conservative variables , , , , and represent respectively the mass density , three components of momentum density , and energy density in the reference frame .the variables in the reference frame are nonlinearly coupled to those in the local rest frame via the transformations where the lorentz factor is given by with .the system of conservation equations describing the motion of relativistic gas is completed with an equation of state that relates the thermodynamic quantities of specific enthalpy , rest mass density , and pressure . in generalthe equation of state can be expressed with the specific enthalpy expressed as a function of the rest mass density and the pressure the sound speed is then defined as the explicit form of the sound speed depends on the particular choice of the equation of state .the exact form of equation of state for a relativistic perfect gas composed of multiple components was derived by . for a single - component relativistic gasthe equation of state is written as where and are respectively the modified bessel function of the orders two and three and is a measure of inverse temperature . under the synge equation of state a relativistic perfect gasis entirely described in terms of the modified bessel functions .the synge equation of state for a relativistic gas can be written in the form where the quantity is defined by then the sound speed is written as and the relativistic adiabatic index is given by where .the quantities and are constant and equal if the gas remains ultrarelativistic or subrelativistic ( i.e. , for or for ) . for the intermediate regime and vary slightly differently between the two limiting cases . 
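The two limits quoted above are easy to check numerically: with c = k = m = 1 and the dimensionless temperature written as Theta = p/rho, the Synge enthalpy of a single-component gas is K3(1/Theta)/K2(1/Theta), and an effective adiabatic index follows from p = (Gamma - 1) rho epsilon. The sketch below evaluates both; the closed-form approximation shown for comparison is one commonly used in the literature and is an assumption of the illustration, not necessarily the exact form adopted in this paper.

```python
import numpy as np
from scipy.special import kve   # exponentially scaled K_nu; the scaling cancels in ratios

theta = np.logspace(-3, 3, 7)   # Theta = p / (rho c^2), with c = 1
zeta = 1.0 / theta              # inverse temperature, zeta = m c^2 / (k T)

# Synge specific enthalpy of a single-component gas: h = K3(zeta) / K2(zeta).
h_synge = kve(3, zeta) / kve(2, zeta)

# Effective adiabatic index from p = (Gamma - 1) rho eps with eps = h - 1 - Theta.
gamma_eff = 1.0 + theta / (h_synge - 1.0 - theta)

# A widely used closed-form approximation to the Synge enthalpy (assumption of
# this sketch).
h_approx = 2.5 * theta + np.sqrt(2.25 * theta**2 + 1.0)

for t, hs, ha, g in zip(theta, h_synge, h_approx, gamma_eff):
    print(f"Theta={t:9.3g}  h_Synge={hs:12.6g}  h_approx={ha:12.6g}  Gamma_eff={g:.4f}")
# Gamma_eff tends to 5/3 as Theta -> 0 (cold gas) and to 4/3 as Theta -> infinity (hot gas).
```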
for a multi - component relativistic gas ,the direct use of the synge equation of state involves the computation of bessel functions , thus requires significant computation cost and results in computational inefficiency .here we propose a new general equation of state for multi - component relativistic gas that uses analytical expression and is more efficient and suitable for numerical computations .we suppose that the relativistic gas is composed of electrons , positrons , and protons although more components of relativistic gas easily can be considered .the total number density is then given by where , , and are the electron , positron , and proton number densities , respectively .we ignore the production or annihilation of electron - positron pairs and assume the composition of electrons , positrons , and protons is maintained .the assumption of charge neutrality gives us the relations and .the total rest mass density and pressure are respectively given by and , where is the boltzmann constant and is the temperature .for our equation of state for multi - component relativistic gas we adopt the equation of state , previously introduced by and later used by , which closely reproduces the synge equation of state for a single - component relativistic gas .the equation of state takes the form where is the energy density in the local rest frame .it can be solved for the specific enthalpy using as for a multi - component relativistic gas , the total enthalpy is then given by ] is the state vector of primitive variables , and }}{1-v^2c_s^2}.\ ] ] the maximum and minimum eigenvalues are based on a simple application of the relativistic addition of velocity components decomposed into coordinates directions and simply reduce to in the one - dimensional case .numerical integration of relativistic hydrodynamic equations advances by evolving the state vector of conservative variables in time .however , in order to compute the flux vectors for the evolution , the primitive variables involved in the flux vectors should be recovered from the conservative variables at each time step by the inverse transformation the inverse transformation is nonlinearly coupled and reduces to a single equation for the pressure this nonlinear equation can be solved numerically using a newton - raphson iterative method in which the derivative of the equation with respect to pressure is given by once the pressure is found numerically , the rest mass density and velocity are recovered by the inverse transformation .this procedure of inversion from conservative to primitive variables is valid for a general equation of state by specifying the expression of specific enthalpy . the numerical integration of relativistic hydrodynamic equations proceeds on spatially discrete numerical cells in time , based on the finite difference method . in our implementationthe state vector at the cell center at the time step is updated by calculating the numerical flux vector along the -direction at the cell interface at the half time step as follows the numerical flux vector is calculated from the approximate riemann solution and is given in the form }{a_{x , i+1/2}^+-a_{x , i+1/2}^-},\end{aligned}\ ] ] where the maximum and minimum wave speeds are defined by here and are the left and right state vectors of the primitive variables , which are defined at the left and right edges of the cell interface , respectively . in the first order of spatial accuracy the left and right state vectors reduce to and . 
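Before continuing with the reconstruction step, here is a minimal sketch of the conservative-to-primitive inversion just described, reduced to a single root-finding problem for the pressure. A bracketed root finder is used in place of the Newton-Raphson iteration, and a constant-Gamma ideal gas stands in for the general enthalpy function, both purely for brevity of the illustration; any h(rho, p) could be dropped in.

```python
import numpy as np
from scipy.optimize import brentq

def enthalpy(rho, p, gamma=5.0 / 3.0):
    """Specific enthalpy h(rho, p); a constant-Gamma ideal gas used as a placeholder."""
    return 1.0 + gamma / (gamma - 1.0) * p / rho

def cons_to_prim(D, Mx, My, Mz, E, h=enthalpy):
    """Recover (rho, vx, vy, vz, p) from (D, M, E), assuming (with c = 1)
    D = rho*W,  M_i = rho*h*W^2*v_i  and  E = rho*h*W^2 - p."""
    M = np.sqrt(Mx * Mx + My * My + Mz * Mz)

    def f(p):
        v = M / (E + p)                    # since M = (E + p) v
        W = 1.0 / np.sqrt(1.0 - v * v)
        rho = D / W
        return rho * h(rho, p) * W * W - p - E

    # Bracket the root and solve: p must keep |v| < 1; expand upward until f > 0.
    p_lo = max(1e-16, M - E + 1e-16)
    p_hi = max(2.0 * p_lo, 1.0)
    while f(p_hi) <= 0.0:
        p_hi *= 10.0
    p = brentq(f, p_lo, p_hi)

    v = M / (E + p)
    W = 1.0 / np.sqrt(1.0 - v * v)
    return D / W, Mx / (E + p), My / (E + p), Mz / (E + p), p

# Round-trip check: rebuild conservative variables from known primitives.
rho, vx, p = 1.0, 0.5, 1.0
W = 1.0 / np.sqrt(1.0 - vx * vx)
hW2 = enthalpy(rho, p) * W * W
print(cons_to_prim(rho * W, rho * hW2 * vx, 0.0, 0.0, rho * hW2 - p))
# -> approximately (1.0, 0.5, 0.0, 0.0, 1.0)
```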
for second - order accuracy in spacethe left and right state vectors are interpolated as with the minmod limiter \mathrm{min}\{\left|\delta^+\mbox{\boldmath}_i^n\right|,\left|\delta^-\mbox{\boldmath}_i^n\right|\},\ ] ] or the monotonized central limiter \mathrm{min}\{2\left|\delta^+\mbox{\boldmath}_i^n\right|,2\left|\delta^-\mbox{\boldmath}_i^n\right| , \frac{1}{2}\left|\delta^+\mbox{\boldmath}_i^n+\delta^-\mbox{\boldmath}_i^n\right|\},\ ] ] where and . in our formulation ,the state vector of the primitive variables defined at the half time step , , is computed from a predictor step , , with the flux vector calculated by replacing the time step by the time step in equations ( 26 ) to ( 29 ) .this approach makes our code second order in time as well .the time step is restricted by the courant condition with . for multidimensional extensions ,the numerical integration along the -direction is applied separately to the - ( and - ) directions through the strang - type dimensional splitting . in order to maintain second - order accuracy the order of dimensional splitting is completely permuted in each successive sequence .we have applied the numerical scheme with the general equation of state , described in previous sections , to several test problems . in all the test problems the minmod limiter and a courant constant , , are used .the relativistic shock tube test is characterized initially by two different states separated by a discontinuity .as the initial discontinuity decays , distinct wave patterns consisting of shock waves , contact discontinuities , and rarefraction waves appear in the subsequent flow evolution . in the relativistic shock tube problemthe decay of the initial discontinuity significantly depends on the tangential velocity since the velocity components are coupled through the lorentz factor in the equations and the specific enthalpy also couples with the tangential velocity . as a result , the relativistic shock tube problem becomes more challenging in the presence of tangential velocities .this relativistic shock tube problem is very good test since it has an analytic solution where the numerical solution can be compared .we have performed two sets of the two - dimensional relativistic shock tube tests using the general equation of state for an electron - positron gas .the first test set is less relativistic , with , while the second test set is more severe , with a large initial internal energy ( ) . for each of the test sets ,two cases are presented , one having only parallel velocity components while the other has , in addition , tangential velocities . in the first set , the initial left and right states for the case of only parallel velocitiesare given by , , , , , and ; for the case where tangential velocities are included these additional initial values are and .for the second set the initial left and right states for the case of only parallel velocity are , , , , , and ; now for the case when tangential velocities are considered as well , and are taken . herethe subscripts and denote the left and right states separated by an initial discontinuity placed along the main diagonal in the two - dimensional computational plane .structures such as waves and discontinuities propagate along the diagonal normal to the initial discontinuity . 
here is velocity parallel to the wave normal in the plane , given by and is velocity tangential to the wave normal in the direction out of the plane , given by .the numerical computations are performed in a two - dimensional box with ] using cells for the first set and cells for the second set .the results from the numerical computations carried with the general equation of state for an electron - positron gas for the first test set are shown in figure [ fig3 ] .wave structures are measured along the main diagonal line to at times and for the cases of the only parallel and also tangential velocities , respectively .the numerical solutions are marked with open circles and the analytical solutions obtained using the numerical code available from are plotted with solid lines .our numerical scheme with the proposed general equation of state is able to reproduce all the wave structures with very good accuracy and stability , as shown in figure [ fig3 ] .the shock waves and rarefraction waves are captured correctly , while the contact discontinuities are relatively more smeared due to the use of the hll scheme and the minmod limiter .the inclusion of the tangential velocity leads to the same basic wave pattern as in the absence of tangential velocity , but the numerical solutions are significantly modified .figure [ fig4 ] shows the results from the numerical computations with the general equation of state for an electron - positron gas for the second test set .structures are measured along the main diagonal line to at times and for the cases of the parallel only and included tangential velocities , respectively .the numerical solutions compare well with the analytical solutions .shock waves and contact discontinuities propagate to the right , while rarefraction waves move to the left .as shown in figure [ fig4 ] , all the wave structures are accurately reproduced and their stability is good ; however , the contact discontinuities are somewhat smeared .again , the inclusion of the tangential velocity has a considerable influence on the numerical solutions . for a quantitative comparison with analytical solutions we have calculated the norm errors of the rest mass density , parallel and tangential velocities , and pressure defined by , e.g. , for density , , where the superscripts and represent numerical and analytical solution , respectively .the norm errors are given in table [ tab1 ] and demonstrate a very good agreement between the numerical and analytical solutions in all the primitive variables .we also carried out the two - dimensional relativistic shock tube tests considered in figures [ fig3 ] and [ fig4 ] for several other possible combinations of the tangential velocity pairs , , and , , . in general , the stable wave structures are reproduced and the numerical solutions are reasonably comparable to the analytical solutions for all the different combinations of tangential velocity pairs . 
as a consequence ,the results from the two - dimensional relativistic shock tube tests show that our code is able to robustly and accurately capture discontinuities and waves .the relativistic shock reflection problem involves a collision between two equal gas flows moving at relativistic speeds in opposite directions .the collision of the two cold gases causes compression and heating of the gases as kinetic energy is converted into internal energy .this generates the two strong shock waves to propagate in opposite directions , leaving the gas behind the shocks stationary .the analytical solution of this relativistic shock reflection problem is obtained by and .we have tested the two - dimensional relativistic shock reflection problem using the general equation of state for an electron - positron gas .two cases are presented here .as for the shock tube tests , one includes only the parallel velocity and the other includes in addition the tangential velocity .the left and right states for the case of only parallel velocity are initially given by , , , , , and , and for the case when the tangential velocity is included , oppositely directed tangential velocities and are assumed to be present on either side of the plane . here the subscripts and stand for the left and right states separated by the initial collision points located along the main diagonal in the two - dimensional computational plane .the two shock waves propagate diagonally in opposite directions , and should keep symmetric with respect to the initial collision points . as in the relativistic shock tube tests , is velocity parallel to the wave normal in the plane and is velocity tangential to the wave normal in the direction out of the plane .the numerical computations are carried out in a two - dimensional box with ] using cells .figure [ fig5 ] shows the results from our relativistic shock reflection tests with the general equation of state for an electron - positron gas .structures are measured along the main diagonal line to at time .the numerical solutions are in very good agreement with the analytical solutions . in both cases ,the shock wave is resolved by two numerical cells , and there are no numerical oscillations behind the shocks .as shown in figure [ fig5 ] , the compression ratio between shocked and unshocked gases is about for the rest mass density and about for the pressure in the case of only parallel velocities ; the inclusion of the tangential velocities increases the compression ratio to about for the rest mass density and to about for the pressure .near , the density distribution slightly underestimates the analytical solution and a stationary discontinuity in the tangential velocity is somewhat diffused due to the numerical effect of reflection heating phenomena .the norm errors of the rest mass density , parallel and tangential velocities , and pressure are also given in table [ tab1 ] .a direct comparison with the analytical solutions shows that the measured errors are very small for all the primitive variables .the accuracy of numerical solutions depends on the number of cells spanned by the computational box .we have run the relativistic shock reflection test in figure [ fig5](b ) with different numerical resolutions to check the convergence rate . 
except for the numerical resolutions the initial conditions are identical to those used in the test in figure [ fig5](b ) .we have computed the norm errors for rest mass density , velocities , and pressure with different resolutions .numerical resolutions of , , , , , and cells give norm errors for the rest mass density of , , , , , and , respectively . as expected for discontinuous problems , first - order convergence in the norm errors for rest mass densityis obtained with increasing the numerical resolution .similar clear trends toward convergence are seen in the norm errors for velocities and pressure . in a relativistic blast wave test a large amount of energy is initially deposited in a small finite spherical volume and the subsequent expansion of that overpressured region is evolved forward in time .this produces a spherical shock propagating outward from an initial discontinuity at an arbitrary radius .this radial blast wave explosion provides a useful test problem to explore the spherically symmetric properties in highly relativistic flow speeds .we have performed the three - dimensional relativistic blast wave test with the general equation of state for an electron - proton gas .the initial condition for the relativistic blast wave problem consists of two constant states given by , , , , , and , where subscripts and represent the inner and outer states separated by an initial discontinuity at the radius in the three - dimensional computational box .the numerical computations are performed in a three - dimensional box with ] , and ] , ] using a uniform numerical grid of cells .the beam has an initial radius ( corresponding to cells ) , is launched from the origin , and propagates through a uniform static ambient medium along the positive -direction .outflow boundary conditions are set at all boundaries except along the symmetry axis where reflecting boundary conditions are used and in the injection region where an inflow boundary condition is imposed to keep the beam constantly fed .the monotonized central limiter and the courant constant are used in all these jet propagation cases .the images in figure [ fig8 ] display the logarithms of the rest mass density on the plane at time for the four different cases in the simulations of relativistic axisymmetric jets .the bow shock , the beam , and the cocoon surrounding the beam can be clearly identified in all four cases , confirming the ability of our code to follow complex relativistic flows . for each case , a bow shock that separates the shocked jet material from the shocked ambient medium is driven into the ambient medium .the beam itself is terminated by a mach disk where much of the beam s kinetic energy is converted into internal energy .shocked jet material flows backward along the working surface into a cocoon , resulting in the development and mixture of turbulent vortices in the cocoon , and the interaction of these vortices with the beam forms oblique internal shocks within the beam close to the mach disk , which causes the deceleration of the jet .the four different cases , however , show differences in specific morphological and dynamical properties .the cold jet ( case a ) propagates at the slowest velocity , and the jet produces a broad bow shock , a thick cocoon , and has the mach disk located quite far behind the bow shock . 
on the contrary , the hot jet ( case b ) is dominated by a narrow bow shock , has a thin cocoon , and its mach disk lies very close to the bow shock , thanks to its having the fastest advance velocity .the electron - positron and the electron - proton jets ( cases c and d ) propagate faster than the cold jet , but slower than the hot jet .therefore , the electron - positron and the electron - proton jets possess morphological and dynamical properties intermediate between the cold and the hot jets . in terms of our simulation parameters, the electron - positron jet tends to be more similar to the cold jet , while the electron - proton jet seems to share more features with the hot jet . in figure[ fig9 ] , the position of the bow shock is plotted as a function of time for the four different gases .the symbols mark the numerical estimate for a selected time interval for each case , and the lines represent the one - dimensional theoretical estimate in equation ( 32 ) .the numerical simulations are in good agreement with the one - dimensional theoretical estimates for the bow shock location for all cases .most numerical codes for special relativistic hydrodynamics have used the ideal gas equation of state with a constant adiabatic index , but this is a poor approximation for most relativistic astrophysical flows .we proposed a new general equation of state for a multi - component relativistic gas , based on the synge equation of state for a relativistic perfect gas .our proposed general equation of state is very efficient and suitable for numerical relativistic hydrodynamics since it has an analytic expression .the thermodynamic quantities computed using the proposed general equation of state behave correctly asymptotically in the limits of hot and cold gases . for intermediate regimesthe thermodynamic quantities vary between those for the two limiting cases , depending on the composition of the relativistic gas .we also presented a multidimensional relativistic hydrodynamics code incorporating the proposed general equation of state for a multi - component relativistic gas .our numerical code is based on the hll scheme , which avoids a full characteristic decomposition of the relativistic hydrodynamic equations and uses an approximate solution to the riemann problem for flux calculations . 
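For context, the HLL scheme referred to above replaces the full characteristic decomposition by a single averaged intermediate state bounded by estimates of the fastest left- and right-going signal speeds. The sketch below shows the generic HLL interface flux; the wave-speed estimates and the variable layout are assumptions rather than the specific choices made in this code.

import numpy as np

def hll_flux(U_L, U_R, F_L, F_R, S_L, S_R):
    # Harten-Lax-van Leer (HLL) interface flux.
    # U_L, U_R: conserved states on either side of the cell interface.
    # F_L, F_R: physical fluxes evaluated from those states.
    # S_L, S_R: estimates of the smallest and largest signal speeds; how these
    # are obtained (e.g. from the relativistic sound speed) is not shown here.
    if S_L >= 0.0:
        return np.asarray(F_L)
    if S_R <= 0.0:
        return np.asarray(F_R)
    U_L, U_R = np.asarray(U_L), np.asarray(U_R)
    F_L, F_R = np.asarray(F_L), np.asarray(F_R)
    return (S_R * F_L - S_L * F_R + S_L * S_R * (U_R - U_L)) / (S_R - S_L)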
since the numerical code is fully explicit , retaining a second - order accuracy in space and time , it is simple to extend the code to different geometries or to produce parallelized versions of this code .the analytical formulation of the proposed equation of state and the numerical scheme being free of complete characteristic decomposition make the code very efficient and robust in ultrarelativistic multidimensional problems .the accuracy and robustness of the code are demonstrated in two dimensions using the test problems of the relativistic shock tube and the relativistic shock reflection and in three dimensions using the test problem of the relativistic blast wave .the direct comparisons of numerical results with analytical solutions show that shocks and discontinuities are correctly resolved even in highly relativistic test problems with nonvanishing tangential velocities .results from the three - dimensional simulations of the relativistic axisymmetric jets demonstrate the ability of our code to follow complex relativistic flows as well as the flexibility enough to be applied to practical astrophysical problems .these simulations show that the morphology and dynamics of the relativistic jets are significantly influenced by the different equation of state and by different compositions of a relativistic perfect gas .aloy , m. a. , ibez , j. m. , mart , j. m. , & mller , e. 1999 , , 122 , 151 anninos , p. , & fragile , p. c. 2003 , , 144 , 243 blandford , r. d. , & mckee , c. f. 1976 , phys .fluids , 19 , 1130 choi , e. , wiita , p. j. , & ryu , d. 2007 , , 655 , 769 del zanna , l. , & bucciantini , n. 2002 , , 390 , 1177 donat , r. , font , j. a. , ibez , j. m. , & marquina , a. 1998 , j. comput . phys . , 146 , 58 duncan , g. c. , & hughes , p. a. 1994 , , 436 , l119 einfeldt , b. 1988 , siam j. numer ., 25 , 294 falle , s. a. e. g. , & komissarov , s. s. 1996 , , 278 , 586 giacomazzo , b. , & rezzolla , l. 2006 , j. fluid mech . , 562 , 223 harten , a. , lax , p. d. , & van leer , b. 1983 , siam rev ., 25 , 35 hughes , p. a. , miller , m. a. , & duncan , g. c. 2002 , , 572 , 713 komissarov , s. s. , & falle , s. a. e. g. 1998 , , 297 , 1087 landau , l. d. , & lifshitz , e. m. 1959 , fluid mechanics ( london : pergamon press ) lister , m. l. , et al .2009 , , 138 , 1874 mart , j. m. , mller , e. , font , j. a. , ibez , j. m. , & marquina , a. 1997 , , 479 , 151 mathews , w. g. 1971 , , 165 , 147 meliani , z. , keppens , r. , & giacomazzo , b. 2008 , , 491 , 321 meliani , z. , sauty , c. , tsinganos , k. , & vlahakis , n. 2004 , , 425 , 773 mignone , a. , & bodo , g. 2005 , , 364 , 126 mignone , a. , & mckinney , j. c. 2007 , , 378 , 1118 mignone , a. , plewa , t. , & bodo , g. 2005 , , 160 , 199 norman , m. l. , winkler , k .- h .a. , smarr , l. , & smith , m. d. 1982 , , 113 , 285 perucho , m. , & mart , j. m. 2007 , , 382 , 526 pons , j. a. , mart , j. m. , & mller , e. 2000 , j. fluid mech . , 422 , 125 rosen , a. , hughes , p. a. , duncan , g. c. , & hardee , p. e. 1999 , , 516 , 729 rossi , p. , mignone , a. , bodo , g. , massaglia , s. , & ferrari , a. 2008 , , 488 , 795 scheck , l. , aloy , m. a. , mart , j. m. , gmez , j. l. , & mller , e. 2002 , , 331 , 615 schneider , v. , katscher , u. , rischke , d. h. , waldhauser , b. , maruhn , j. a. , & munz , c .- d .1993 , j. comput .phys . , 105 , 92 strang , g. 1968 , siam j. numer ., 5 , 506 synge , j. l. 1957 , the relativistic gas ( amsterdam : north - holland ) wilson , j. r. , & mathews , g. j. 
2003, Relativistic Numerical Hydrodynamics (Cambridge: Cambridge Univ. Press)

[Figure caption: as a function of inverse temperature, for different compositions of the relativistic gas. The compositions from electron-positron to electron-proton are shown using dotted, short dashed, dot-short dashed, long dashed, and solid curves, respectively; the exact Synge solutions for electron-positron and electron-proton gases are drawn as the red dotted and solid lines for comparison.]

[Figure caption: specific enthalpy and sound speed as functions of inverse temperature for the different equations of state. The long dashed and short dashed lines correspond to the ideal gas equation of state with constant adiabatic index; results for the proposed general equation of state for electron-positron and electron-proton gases are shown by the dotted and solid lines, respectively.]

[Figure caption: relativistic shock tube test. The wave structures are measured along the main diagonal line; numerical solutions are marked with open circles and analytical solutions are plotted with solid lines. Panel (b) is the same as (a) but for the case with the tangential velocity included.]

[Figure caption: relativistic blast wave test. Radial structures are measured along the main diagonal line. Numerical computations carried out with the general equation of state for an electron-proton gas and those done with an ideal gas equation of state with constant adiabatic index are marked using open and filled circles, respectively.]

Table 1. L1 norm errors for the rest mass density, parallel velocity, tangential velocity, and pressure (only the leading digits were recovered):
RST3a: 4.96e, 3.91e, 0.00e, 3.46e
RST3b: 7.85e, 2.38e, 1.64e, 3.11e
RST4a: 4.43e, 2.83e, 0.00e, 5.12e
RST4b: 1.43e, 2.20e, 8.46e, 4.21e
RSR5a: 1.81e, 2.33e, 0.00e, 2.61e
RSR5b: 2.48e, 2.92e, 9.92e, 6.91e

Table 2. Parameters of the jet simulation cases a-d (several entries missing):
a: 7.1, 2, -, 8
b: 7.1, 2, -, 6
c: 7.1, 2, 0.0, 7
d: 7.1, 2, 1.0, 7
The ideal gas equation of state with a constant adiabatic index, although commonly used in relativistic hydrodynamics, is a poor approximation for most relativistic astrophysical flows. Here we propose a new general equation of state for a multi-component relativistic gas which is consistent with the Synge equation of state for a relativistic perfect gas and is suitable for numerical (special) relativistic hydrodynamics. We also present a multidimensional relativistic hydrodynamics code incorporating the proposed general equation of state; the code is based on the HLL scheme and does not require a full characteristic decomposition of the relativistic hydrodynamic equations. The accuracy and robustness of the code are demonstrated in multidimensional calculations through several highly relativistic test problems that take nonvanishing tangential velocities into account. Results from three-dimensional simulations of relativistic jets show that the morphology and dynamics of the jets are significantly influenced by the choice of equation of state and by the composition of the relativistic perfect gas. Our new numerical code, combined with the proposed equation of state, is very efficient and robust and, unlike previous codes, gives very accurate results for the thermodynamic variables in relativistic astrophysical flows.
modern parallel and distributed storage systems encapsulate the storage layer behind an object abstraction [ 1 ] since this allows to hide the implementation details behind a key - valued interface : having ` get(oid ) ` and ` set(oid , value ) ` functions where ` oid ` is an object identifier . a device / service that exposes such an interfaceis known as an object storage device ( osd ) . in this workwe introduce the concept of a versioning osd ( vosd ) : incorporating versioning primitives as part of the osd api . a vosd stores multiple versions of an object , allowing the user to execute time - travel operations and to access an object s lineage .the vosd interface ( shown below ) requires a ` version_id ` parameter to be passed to any call .additionally , similar collection - wide operations can be implemented that allow handling versions for a sets of objects ( with a corresponding ` collection_id ` parameter ) .the regular osd api can be supported by assuming that ` get / set ` operations return / modify the latest version .clone(v_id , o_id ) get(v_id , o_id ) set(v_id , o_id , value ) diff(v_1 , v_2 , o_id ) parent(v_1 , o_id ) children(v_1 , o_id ) clone(v_id , c_id ) ...a vosd serves as a powerful building block , as distributed and parallel storage systems can enable new services that leverage multiversioning , allowing a user / application to choose from an spectrum of multiversioning alternatives : distinct consistency needs can be served depending on the use case .we look at two of these use cases next .to exemplify the utility of having versioning as a first - class citizen in an osd interface , we look at two use cases : one distributed and another one in a parallel setting .we focus on the issues at the single osd level since these are independent of scale . in transactional database systems ,versioning is usually employed to implement optimistic concurrency control . in thissetting , instead of acquiring locks , every transaction operates over a snapshot of the database in an isolated manner .when a transaction is ready to be committed , a validation phase checks that it does nt conflict with others , in which case the transaction is aborted and has to be restarted .implementing multiversion concurrency control ( mvcc ) [ 2 ] requires keeping track of the highest - committed transaction ( hct ) .access to this record has to be serialized to avoid inconsistencies .once this hct record is available ( e.g. as an object itself ) , implementing mvcc on top of a vosd is relatively straight forward . at the beginning of a transaction, the collection of objects that is being transactionally managed is snapshotted . at the end of the transaction ,a lock is acquired on the hct record and every object in the isolated snapshot is ` diff`ed against the corresponding hct .if no conflicts arise , the hct is assigned to point to the new transaction and the lock is released .hpc applications use checkpointing as their main fault - tolerant technique : periodically dump checkpoints to storage and , in the advent of failures , recover by reading the latest checkpoint .a recent trend is to provide asynchronous interfaces ( e.g. see the recent doe fastforward storage and i / o effort [ 3 ] ) to applications .asynchrony allows an application to request an i / o operation and not have to wait for its completion .a challenge arises when multiple i / o operations depend on each other , since in order to avoid inconsistencies ( e.g. 
abort if a dependant request fails ) , the user needs to keep track of these dependencies and add new logic at the application level .all this extra code introduces overhead and causes waste of computational resources . by employing versioning ,an hpc application can tag every i / o operation with its corresponding checkpoint version and let multiple versions co - exist .an out - of - core process can merge multiple versions or garbage collect unused ones to free - up space .additionally , similarly to the mvcc case , a record that keeps track of the highest - readable checkpoint ( hrc ) can be used to give analysis and visualization applications access to consistent checkpoints ( an isolation level known as read - atomicity [ 4 ] ) .there are mainly three alternatives for implementing an osd api : by using an in - memory backend , key - value store or a local posix filesystem . incorporating versioning to each of thesecan be done in distinct ways : * * posix*. if the underlying filesystem supports it , copy - on - write ( cow ) can be used to represent multiple versions of an object . if filesystem lacks support for cow , a vosd can fall - back to having per - version copies . * * in - memory*. copy - on - write memory can be employed .for complex objects this might carry an extra overhead .in such cases , alternatives like ropes or interning can be used .* * key - value store*. the most straight - forward way to implement it is by keeping a copy for each version of an object .this might be prohibitive for large objects .we next present preliminary evaluation of implementations for each of the above .the ceph distributed storage platform [ 5 ] provides an object interface that exposes a ` clone ( ) ` operation , allowing applications to create snapshots of an object .internally , ceph abstracts storage nodes as osds and currently supports the three backend types mentioned earlier : * * posix*. a ceph osd can be backed by either xfs , ext4 , zfs or btrfs . in our experiments we use xfsthus we can not make use of a cow operation . * * in - memory*. ceph osds implement a custom in - memory store ( memstore ) , using cow to back the snapshot operation .* * key - value store*. the key - value store of a ceph osd is backed by an instance of leveldb , which is what we use . since leveldbdoes nt support versioning , cloning an object results in making a full copy of an object . *experimental setup*. our experiments were conducted on a machine with two 2.0ghz dual - core opteron 2212 , 8 gb ddr-2 ram , one 250 gb seagate 7200-rpm sata hard drive , running ubuntu 12.04 .a ceph osd daemon runs on the machine and a local client connects to it to operate on objects stored in it .we measure two aspects : version creation and retrieval .we generate a workload consisting of 100 objects and 100 versions .the size of each object is 4 mb .we modify a portion of the object for each version ( 64 16 kb chunks modified at random ) .we measure the time it takes to create this workload for each backend .table 1 shows the results . ....backend phase time ( ms ) ---------- -------------- ----------- xfs f 676 s 59 m 20829 ---------- ------------- ------------ memstore f 106 s 47 m 9247 ---------- --------------- ---------- leveldb f 196 s 192 m 9548 .... we break down the timings into three phases : ` f ` which corresponds to the time it takes to create the first revision . `s ` denotes the average time that it takes to create a snapshot of the collection . 
`m ` corresponds to the average time it takes to modify the 100 objects ( 64 16 kb modifications for each object ) . for the workload described above, we read the latest version of an object , as well as a randomly selected version ( in the [ 1,100 ] range ) . the object being readis randomly chosen .we execute 100 queries of each type and report the average .table 2 shows the results . ....backend latest ( ms ) random ( ms ) ----------- ------------- ------------- xfs 11.5 11.7 memstore 3.2 3.1 leveldb 6.4 6.4 ....as part of our ongoing project , we are defining a generalized distributed multiversioning framework that will be able to support multiple flavors of versioning . as mentioned previously , applications can customize this service to their particular needs and observe distinct consistency guarantees .we are currently looking at other use cases that fit in this multi - versioned view : distributed softare transactional memory , management of massive datasets , transactional stream processing and programmable filesystems .[ 1 ] m. mesnier , g. ganger , and e. riedel , `` object - based storage , '' _ ieee communications magazine _ , vol .2003 , pp . 8490 .[ 5 ] s.a .weil , s.a .brandt , e.l .miller , d.d.e .long , and c. maltzahn , `` ceph : a scalable , high - performance distributed file system , '' _ proceedings of the 7th symposium on operating systems design and implementation _ , berkeley , ca , usa : usenix association , 2006 , pp .
The ability to store multiple versions of a data item is a powerful primitive that has had a wide variety of uses: relational databases, transactional memory, and version control systems, to name a few. However, each implementation uses a very particular form of versioning that is customized to the domain in question and hidden away from the user. In our ongoing project, we are reviewing and analyzing multiple uses of versioning in distinct domains, with the goal of identifying the basic components required to provide a generic distributed multiversioning object storage service and of defining how these can be customized to serve distinct needs. With such a primitive, new services can leverage multiversioning to ease development and to provide specific consistency guarantees that address particular use cases. This work presents early results that quantify the trade-offs of implementing versioning at the local storage layer.
in 1868 , the first autonomous reviewing journal for publications in mathematics the `` jahrbuch ber die fortschritte der mathematik '' was started , a reaction of the mathematical community to the increasing number of mathematical publications .the new information service should inform the mathematicians about recent developments in mathematics in a compact form .today , we encounter a similar situation with mathematical software . until now , a comprehensive information service for mathematical software is still missing .we describe an approach towards a novel kind of information service for mathematical software .a core feature of our approach is the idea of systematically connecting mathematical software and relevant publications .there have already been some activities towards the development of mathematical software information services . a far - reaching concept for a semantic web service for mathematical softwarewas developed within the monet project which tries to analyze the specific needs of a user , search for the best software solution and organize the solution by providing a web service .however , the realization of such an ambitious concept requires a lot of resources .also , a number of specialized online portals and libraries for mathematical software were developed .one of the most important portals for mathematical software is the netlib provided by nist .netlib provides not only metadata for a software but also hosts the software .netlib has developed an own classification scheme , the gams system , which allows for browsing in the netlib .other important manually maintained portals , e.g. , orms , plato or the mathematical part of freecode , provide only metadata about software .mathematical software and publications are closely interconnected .often , ideas and algorithms are first presented in publications and later implemented in software packages .on the other hand , the use of software can also inspire new research and lead to new mathematical results .moreover , a lot of publications in applied mathematics use software to solve problems numerically .the use of the publications which reference a certain software is a central building block of our approach .identification of software references in the zbmath database : : there are essentially two different types of publications which refer to a software , publications describing a certain software in detail , and publications in which a certain software is used to obtain or to illustrate a new mathematical result . in a first step , the titles of publications were analyzed to identify the names of mathematical software .heuristic methods were developed to search for characteristic patterns in the article titles , e.g. , ` software ' , ` package ' , ` solver ' in combination with artificial or capitalized words .it was possible to detect more than 5,000 different mathematical software packages which were then evaluated manually .software references in publications indirect information of software : : the automatically extracted list of software names ( see above ) can be used as a starting point for searching software references in the abstracts : more than 40,000 publications referring to previously identified software packages were found in the zbmath database .of course , the number of articles referring to a given software is very different , ranging from thousands of publications for the big players ( e.g. 
mathematica , matlab , maple ) to single citations for small , specialized software packages .an evaluation of the metadata of the publications , especially their keywords and msc classifications has shown that most of the information is also relevant for the cited software and can therefore be used to describe the latter . for instance, we collect the keywords of all articles referring to a certain software and present them in swmath as a keyword cloud , common publications and the msc are used to detect similar software . more information about software : : web sites of a software if existing are an important source for direct information about a software . as mentioned above , there are also special online portals which sometimes provide further information about certain mathematical software packages .metadata scheme for software : : the formal description of software can be very complex .there are some standard metadata fields which are also used for publications , like authors , summary , key phrases , or classification . for softwarehowever , further metadata are relevant , especially the url of the homepage of a software package , version , license terms , technical parameters , e.g. , programming languages , operating systems , required machine capacities , etc ., dependencies to other software packages ( some software is an extension of another software ) , or granularity . unfortunately , often a lot of this metadata information is not available or can only be found with big manual effort .the focus of the metadata in swmath is therefore on a short description of the software package , key phrases , and classification . for classification, we use the msc2010 scheme even though the mathematics subjects classification is not optimal for software . quality filter for software : : swmath aims at listing high - quality mathematical software .up to now , no peer - reviewing control system for software is established . however, the references to software in the database zbmath can be used as an indirect criterion for the quality of a software : the fact that a software package is referred in a peer - reviewed article also implies a certain quality of the software .there are several reasons which suggest an extension of the publication - based approach .a major drawback of the publication - based approach is the time - delay between the release of software and the publication of an article describing the software .this delay can be up to several years for peer - reviewed journals . a second reason , not every software is referenced in peer - reviewed publications .often , software is described in technical reports or conference proceedings .also , not all publications describing or using mathematical software are contained in the zbmath database , e.g. , if a software was developed for a special application and articles on it were published in a journal outside the scope of zentralblatt math . 
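As an illustration of how the metadata of citing publications can be propagated to a software entry, the following sketch (in Python) aggregates the keywords of the referring articles into a frequency table for a keyword cloud and scores the similarity of two packages by the overlap of their citing-publication sets. The data layout and the use of a Jaccard measure are assumptions for illustration and do not describe the actual swMath implementation.

from collections import Counter

def keyword_cloud(citing_articles):
    # Frequency of keywords over all articles that refer to one software
    # package; citing_articles is assumed to be an iterable of dicts that
    # carry a 'keywords' list.
    counts = Counter()
    for article in citing_articles:
        counts.update(article.get("keywords", []))
    return counts

def similarity(pubs_a, pubs_b):
    # Jaccard overlap of the citing-publication sets of two packages,
    # one possible way to flag "similar software".
    a, b = set(pubs_a), set(pubs_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0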
in order to build a comprehensive information service about mathematical software, we therefore still use other sources of information as online portals for mathematical software , contacts to renowned mathematical institutions , research in google and other search engines with heuristic methods .one problem here is the quality control of this software .being listed on a renowned portal for mathematical software should be a clear indicator for the quality of a software , whereas a mere google hit does not mean much with respect to quality .swmath is a free open - access information service for the community .the development and maintenance of it , however , are not for free . for sustainability , the resources needed for the maintenance of the service must be minimized .automatic methods and tools are under development to search for mathematical software in the zbmath database , and to maintain and update the information on software ( e.g. an automatic homepage verification tool ) . in order to ease the maintenance of the service , the developments of the user interface and the retrieval functionalitiesare carried out in close coordination with the corresponding developments in zbmath .the swmath service enhances the existing information services provided by fiz karlsruhe / zentralbatt math .the integration of the database swmath in the information services of zentralblatt math contributes to its sustainability . at the moment, links from software - relevant articles to zbmath are provided . in the near future , back links from zbmath to swmath will be added too .the first prototype of the swmath service was published in autumn 2012 .currently , the service contains information about nearly 5,000 mathematical software packages .it can be found at http://www.swmath.org .the user interface of swmath concentrates on the essentials , containing simple search and an advanced search mask .then a list of the relevant software is presented .the detailed information about this software is shown if the name is clicked .it contains a description of the software , a cloud representation of key phrases ( auto - generated from the key phrases of the publications ) , the publications referring to the software , the most important msc sections , similar software and a plot showing the number of references over time .the latter is an indicator for usefulness , popularity and acceptance of a package within the mathematical community .swmath is a novel information service on mathematical software basing on the analysis of mathematical publications .automatic tools periodically check the availability of urls .further heuristic methods to automatically extract relevant information from software websites are currently developed .another possibility to keep the software metadata up - to - date is direct contact with ( selected ) software authors and providers .the user interface is under permanent development ; we recently added a browsing feature and will further enhance the usability of the swmath web application .in order to meet the demands of the mathematical software community , we created an online questionnaire which has recently been distributed to several thousand participants , https://de.surveymonkey.com/s/swmath-survey .9 monet project , http://www.ist-world.org/projectdetails.aspx?\projectid=bcfbe93045764208a1c5173cc4614852&sourcedatabaseid=9cd97ac2e51045e39c2ad6b86dce1ac2 netlib , http://www.netlib.org guide to available mathematical software ( gams ) , http://http://gams.nist.gov/ oberwolfach 
References to Mathematical Software (ORMS), http://orms.mfo.de
Decision Tree for Optimization Software (PLATO), http://plato.asu.edu/guide.html
Math part of Freecode, http://freecode.com/tags/mathematics
MSC2010, http://www.msc2010.org
an information service for mathematical software is presented . publications and software are two closely connected facets of mathematical knowledge . this relation can be used to identify mathematical software and find relevant information about it . the approach and the state of the art of the information service are described here .
transformation - based learning is a relatively new machine learning method , which has been as effective as any other approach on the part - of - speech tagging problem ( brill , 1995a ) .we are utilizing transformation - based learning for another important language task called dialogue act tagging , in which the goal is to label each utterance in a conversational dialogue with the proper dialogue act .dialogue act _ is a concise abstraction of a speaker s intention , such as suggest or accept . recognizing dialogue acts is critical for discourse - level understanding and can also be useful for other applications , such as resolving ambiguity in speech recognition . but computing dialogue acts is a challenging task , because often a dialogue act can not be directly inferred from a literal reading of an utterance .figure [ ex - das ] presents a hypothetical dialogue that has been labeled with dialogue acts .[ cols="^,^,<,^",options="header " , ] as a preliminary experiment we ran ten trials with five committee members , testing on held - out data .figure [ confidences ] presents average scores and standard deviations , varying the minimum confidence , * m*. for a given instance , if at least * m * committee members agreed on a tag , then the most popular tag was applied , breaking ties in favor of the committee member that was developed the earliest ; otherwise no tag was output .the results show that the committee approach assigns useful confidence measures to the tags : all five committee members agreed on the tags for 45.12% of the instances , and 90.09% of those tags were correct .also , for 69.79% of the instances , at least four of the five committee members selected the same tag , and this tag was correct 83.53% of the time .we foresee that our module for tagging dialogue acts can potentially be integrated into a larger system so that , when transformation - based learning can not produce a tag with high confidence , other modules may be invoked to provide more evidence . in addition , like boosting , the committee method improves the overall accuracy of the system . by selecting the most popular tag among all five committee members , the average accuracy in tagging unseen data was 73.45% , while using the first committee member alone resulted in a significantly ( ) lower average score of 70.79% .previously , the best success rate achieved on the dialogue act tagging problem was reported by reithinger and klesen ( 1997 ) , whose system used a probabilistic machine learning approach based on n - grams to correctly label 74.7% of the utterances in a test corpus .( see samuel , carberry , and vijay - shanker ( 1998a ) for a more extensive analysis of previous work on this task . ) as a direct comparison , we applied our system to exactly the same training and testing set . over five runs ,the system achieved an average accuracy of 75.12%.34% , including a high score were produced in this experiment .] of 77.44% .in addition , we ran a direct comparison between transformation - based learning and c5.0 ( rulequest research , 1998 ) , which is an implementation of the decision trees method .the accuracies on held - out data for training sets of various sizes are presented in figure [ accuracy - graph ] . 
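A minimal sketch of the committee tagging scheme described above is given below (in Python, not the system's actual implementation): an instance receives a tag only if at least m committee members agree on it, and ties between equally popular tags are broken in favor of the earliest-developed committee member. The data structures are assumptions made for illustration.

from collections import Counter

def committee_tag(member_tags, m):
    # Tag one instance from the votes of an ordered committee.
    # member_tags: tags proposed by the committee members, ordered from the
    #              earliest-developed member to the latest.
    # m:           minimum number of agreeing members needed to output a tag.
    # Returns the winning tag, or None when no tag reaches the threshold.
    if not member_tags:
        return None
    counts = Counter(member_tags)
    best = max(counts.values())
    if best < m:
        return None
    winners = {tag for tag, count in counts.items() if count == best}
    # Break ties in favor of the earliest committee member.
    for tag in member_tags:
        if tag in winners:
            return tag

# Hypothetical usage with five members:
# committee_tag(["suggest", "accept", "suggest", "suggest", "accept"], m=4) -> None
# committee_tag(["suggest", "accept", "suggest", "suggest", "accept"], m=3) -> "suggest"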
for transformation - based learning, we averaged the scores of ten trials for each training set ( to factor out the random effects of the monte carlo method ) , and the standard deviations are represented by error bars in the graph .these experiments did not utilize the committee method , and we would expect the scores to improve when this extension is used . with c5.0 , we wanted to use the same features that were effective for transformation - based learning , but we encountered two problems : 1 ) since c5.0 requires that each feature take exactly one value for each instance , it is very difficult to utilize the cue patterns feature .we decided to provide one boolean feature for each possible cue pattern , which was set to true for instances that included that cue pattern and false otherwise .2 ) our transformation - based learning system utilized the system - generated tag of the preceding instance .c5.0 can not use this information , as it requires that the values of all of the features are computed before training begins .the training times of transformation - based learning and c5.0 were relatively comparable for any number of conditions , although boosting sometimes resulted in a significant increase in training time .the accuracy scores of transformation - based learning and c5.0 , with and without boosting , are not significantly different , as shown in figure [ accuracy - graph ] .this paper has described the first investigation of transformation - based learning applied to discourse - level problems .we extended the algorithm to address two limitations of transformation - based learning : 1 ) we developed a monte carlo version of transformation - based learning , and our experiments suggest that this improvement dramatically increases the efficiency of the method without compromising accuracy .this revision enables transformation - based learning to work effectively on a wider variety of tasks , including tasks where the relevant conditions and condition combinations are not known in advance as well as tasks where there are a large number of relevant conditions and condition combinations .this improvement also decreases the labor demands on the human developer , who no longer needs to construct a minimal set of rule templates .it is sufficient to list all of the conditions that might be relevant and allow the system to consider all possible combinations of those conditions .2 ) we devised a committee strategy for computing confidence measures to represent the reliability of tags . in our experiments , this committee method improved the overall tagging accuracy significantly .it also produced useful confidence measures ; nearly half of the tags were assigned high confidence , and of these , 90% were correct . for the dialogue act tagging task, our modified version of transformation - based learning has achieved an accuracy rate that is comparable to any previously reported system .in addition , transformation - based learning has a number of features that make it particularly appealing for the dialogue act tagging task : 1 . transformation - based learning s learned model consists of a relatively short sequence of intuitive rules , stressing relevant features and highlighting important relationships between features and tags ( brill , 1995a ) .thus , transformation - based learning s learned model offers insights into a _ theory _ to explain the training data .this is especially useful in dialogue act tagging , which currently lacks a systematic theory .2 . 
with its iterative training algorithm ,when developing a new rule , transformation - based learning can consider tags that have been produced by previous rules ( ramshaw and marcus , 1994 ) .since the dialogue act of an utterance is affected by the surrounding dialogue acts , this leveraged learning approach can directly integrate the relevant contextual information into the rules .in addition , transformation - based learning can accommodate the focus shifts that frequently occur in discourse by utilizing features that consider tags of varying distances .our transformation - based learning system is very flexible with respect to the types of features it can utilize .for example , it can learn set - valued features , such as cue patterns . additionally , because of the monte carlo improvement, our system can handle a very large number of features .4 . for the dialogue act tagging task, people still do nt know what features are relevant , so it is very difficult to construct an appropriate set of rule templates .fortunately , transformation - based learning is capable of discarding irrelevant rules , as ramshaw and marcus ( 1994 ) showed experimentally , so it is not necessary that _ all _ of the given rule templates be useful .ramshaw and marcus s ( 1994 ) experiments suggest that transformation - based learning tends to be resistant to the overfitting problem .this can be explained by observing how the rule sequence produced by transformation - based learning progresses from general rules to specific rules .the early rules in the sequence are based on many examples in the training corpus , and so they are likely to generalize effectively to new data . later in the sequence ,the rules do nt receive as much support from the training data , and their applicability conditions tend to be very specific , so they have little or no effect on new data .thus , resistance to overfitting is an emergent property of the transformation - based learning algorithm . for the future ,we intend to investigate a wider variety of features and explore different methods for collecting cue patterns to increase our system s accuracy scores further .although we compared transformation - based learning with a few very different machine learning algorithms , we still hope to examine other methods , such as naive bayes . in addition , we plan to run our experiments with different corpora to confirm that the encouraging results of our extensions to transformation - based learning can be generalized to different data , languages , domains , and tasks .we would also like to extend our system so that it may learn from untagged data , as there is still very little tagged data available in discourse .brill developed an unsupervised version of transformation - based learning for part - of - speech tagging ( brill , 1995b ) , but this algorithm must be initialized with instances that can be tagged unambiguously ( such as `` the '' , which is always a determiner ) , and in dialogue act tagging there are very few unambiguous examples .we intend to investigate the following weakly - supervised approach : first , the system will be trained on a small set of tagged data to produce a number of different committee members . then given untagged data , it will derive tags with confidence measures . 
those tags that receive very high confidence can be used as unambiguous examples to drive the unsupervised version of transformation - based learning .we wish to thank the members of the verbmobil research group at dfki in germany , particularly norbert reithinger , jan alexandersson , and elisabeth maier , for providing the first author with the opportunity to work with them and generously granting him access to the verbmobil corpora .this work was partially supported by the nsf grant # ger-9354869 .ramshaw , lance a. and marcus , mitchell p. ( 1994 ) . exploring the statistical derivation of transformation rule sequences for part - of - speech tagging . in _ proceedings of the 32nd annual meeting of the acl_. samuel , ken , carberry , sandra , and vijay - shanker , k. ( 1998a ) . computing dialogue acts from features with transformation - based learning . in _ applying machine learning to discourse processing : papers from the 1998 aaai spring symposium_.
this paper presents results from the first attempt to apply transformation - based learning to a discourse - level natural language processing task . to address two limitations of the standard algorithm , we developed a monte carlo version of transformation - based learning to make the method tractable for a wider range of problems without degradation in accuracy , and we devised a committee method for assigning confidence measures to tags produced by transformation - based learning . the paper describes these advances , presents experimental evidence that transformation - based learning is as effective as alternative approaches ( such as decision trees and n - grams ) for a discourse task called dialogue act tagging , and argues that transformation - based learning has desirable features that make it particularly appealing for the dialogue act tagging task .
stochastic resonance ( sr ) is a phenomena in which the response of the nonlinear system to a weak periodic input signal is amplified / optimized by the presence of a particular level of noise , i.e , a previously untraceable subthreshold signal applied to a nonlinear system , can be detected in the presence of noise .furthermore , there exists an optimal level of noise for which the most efficient detection takes place .sr has been observed in many physical , chemical and biological systems .coherence resonance ( cr ) is the phenomena wherein regularity of the dynamical behavior emerges by virtue of an interplay between the autonomous nonlinear dynamics and the superimposed stochastic fluctuations . in cr ,analogous to sr , the extent of provoked regularity depends upon the amplitude of added noise .the cr effect too has been studied exhaustively , both theoretically and experimentally , in a wide range of nonlinear systems .the nonlinearity in plasma systems arises from the most fundamental processes , namely the wave - wave and wave - particle interactions .different modes may be excited due to nonlinear coupling of waves and plasma components and the character of the oscillations is primarily determined by the plasma parameters and perturbations . in the present work , possibility of observing noise invoked resonances in a glow discharge plasma is explored . however , as a precursor to the experiments involving noise , a systematic analysis of the autonomous dynamics is performed .this includes identification and characterization of the bifurcation in the vicinity of the set - point employed for the noise related experiments .the experiments were performed in a hollow cathode dc glow discharge plasma .the schematic diagram of the experimental setup is presented in fig [ fig1:setup ] .a hollow stainless steel ( s.s ) tube of length and of diameter ( ) 45 mm was used as the cathode and a central rod of length and 1.6 mm was employed as the anode .the whole assembly was mounted inside a vacuum chamber and was pumped down to a pressure of about 0.001 mbar using a rotary pump .the chamber was subsequently filled with the argon gas up to a pre - determined value of neutral pressure by a needle valve .finally a discharge was struck by a dc discharge voltage ( dv ) , which could be varied in the range of 0 v. mm from the anode .signal and noise sources were coupled to the discharge voltage ( dv ) through a capacitor.,width=8 ] the noise and subthreshold periodic square pulse generators were coupled with dv through a capacitor [ fig .[ fig1:setup ] ] . in all the experimentsdv was used as the bifurcation parameter while the remaining system parameters like pressure etc . 
, were maintained constant .the system observable was the electrostatic floating potential , which was measured using a langmuir probe of diameter = 0.5 mm and length 2 mm .the tip of this langmuir probe was placed in the center of the electrode system as indicated in fig .[ fig1:setup ] .the plasma density and the electron temperature were determined to be of the order of 10 and 3 ev respectively .furthermore , the electron plasma frequency ( ) was observed to be around 28 mhz , whereas the ion plasma frequency ( ) was measured to be around 105 khz .before studying the noise induced dynamics , we characterized the behavior of the autonomous system .not surprisingly , it was observed that at different chamber pressures , discharge struck at different voltages .fig [ fig : paschen ] shows the breakdown voltage ( ) at different , where p and d are the filling pressure and radius of the cathode respectively .this breakdown voltage ( ) initially decreases with an increase in , goes through a minimum value resembling a typical paschen curve and then begins to increase with increasing .it is observed that the system is excitable for the region paschen minima . in the lower side it shows self organized criticality . in this excitable domain ,the system dynamics are irregular ( complex ) at the initial stages of the discharge voltage and upon increasing dv they become regular ( period - one ) as shown in fig [ fig : raw0.89 mb ] .further augmentation of dv modifies the oscillation profile and results in the induction of typical relaxation oscillations .vs ( paschen curve ) for our experimental system .the minimum occurs at ( 1.69 mbar - mm , 251 v ) .the system is excitable for the minimum of the curve.,width=8 ] the time period ( t ) of these relaxation oscillations increases dramatically upon further incrementing dv .this eventually results in the vanishing of the limit cycle behavior beyond a critical dv ( ) . for larger values of dv ,the autonomous dynamics exhibit a steady state fixed point behavior .time traces from top to bottom in the left panel of fig [ fig : homoclin ] depict this period lengthening of the oscillatory behavior . a systematic analysis of the increment in the period ( t ) , presented in fig .[ fig : homoclin][(a ) right panel ] , indicates that the autonomous dynamics undergo a critical ( exponential ) slowing down .consequently , the vs t curve can be fitted by a straight line , where is the bifurcation point separating the oscillatory domain and the steady state behavior .the results of fig [ fig : homoclin ] indicate that the system dynamics undergo a homoclinic bifurcation at resulting in the loss of oscillations .an anode glow is observed with these oscillations .figs [ fig : glow](a ) shows that the glow with largest size , appears when the discharge is struck at a typical pressure of 0.95 mbar and its size decreases with increase in the dv until it finally disappears [ figs [ fig : glow](a)[fig : glow](h ) ] .this may some types of unstable structure in the plasma and produces such oscillation of the instabilities .in this section experimental results involving noise generated resonances , namely sr and cr are presented . 
for our experiments on stochastic resonance ,the reference voltage was chosen such that and therefore the autonomous dynamics , by virtue of an underlying homoclinic bifurcation , exhibit steady state behavior .the discharge voltage was thereafter perturbed , where is the subthreshold periodic pulse train chosen for which , ( subthreshold signal does not cause the system to cross over to the oscillatory regime ) and is the added gaussian white noise with amplitude .subthreshold periodic square pulse of width and duration 2 ms was constructed using fluke pm5138a function generator .meanwhile , the gaussian noise produced using the hp 33120a noise generator was subsequently amplified using a noise amplifier .[ fig : periodic] show time series of the system response in the presence of an identical subthreshold signal for three different amplitudes of imposed noise . the subthreshold periodic pulse train is also plotted , in the top most graph of the left panel , for comparison purposes .[ fig : periodic](a ) shows that there is little correspondence between the subthreshold signal and the system response for a low noise amplitude .however , there is excellent correspondence at an intermediate noise amplitude [ fig .[ fig : periodic](b ) ] .finally , at higher amplitudes of noise the subthreshold signal is lost amidst stochastic fluctuations of the system response [ fig .[ fig : periodic](c ) ] . absolute mean difference ( amd ) , used to quantify the information transfer between the subthreshold signal and the system response , is defined as . and are the inter - peak interval of the response signal and mean peak interval of the subthreshold periodic signal respectively .fig [ fig : periodic](d ) shows that the experimentally computed amd versus noise amplitude d curve has a unimodal structure typical for the sr phenomena .the minima in this curve corresponds to the optimal noise level for which maximum information transfer between the input and the output takes place .for the experiments on coherence resonance dv ) was located such that the floating potential fluctuations exhibit fixed point behavior . in order to minimize the effect of parameter drift , a set - point ( ) quite far from the homoclinic bifurcation ( )was chosen .subsequently , superimposed noise on the discharge voltage was increased and the provoked dynamics analyzed . the normalized variance ( nv )was used to quantify the extent of induced regularity .it is defined as , where is the time elapsed between successive peaks .it is evident that more regular the induced dynamics the lower the value of the computed nv . for purely periodic dynamicsthe nv goes to zero .[ fig : nv] ( left panel ) show the time series of the floating potential fluctuations for different noise levels and fig [ fig : nv](d ) ( right panel ) is the experimental nv curve as a function of noise amplitude d. the point ( a ) in fig [ fig : nv](d ) ( time series shown in fig .[ fig : nv](a ) ) is associated with a low level of noise where the activation threshold is seldom crossed , generating a sparsely populated irregular spike sequence .as the noise amplitude is increased , the nv decreases , reaching a minimum ( b ) in fig [ fig : nv](d ) ( time series shown in fig .[ fig : nv](b ) ) corresponding to an optimum noise level where maximum regularity of the generated spike sequence is observed . 
as the amplitude of superimposed noiseis increased further , the observed regularity is destroyed manifested by an increase in the nv ; label ( c ) in fig [ fig : nv](d ) ( time series shown in fig .[ fig : nv](c ) ) .this is a consequence of the dynamics being dominated by noise .the effect of noise has been studied experimentally near a homoclinic bifurcation in glow discharge plasma system .our study demonstrates the emergence of sr for periodic subthreshold square pulse signals and the induction of cr via purely stochastic fluctuations . in sr experiments , the efficiency of information transferwas quantified using amd instead of the power norm which has been utilized elsewhere . the advantage of using this method in comparison to the power norm ( ) lies in the fact that amd remains independent of the lag between the measured floating potential and the applied periodic square pulse .this is of relevance in our experimental system , where invariably there exists a lag , at times varying in time due to the parameter drifts . forthe cr experiments it was occasionally observed that while with an initial increase in noise amplitude ( d ) nv reaches a minimum , the subsequent rise of nv for even higher amplitudes of noise was suppressed .this leads to the modification of the unimodal profile , a signature of the cr phenomenon .a possible explanation for this suppression is that by virtue of the superimposed high frequency noise ( bandwidth 500 khz ) and fast responding internal plasma dynamics , the system has the capability of exciting high frequency regular modes within the ion plasma frequency ( 105 khz ) .this in turn leads to the persistence of low nv values . finally , in refs both the destructive and constructive role of noise ( cr only ) have been reported for glow discharge and magnetized rf discharge plasma systems respectively . however ,both these experiments were carried out in the vicinity of the hopf bifurcation .in contrast , for the present work we studied both stochastic ( sr ) and coherence resonance ( cr ) in the neighborhood of the homoclinic bifurcation .regularity in the stochastic resonance has been done by calculating cross - correlation ( >|$ ] ) .but in this case this is not suitable , because , as we also measuring floating potential at different location in side the plasm there is always a lag between periodic signal that is applied in the plasma and output .this lag also varies with time because the plasma conditions are changing continuously with time .therefore cross - correlation between output and input signal gives wrong estimation .so we have proposed a statistics which will be independent of lag and defined as follows :
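A form consistent with the verbal definition given earlier (the exact normalization is an assumption, since the original expression is not reproduced) is

\[ \mathrm{AMD} = \Big\langle \big| \tau_i^{\mathrm{out}} - \langle \tau^{\mathrm{in}} \rangle \big| \Big\rangle , \]

where \( \tau_i^{\mathrm{out}} \) are the inter-peak intervals of the measured floating potential and \( \langle \tau^{\mathrm{in}} \rangle \) is the mean peak interval of the subthreshold periodic signal. Because only the mean period of the input enters, the statistic does not depend on the slowly drifting lag between the applied signal and the response. A minimal sketch of how AMD and the normalized variance NV can be computed from the detected peak times is given below (in Python); the coefficient-of-variation form of NV is likewise an assumption based on the verbal definition above.

import numpy as np

def normalized_variance(peak_times):
    # NV: coefficient of variation of the intervals between successive peaks.
    intervals = np.diff(np.asarray(peak_times, dtype=float))
    return np.std(intervals) / np.mean(intervals)

def absolute_mean_difference(response_peak_times, signal_mean_period):
    # AMD: mean absolute deviation of the response inter-peak intervals from
    # the mean period of the subthreshold input (assumed form, independent of
    # any lag between input and response).
    intervals = np.diff(np.asarray(response_peak_times, dtype=float))
    return np.mean(np.abs(intervals - signal_mean_period))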
Stochastic resonance (SR) and coherence resonance (CR) have been studied experimentally in a glow discharge plasma close to a homoclinic bifurcation. For the SR phenomenon, it is observed that a superimposed subthreshold periodic signal can be recovered via stochastic modulation of the discharge voltage. Furthermore, even in the absence of a subthreshold deterministic signal, regular oscillations can be invoked and optimized using noise; this effect is known as CR in the literature. In the present experiments, the induction of SR and CR is quantified using the absolute mean difference (AMD) and the normalized variance (NV), respectively. AMD is a new statistical measure of the response regularity in stochastic resonance and is independent of the lag between the input signal and the system response.
derivative pricing entails the hedging of market risks .the breakthrough of the black - scholes - merton ( bsm ) model was the use of extremely elegant mathematical formalism to show that risk could be eliminated under simplified assumptions .the expansion of trade in derivative instruments has been documented from numerous perspectives : notional amounts in global markets now aggregate to hundreds of trillions of us dollars , low points include the ltcm crisis ( 1998 ) and failure of lehman brothers ( 2008 ) and extreme points include the hundreds of billions in quantitative easing ( qe ) to alleviate the crisis caused by defaulting credit risk contracts under limited global regulatory oversight ( 2008-present ) .a characterisation of no - arbitrage markets emerged round the same time as the bsm derivative pricing model : commencing with a simplified finite - state model , stephen ross provided no - arbitrage conditions in the so - called first fundamental theorem of asset pricing .this ensured the existence of a consistent pricing mechanism , via a risk - neutral measure for expected prices , which does not allow a `` free lunch '' , i.e. attainment of a riskless profit .equivalently , if two products are effectively identical ( in every measurable sense ) , then they should cost the same . more specifically ,if the net cash - flows which they generate over the lifetime of the contracts are equivalent , then two investments are considered to be equivalent .if either is mispriced and all else is equal , then speculators would step in to exploit the difference by purchasing the cheaper version and selling it at the higher price , making profit without holding inventory .no arbitrage is closely coupled to the notion of market efficiency , whereby all assets are priced according to correct information which is available instantaneously to all market participants . a routine shopping exercise to discern between toothpastes sold in a large supply store highlights that efficient decision making is constrained by the ability to reason and weigh a wide range of benefits and costs in reasonable time , possibly ignoring useful detail .conditions in illiquid markets typically imply that transaction costs are higher , where additional costs are incurred to fund due - diligence and reliable information gathering . when the bsm papers were published in 1973 , asset valuation under fiat currencywas premised on the capacity to understand how different assets store and produce value over different time horizons and hence , to assess correct discount and inflation rates and keep markets honest . 
in theory ,novel exchange - traded futures and options contracts enabled lower risk costs for managing uncertainty .the notion of `` no free lunch '' has been generalised to a more sophisticated mathematical formalism for consistent pricing of investment portfolios , based on the non - admissability of arbitrage trading strategies and referred to as `` no free lunch with vanishing risk '' .thus , risk - neutral pricing of tradeable assets offered a theoretic framework which made aggregate market growth consistent with the supply of capital through monetary policy .free market economist milton friedman published `` there s no such thing as a free lunch '' in 1975 .the title echoed a well - used phrase from the real us economy of the 1930 s .friedman was interested in removing all regulatory constraints while advocating that risk had a cost which market forces could be relied on to anticipate correctly in natural pricing . coupled with a view that wealth trickled down into the real economy , liquid derivative markets were considered as a path to making the pricing processes for stocks , bonds and commodities more efficient .geopolitics of the post bretton woods era was far more complicated than any simplified model . in reality , us economic policy makers were confronted with a market - changing petrodollar - shock delivered by opec while exiting it occupation of vietnam .the beginning of the 1970 s also saw support for popular socialist movements around the world , with democratic elections voting into power local control of wealth in the interest of local communities .however , it was still the cold world era and nato continued anti - communist interventions in the interest of global capital . as multinational corporations took up residences around the world for new markets , cheaper labour and tax arbitrage through the 1980 s , us economic policy from washington advocated for so - called open markets .simultaneously , subsidies and rebates supported its own suppliers of energy and agriculture goods . by the turn of the millennium , members of the us federal reserve banks and the city of london wrapped up reagonomics and thatcherism by removing the last market constraints which had been set in place to contain moral hazard after the 1929 crash .deregulation of major capital markets provided lower - cost credit for the debt - funded growth of stake - holders .market crashes are not the worst failures which an economy can suffer . if one considers price formation in its simplest form , a buyer and seller meet and exchange bids and offers until they converge on an agreeable price or walk away from the auction .unnatural market crises occur when failed auctions lead to one party assaulting the other to demand a price .former us federal reserve bank governor , alan greenspan , was famous for serving under a long duration of market growth which is referred to as the great moderation ( 1982 - 2007 ) . at its high point , it was advocated that us economic policy had successfully tamed the management of market crises .however , his successor s acknowledgement of the successful diminishing of market volatility in 2004 came a year after the us invasion of iraq .since then , greenspan has been quoted in leading mainstream media giants to say that the military occupation of iraq , which was imposed without sanction by the un , was indeed about oil interests . 
in reality , the taming of profit - seeking allocation of free capital coincided with the perpetuation of the doctrine reiterated in 1980 by carter that the us would use military force if necessary to defend its national interests in the persian gulf .there are grave economic implications when dominant economies are backed by the biggest military - industrial complex and do not follow globally accepted procedure . given us protection of some very restricted societies ,it is consistent that oil and energy interests drive the highly selective nature of us protection of universal franchise and global openness .such distortions can have permanent impact on the natural evolution of no - arbitrage conditions . at the wef session on the global economic outlook at the start of the 2016, uk chancellor of the exchequer remarked that `` the world has not very been good at accommodating rising powers .'' china currently holds approximately us us 13.5 trillion .it follows that the inclusion of the yuen as an imf reserve currency is consistent with its persistent stake in us monetary supply , purchased with the output of decades of labour in market driven production for global consumers .while cheap oil and consumption refueled markets after the nasdaq crash of 2000 , the great moderation ended abruptly with the onset of the global financial crisis . documentations of the free market failings which led to the dotcom bubble and the global financial crisis ( gfc , 2007 + ) are manifold , with numerous bestsellers published to film and print media .greenspan himself has admitted that the washington consensus had gotten its models for moderation wrong .the contagion effects of market mispricing of credit derivatives were global , with economies like sa markets rocked dramatically without significant direct investment in the defaulting assets .interventions by regulators to address the crisis fallout generated significant secondary impacts on developing markets .as members of smaller economies reassessed their purchasing power , hundreds of billions of usd , eur , gbp and yen were freed onto the global markets under quantitative easing .policies to bolster developed markets out of recession ignored potential knock - on effects of renewed speculative investment in volatile global markets .instead of simply alleviating debt crises in the intended target markets , the unregulated channeling of qe capital into emerging markets contributed to the ad hoc risk of withdrawals under uncertain allocations of further rounds of qe , as well as devaluation challenges . with some estimates for the capitalisation of non - bank financial intermediaries in the range of us 45 - 75 trillion ,major regulators have acknowledged the need to address the systemic impact of so - called shadow banking .defaulting off - balance - sheets contracts at the heart of the credit crisis were moved onto the balance sheet of the us federal reserve bank through qe . however , even though it has increased its balance sheet since 2008 to us 200 billion .given the corresponding credit relief to those same institutions and subsequent waves of qe , the credit crunch and aftermath have highlighted the non - homogeneous impact of regulatory influence and financial innovation in complex hierarchies . 
the existence of large - scale arbitrage opportunities imply that markets are not risk - neutral in the sense of ross s neoclassical finance .in particular , this implies that it is impossible to value future global cashflows consistently over extended periods , even with robust models for the underlying rates .the nyse market crash of black monday of 1987 offers one of the first global failures of the application of dynamic hedging strategies . since then the bsm model has been revised many times over to incorporate better noise analysis , additional underlying variables for credit and liquidity risks and some cost for model uncertainty . today, advanced mathematical pricing models continue to provide trading strategies for hedging cashflows in more than one currency and taking into account coupled dynamics for credit risky payments .case studies for emerging markets include scenarios whereby bond default of a large parastatal could trigger foreign - exchange devaluation and conversely , currency deterioration could cause a credit crisis for company or portfolio which needs to deliver foreign - denominated payments . at exchange interfaces ,information transmission and price evolution are not free from process errors on trading platforms . in this domain ,decades of research have been allocated to the study of order flows , to the extent that short range prediction as well as anomaly detection are possible . given the multifaceted nature of value and exchange currency for trade within a single closed economy , it is clear that exchange rates are impacted by almost all economic variables , directly or indirectly .any attempt at predictive modeling is forced to grapple with dimension reduction of a highly complex system .while many data mining methods to analyse exchange rates are able to offer short range predictions , they do not necessarily provide comprehensive descriptions of causality . at the opposite end , the sort of optimal foreign exposure hedge ratios provided by financial economists like fisher black relied on idealised input variables for investment between perfect markets .research on cross - border flows is already mature in the sense that much has been written and empirical approaches to analysing currency are based on a wide spectrum of investment and consumption data .early attempts to aggregate data include so - called gravity models , named for their dependence on variations of size of and estimations of square - distances between economies . however , such approaches to obtain single equations to describe price relationships are bounded by their simplified dependence on underlying variables .a variety of models have been developed to analyse portfolios of currencies .covariance analysis has been used to investigate co - movements of exchange - rates and hierarchical dependencies , mostly between developed markets .while these analyses typically ignore other economic variables , the approach is able to map the evolution of dominant clusters of causation directly , giving a summarised perspective of trade relations . even in complex economies such as sa ,asset - management has defended market quality by appealing to the belief that market forces are most efficient with respect to capital allocation .policies for attracting foreign direct investment ( fdi ) and net portfolio investment ( npi ) were promoted with the implicit assumption that benefits would reach required areas for development . 
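as a sketch of the covariance-based hierarchical analysis of exchange-rate co-movements mentioned above, with synthetic data standing in for real returns and the currency labels purely illustrative, one might proceed as follows:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 6))                  # synthetic daily log-returns
names = ["zar", "gbp", "eur", "cny", "brl", "inr"]   # illustrative labels only

corr = np.corrcoef(returns, rowvar=False)            # co-movement (covariance) structure
dist = np.sqrt(2.0 * (1.0 - corr))                   # a common correlation-to-distance map
condensed = dist[np.triu_indices_from(dist, k=1)]    # condensed form expected by linkage
tree = linkage(condensed, method="average")          # hierarchy of co-moving currencies
info = dendrogram(tree, labels=names, no_plot=True)  # cluster ordering without plotting
print(info["ivl"])
```

the resulting tree summarises which currencies move together, which is the kind of dominant-cluster map referred to in the text; it says nothing by itself about the underlying economic causes.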
from an abstract mathematical modelling perspective ,insight from the study of dynamical systems has exposed how sensitivity to initial conditions can result in dramatically varying evolutions or , equivalently , that there is an inescapable flaw in assuming that an uncontrolled system will reach homogeneous or equitable conditions , irrespective of initial and boundary conditions .for emerging markets , investigations into macroeconomic determinants of fdi and npi have included regression analysis against price stability , stable policies , transparency , openness to trade , infrastructure and lack of corruption .however , empirical evidence has also exposed that the most vulnerable , small - capitalised companies in a market can be impacted the most under changes in fdi , even when there exist favourable investment conditions in the target economy .given that global power blocks engaged in closed market negotiations and military interventions in developing markets during the cold - war era , foreign investment is inextricably linked to geopolitical forces .thus , there are numerous caveats associated with the notion of openness championed by neo - liberalism and regulation is required to ensure that investment benefits include local infrastructure development for the public good . by some measures, china can now be regarded as the biggest national economy on the global trade field . at the same time other g20 and brics economies are at still different stages of development relative to the g7 .yuen re - valuation and a shift in chinese economic focus from infrastructure to consumption have implications for the demand for resources from south africa . on the other hand fdi from china into other emerging economiesoffers new potential , provided such investments drive development in the real economy .many currency paradigms have ignored debt . while countries like sa were held accountable for repaying apartheid government debt even after democratisation in 1994 , management of private and government debt in countries such as the usare regulated by different rules , with trillion dollar debt a source of uncertainty for emerging markets even as global policy makers intervene to ensure global stability . despite significantly lower debt to gdp ratios, global market sentiment advocates austerity over qe for emerging markets to ensure future prosperity .deep differences exist with respect to how economists explain the role of debt and money supply in the gfc .steve keen argues that the omission of the dynamics of private debt is the key failure of the macro - economic modelling which culminated in systemic failure .his approach addresses debt as endogenous money to deduce that change in money supply is equivalent to change in debt .in contrast some neoclassical perspectives equate debt to loanable funds with zero impact on net money . following the gfc , there are still unresolved questions for consistent accounting of cashflows in the valuation of derivative contracts. with debt - related payments coupled to adjustments for bilateral credit risks measurements , ongoing challenges include how to mitigate against default in networks of non - bank liabilities and implications of rehypothecation in the case of collateralisation .the models in this section emerge from differing perspectives to provide insight on both endogenous evolution and impacts of exogenous changes . 
if one zooms into the challenge of currency or exchange rate valuation, it is clear that the required methodologies are far more advanced than when interactions were considered in 1973. a non-exhaustive list of determinants of local currency value would include:
* local variables such as gdp, employment, wages, inflation, savings and domestic interest rates
* capitalisation of traditional local banks and future cash-flows of their depositors
* differentials between local and foreign interest rates
* market crises due to investment bubbles or debt accumulation in shadow banks
* local instability due to mismatches in expectations of various stakeholders
* unexpected economic frictions, for example sa electricity shortages, the impact of tax arbitrage by multinationals or other endemic fraud
* unintended consequences of market intervention after crises in dominant markets, such as qe
* global dynamics, i.e. the interplay between large trading blocs, including persistent asymmetries
* structural changes in dominant nodes such as current developments in china
* unexpected economic shocks such as the knock-on effects of extreme disasters
to illustrate some dependencies, figures 1-6 depict an increasing complexity of modelling currency. while these simplified figures ignore the full quantitative contribution of domestic or foreign money supply, they highlight sources of global uncertainty for emerging markets. it follows from the interdependencies between different economies that modelling schemes which incorporate top-down (exogenous) and bottom-up (endogenous) sources of causation offer longer-term solutions to currency-exchange valuation. from the previous section, exchange rates are lynchpins which hold together various networks of transaction flows. it is understood now that the simplified abstractions which made bsm elegant also led to the underpricing of risk in global markets prior to the gfc. with changing dynamics, failures in the application of even the best models become an eventual certainty. electronic access and algorithmic trading have provided innovative perspectives on price evolutions as markets map information about real economic data to numeric prices. if markets are free of arbitrage, then goods are probably priced efficiently. in reality, valuation is driven at various levels of economic interaction, which are not always synchronised or equally informed. innovations and structural transitions can have significant impact on money supply and currency valuation. similarly, systemic asymmetries make smaller scale participants more vulnerable to failures, weaknesses or transitions in partner economies of larger scale. increased complexity demands increased sophistication and agility of regulatory oversight. markets are neither globally free, nor globally fair. with this comes the implication that arbitrage-free models of exchange rates are as challenging as the rewriting of economic theory itself. this discussion is based on research which has been funded in part by the national research foundation of south africa (grant numbers 87830, 74223 and 70643). the conclusions herein are due to the author: any omissions or errors in reasoning are my own and should not be attributed to my co-authors, colleagues or informal research contacts. in particular, the nrf and the university of the witwatersrand accept no liability in this regard.
deng, w., li, w., cai, x., wang, q.a. (2011), on the application of the cross-correlations in the chinese fund market: descriptive properties and scaling behaviour, _advances in complex systems_, 14(1).
farmer, j.d., geanakoplos, j. (2008), the virtues and vices of equilibrium and the future of financial economics, _cowles foundation discussion paper_, available at ssrn: http://ssrn.com/abstract=1112664.
tesfatsion, l. (2006), agent-based computational economics: a constructive approach to economic theory, pp. 831-880 in leigh tesfatsion and kenneth l. judd (eds.), handbook of computational economics, volume 2: agent-based computational economics, north-holland/elsevier.
a very brief history of relative valuation in neoclassical finance since 1973 is presented , with attention to core currency issues for emerging economies . price formation is considered in the context of hierarchical causality , with discussion focussed on identifying mathematical modelling challenges for robust and transparent regulation of interactions . in order to illustrate the complex interplay between derivative markets and underlying economies , this essay includes an abridged record of some key determinants of currency valuation . arguments are qualitative rather than quantitative and aim to highlight the need for better representation of market information and for regulation to ensure pricing in developing economies is protected from systemic arbitrage . attention is given to some repercussions of the supply and transfer of money and capital across borders from an emerging market perspective , specifically south africa . while this note does not address money supply within any country or legacy capital ownership in a post - colonial era , it identifies a few geopolitical forces in generality . in particular , given the historic links between sa capital markets and uk economic interests , as well as the continued global dominance of the us economy in the post cold war era , some hierarchical impacts of their global policies on risk - neutrality are considered . overall , the discussion aims to give insight into the multilevel modeling of a key economic variable , taking into account endogenous and exogenous sources of causation .
the numerical integration of the discrete velocity boltzmann equation provides an efficient method for the solution of isothermal , incompressible fluid flows in complex geometries . the finite - difference equation generated by the integration schemeis referred to as the lattice boltzmann equation ( lbe ) .the method can be extended to study multiphase and multicomponent flows , the hydrodynamics of polymers and suspensions , and flows under gravity . in the extensions of the lbe described above ,the resulting momentum balance equations contain additional terms beyond the usual pressure and viscous forces .these represent forces acting on the fluid , either from external sources like gravity , or internal sources like the gas - liquid interface in a two - phase fluid .external sources can add to the total momentum of the fluid , while internal sources , being immersed within the fluid , can only exchange momentum with it .internal forces on the fluid , therefore , can always be expressed as the divergence of a stress tensor .this formulation encodes the fact that there are no local sources or sinks of momentum .the lbe , derived as it is from the boltzmann equation for a dilute gas , can only faithfully represent the hydrodynamics of a fluid with an ideal - gas equation of state and a newtonian constitutive equation .one way around this restrictive situation is to use the forced boltzmann equation to represent the additional forces that appear in the extensions described above .so far , this idea has been used mainly to model the effects of gravity and gas - liquid interfacial forces in non - ideal gases .these correspond to two special types of force distributions : in the case of gravity , the force is spatially and temporally constant , while in the case of the gas - liquid interface , the force varies both in space and time , but is only evaluated on the nodes of the computational grid .the forces in either case are _ smooth _ functions of position .however , many models of boundaries immersed in fluids require _ singular _ distributions of forces .such a description follows , for example , when a gas - liquid interface is described as a two - dimensional manifold of zero thickness instead of a three - dimensional volume of space where the density changes rapidly . in a similar mathematical idealization , a polymer in a fluidmay be represented as a one - dimensional curve with a singular distribution of forces . in yet another example , at distances large compared to its radius , a sedimenting colloid can be well - approximated as a singular point force .clearly , the range of applications of lattice boltzmann hydrodynamics can be greatly expanded if singular force densities , not necessarily located at grid points , can be incorporated into the method . in this paperwe show how to include force densities having smooth or singular distributions , located at arbitrary ( in general , off - lattice ) points into the lattice - boltzmann formulation of hydrodynamics . in the following section we first discuss the discrete representation of the forcing term in the boltzmann equation and derive a second - order accurate integration scheme for the discrete velocity forced boltzmann equation using the method of characteristics . 
in section [sec:singular] we introduce a general distribution of singular forces and, using a suitable regularization of the delta function, obtain a smooth but sharply peaked distribution of forces. the method is exemplified in section [sec:singular] for three common singular force distributions (a stokeslet, a stresslet, and a rotlet) and validated for the stokeslet case by comparison with fully resolved numerical simulation. finally, we show how our method can be adapted to provide a simplified description of a dilute suspension of sedimenting colloids. we end with a summary of our method and discuss potential applications. the lbe may be derived from the boltzmann equation by a two-step procedure. first, a discrete velocity boltzmann equation (dvbe) is obtained by retaining a finite number of terms in the hermite expansion of the boltzmann equation and evaluating the conserved moments using a gauss quadrature. the discrete velocities are the nodes of the gauss-hermite quadrature. this is followed by a discretization in space and time to provide a numerical integration scheme, which is commonly called the lbe. usually a first-order explicit euler scheme is used to integrate the dvbe, which, surprisingly enough, gives second-order accurate results. this is so because the discretization error has the same structure as the viscous term in the navier-stokes equation, whereby it can be absorbed by a simple redefinition of the viscosity to give second-order accuracy. the same euler scheme for the forced boltzmann equation gives a discretization error term which can be absorbed only by redefining physical quantities like the momentum and stress. below, we provide a straightforward explanation of these redefinitions and show how they are related to the discretization error induced by the integration scheme. we begin with the discrete velocity boltzmann equation including an external acceleration field \vec{F}, \[ \partial_t f_i + \vec{c}_i\cdot\nabla f_i + \left[\vec{F}\cdot\nabla_{\vec{c}}\,f\right]_i = - {\cal L}_{ij}(f_j - f_j^0), \] where f_i is the one-particle distribution function in phase space of coordinates \vec{x} and velocities \vec{c}_i, {\cal L}_{ij} is the collision matrix linearized about the local equilibrium f_j^0, and the repeated index j is summed over. mass and momentum conservation require the collision term to satisfy \[ \sum_i {\cal L}_{ij} = 0, \qquad \sum_i \vec{c}_i\,{\cal L}_{ij} = 0, \] while isotropy requires that the {\cal L}_{ij} depend only on the angles between \vec{c}_i and \vec{c}_j. eq. ([dbe]) is most easily derived by expanding the distribution functions in terms of tensor hermite polynomials, truncating the expansion at a certain order, and evaluating the expansion coefficients using a gaussian quadrature. in d dimensions, the quadrature is defined by the discrete velocities \vec{c}_i and a set of weights w_i, giving rise to a discrete boltzmann equation. retaining terms up to second order in the hermite expansion is sufficient for isothermal fluid flow problems. the equilibrium distribution functions to second order in the hermite expansion are \[ f_i^0 = \rho w_i\left[1 + \frac{\vec{v}\cdot\vec{c}_i}{c_s^2} + \frac{\vec{v}\vec{v}:{\mathbf Q}_i}{2c_s^4}\right], \] where the tensor Q_{i\alpha\beta} = c_{i\alpha}c_{i\beta} - c_s^2\delta_{\alpha\beta} (where greek indices denote cartesian directions) and c_s is the speed of sound. the mass density \rho and the momentum density \rho\vec{v} are moments of the distribution function: \[ \rho = \sum_i f_i, \qquad \rho\vec{v} = \sum_i f_i\,\vec{c}_i. \] to the same order in the hermite expansion, the discrete representation of the forcing term is given by \[ \left[\vec{F}\cdot\nabla_{\vec{c}}\,f\right]_i = - \rho w_i\left[\frac{{\bf F}\cdot{\bf c}_i}{c_s^2}+\frac{({\bf vF}+{\bf Fv}):{\mathbf Q}_i}{2c_s^4}\right] \equiv -\phi_i({\bf x},t). \] finally, the deviatoric momentum flux tensor is the second moment of the distribution function.
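a minimal numerical sketch of the second-order hermite equilibria and of the forcing projection phi_i defined above, written for a d2q9 lattice purely for concreteness (the derivation in the text is quadrature-independent):

```python
import numpy as np

# d2q9 velocities and weights, used only to make the formulae concrete;
# the expressions themselves hold for any gauss-hermite quadrature.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0                                   # speed of sound squared, lattice units

def f_eq(rho, v):
    """second-order hermite equilibrium f_i^0 = rho w_i [1 + c_i.v/cs2 + vv:Q_i/(2 cs2^2)]."""
    cv = c @ v
    return rho * w * (1.0 + cv / cs2 + (cv**2 - cs2 * (v @ v)) / (2.0 * cs2**2))

def phi_i(rho, v, F):
    """phi_i = rho w_i [F.c_i/cs2 + (vF + Fv):Q_i/(2 cs2^2)]; the discrete forcing
    term in the kinetic equation is -phi_i, so phi_i enters the right-hand side
    R_i = -L_ij (f_j - f_j^0) + phi_i with a plus sign."""
    cv, cF = c @ v, c @ F
    return rho * w * (cF / cs2 + (cv * cF - cs2 * (v @ F)) / cs2**2)
```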
in isothermal models ,the higher moments represent non - conserved kinetic degrees of freedom , commonly known as ghost modes . in the hydrodynamic limit ,( [ dbe ] ) gives rise to navier - stokes behaviour , described by where the pressure obeys , and the shear viscosity and the bulk viscosity are related to the eigenvalues of . in practice the algorithm is normally used in a parameter regime where the fluid is nearly incompressible ( ) . to begin our derivation of the numerical schemewe rearrange eq .( [ dbe ] ) to obtain where + \phi_i({\bf x},t)$ ] represents the effects of both collisions and forcing .( [ dbe1 ] ) represents a set of first - order hyperbolic equations and can be integrated using the method of characteristics . integrating over a time interval we have the integral above may be approximated to second - order accuracy using the trapezium rule and the resulting terms transposed to give a set of implicit equations for the : the structure of the above set of equations suggests the introduction of a new set of _ auxiliary _ distribution functions , in terms of which the previous set of equations are explicit , this shows that the lbe evolution can be thought of two separate processes : the first is a relaxational step in which the distributions are relaxed to their `` post - collisional '' values , followed by a propagation step in which the post - collisional distributions are propagated along a lagrangian trajectory without further change , thus the computational part of the method is most naturally framed in terms of the auxiliary distributions and not the physical distribution functions themselves . to obtain the post - collisional without having to refer to the , the lattermust be eliminated from eq .( [ halftransform ] ) . inverting the equations defining the in eq .( [ transformf ] ) we obtain .\ ] ] combining this with eq .( [ halftransform ] ) we obtain a numerical scheme for the forced discrete boltzmann equation with a general collision operator in terms of the : .\ ] ] for a single relaxation time collision operator , where , this takes on a particularly simple form ,\ ] ] a result obtained previously by a multiscale expansion of the lbe dynamics . for a non - diagonal collision operator ,the collision term is best evaluated in the moment basis . for example , using a collision operator in which the ghost modes are projected out and the stress modes relax at a rate , the post - collisional ( _ i.e. _ the rhs of eq .( [ mrtlbe ] ) ) is given by ,\ ] ] where , the momentum component of the post - collisional auxiliary distributions , is and , the stress component , is the hydrodynamic variables are moments of the physical distribution , but can easily be obtained from the auxiliary distributions used in the computation , using the transformation rule , eq .( [ transformf ] ) , the definitions of the macroscopic variables , eq .( [ macrovariable1 ] ) , and the constraints of mass and momentum conservation , eq . 
( [ conservation ] ) .we obtain the equilibria can be reconstructed from and .what appear in the literature as redefinitions of momentum and stresses are shown in the above analysis to be discretization errors which vanish as .this completes the description of the method for the numerical solution of the forced lbe .verberg and ladd have derived results equivalent to those above using a multiple scale analysis of the discrete lbe dynamics .it is not clear to us whether their analysis admits singular force densities .however , the above derivation shows that these equations are a reliable starting point even in that case .the lbe can be extended to situations where the fluctuations in the fluid density and momentum are important .a consistent discrete kinetic theory of fluctuations was presented in , which improves on an earlier algorithm due to ladd , and produces thermodynamically accurate variances of the local mass and momentum densities .we return to the issue of noise below , when we address the representation of brownian colloids as point particles ( section [ sec : fallers ] ) .in a wide variety of situations , as mentioned in the introduction , force densities may need to be defined off - lattice , and may in addition be singular .mathematically , such a force density may be written as where the force is localized to some manifold described parametrically as and is the measure on the manifold . any numerical method which attempts to deal with such force distributionsmust be reconciled with the singular nature of the force and , for grid - based numerical methods , the fact that the position of the manifold need not coincide with the nodes of the grid . in a well established numerical method , the dirac delta function in the singular force distributionis replaced by a regularized delta function which leads to a smooth distribution of forces . necessarily, this implies that the force is now no longer localized on the manifold but is sharply peaked and smooth around it .this smooth force density can now be sampled on the grid using the discretized delta function as an interpolant .thus a representation of eq .( [ singularf ] ) on the grid is obtained from the crucial ingredient here is the kernel function , which is a representation of the dirac delta function regularized on the grid .we have followed closely the method described by peskin where a regularized approximation to the dirac delta function with compact support is derived : where is the lattice spacing and is given by : this form is motivated by the need to preserve the fundamental properties of the dirac delta function on the grid .a simple closed form approximation to which is useful for analytical work is whose fourier transform is given by : in this work we combine eq .( [ singularf ] ) directly with the numerical method described in the previous section , giving a well - defined method for incorporating singular and/or off - lattice force densities into the lattice boltzmann hydrodynamics . to validate the method, we compare analytical solutions of the singularly forced navier - stokes equation against our numerical solutions , using lattice units , .the most straightforward benchmark is against the initial value problem for the stokes limit where the nonlinearity has been discarded , incompressibility is assumed , and . in an infinite system , the solution is obtained in terms of the unsteady oseen tensor describing the diffusion of vorticity . 
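the collide step implied by the trapezium-rule derivation can be sketched for the single-relaxation-time operator as follows; the coefficients follow from writing R_i = -L_ij(f_j - f_j^0) + phi_i as above, but the signs and prefactors should be checked against eq. ([mrtlbe]) of the text before use.

```python
def collide_forced_bgk(fbar, feq, phi, tau, dt=1.0):
    """post-collisional auxiliary distributions for a single-relaxation-time
    (bgk) operator with forcing, as suggested by the trapezium-rule derivation:
    fbar* = fbar - dt/(tau + dt/2) (fbar - feq) + tau dt/(tau + dt/2) phi.
    streaming then shifts fbar* along each discrete velocity without change."""
    omega = dt / (tau + 0.5 * dt)
    return fbar - omega * (fbar - feq) + tau * omega * phi
```

for the regularized delta function, one commonly used choice is peskin's four-point kernel; the explicit expression in the text was lost with the equations, so the standard form is assumed below, together with the tensor-product spreading of an off-lattice point force onto a periodic grid.

```python
import numpy as np

def peskin_phi(r):
    """peskin's standard four-point kernel (assumed form; the paper's explicit
    expression was stripped with the equations). r is measured in lattice units."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    out[inner] = (3.0 - 2.0*r[inner] + np.sqrt(1.0 + 4.0*r[inner] - 4.0*r[inner]**2)) / 8.0
    out[outer] = (5.0 - 2.0*r[outer] - np.sqrt(-7.0 + 12.0*r[outer] - 4.0*r[outer]**2)) / 8.0
    return out

def spread_force(force, x0, shape, h=1.0):
    """spread a point force at the off-lattice position x0 onto a periodic grid
    as a force density, using the tensor product of the regularized delta."""
    F = np.zeros((3,) + shape)
    base = np.floor(x0 / h).astype(int)
    for dx in range(-1, 3):
        for dy in range(-1, 3):
            for dz in range(-1, 3):
                node = base + np.array([dx, dy, dz])
                wgt = np.prod(peskin_phi(node - x0 / h)) / h**3
                ix, iy, iz = node % np.array(shape)
                F[:, ix, iy, iz] += wgt * force
    return F
```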
in a system with periodic boundary conditions ,the oseen solution must be replaced by the hasimoto solution .in contrast to the oseen solution , the real - space hasimoto solution is not available in a simple closed form but must be evaluated numerically .however , the solution in fourier variables presents no such difficulty , and is in fact identical in both cases : thus , we find it most convenient to compare fourier modes of the velocity from the numerical solution against the solution above .in particular , this provides a neat way to evaluate the performance of the method at different length scales . in fig .[ fig : oseen ] we compare the numerical data ( points ) to the theoretical result for a regularized force monopole using the approximation to the peskin delta function , eq .( [ deltaft ] ) ( solid line ) . lattice . shownare the first ( upper curves ) , the ( middle curve ) and the ( lower curve ) fourier modes of the velocity field .the solid line is the analytical result , eq .( [ eqn : oseen]).,scaledwidth=50.0% ] the results show excellent agreement with the theoretical curve for low modes , where we expect the momentum to behave hydrodynamically .the departure from hydrodynamic behaviour increases progressively with the wavenumber , as expected from previous studies on the hydrodynamic behaviour of the lbe . however , there is a significant range of length scales over which our model reproduces hydrodynamic behaviour , which is not less than the scale over which hydrodynamic behaviour is obtained in the unforced lbe .( see fig .[ fig : stresslet ] ) .upper half : simulated velocity field .lower half : isosurfaces of the magnitude of the velocity difference between simulation and theory .isosurfaces are at values of 12.5% , 25% and 37.5% .the colouring ( online ) depends upon the magnitude of the difference field and is shown as a percentage of in the colour bar .the rotlet is oriented with the forces in a horizontal plane and positioned in the centre of the volume.,scaledwidth=50.0% ] by combining elementary monopoles , discrete representations of higher multipoles can be generated .for example , the discrete stokes doublet , a dipole of two point forces , can be constructed out of monopoles of magnitude and separation and is often used as a simplified representation of a neutrally buoyant , steadily moving self - propelled particle . in figures [ fig : stresslet - comp ] and [ fig : stresslet - diff ] , we compare the velocity response of such a dipole to theoretical predictions , finding good agreement away from the immediate vicinity of the forces . 
in fig .[ fig : rotlet ] , we show a velocity field plot for the antisymmetric force dipole , or rotlet , which may be used as a representation of an object which rotates due to an external torque .this requires the use of four , rather than two , point forces ; we arrange these in a swastika - like fashion ( whose axes can be aligned in an arbitrary direction without significantly affecting the flow produced ) .this cancels a spurious stresslet component that arises from our regularization of the function for any dipole in which the forces are not collinear with the separation vector .the above examples show that the regularized delta function provides an useful way of incorporating arbitrary distributions of singular forces into the lattice boltzmann method , capable of dealing with internal as well as external forcing .the dynamics in a dilute sedimenting suspension , despite a century of investigation , still presents open questions .the problem , even for a hard - sphere suspension , is unusually difficult due to the long - ranged , many - body nature of the hydrodynamic interaction .moreover , the flow can develop structural features at large length scales , and the role of inertia , while usually negligible at the particle scale , may be significant at those scales .the stokes approximation of globally vanishing reynolds number can not thus be justified _ a priori _ in a sedimenting suspension .the full hydrodynamic problem including inertia for both fluid and particles was first simulated by ladd using a novel lattice boltzmann method .this method , though possibly the most competitive for fully resolved particles , remains computationally expensive .a considerable simplification of the hydrodynamics is possible if only the lowest order multipole of the force distribution induced on the particle surface by the no - slip boundary condition is retained .this principle was exploited previously to develop representations of polymers as strings of point particles which were then coupled to an lb fluid . a similar idea has been used to represent resolved colloids with a mesh of point particles covering their surfaces .however in the current work we simplify further , treating each colloid as a single point particle ( thereby sacrificing all near - field effects).in the colloidal context , this model was first introduced by saffman ; the finite sized particles are replaced by a singular force monopole , the stokeslet , located at the nominal centre of the particle . in saffman s original model , both the fluid and the particles have no inertia . in keeping with the comments above , our model retains inertia for the fluid , while neglecting it for the particle and for hydrodynamics at the particle scale .we thus have a momentum balance equation , where the sum includes contributions from the particles located and acted upon by _ external _ forces . 
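to make the multipole constructions above concrete, the sketch below builds the point-force sets for a discrete stokes doublet and for a four-force rotlet; the parametrisation (forces of magnitude q at separation d) is assumed, and the geometry is one realisation chosen so that the rotlet carries a net torque but no net force and no symmetric (stresslet) moment, as required in the text.

```python
import numpy as np

def dipole_forces(center, direction, q, d):
    """two opposing monopoles of magnitude q separated by d along `direction`:
    a minimal discrete stokes doublet of the kind described above."""
    n = np.asarray(direction, dtype=float)
    n /= np.linalg.norm(n)
    return [(center + 0.5 * d * n,  q * n),
            (center - 0.5 * d * n, -q * n)]

def rotlet_forces(center, axis, q, d):
    """four monopoles arranged so that the net force and the symmetric (stresslet)
    moment vanish while a torque about `axis` remains; one realisation of the
    swastika-like arrangement mentioned in the text."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    trial = np.array([1.0, 0.0, 0.0])
    if abs(trial @ axis) > 0.9:                 # avoid a trial vector parallel to the axis
        trial = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(axis, trial)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    return [(center + 0.5 * d * e1,  q * e2),
            (center - 0.5 * d * e1, -q * e2),
            (center + 0.5 * d * e2, -q * e1),
            (center - 0.5 * d * e2,  q * e1)]
```

each returned (position, force) pair can be spread onto the grid with spread_force above; the rotlet arrangement carries a net torque of magnitude 2 q d about the chosen axis.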
in the absence of particle inertia accelerationsvanish , and the particle coordinates are updated directly using the first faxn relation which relates the centre of mass velocity of the particle to the external force on it .the background velocity is the fluid velocity at the location _ in the absence _ of the -th particle .the above two equations provide a complete specification of a model of sedimenting spheres , valid in the dilute limit , for dynamics at long wavelengths .the lattice boltzmann implementation of this model proceeds by first replacing the dirac delta function with the regularized delta functions to obtain a force density at the grid points .since the lbe evolves the total fluid velocity due to all particles , the background fluid velocity must be obtained by a careful subtraction procedure . in the absence of fluid inertia at the particle scalethis can be accomplished as follows . by definition ,the fluid velocity at a node is the sum of the background velocity at the node and the velocity due to the -th stokeslet located at , .the background velocity field at the location of the particle can be obtained using the same interpolation kernel as used for the force , , and using the previous relation can be written as appealing only to linearity and dimensional analysis , the sum above can be expressed as in appendix [ sec : subtraction ] , we derive this result and show that the lattice parameter depends only on the system size and on the form of the regularization and interpolation kernels ; it is independent of viscosity and of the radius . using eq .( [ eq : a_ldefn ] ) , the update equation for the stokeslet positions can now be expressed in terms of the interpolated fluid velocity , without any reference to the background velocity , notice that replacing the background velocity in the faxn relation with the actual fluid velocity induces an effective backflow , leading to a renormalized hydrodynamic radius , the numerics thus places a constraint on the allowed values of the hydrodynamic radius .this numerical constraint encodes the condition that the grid points must be in the far - field of the stokeslet , the limit in which the background velocity can be obtained from the fluid velocity by subtracting a monopole contribution . in our simulations , we operate well within this limit .this almost completes the description of the lattice boltzmann implementation of our stokeslet model of sedimenting particles .the only free parameter is the hydrodynamic radius of the particles , which decides how fast they sediment for a given force .as shown below , the lattice parameter can be calculated analytically as a function of system size .we find it convenient to fit it using a procedure described in appendix [ sec : subtraction ] . finally , to address brownian motion of our colloids, we need to use the flbe of which imparts an appropriate thermal noise spectrum to the fluid . because of the renormalization of , the resulting diffusivity is generally not correct unless a further noise term is added that is the counterpart of the correction .the details are explained in appendix [ sec : noise ] .our first benchmark addresses the dynamics of a single impulsively started particle , without noise . from unsteady hydrodynamics , we know that the asymptotic decay of the particle velocity varies as in dimensions . in fig .[ fig : tail ] we display the decay of the particle velocity , for a single hydrodynamic radius ( 0.05 lu ) , but several values of the fluid viscosity . 
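before examining those benchmarks, a sketch of the point-particle update just described may be useful. it reuses the peskin kernel above for interpolation; a_eff stands for the effective (renormalized) hydrodynamic radius discussed in the text, whose precise relation to the target radius and the lattice parameter follows the paper's (stripped) equations.

```python
import numpy as np

def interpolate_velocity(u_field, x0, h=1.0):
    """interpolate the lattice velocity field u_field[3, nx, ny, nz] to the
    off-lattice position x0 using the same peskin kernel as the force spreading."""
    shape = np.array(u_field.shape[1:])
    base = np.floor(x0 / h).astype(int)
    u = np.zeros(3)
    for dx in range(-1, 3):
        for dy in range(-1, 3):
            for dz in range(-1, 3):
                node = base + np.array([dx, dy, dz])
                wgt = np.prod(peskin_phi(node - x0 / h))   # interpolation weights sum to one
                ix, iy, iz = node % shape
                u += wgt * u_field[:, ix, iy, iz]
    return u

def update_particle(x0, u_field, F_ext, eta, a_eff, dt=1.0):
    """inertialess (zeroth-order faxen) update with the interpolated fluid
    velocity standing in for the background velocity; a_eff is the effective,
    renormalized hydrodynamic radius discussed in the text."""
    v = interpolate_velocity(u_field, x0) + F_ext / (6.0 * np.pi * eta * a_eff)
    return x0 + v * dt
```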
in all cases ,we see the correct asymptotic behaviour , until the particle is begins to interact with its image , due to the periodic boundary conditions .( this interpretation is supported by the scaling of the time at which the deviations become significant : we find as shown in the inset to fig .[ fig : tail ] . ) in other words , our particle model correctly captures the low and intermediate frequency behaviour of the particle mobility , but can not capture the high - frequency behaviour correctly , since that depends on the way vorticity diffuses in the immediate neighbourhood of the particle , a regime which is excluded in our model . in a periodic box .main figure : response shown for a range of viscosities ( points ) compared to the theoretical prediction at long times .the inset shows the effect of varying the size of the simulation box .deviations from the prediction become significant at approximately 250 , 1000 and 4000 timesteps for box sizes of 32 , 64 and 128 , respectively .this is consistent with the expected scaling , .,scaledwidth=50.0% ] our next benchmark involves collective motion of a set of particles , and thus directly probes the hydrodynamic interaction between particles . in fig .[ fig : lattice ] the mean sedimentation velocity of a periodic array of spheres is shown , as a function of volume fraction .there is excellent agreement with the theoretical result of .the model is also able faithfully to capture instabilities due to collective hydrodynamic flow . in fig .[ fig : crowley ] the instability of a falling 2d lattice of spheres in three dimensions is captured , at least qualitatively , by our model . .the separation , , is expressed in terms of the fitted particle radius , .the solid line is the theoretical result , with .,scaledwidth=50.0% ] for most problems , all reynolds numbers below some ( situation - dependent but small ) value give rise to equivalent behaviour , as discussed in detail in previous work . following protocols discussed there , we have compared the normalized velocity field ( with the sedimentation velocity of an isolated colloid ) for a number of simulations of a single sedimenting sphere with periodic boundary conditions ( see figure [ fig : reynoldsrange ] ) in order to explore the range of reynolds number at which our algorithm gives acceptably accurate results .our ` reference ' simulation has a very small such that we can be confident it is the in the stokesian limit .this is shown in the panel [ fig : reynoldsrangeref ] .panels [ fig : reynoldsrangediff4 ] and [ fig : reynoldsrangediff2 ] show the normalized velocity difference fields between the reference case and simulations with and , respectively . in the simulation with ,the magnitude of the difference is everywhere less than , a negligibly small error . in the simulation with , we find throughout the bulk of the domain ; only in the immediate vicinity of the particle does it become larger .this suggests that this reynolds number is sufficiently low to give ` realistic ' , although not ` fully realistic ' , behaviour . since reaching very low reynolds number requires paying a larger cost in computational time , and there are other sources of percent - level error in the code , is probably a reasonable compromise between accuracy and run - time , for studies in the low reynolds number limit . 
as a final benchmark, we have compared the behaviour of our sedimenting particle model with a fully resolved colloid simulation code using the algorithm of nguyen and ladd .( for full implementation details see . ) at dilute concentrations , the paths of the particles are almost indistinguishable between the two simulations when plotted graphically .this is shown in figure [ fig : random - traj - diff ] for volume fraction ; note that the largest differences occur when the density of particles is large locally , when the implicit assumption of our model that the particles are always at separations large compared to their radius is no longer valid .we can not expect both simulations to give the same trajectories for long times , since the small differences between algorithms will cause exponential separation of trajectories owing to the positive lyapunov exponent of the system .however , from a plot of the mean difference in position between the two simulations against time , we can see excellent agreement for several stokes times , and until at least ten stokes times for sufficiently dilute systems ( fig .[ fig : deltar ] ) . .see also movie 1 .,scaledwidth=80.0% ] .,scaledwidth=50.0% ] .parameters for simulations used to compare fully - resolved and point - like algorithms ; see text and figures [ fig : random - traj - diff ] and [ fig : deltar ] . [ cols="<,<,<",options="header " , ]the focus of this work has been to derive and validate a general method for addressing singular forcing in the lbe , with specific application to the simulation of point - like particles .we have shown the method to agree well with analytic results , where available , and with fully - resolved particle algorithms at low concentrations and reynolds number .additionally , due to its careful construction , the regularized -function provides a good interpolation scheme , minimizing velocity fluctuations as the particle moves relative to the computational grid .indeed we find that for sedimenting colloids the trajectories are much smoother in our stokeslet algorithm than for the fully resolved simulation . in the latter, the discretization renders particles hydrodynamically aspherical with shapes that vary as they move across the lattice .absence of such irregularities in the stokeslet code may make this generally preferable at small volume fractions . for dilute suspensions of sedimenting colloids , our new algorithm can thus perform simulations of accuracy comparable to ( or even better than ) that of a fully resolved code , but at vastly reduced computational cost .as shown in table [ tab : params ] , similar particle numbers and volume fractions can be simulated with an lb lattice that is smaller in linear dimension by a factor .( this is the ratio of the particle radii in the two simulations . )the computational time to update the particle positions is essentially negligible , so that the cpu time needed to perform one lb time step is decreased by a factor of ; moreover the stokes time , , scales as . 
the latter sets the time basic time scale for evolution of sedimentation trajectories , so that for this problem we expect a speed - up of .this should allow us to study the sedimentation behaviour of dilute systems with tens of millions of particles ; we hope to pursue this avenue elsewhere .we derive here the correction factor that arises from replacing the background velocity with the fluid velocity in the faxn relation .the velocity at a node due a regularized stokeslet located at is this velocity interpolated from the neighbouring nodes to the location of the particle is completing the spatial sum , we get the for the interpolated stokeslet velocity , which shows that the offset parameter obeys is indeed independent of viscosity , and particle radius , but depends on the lattice size and the numerical implementation of the regularization and interpolation .considering a spherical sedimenting particle in the langevin picture , we can write and taking the inertialess limit gives we note that in this equation , noise only comes in through the gaussian random variable and that in particular the fluid velocity is completely deterministic .the update rule for our model particle is which is sufficient for the infinite pclet number regime .if one uses fluctuating lb , then the interpolated velocity contains a noise component .however , we do not expect the magnitude of the noise to be appropriate for a particle of the desired radius , since the random component of the velocity has no dependence on the radius of the particle .in fact , the variance of this noise is that expected for a brownian particle with the same radius as the offset parameter ( ) and we use this fact to determine its value from a diffusion `` experiment '' on an unforced particle , as explained below .knowing that the particle will otherwise diffuse as one with a much larger radius , we add a white noise term to the update rule for the model the variance of the extra noise is determined , by the requirement of satisfying the fluctuation - dissipation theorem , to be to determine the value of the offset parameter , we set up a simulation of a single unforced particle in periodic boundary conditions at finite temperature and disable the extra noise term discussed above , giving as the equation of motion of the particle .we let the simulation equilibrate for the characteristic time for momentum to diffuse across the box size , , before recording the displacement as a function of time .this is repeated for a number of different starting positions relative to the lb grid and a plot of vs. is used to estimate the diffusivity .we then use the stokes - einstein relation to derive a radius and use this as the offset parameter .we then test to ensure that this gives the correct sedimentation behaviour of a particle .
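the calibration described in the appendix can be sketched as follows; the factor of 6 in the msd fit assumes three dimensions, and the variance 2 (d_target - d_lb) dt per component for the extra noise is one consistent reading of the fluctuation-dissipation requirement stated above (the exact expression was stripped with the equations).

```python
import numpy as np

def offset_radius_from_diffusion(times, msd, kT, eta):
    """estimate the offset parameter, expressed as a radius, from the measured
    mean-square displacement of an unforced particle in fluctuating lb:
    fit <dr^2> = 6 D t (three dimensions assumed) and invert stokes-einstein."""
    D = np.polyfit(times, msd, 1)[0] / 6.0
    return kT / (6.0 * np.pi * eta * D)

def extra_noise_std(kT, eta, a_target, a_offset, dt=1.0):
    """per-component standard deviation of the white noise added to the particle
    update so that the total diffusivity corresponds to the desired radius
    a_target, given that the interpolated lb noise alone yields a_offset."""
    D_target = kT / (6.0 * np.pi * eta * a_target)
    D_lb = kT / (6.0 * np.pi * eta * a_offset)
    return np.sqrt(max(2.0 * (D_target - D_lb) * dt, 0.0))
```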
we present a second - order accurate method to include arbitrary distributions of force densities in the lattice boltzmann formulation of hydrodynamics . our method may be used to represent singular force densities arising either from momentum - conserving internal forces or from external forces which do not conserve momentum . we validate our method with several examples involving point forces and find excellent agreement with analytical results . a minimal model for dilute sedimenting particles is presented using the method which promises a substantial gain in computational efficiency .
advances in computational algorithms combined with the steady advance of computer technology have made it possible to simulate regions of the universe with unprecedented dynamic range .realistic simulations of galaxy formation require spatial resolution better than 1 kpc and mass resolution better than in volumes at least 100 mpc across containing more than .cosmologists have made significant progress towards these requirements . in simulations of dark matter halos , and achieved more than 4 orders of magnitude in spatial resolution while the virgo consortium has performed simulations with particles .recently , have performed a simulation of the formation of the first subgalactic molecular clouds using adaptive mesh refinement with a spatial dynamic range of 262,144 and a mass dynamic range more than .the possibility to resolve numerically such vast dynamic ranges of length and mass begs the question of what are the appropriate initial conditions for such simulations .hierarchical structure formation models like the cold dark matter ( cdm ) family of models have increasing amounts of power at smaller scales .this power should be present in the initial conditions . for simulations ofspatially constant resolution , this is straightforward to achieve using existing community codes .however , workers increasingly are using multiscale methods in which the best resolution is concentrated in only a small fraction of the simulation volume .how should multiscale simulations be initialized ?many workers currently initialize multiscale models following the approach of .first , a gaussian random field of density fluctuations ( and the corresponding irrotational velocity field ) is sampled on a cartesian lattice of fixed spacing .then , is decreased by an integer factor and a new gaussian random field is sampled with times as many points , such that the low - frequency fourier components ( up to the nyquist frequency in each dimension ) agree exactly with those sampled on the lower - resolution grid .this method has two drawbacks .first , it is limited by the size of the largest fast fourier transform ( fft ) that can be performed , since the gaussian noise is sampled on a uniform lattice in fourier space .this represents a severe limitation for adaptive mesh refinement codes which are able to achieve much higher dynamic range .second , the uniform high - frequency sampling on the fine grid is inconsistent with the actual sampling of the mass used in the evolutionary calculations .multiscale simulations have grid cells , hence particle masses , of more than one size .the gravitational field produced by a distribution of unequal particle masses differs from that produced with constant resolution . in the linear regime ,the velocity and displacement should be proportional to the gravitational field . with the method of ,they are not .we are challenged to develop a method for sampling multiscale gaussian random fields consistent with the multiresolution sampling of mass .a satisfactory method should satisfy several requirements in addition to correctly accounting for variable mass resolution .first , each refined field should preserve exactly the discretized long - wavelength amplitude and phase so as to truly refine the lower - resolution sample .second , high - frequency power should be added in such a way that the multiscale fields are an exact sample from the power spectrum over the whole range of wavelengths sampled . 
because multiscale fields are not sampled on a uniform lattice, it is not the power spectrum but rather the spatial two-point correlation function that should be exactly sampled. finally, a practical method should have a memory requirement and computational cost independent of refinement so that it is not limited by the size of the largest fft that can be performed. this paper presents the analytic theory and practical implementation of multiscale gaussian random field sampling methods that meet these requirements. our algorithms are the equivalent of adaptive mesh refinement applied to gaussian random fields. the mathematical properties of such fields are simple enough that an exact algorithm may be developed. practical implementation requires certain approximations to be made, but they can be evaluated and the errors controlled. the essential idea enabling this development is that gaussian random fields can be sampled in real space rather than fourier space (hereafter -space). adaptive mesh refinement can then be performed in real space conceptually just as it is done in the nonlinear evolution code used by . how can the long-range correlations of gaussian random fields be properly accounted for in real space? in an elegant paper, salmon pointed out that any gaussian random field (perhaps subject to regularity conditions such as having a continuous power spectrum) sampled on a lattice can be written as the convolution of white noise with a function that we will call the transfer function. salmon recognized the advantages of multiresolution initial conditions and developed a tree algorithm to perform the convolutions. tree algorithms have the advantage that they work for any mesh, whether regular, hierarchical, or unstructured. next, pen pointed out that ffts may be used to perform the convolutions in such a way that the two-point correlations of the sampled fields are exact, in contrast with the usual -space methods, which produce exact power spectra but not exact two-point correlations. the key is that the transfer functions may be evaluated in real space accurately at large separation, free from distortions caused by the discretization of -space. pen also pointed out that this method allows the mean density in the box to differ from the cosmic average, and that the method could be extended to hierarchical grids. this paper builds upon the work of salmon and pen as well as the author's earlier cosmics package, which included a module called grafic (gaussian random field initial conditions). grafic implemented the standard -space sampling method for generating gaussian random fields on periodic rectangular lattices. this paper presents the theory and computational methods for a new package for generating multiscale gaussian random fields for cosmological initial conditions, called grafic2. this paper contains the fine print for the owner's manual to grafic2, as it were. this paper is organized as follows. [ sec : method ] reviews the mathematical method for generating gaussian random fields through convolution of white noise, including adaptive mesh refinement. [ sec : transf ] presents methods for the all-important computation of transfer functions.
[ sec : implem ] presents important details of implementation .exact sampling requires careful consideration of both the short - wavelength components added when a field is refined ( [ sec : short ] ) as well as the long - wavelength components interpolated from the lower - resolution grid ( [ sec : long ] ) .as we show , the long - wavelength components must be convolved with the appropriate anti - aliasing filter .truncation of this filter to a subvolume ( a step required to avoid intractably large convolutions ) introduces errors that we analyze and reduce to the few percent level in [ sec : fixv ] .the method is extended to hierarchical grids in [ sec : multiple ] . [ sec : tricks ] presents additional tricks with gaussian random fields made possible by the white noise convolution method . [ sec : end ] summarizes results and describes the public distribution of the computer codes developed herein for multiscale gaussian random fields .the starting point is the continuous fourier representation of the density fluctuation field : where is gaussian white noise with power spectrum here is the dirac delta function and we are assuming that space is euclidean .the function is the transfer function relative to white noise , and it is related simply to the power spectrum of : ^{1/2}\ .\ ] ] note that and both have units of ^ { 3/2} ] .the set of all for a given is called a brillouin zone .the coarse grid corresponds to the fundamental brillouin zone , .mesh refinement extends the coverage of wavenumber space by increasing the number of brillouin zones to where is the refinement factor .the major technical challenge of our algorithm is to perform the convolution of equation ( [ congen1 ] ) without storing or summing over the entire fourier space .this is possible when is required over only a subgrid in the spatial domain .the first step is to note that equation ( [ congen1 ] ) is equivalent to where \,\xi(\vec k\,)\ , \quad t(\vec m,\vec n\,)=(rm)^{-3}\sum_{\vec k}\exp[i\vec k\cdot\vec x(\vec m , \vec n\,)]\,t(k)\ .\ ] ] now , mesh refinement is performed only over the subgrid of size where , so it is not necessary to evaluate and for all high - resolution grid points .we set outside of the subgrid volume .consequently , needs to be evaluated only to distances of grid points in each dimension in order that all contributions to be included .we will describe how the transfer functions are computed in the next subsection .the function must also be evaluated on the subgrid . because this is a sample of white noise, we simply draw independent real gaussian random numbers with zero mean and variance at each grid point . subtracting a mean over coarse grid cells ( eq . [ xisamp ] ) or imposing any other desired linear constraints ( using the hoffman - ribak method )is easily accomplished .once we have and on the subgrid , the next step is to fourier transform them . for simplicity in presentation ,let us suppose that the subgrid is cubic with where is the number of coarse grid points that are refined in each dimension .the result is where the sums are taken over the fine grid points in the subvolume , which has been doubled to the accommodate periodic boundary conditions required by fourier convolution .primes are placed on the wavevectors and on the transformed quantities to distinguish them from the original quantities and .note that the sampling of -space is different in equations ( [ congen3 ] ) and ( [ congen4 ] ) because the length of the spatial grid has changed from to . 
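the noise-sampling step just described is simple enough to sketch directly in code. the following is a minimal illustration, not the grafic2 implementation: the grid sizes, the refinement factor r, and the variance convention (taken here to scale as the inverse cell volume, so that the mean over a coarse cell reproduces the coarse-grid variance) are assumptions made for the example.

```python
import numpy as np

def subgrid_white_noise(m_b, r, dx_fine, rng, subtract_coarse_means=True):
    """Draw white noise on an (m_b*r)^3 block of fine cells of spacing dx_fine.

    The per-cell variance is taken as 1/dx_fine**3 (an assumed convention;
    with it, the mean over an r^3 block has the variance of a coarse cell).
    Subtracting the mean over each coarse cell leaves only the high-frequency
    part of the noise; the low-frequency part is restored later from the
    coarse-grid sample."""
    n = m_b * r
    xi = rng.normal(0.0, dx_fine ** -1.5, size=(n, n, n))
    if subtract_coarse_means:
        blocks = xi.reshape(m_b, r, m_b, r, m_b, r)
        means = blocks.mean(axis=(1, 3, 5))
        xi -= np.repeat(np.repeat(np.repeat(means, r, 0), r, 1), r, 2)
    return xi

rng = np.random.default_rng(0)
xi_sub = subgrid_white_noise(m_b=16, r=4, dx_fine=0.25, rng=rng)
# every coarse-cell mean of xi_sub is now zero (up to round-off)
```

with this convention the coarse-grid noise is simply the block average of the fine noise before subtraction, which is what ties the two levels of the hierarchy together.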
also , is in general not spherically symmetric even if is spherical .the final step is to perform the convolution by multiplication in the subgrid -space followed by fourier transformation back to real space : the reader may verify that equations ( [ congen4 ] ) and ( [ congen5 ] ) give results identical to equations ( [ congen1 ] ) and ( [ congen2 ] ) , when is zero outside of the subvolume .thus , we have achieved the equivalent of convolution on a grid of size by using a ( typically smaller ) grid of size .note well that is not the same as , because it is based on spatially truncating and making it periodic on a grid of size instead of .so far the method looks straightforward .however , some practical complications arise which will discuss later , in the computation of the transfer functions in real space ( [ sec : transf ] ) and in the split of our random fields into long- and short - wavelength parts on the coarse and fine grids , respectively ( [ sec : long ] ) .finally , we note that we will perform the convolution of equation ( [ congen1 ] ) using a grid of size in each dimension in the standard way using ffts without requiring periodic boundary conditions for the subgrid of size .we calculate the transfer functions in the first octant of size and then reflect them periodically to the other octants using reflection symmetry ( odd along the direction of the displacement , otherwise even ) . in order to achieve isolated boundary conditions ,the white noise field is filled in one octant and set to zero in the other octants .if we desire to have periodic boundary conditions ( e.g. for testing ) , we can set and fill the full refinement grid of size with white noise .the convolution method requires calculating the transfer functions for density , velocity , etc ., on a high resolution grid of extent grid points in each dimension . the transfer function is given in the continuous case by equations ( [ transferk ] ) and ( [ transferx ] ) and in the discrete case by the first of equations ( [ discpow ] ) and the second of equations ( [ congen3 ] ) .our challenge is to compute the transfer functions on the subgrid without performing an fft of size , under the assumption that the problem is too large to fit in the available computer memory .also , we wish to avoid a naive summation of the second of equations ( [ congen3 ] ) , which would require operations . in practice , will be a modest - size integer ( from 2 to 8 , say ) while will be much larger , of order .we present three solutions to this challenge .the first two are based , respectively , on three - dimensional discrete fourier transforms while the third is based on a spherical transform . the first method is equivalent to the second of equations ( [ congen3 ] ) and is therefore exact in the sense of yielding the same transfer functions as if we had used full resolution on a grid of size . 
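to make the convolution with isolated boundary conditions concrete, here is a minimal sketch of the whole step: the noise fills one octant of a doubled grid, the real-space transfer function is tabulated on the same grid with distances wrapped about the origin, and the product of the two ffts is transformed back. a gaussian kernel stands in for the tabulated transfer function purely so that the example runs; it is not the cosmological transfer function.

```python
import numpy as np

def convolve_isolated(xi_sub, t_of_r, dx):
    """FFT convolution of subgrid noise with a spherical real-space transfer
    function, zero-padded to (2M)^3 so the periodic images required by the
    FFT never touch the physical octant (isolated boundary conditions)."""
    m = xi_sub.shape[0]
    big = np.zeros((2 * m,) * 3)
    big[:m, :m, :m] = xi_sub                      # noise in one octant only

    # kernel centred on the origin of the periodic doubled box
    d = dx * np.minimum(np.arange(2 * m), 2 * m - np.arange(2 * m))
    rr = np.sqrt(d[:, None, None] ** 2 + d[None, :, None] ** 2
                 + d[None, None, :] ** 2)
    kernel = t_of_r(rr)

    conv = np.fft.irfftn(np.fft.rfftn(big) * np.fft.rfftn(kernel), s=big.shape)
    return conv[:m, :m, :m] * dx ** 3             # keep the physical octant;
                                                  # dx^3 discretises the integral

# toy kernel: a gaussian of width two fine cells (placeholder only)
dx = 0.25
delta_short = convolve_isolated(np.random.default_rng(1).normal(size=(32,) * 3),
                                lambda r: np.exp(-0.5 * (r / (2 * dx)) ** 2), dx)
```

for a non-spherical kernel such as a displacement component, the tabulated function would instead be reflected into the other octants with the odd symmetry described above.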
note that would call this method approximate because , after the fft to the spatial domain , the results differ from the exact spatial transfer function of equation ( [ transferx ] ) .we will say more about this in [ sec : spherical ] , but note simply that the discretization of -space required for the fft makes it impossible for the transfer function to be exact in both real space and -space .the transfer function of this subsection is exact in -space and is equivalent to the usual -space sampling method .we rewrite the second of equations ( [ congen3 ] ) as where \,t(\vec k\,)\ .\ ] ] the fourier space is split into brillouin zones according to equation ( [ kgrid ] ) .beware that the symbol has three different uses here which are distinguished by its arguments : it is either the transfer function in real space , the transfer function in fourier space , or else the mixed fourier / real case .equation ( [ transkx1 ] ) is a simple fft of size .this is the same size as is used for generating the coarse grid initial conditions , so it is tractable .however , we save the results only at those coarse grid points that lie in the refinement subvolume , discarding the rest . by performing some unnecessary computation, the fft reduces the number of operations required to compute this sum for all from to , a substantial savings .equation ( [ transkx2 ] ) is also a fft , in this case of size .however , we can not evaluate both equations ( [ transkx1 ] ) and ( [ transkx2 ] ) using ffts without storing for all points . in order to reduce the storage to a tractable amount ( no more than the larger of and ), we must perform an outer loop over to evaluate . for each , we must compute for all , requiring direct summation in equation ( [ transkx2 ] ) . the operationscount for all is then , which dominates over the for equation ( [ transkx1 ] ) .the operations count for equation ( [ transkx2 ] ) can be reduced by a factor of up to 6 by using symmetries when is spherically or azimuthally symmetric .nonetheless , if we use this method , computation of the transfer functions is generally the most costly part of the whole method .if the transfer function falls off rapidly with distance in real space , there is another way to evaluate that is much faster .it is based on noting that the fourier sum is an approximation to the fourier integral , and another approximation is given by simply changing the discretization in -space .in equation ( [ congen3 ] ) , the step size in -space is where is the full size of the simulation volume .if we increase this step size to , the transfer function will be evaluated with exactly the sampling needed for in equation ( [ congen5 ] ) . in this casewe do nt even need to transform to the spatial domain , truncate and periodize on the subgrid to give , and then transform back to get .we simply replace with in fourier space .this is exactly equivalent to decreasing the -space resolution in equation ( [ congen3 ] ) to the minimum needed to sample on the subgrid .this method is extremely fast but its speed comes with a cost .low wavenumbers are sampled poorly compared with equations ( [ transkx1 ] ) and ( [ transkx2 ] ) , and the transfer functions are truncated in a cube of size instead of . the decreased -space sampling leads to significant real - space errors for distances comparable to the size of the box .this may be tolerable for the density but is unacceptable for the velocity transfer function . 
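the minimal -space sampling shortcut amounts to tabulating t(k) directly on the fourier grid of the doubled subvolume, whose spacing is the coarsest that still resolves the subgrid, instead of on the much finer fourier grid of the full box. a sketch follows; the power-law power spectrum is a stand-in, and the overall normalization (volume factors) is omitted.

```python
import numpy as np

def transfer_minimal_k(m, dx, p_of_k):
    """Tabulate T'(k') = sqrt(P(k)) on the k-grid of the (2m)^3 convolution
    box, i.e. with spacing 2*pi/(2*m*dx).  This skips the large-box FFT
    entirely, at the price of poor sampling of wavelengths comparable to
    the box size."""
    k1 = 2.0 * np.pi * np.fft.fftfreq(2 * m, d=dx)     # full axes
    k3 = 2.0 * np.pi * np.fft.rfftfreq(2 * m, d=dx)    # half axis for rfftn
    kk = np.sqrt(k1[:, None, None] ** 2 + k1[None, :, None] ** 2
                 + k3[None, None, :] ** 2)
    tk = np.sqrt(p_of_k(kk))
    tk[0, 0, 0] = 0.0                                  # drop the mean mode
    return tk      # ready to multiply np.fft.rfftn of the padded noise field

# toy cdm-like shape (an assumption for illustration only)
tk_min = transfer_minimal_k(m=32, dx=0.25, p_of_k=lambda k: k / (1.0 + k * k) ** 2)
```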
in [ sec : long ] we will introduce anti - aliasing filters for the coarse grid for which the minimal -space sampling method is well - suited . in [ sec : multiple ] we will revisit the use of the minimally sampled transfer function for the density field .another fast method can be used when is spherically symmetric , as it is for the density and the radial component of displacement . in this casewe approximate the second of equations ( [ congen3 ] ) as a continuous fourier integral , aside from units , this is essentially the same as equation ( [ transferx ] ) .the last integral in equation ( [ transkx3 ] ) can be performed by truncating the fourier integral at the nyquist frequency of the subgrid , and then using a one - dimensional fft .the method is much faster than equations ( [ transkx1 ] ) and ( [ transkx2 ] ) .the fourier integral of equation ( [ transkx3 ] ) can be evaluated accurately , yielding an essentially exact transfer function in real space .this approach was advocated by .however , this is not necessarily the best approach for cosmological simulations with periodic boundary conditions . in order to achieve periodic boundary conditions , such simulationscompute the gravitational fields with -space discretized at low frequencies as in the second of equations ( [ congen3 ] ) . in this caseit would be inconsistent to use equation ( [ transkx3 ] ) on the top - level grid with periodic boundary conditions the displacement field would not be proportional , in the linear regime , to the gravity field computed by the poisson solver of the evolution code .however , the spherical method is satisfactory for refinements without periodic boundary conditions . in order to use equation ( [ transkx3 ] ) ,the transfer function must be spherically symmetric .this seems natural for the density field given that the power spectrum is isotropic .however , the standard fft - based method for computing samples of the density field violates spherical symmetry through the cartesian discretization of -space . as we noted above , periodic boundary conditionsare inconsistent with spherical symmetry on the largest scales .moreover , the displacement transfer function is multiplied by a factor which breaks spherical symmetry for each cartesian component . has been used with an unfiltered power spectrum .the cosmological model is flat and the box is 64 mpc across .false colors are scaled to the logarithm of the transfer function , which shows 6 orders of magnitude .anisotropy of the discrete fourier transform leads to anisotropic features that are barely visible along horizontal and vertical axes through the center . ] to examine the first concern , namely the non - isotropic discretization in fourier space , we examine the transfer function computed using the exact method of [ sec : exact ] .figure [ fig : transd3 ] shows the result for the flat model ( , , ) with a refinement of a subgrid of a grid .the coarse grid spacing is 1 mpc . in effect , the transfer function has been computed at resolution on a grid of spacing 0.25 mpc , but is shown only within a central region 64 mpc across ( ) . 
there is a slight banding visible along the - and -axes in figure [ fig : transd3 ] .the amplitude of this banding ranges from a relative size of about 20% at small to more than a factor of two at the edges ( where the transfer function is very small ) ; however , it is much smaller away from the coordinate axes .this anisotropic structure arises because , although is spherically symmetric , the fourier integration is not carried out over all but rather only within a cube of size .the fourier space is periodic ( because the real space is discrete ) , which breaks the spherical symmetry of . in this caseit is the anisotropy at large that produces the anisotropy in real space .this anisotropy is present in the initial conditions generated with the with the cosmics package .the author s rational for allowing it was that it is preferable to retain all the power present in the initial density fluctuation field . including all power in the fourier cubegives the best possible resolution at small scales while producing a modest anisotropy along the coordinate axes .however , the effects of the anisotropy are unclear and should be more carefully evaluated . in [ sec : short ] , we will show how the density transfer function can be made isotropic by filtering . to determine whether the anisotropy of unfiltered initial conditions causes any significant errors ,full nonlinear numerical simulations should be performed with and without filtering .that test is beyond the scope of this paper .additional considerations arise when calculating the transfer function for the linear velocity or displacement fields .( the linear velocity and displacement are proportional to each other . )the displacement field is related to the density fluctuation field by in real space or in -space .each component of the displacement field is anisotropic .this presents no difficulty for the discrete methods of [ sec : exact][sec : minimal ] .there is one subtlety of implementation , however : in fourier space , the displacement field must vanish on the brillouin zone boundaries .that is , the component of along must vanish on the surfaces and similarly for the other components .this is required because each component of is both odd and periodic .if the density transfer function is filtered so as to be spherical in real space , then the displacement field is radial in real space and we can obtain the radial component simply by applying gauss s law : the radial integral can be performed from a tabulation of the spherical density transfer function in real space , , by integrating a cubic spline or other interpolating function .the cartesian components of displacement follow simply from . in summary, we will use the spherical method in the case of spherical transfer functions , otherwise we will use one of the discrete methods . if the transfer function is sufficiently localized in real space sothat the fourier space may be coarsely sampled , the minimal -space sampling method may be used . in all caseswe will compare against the exact method to test the accuracy of our approximations .in this section we present our implementation of the two - level adaptive mesh refinement method described in [ sec : method ] and we discuss the split of our fields into long- and short - wavelength parts on the coarse and fine grids , respectively . 
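as a compact numerical sketch of the two spherical operations described above, the radial transform of a hanning-filtered t(k) to real space and the gauss's-law integration for the radial displacement can be written as follows. the power spectrum, the assumed cosine form of the hanning filter, and the sign convention for the displacement are placeholders or assumptions in this sketch.

```python
import numpy as np

def spherical_transfer(r, p_of_k, kmax, nk=4096):
    """T(r) ~ (1/2 pi^2) * integral_0^kmax dk k^2 sqrt(P(k)) W(k) sin(kr)/(kr),
    with W(k) = cos(pi*k/(2*kmax)) as an assumed Hanning-type cutoff."""
    k = np.linspace(0.0, kmax, nk + 1)[1:]            # skip k = 0
    dk = k[1] - k[0]
    tk = np.sqrt(p_of_k(k)) * np.cos(0.5 * np.pi * k / kmax)
    kr = np.outer(np.atleast_1d(r), k)
    j0 = np.sin(kr) / kr                              # spherical Bessel j0
    return (k ** 2 * tk * j0).sum(axis=-1) * dk / (2.0 * np.pi ** 2)

def radial_displacement(r, t_delta):
    """psi_r(r) = -(1/r^2) * integral_0^r T_delta(s) s^2 ds (Gauss's law;
    the overall sign depends on the convention for the displacement)."""
    f = t_delta * r ** 2
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))
    return -cum / r ** 2

# toy example on a radial grid out to half the subvolume size
r = np.linspace(1e-3, 16.0, 512)
t_delta = spherical_transfer(r, lambda k: k / (1.0 + k * k) ** 2, kmax=np.pi / 0.25)
psi_r = radial_displacement(r, t_delta)
```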
the high-resolution density field is the superposition of two parts, given by equations ( [ del12 ] ) and ( [ del12a ] ). the convolution operator is defined by equation ( [ congen2 ] ), with the transfer function defined on a high-resolution grid. the net density field is the superposition arising from the coarse-grid white noise sample and its high-frequency correction as in equation ( [ xisamp ] ). basically, we split the density field (and similarly the displacement and velocity fields) into long-wavelength and short-wavelength parts. in this section we first describe the computation of the short-wavelength part, followed by the long-wavelength part. (caption of figure [ fig : transf ]: except that a spherical hanning filter (cosine in fourier space) has been applied to reduce the anisotropy that was seen in figure [ fig : transd3 ]. false colors show the logarithm of the transfer function, with 6 orders of magnitude shown for the density and 3 orders of magnitude for the displacement. the absolute value of the displacement is shown; it is negative in the right half of the image. when convolved with white noise, these transfer functions give the density fluctuation and -displacement fields in linear theory at redshift .) the high-frequency part of the density field is straightforward to calculate using the methods of [ sec : subgrid ] and [ sec : transf ]. let us first consider the transfer functions for the density and displacement fields, which we show in figure [ fig : transf ]. in order to eliminate the anisotropy appearing in figure [ fig : transd3 ], we have applied a spherical hanning filter, multiplying by for and zeroing it for where . this filter has removed the anisotropic structure and has also smoothed the density field near . we have used the spherical method of [ sec : spherical ]. the exact method gives results that are visually almost indistinguishable, with maximum differences of order one percent, because the hanning filter does not completely eliminate the anisotropy of the discrete fourier transform. each cartesian component of the displacement transfer function displays a characteristic dipole pattern because of the projection from radial motion: . a density enhancement at the origin is accompanied by radial infall (with changing sign across the origin). note that the displacement transfer function falls off much less rapidly with distance than the density transfer function, illustrating the well-known fact that the linear velocity field has much more large-scale coherence than the density field. the linear displacement, velocity, and gravity fields are all proportional to each other, so one may also interpret as the transfer function for the gravitational field. (caption of figure [ fig : wnsamp ]) right: zero-padding is used for isolated boundary conditions, and the means have been subtracted over cells of size 1 mpc so that we show . false colors are scaled to linear values ranging from standard deviations of . the zero level is light green .
because of the predominance of high - frequency power , the filtering applied to the right - hand image is not apparent , but the two samples differ in the upper left quadrant.,title="fig : " ] .right : zero - padding is used for isolated boundary conditions , and the means have been subtracted over cells of size 1 mpc so that we show .false colors are scaled to linear values ranging from standard deviations of .the zero level is light green . because of the predominance of high - frequency power , the filtering applied to the right - hand image is not apparent , but the two samples differ in the upper left quadrant.,title="fig : " ] the next step in computing the subgrid contribution to the initial conditions is to generate an appropriate sample of gaussian noise .figure [ fig : wnsamp ] shows two samples of white noise for the high - resolution subgrid .the left sample is pure white noise , which is the correct noise sample if we wish to generate a gaussian random field with periodic boundary conditions on a grid of size 64 mpc ( the full width that is shown ) .the right sample is nonzero only in a region 32 mpc across , as is appropriate for isolated boundary conditions in a subgrid , and the means over coarse grid cells have been subtracted , i.e. we plot .this is the appropriate noise sample for computing the short - wavelength density field . ) with the density transfer function ( left panel of figure [ fig : transf ] ) .false colors are scaled to linear values ranging from standard deviations for the left panel .the left and right panels correspond to the same panels of figure [ fig : wnsamp ] .the two density fields are strikingly different because long wavelengths have been suppressed in the right image by subtraction of coarse - cell means in figure [ fig : wnsamp ] .the long wavelength components will be restored to the right image by addition of the coarse grid sample.,title="fig : " ] ) with the density transfer function ( left panel of figure [ fig : transf ] ) .false colors are scaled to linear values ranging from standard deviations for the left panel .the left and right panels correspond to the same panels of figure [ fig : wnsamp ] .the two density fields are strikingly different because long wavelengths have been suppressed in the right image by subtraction of coarse - cell means in figure [ fig : wnsamp ] .the long wavelength components will be restored to the right image by addition of the coarse grid sample.,title="fig : " ] figure [ fig : delsub ] shows the result of convolving the two noise samples of figure [ fig : wnsamp ] with the density transfer function of figure [ fig : transf ] .the left - hand panel gives while the right - hand panel gives the desired short - wavelength field .the two fields differ in the upper left quadrant because of the subtraction of coarse - cell means from the white noise field used to generate the left image .although the effect of this subtraction is barely evident in figure [ fig : wnsamp ] , it dominates the comparison of the two panels in figure [ fig : delsub ] because convolution by the transfer function acts as a low - pass filter .the left panel of figure [ fig : delsub ] gives a complete sample of on a periodic grid of size 64 mpc , while the right panel shows only the short - wavelength components coming from mesh refinement .careful examination of the right panel of figure [ fig : delsub ] shows that the finite width of the transfer function has caused a little smearing at the boundaries , which are matched periodically to 
the opposite side of the box by the fourier convolution .( a few pixels along the right and bottom edges of the left panel differ from green . )however , these edge effects do not represent errors in the short - wavelength density field .instead , they illustrate the fact that _ outside _ of the refined region , the gravity field should include tidal contributions from the short - wavelength fluctuations inside the refinement volume . for the purpose of computing within the subvolume , we simply discard everything outside the upper left quadrant . as a test of our transfer function methods , we calculated using the exact transfer function instead of the spherical one .the rms difference between the fields so computed was 0.0014 standard deviations , a negligible difference .as a test of the whole procedure , we computed the power spectrum of the left panel of figure [ fig : delsub ] and checked that it agrees within cosmic variance with the input power spectrum .we also compared the displacement field computed using the exact transfer with that computed using the spherical one .the rms difference was 0.0062 standard deviations , still negligible .then we compared the divergence of the displacement field ( computed in fourier space as ) with the density field , expecting them to agree perfectly .interestingly , this is not the case for the exact ( or spherical ) transfer functions .when the transfer functions are truncated in real space and made periodic on a grid of size , despite the fact that on the full refined grid .the prime on the transfer functions indicates a _ different _ fourier space , as discussed after equation ( [ congen4 ] ) .the only way to test for the exact transfer function is to perform an fft on the full refined grid of grid points . in [ sec : test ] we will perform an equivalent test with an end - to - end test of the entire method using a grid .now we consider , the contribution to the density field from the coarse grid .as we see from equation ( [ del12a ] ) , in principle we can compute by spreading the original coarse - grid white noise sample to the subgrid points for each coarse grid point ( i.e. , all points ) and then convolving with the high - resolution transfer function as in equation ( [ congen2 ] ) : however , this method is impractical , because the contributions to coming from large distances are not negligible because of the long range of the transfer functions ( especially for the velocity transfer function ) . for the short wavelength field causes no problems because the noise field is nonzero only within the subvolume . here , however , the noise field is nonzero over the entire simulation volume .including all relevant contributions in the convolution as written would require working with the transfer function on the full grid of size , which is exactly what we are trying to avoid .a practical solution is to rewrite equation ( [ tildel1 ] ) as a convolution of the coarse - grid density field with a short - ranged filter : one can easily check that this is exactly equivalent to equation ( [ tildel1 ] ) provided that \,{t(k)\over t(k_0)}\ ] ] where is the projection of into the fundamental brillouin zone ( eq .[ kgrid ] ) .an exact evaluation of equation ( [ tildel2 ] ) still requires using a full grid .however , we will see that falls off sufficiently rapidly with distance that contributions to coming from large distances are negligible .( this will not be true for the velocity field , but we will develop a variation to handle that case later . 
)thus , we may truncate at the boundary of the refinement region and perform the convolution of equation ( [ tildel2 ] ) using a grid just as we did for the short - wavelength field .the errors of this procedure will be quantified below .equation ( [ tildel2 ] ) has a simple interpretation .the coarse - grid density field is spread to the fine grid by replicating the coarse - grid values to each of the grid points within a single coarse grid cell .the result is an artifact called aliasing . in real spacethis artifact is manifested by having constant values within pixels larger than the spatial resolution . in -spacethe effect is to replicate low - frequency power in the fundamental brillouin zone to higher frequencies .thus , the wrong transfer function is used if one simply sets to . equation ( [ filtad ] )defines an anti - aliasing filter which corrects the transfer function from the coarse grid ( with wavevectors in the fundamental brillouin zone ) to the full -space .it smooths the sharp edges that arise from spreading to the fine grid .the anti - aliasing filter removes the artifacts caused by replication of the fundamental brillouin zone .the anti - aliasing filter is manifestly nonspherical , so we can not use the spherical transform method of [ sec : spherical ] to evaluate it .however , is sharply peaked .this is obvious from the fact that its fourier transform is constant over the fundamental brillouin zone ; the fourier transform of a constant is a delta function .thus , we expect to be peaked on the scale of a few coarse grid spacings . as a result , the minimal -space sampling method of [ sec : minimal ] should suffice ( with a variation for the velocity field ) .the division by in equation ( [ filtad ] ) requires that we compute the coarse - grid density field without a hanning filter ; otherwise would be zero in the corners of each brillouin zone .simply put , if we want to correctly sample the density field at high resolution , we should not cut out long - wavelength power by filtering .however , we will apply a spherical hanning filter at the shortest wavelength to remove the anisotropic structure that was apparent in figure [ fig : transd3 ] . at the corners of each brillouin zone but ( aside from the fundamental mode for the whole box ) . at these wavevectors, has no power and so no error is made by setting . 
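the "spread and anti-alias" step has a direct translation into code. the anti-aliasing filter itself is assumed here to be supplied as a real-space kernel on the doubled subgrid (computed, for example, with the minimal -space sampling method from equation ( [ filtad ] )); a normalized gaussian stands in for it purely so that the example runs.

```python
import numpy as np

def spread_to_fine(coarse, r):
    """Replicate each coarse-grid value to the r^3 fine cells it contains;
    this is the operation that produces the blocky, aliased field."""
    return np.repeat(np.repeat(np.repeat(coarse, r, 0), r, 1), r, 2)

def anti_alias(field, w_kernel):
    """Convolve the spread field with an anti-aliasing filter given as a
    real-space kernel (centred in the box) on the same periodic grid."""
    wk = np.fft.rfftn(np.fft.ifftshift(w_kernel))     # move centre to origin
    return np.fft.irfftn(np.fft.rfftn(field) * wk, s=field.shape)

# stand-in filter: a normalized gaussian of width one coarse cell
r, m_b, dx = 4, 16, 0.25
n = r * m_b
x = dx * (np.arange(n) - n // 2)
w = np.exp(-0.5 * (x[:, None, None] ** 2 + x[None, :, None] ** 2
                   + x[None, None, :] ** 2) / (r * dx) ** 2)
w /= w.sum()

coarse = np.random.default_rng(2).normal(size=(m_b,) * 3)
delta_long = anti_alias(spread_to_fine(coarse, r), w)
```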
in the case of the displacement field , each component of vanishes along an entire face of the brillouin zone , as explained in the paragraph before equation ( [ gauss ] ) .we also set to zero these contributions to the fourier series in equation ( [ filtad ] ) .for the density .the left panel shows the filter computed using the exact method of [ sec : exact ] while the right panel uses the minimal -space sampling method of [ sec : minimal ] .false colors are scaled to the logarithm of absolute value of the filter , which shows six orders of magnitude .the banded appearance is caused by low - amplitude oscillations .the oscillations act to smooth the sharp edges of the coarse grid fields when they are refined to the subgrid .the difference between the two filters is negligible away from the edges.,title="fig : " ] for the density .the left panel shows the filter computed using the exact method of [ sec : exact ] while the right panel uses the minimal -space sampling method of [ sec : minimal ] .false colors are scaled to the logarithm of absolute value of the filter , which shows six orders of magnitude .the banded appearance is caused by low - amplitude oscillations .the oscillations act to smooth the sharp edges of the coarse grid fields when they are refined to the subgrid .the difference between the two filters is negligible away from the edges.,title="fig : " ] figure [ fig : filtd ] shows the density anti - aliasing filter computed with a spherical hanning filter for .the minimal -space method gives good agreement with the much slower exact calculation .along the axes at the edges of the volume the errors are up to a factor of two , but is very small and oscillates , making these errors unimportant .the banding is due to the sign oscillations of .they have a characteristic scale equal to the coarse grid spacing and they arise because of the discontinuity of at brillouin zone boundaries in equation ( [ filtad ] ) .such oscillations are characteristic of anti - aliasing filters .the filter falls off sufficiently rapidly with distance from the center that we can expect accurate results by truncating it outside the region shown .-component of displacement ( or velocity or gravity ) , computed with the exact ( left ) and minimal -space sampling ( right ) methods .false colors are scaled to the logarithm of absolute value of the filter spanning four orders of magnitude . because the displacement is sensitive to longer wavelengths than the density , the differences between the two computational methods here is more pronounced than for the density filter of figure [ fig : filtd].,title="fig : " ] -component of displacement ( or velocity or gravity ) ,computed with the exact ( left ) and minimal -space sampling ( right ) methods .false colors are scaled to the logarithm of absolute value of the filter spanning four orders of magnitude .because the displacement is sensitive to longer wavelengths than the density , the differences between the two computational methods here is more pronounced than for the density filter of figure [ fig : filtd].,title="fig : " ] figure [ fig : filtx ] shows the corresponding result for the displacement ( or velocity or gravity ) field filter .( recall that the displacement , velocity , and gravity are proportional to one another in linear theory . 
)now the errors of the minimal -space sampling method are significant .they arise because the minimal sampling method forces to be periodic on the scale of the box shown in the figure ( twice the subgrid size ) while with the exact method the scale of periodicity is larger by a factor ( or 4 in this case ) . in other words , the filter does not fall off very rapidly with distance , so truncating it and making it periodic in the box of size introduces noticeable errors . however , because the filter is still sharply peaked and oscillatory with small amplitude , it is possible that these errors are negligible .we will quantify the errors in [ sec : test ] .the procedure is now similar to that of [ sec : short ] .once we have the anti - aliasing filters , the next step is to obtain samples of the coarse - grid fields and that we wish to refine .we do this using the convolution method of [ sec : disconv ] . for testing purposes, we construct a coarse - grid sample of white noise , , which exactly equals the long - wavelength parts of the noise shown in the left panel of figure [ fig : wnsamp ] .this was achieved by modifying grafic to sample a white noise field in the spatial domain .fourier transformation to then allows the calculation of density ( and similarly displacement ) by equation ( [ delfft ] ) . as a result, we have chosen our coarse grid sample so that within the refinement subgrid .this choice is made so that later we can see directly how long- and short - wavelength components of the noise contribute to the final density field .for the coarse grid . left : full cube of size 256 mpc .right : magnification of the upper left corner by a factor of 4 to show the region that will be refined . aliasing ( sharp pixel boundaries ) is now evident .false colors are scaled to linear values ranging from standard deviations.,title="fig : " ] for the coarse grid .left : full cube of size 256 mpc .right : magnification of the upper left corner by a factor of 4 to show the region that will be refined . 
aliasing ( sharp pixel boundaries ) is now evident .false colors are scaled to linear values ranging from standard deviations.,title="fig : " ] figure [ fig : cnsamp ] shows the white noise sample adopted for the coarse grid .the right panel is obtained by averaging the left panel of figure [ fig : wnsamp ] over subgrid mesh points ( 1 mpc volume ) .figure [ fig : wnsamp ] shows only a single thin slice of width 0.25 mpc while figure [ fig : cnsamp ] shows coarse cells of thickness 1 mpc , so one should not expect the two figures to appear similar .the left panel of figure [ fig : cnsamp ] shows a full slice of size 256 mpc , obtained by filling out the rest of the volume with white noise .resulting from convolution of figure [ fig : cnsamp ] with the density transfer function sampled on the coarse grid .left : full cube of size 256 mpc , with periodic boundary conditions .right : magnification of the upper left corner by a factor of 4 to show a square of size 64 mpc .this panel shows the 32 mpc region that we wish to refine to include the correct small - scale power .the magnified pixels represents an aliasing artifact .false colors are scaled as in figure [ fig : delsub ] .random numbers were chosen so that the right panel corresponds to the coarsely sampled long wavelength components of the left panel of figure [ fig : delsub].,title="fig : " ] resulting from convolution of figure [ fig : cnsamp ] with the density transfer function sampled on the coarse grid .left : full cube of size 256 mpc , with periodic boundary conditions .right : magnification of the upper left corner by a factor of 4 to show a square of size 64 mpc .this panel shows the 32 mpc region that we wish to refine to include the correct small - scale power .the magnified pixels represents an aliasing artifact .false colors are scaled as in figure [ fig : delsub ] .random numbers were chosen so that the right panel corresponds to the coarsely sampled long wavelength components of the left panel of figure [ fig : delsub].,title="fig : " ] this white noise sample on the coarse grid was convolved with the transfer function using grafic to give the coarse density field that we wish to refine .the results are shown in figure [ fig : deltop ] .the right panel shows a 64 mpc subvolume including the 32 mpc refinement region .the obvious pixelization is the result of mesh refinement : the coarse grid density field has been spread to the fine grid .this pixelization causes power from wavelengths longer than the coarse grid spacing to be aliased to higher frequencies . if uncorrected , this aliasing would introduce spurious features into the power spectrumthus , the coarse grid sample must be convolved with an anti - aliasing filter as described above . 
.the bottom and right quartiles are filled with values from the top and left of the subvolume which were then wrapped periodically .this is clearer for the velocity field because of its larger coherence length .the buffer regions and periodic boundary conditions are needed because of the fft - based method for convolution with anti - aliasing filters.,title="fig : " ] .the bottom and right quartiles are filled with values from the top and left of the subvolume which were then wrapped periodically .this is clearer for the velocity field because of its larger coherence length .the buffer regions and periodic boundary conditions are needed because of the fft - based method for convolution with anti - aliasing filters.,title="fig : " ] special care is needed with the boundary conditions for the anti - aliasing convolution of equation ( [ tildel2 ] ) .the top grid density field fills the subgrid shown in the right panel of figure [ fig : deltop ] without periodic boundary conditions . the anti - aliasing filter ( fig .[ fig : filtd ] ) has finite extent ; therefore fft - based convolution of the two will lead to spurious contributions to at the subvolume edges coming from on the opposite side of the box . to avoid this , we surround the subvolume ( which occupies one octant of the convolution volume ) with a buffer region of width one - half of the subvolume in each dimension . the correct density values from the top grid are placed in this buffer . because our subvolume is not centered but rather is placed in the corner of the cube of size , we wrap half of the buffer to the other side of this cube . the results are shown in figure [ fig : delvlong ] . by the appropriate anti - aliasing filter ( fig .[ fig : filtd ] for the density , fig .[ fig : filtx ] for the velocity ) .the anti - aliasing filters have eliminated the pixelization artifacts present in figure [ fig : delvlong ] .convolution across the discontinuity at the boundary of the buffer region causes some errors but these are small within the desired refinement region ( the upper left quadrant in these images).,title="fig : " ] by the appropriate anti - aliasing filter ( fig .[ fig : filtd ] for the density , fig .[ fig : filtx ] for the velocity ) .the anti - aliasing filters have eliminated the pixelization artifacts present in figure [ fig : delvlong ] .convolution across the discontinuity at the boundary of the buffer region causes some errors but these are small within the desired refinement region ( the upper left quadrant in these images).,title="fig : " ] figure [ fig : condel1 ] shows the density field and the corresponding velocity field after convolution with the anti - aliasing filter .the minimal -space filter has been used here ; there would be almost no discernible difference if the exact filter was used instead .the pixelated images of figure [ fig : delvlong ] have now been smoothed appropriately for the transfer function .smoothing over pixelization artifacts is the purpose behind anti - aliasing filters , whether they be applied in image processing or cosmology .the convolution method used here is not exact . quantifying its errors requires evaluating equation ( [ tildel1 ] ) or ( [ tildel2 ] ) using a full convolution of size .we do this in the next subsection , where we test all stages of the mesh refinement method . 
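the buffer construction described above can be sketched as a single array operation: if the (already spread) top-grid field is taken on a cube twice the subvolume on a side and centred on it, rolling it by a quarter of the cube along each axis puts the subvolume in the corner octant and wraps half of the buffer to the opposite faces, which is exactly the periodic arrangement assumed by the fft convolution. the helper below is illustrative only.

```python
import numpy as np

def place_with_buffer(top_region):
    """top_region: the spread top-grid field on a cube of side 2M centred on
    the refinement subvolume (so the outer half is the buffer).  After the
    roll, the subvolume occupies indices [0:M) along each axis and half of
    the buffer is wrapped periodically to the far faces."""
    m = top_region.shape[0] // 2
    return np.roll(top_region, shift=(-(m // 2),) * 3, axis=(0, 1, 2))

# usage:  delta_long = anti_alias(place_with_buffer(top_region), w)
```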
having computed separately the short- and long - wavelength contributions to the density and velocity ( or displacement ) fields , we combine them in figure [ fig : multi2 ] using equation ( [ del12 ] ) to give the complete multiscale fields .the four - fold increase in resolution can be seen by comparing the subvolume with the rest of the field .the effects of higher resolution are much more pronounced for the density than they are for the velocity because of the density field s steeper dependence on wavenumber .-component ) fields in a region 64 mpc across extracted from the 256 mpc realization .false colors are scaled to linear values ranging from standard deviations of the high - resolution fields .the refinement subgrid is the upper left quadrant in each case . outside of this regionthe coarse ( top ) grid values are shown to illustrate how mesh refinement increases the resolution .the density figure may be compared directly with the right - hand panel of figure [ fig : deltop].,title="fig : " ] -component ) fields in a region 64 mpc across extracted from the 256 mpc realization .false colors are scaled to linear values ranging from standard deviations of the high - resolution fields .the refinement subgrid is the upper left quadrant in each case . outside of this regionthe coarse ( top ) grid values are shown to illustrate how mesh refinement increases the resolution .the density figure may be compared directly with the right - hand panel of figure [ fig : deltop].,title="fig : " ] a test of the entire mesh refinement procedure can be made by generating the density and velocity fields at full resolution over the whole 256 mpc box .this was done by modifying the author s grafic code to replace its random numbers in -space with an input white noise field in real space .the noise field was constructed to match the upper left quadrant of the left panel of figure [ fig : wnsamp ] in a high - resolution region 32 mpc across and to match the left panel of figure [ fig : cnsamp ] everywhere else , with noise values made uniform in 1 mpc cells ( grid cells of the grid ) .thus , the white noise field was sampled as in figure [ fig : amr ] . in order to have sufficient computer memory ,grafic was run on the origin 2000 supercomputer at the national computational science alliance .grid with random numbers chosen to match the multiscale calculation .this figure gives the exact results against which to compare figure [ fig : multi2].,title="fig : " ] grid with random numbers chosen to match the multiscale calculation .this figure gives the exact results against which to compare figure [ fig : multi2].,title="fig : " ] the results of this full - resolution calculation are shown in figure [ fig : grafic1024 ] .the high - resolution fields are smooth outside of the refinement volume simply because they have been convolved with a high - resolution transfer function ; by contrast , figure [ fig : multi2 ] shows only the sampling of a low - resolution mesh outside of the subvolume .these resolution differences are not important here .rather , it is the comparison in the high - resolution subvolume that is important .evidently the density field is accurately reproduced by the multiscale algorithm while there are some visible errors in the velocity field . 
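for completeness, the final combination and the error measure used in these comparisons are both one-liners; a sketch, assuming a full-resolution reference field is available for testing:

```python
import numpy as np

def combine(delta_long, delta_short):
    """Multiscale field: anti-aliased long-wavelength part plus the subgrid
    short-wavelength part, as in equation ([del12])."""
    return delta_long + delta_short

def rms_error(field, reference):
    """RMS residual in units of the reference field's standard deviation,
    the figure of merit quoted in the text."""
    return np.sqrt(np.mean((field - reference) ** 2)) / np.std(reference)
```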
and [ fig : grafic1024 ] .false colors are scaled to standard deviations for the density errors and standard deviations for the velocity errors .each map is a mosaic of 4 panels showing the errors resulting from the two major approximations used in the multiscale computation .the right columns , labelled `` approx .t(x ) , '' show the effect of the spherical transform method for computing the transfer functions . the lower rows , labelled `` approx .w(x ) , '' show the effect of the minimal -space sampling method for computing the anti - aliasing filters .the upper - left quadrants show the errors when exact ( and computationally expensive ) transfer and anti - aliasing filters are used while the lower - right quadrants show the errors in figure [ fig : multi2 ] .there are residual errors even with exact and because of the spatial truncation of .,title="fig : " ] and [ fig : grafic1024 ] .false colors are scaled to standard deviations for the density errors and standard deviations for the velocity errors .each map is a mosaic of 4 panels showing the errors resulting from the two major approximations used in the multiscale computation .the right columns , labelled `` approx .t(x ) , '' show the effect of the spherical transform method for computing the transfer functions . the lower rows , labelled `` approx .w(x ) , '' show the effect of the minimal -space sampling method for computing the anti - aliasing filters .the upper - left quadrants show the errors when exact ( and computationally expensive ) transfer and anti - aliasing filters are used while the lower - right quadrants show the errors in figure [ fig : multi2 ] .there are residual errors even with exact and because of the spatial truncation of .,title="fig : " ] to quantify these errors , in figure [ fig : errmosaic ] we show residuals obtained by subtracting the exact maps from the multiscale maps for the 32 mpc refinement subvolume . a priori we expect three main sources of error : 1 .the use of the spherical method for fast computation of the short - wavelength transfer functions ; 2 . the use of the minimal -space sampling method for fast computation of the long - wavelength anti - aliasing filters ; and 3 .truncation of the anti - aliasing filter to perform the convolution over a subvolume instead of the entire top grid .all three effects are visible in figure [ fig : errmosaic ] . the rows and columns that are not labeled use the exact filters but are still subject to the third error , truncation of . scaled to the standard deviation of the high - resolution density field , the rms errors of density in the subvolume shown in figure [ fig : errmosaic ] are 0.04% ( upper left ) , 0.09% ( upper right ) , 0.06% ( lower left ) , and 0.10% ( lower right ) .thus , the major source of error for adaptive refinement of the density field is the use of a spherical transfer function for the short - wavelength components .the magnitude of the error is insignificant for the accuracy of cosmological simulations . 
for the velocity field ,on the other hand , the corresponding rms errors are 3.2% ( top row ) and 7.0% ( bottom row ) .clearly the anti - aliasing filter step is causing problems for the long - wavelength velocity field .the long - range coherence of the velocity ( or gravity ) field has been seen to cause difficulties for the evaluation of the long - wavelength components by anti - aliasing the coarse - grid sample .this subsection presents a solution .several attempts were made to reduce the anti - aliasing errors while continuing to use a minimal -space sampling algorithm .none of the attempts succeeded until we split the long - wavelength velocity field from the top grid into parts due separately to the mass inside and outside the refinement subvolume .the motivation for this was the idea that the latter part ( the tidal field within the subvolume caused by mass outside it ) might be smooth enough to require minimal interpolation to the subgrid . for convenience ,tidal split was done by setting outside or inside the subvolume instead of setting ; the coherence length of the density field is so small that very little difference is made either way .linearity of the velocity field ensures that when we add together the two parts either way we get the complete long - range velocity field . .this decomposition is the key to reducing the anti - aliasing velocity field errors , as described in the text.,title="fig : " ] .this decomposition is the key to reducing the anti - aliasing velocity field errors , as described in the text.,title="fig : " ] figure [ fig : tides ] shows the decomposition of the velocity field into the `` outer '' and `` inner '' parts .they were computed by zeroing the white noise field in the appropriate regions and re - running grafic .the same boundary conditions are used as in figure [ fig : delvlong ] .the character of the two parts is strikingly different within the refinement subvolume ( the upper left quadrant ) .the outer part is smooth , as expected .the inner part has a smaller coherence length and it is well - localized over the upper left quadrant .this spatial localization and coherence suggest that the truncated minimal -space filter will be much more accurate for the inner part than for the complete velocity field . for the outer part , on the other hand , we know that the discontinuities at the boundary of the buffer regions will cause appreciable errors if convolved with the same filter .the smoothness of the tidal field inside the subvolume suggests that we use a much simpler and more localized filter . .the false colors are scaled to where is the standard deviation .the upper left ( exact / exact ) and lower middle ( minimal / minimal ) maps are the same as the two rightmost maps in figure [ fig : errmosaic ] , where they were imaged with a color stretch only half as large ( .rms errors for each map ( as a percentage of the rms one - dimensional velocity ) are 3.2 , 6.8 , 2.7 ( top row , left to right ) and 3.6 , 7.0 , 3.2 ( bottom row ) .the bottom right map gives the errors for the best fast method .as in figure [ fig : errmosaic ] , there are errors even in the `` exact '' case because of the truncation of the anti - aliasing filter . 
]several different filters were tried for the outer ( tidal ) part of the velocity field .the results for three are shown in figure [ fig : vxerr3 ] .the best simple filter was found to be sharp -space filtering , which sets everywhere except the fundamental brillouin zone , where ( before the hanning filter applied at the fine mesh scale ) .this filter completely eliminates the aliasing error by eliminating the replication of the fundamental brillouin zone in -space .it is also much more localized than the exact filter , so that spurious effects from the buffer truncation in the left panel of figure [ fig : tides ] are not convolved into the subvolume .the price one pays is that it has the wrong shape at small distances compared with the exact filter , leading to a new source of errors in the rightmost columns of figure [ fig : vxerr3 ] .however , these errors are smaller than the error made with the exact filter due to the buffer truncation ( top left panel ) .a comparison of the top and bottom rows of figure [ fig : vxerr3 ] shows that the filtering of the inner part of the velocity field is a minor source of error .it is the tidal field ( the outer part ) that requires delicate handling . using a sharp -space filter for the outer part and minimal -space filter for the inner part , our final errors are 3.2% rms , the same as if we had used the computationally expensive exact filter throughout .these errors are probably small enough to be unimportant in cosmological simulations .they could be further reduced , at the expense of an increase in computer time and memory , by increasing the size of the buffer region for the top grid .the ability to refine an existing mesh opens the possibility of recursive refinement to multiple levels , offering a kind of telescopic zoom into cosmic structures . before this digital zoom lens can work , however , there are some implementation issues to face . the issues addressed in [ sec : test ] must be considered anew in light of recursive refinement . to see the issues arising in recursive refinement , consider a three - level refinement with refinement factors and . by analogy with equations ( [ xgrid ] ) and ( [ xisamp ] ), we write the grid coordinates and noise fields as where is obtained by averaging over . at each level of the hierarchythere is a different grid ( labeled by , , and , respectively ) .the variances of the white noise samples are related by .the main idea of recursive refinement is that , once we have refined to level ( where is the periodic top grid before any refinement ) , the fields computed at that level serve as top - grid fields to be refined to level . 
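before working through the recursive case, here is a minimal sketch of the sharp -space filter just adopted for the outer (tidal) part: it keeps only fourier modes inside the fundamental brillouin zone of the coarse grid and zeroes everything else. the grid and units are illustrative.

```python
import numpy as np

def sharp_k_filter(field_fine, r):
    """Zero every Fourier mode of the fine grid whose wavenumber lies outside
    the fundamental Brillouin zone of the coarse grid (|k_i| above the coarse
    Nyquist frequency), i.e. W = 1 inside the zone and W = 0 outside."""
    n = field_fine.shape[0]
    fk = np.fft.rfftn(field_fine)
    f1 = np.abs(np.fft.fftfreq(n))        # frequencies in cycles per fine cell
    f3 = np.abs(np.fft.rfftfreq(n))
    fnyq_coarse = 0.5 / r                 # coarse-grid Nyquist, same units
    keep = ((f1[:, None, None] <= fnyq_coarse) &
            (f1[None, :, None] <= fnyq_coarse) &
            (f3[None, None, :] <= fnyq_coarse))
    return np.fft.irfftn(fk * keep, s=field_fine.shape)
```

the inner part is filtered with the minimal -space filter as before, and the two contributions are then added.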
equations ( [ del12 ] ) , ( [ del12a ] ) , and ( [ tildel2 ] ) showed how that refinement works for by applying an anti - aliasing filter to the level-0 fields .for we get *t\ .\ ] ] the procedure for refinement to an arbitrary level is now clear .first we sample the fields at the preceding level and spread them to the new fine grid .then we convolve with the appropriate anti - aliasing filter .next we sample short - wavelength noise on the new fine grid and subtract the coarse - cell means so that so that the noise is zero at every higher level of the hierarchy .this noise is then convolved with the transfer function and added to the long wavelength field to give the high - resolution field .this procedure is the same for all levels of the hierarchy .however , there are some issues to consider involving the transfer functions and anti - aliasing filters .we discuss these next .as was the case for two - level refinement , exact sampling requires that we compute the upper - level sample without any filtering .that is , we should eliminate the hanning filter from both the anti - aliasing filter and the transfer function before computing all refinements except the last one at the highest degree of refinement .otherwise we would lose power present in the intermediate refinement levels .eliminating the hanning filter is straightforward for the anti - aliasing filters used in [ sec : implem ] .the minimal -space and sharp -space filters are equally easy to compute with or without a hanning filter. however , the transfer functions are an altogether different matter . in [ sec : short ] we used spherical transfer functions after concluding in [ sec : minimal ] that the coarse -space sampling of the minimal method would give significant errors for the short - wavelength fields .unfortunately , the unfiltered density transfer function is anisotropic , as was shown in figure [ fig : transd3 ]. it also has a higher peak value than the filtered transfer function in figure [ fig : transf ] .computing the exact transfer function is unacceptably costly , with the operations count scaling as the sixth power of the total refinement factor ( i.e. the product of the individual refinement factors for each level ) .thus , we are forced to reconsider the spherical and minimal sampling methods for the transfer functions .the unfiltered density is nonspherical because -space is sampled in a cube instead of a sphere . besides creating anisotropy, this sampling increases the small - scale power . as an alternative, we might use the spherical method of equation ( [ transkx3 ] ) with a hanning filter but with a maximum spatial frequency ( i.e. the cutoff for the hanning filter ) larger than the nyquist frequency for grid spacing .this is easily done by increasing the nyquist frequency by a factor to include the high - frequency waves in the brillouin zone corners with . for and the power spectrum parameters used before, this method reproduces the correct peak value of the density transfer function .however , the spherical method can not reproduce the anisotropy evident in figure [ fig : transd3 ] , which is important on small scales .( for multilevel refinement , exact sampling requires that we use the anisotropic transfer functions at all but the finest refinement .the refinement process itself magnifies cubical pixels .this anisotropy is cancelled by summing over the contributions from different levels of the hierarchy only if we use anisotropic filters . 
) on the other hand , the velocity transfer function is an integral over the density transfer function and therefore is much smoother .errors at small due to the neglect of anisotropy are much less important for the velocity field .thus , we might approximate the correct , anisotropic transfer function for the radial velocity by the spherical one .-space filters as described in the text .the density errors have been scaled to and the velocity errors to .the minimal method is accurate at small spatial scales ( density ) while the spherical method is accurate at large scales ( velocity ) . ]based on these considerations , we reconsider both the spherical and minimal -space sampling methods for approximating the unfiltered transfer functions .figure [ fig : dvnohan ] shows the residuals from the exact , anisotropic transfer functions .comparison with figure [ fig : errmosaic ] shows that removal of the hanning filter adds no significant errors provided that the minimal method is used for the density and the spherical method is used for the velocity .the spherical method works poorly for the density field because it neglects the small - scale anisotropy .the minimal method works poorly for the velocity field because it assumes periodicity on a scale twice the subgrid extent .the minimal sampling method works better than expected for the density transfer function .the reason for its success is that for cdm - like power spectra the transfer function is dominated by large spatial frequencies for which the coarse sampling of fourier space introduces little error in the discrete fourier transform .for the velocity transfer function , long - wavelength contributions dominate and the small - scale errors of the spherical method cause little harm . the next issue to consider is the treatment of tidal fields from coarser levels of the grid hierarchy during the anti - aliasing step . as we found in [sec : fixv ] , contributions from fluctuations inside the subvolume can be filtered using the minimal -space sampling method , but contributions from tides generated outside the subvolume must be convolved with a sharp -space filter .this requires a clear separation of `` inner '' and `` outer . ''care is needed in the case of a multilevel hierarchy .consider , for example , refinement of the 256 mpc top grid shown in figure [ fig : deltop ] in two stages to produce the four - fold refinement of the 32 mpc level-2 subvolume in figure [ fig : multi2 ] .the level-1 subvolume may have any size between 64 and 128 mpc .( each refinement must be over a subvolume no more than half the size of the upper - level volume in order to accomodate the buffer region used in the anti - aliasing step . ) when computing the long - wavelength velocity contributions for the level-2 grid , the level-1 fields must be computed with a tidal volume of 32 mpc and not the size of the level-1 subvolume .moreover , the same is true of the level-0 fields .correct treatment of the tidal fields requires that _ all _ upper levels be sampled with inside ( or outside ) the final high - resolution subvolume .this requirement implies that a chain of refinements must be performed for every level of the hierarchy . 
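for reference, the sharp k-space filter used for these tidal contributions is simple to state in code. the numpy sketch below assumes a cubic periodic grid and zeroes every fourier mode outside the fundamental brillouin zone of the coarse grid; it illustrates only the action of the filter and omits the doubled-grid isolated convolution and buffer handling that the actual refinement requires.

```python
import numpy as np

def sharp_k_filter(field_fine, r):
    """Sharp k-space (anti-aliasing) filter for a field on a grid refined
    by a factor r per dimension.

    Modes inside the fundamental Brillouin zone of the coarse grid,
    |k_i| <= pi / dx_coarse (mode numbers |m_i| <= n / (2 r)), are kept with
    unit weight; all higher modes are set to zero.  Periodic-FFT sketch
    only, for a cubic grid of n**3 fine cells.
    """
    n = field_fine.shape[0]
    fk = np.fft.rfftn(field_fine)
    kx = np.fft.fftfreq(n, d=1.0 / n)     # integer mode numbers, full axis
    kz = np.fft.rfftfreq(n, d=1.0 / n)    # non-negative modes, last axis
    kcut = n // (2 * r)                   # coarse-grid Nyquist mode
    keep = ((np.abs(kx)[:, None, None] <= kcut) &
            (np.abs(kx)[None, :, None] <= kcut) &
            (kz[None, None, :] <= kcut))
    return np.fft.irfftn(fk * keep, s=field_fine.shape)
```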
computing a level-1 refinement requires only one application of convolution plus small - scale noise .computing a level-2 refinement requires two applications : one to get the level-1 samples with the correct tidal volume and a second to get the level-2 results .thus , computing all three levels ( 0 , 1 , and 2 ) requires three runs of the periodic grid routine in a box of size 256 mpc ( with no tides , with tides for the level-1 subvolume , and with tides for the level-2 subvolume ) plus three runs of the refinement algorithm .the process of successive refinement is illustrated in figure [ fig : tide2 ] for the computation of the level-2 velocity field .the top row is the same as figure [ fig : tides ] except that the buffer has been unwrapped to surround the volume .however , instead of being prepared for a refinement , these top - level fields are prepared here for a refinement .they are convolved with the appropriate anti - aliasing filters and short wavelength noise is added to give level 1 , shown in the lower row .the level-1 tidal fields are resolved better and do not suffer from aliasing at the resolution shown ( 0.5 mpc grid spacing ) .these fields provide the input to a final refinement to produce the level-2 fields . comparing the resulting level-2 fields with figure [ fig : grafic1024 ] , we find that the magnitude of the errors depends on the size of the level-1 grid .for the case shown in figure [ fig : tide2 ] , with a 64 mpc grid , the rms density and velocity errors are 0.02% and 3.9% , respectively .when the level-1 grid size is increased to its maximum value of 128 mpc , these errors drop to 0.0094% and 3.3% , respectively .these compare with the errors for a single refinement , 0.10% and 3.2% , respectively .the density errors have decreased with two refinements compared with one refinement mainly because the minimal -space sampling of the anti - aliasing filter is coarser ( hence less accurate ) for .for the velocity field the errors are dominated not by the coarse sampling errors in but rather in the errors due to its truncation .in other words , it is the discontinuities at the edge of the buffer region ( shown in figure [ fig : tides ] ) that cause problems .however , when the top grid is refined by in a subvolume of half its size , the doubling used for the convolution has a fortunate side effect : the buffer region fills out the volume so that the entire top grid is included . in this special case , which applies to our 128 mpc level-1 grid , there are no errors from periodic boundary conditions and the minimal -space filter is exact .the errors arise almost exclusively from the second level of refinement .thus , the two - level refinement in this case has the same velocity field errors as a single refinement .the convolution method presented in this paper lends itself to a variety of tricks that can be done with sampling of gaussian random noise .these need not always involve adaptive mesh refinement and convolution with isolated boundary conditions .for example , in figure [ fig : grafic1024 ] we achieved multiscale initial conditions using a single high - resolution grid but with the white noise sampled more finely within a subvolume .this procedure has the advantage of allowing multiscale fields to be computed free from aliasing errors .although it is limited by computer memory constraints , this method is the preferred choice for producing multiscale fields when computer memory is not a limitation . 
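the single-grid trick is easy to sketch by reusing the refine_white_noise helper from the earlier fragment: sample coarse white noise everywhere, replicate it onto the fine grid, and replace the noise inside the chosen subvolume by a refined sample whose coarse-cell averages are unchanged. the data layout below (a tuple of slices selecting coarse cells) and the normalization are illustrative assumptions, not the grafic1 conventions.

```python
import numpy as np

def multiscale_noise(n_coarse, r, subvol, sigma_coarse, rng=None):
    """White noise on a single fine grid, sampled more finely only inside
    a subvolume (a sketch of the single-grid multiscale trick).

    subvol is a tuple of three slices with explicit integer bounds,
    selecting coarse cells.  Outside it the noise is simply the coarse
    sample replicated onto fine cells; inside it the coarse sample is
    refined with refine_white_noise() from the earlier sketch, so the
    large-scale sample is preserved.
    """
    if rng is None:
        rng = np.random.default_rng()
    xi_c = rng.normal(0.0, sigma_coarse, size=(n_coarse,) * 3)
    # Replicate the coarse noise everywhere on the fine grid.
    fine = np.repeat(np.repeat(np.repeat(xi_c, r, axis=0), r, axis=1),
                     r, axis=2)
    # Refine only the selected subvolume; coarse-cell averages there
    # are unchanged by construction.
    sigma_fine = sigma_coarse * r ** 1.5
    fine_sub = refine_white_noise(xi_c[subvol], r, sigma_fine, rng)
    fx, fy, fz = (slice(s.start * r, s.stop * r) for s in subvol)
    fine[fx, fy, fz] = fine_sub
    return fine
```

for example, multiscale_noise(64, 4, (slice(16, 32),) * 3, 1.0) refines coarse cells 16-31 of a 64^3 sample by a factor of four in each dimension while leaving the noise elsewhere at coarse resolution.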
our white noise sampling and convolution methods offer another way to change the dynamic range of a simulation while retaining the sampling of a fixed set of cosmic structures .instead of refining to small scales , one may change the large scale structure in the simulation by expanding or shrinking the top grid size .this offers a simple and useful way , for example , to add or subtract long waves in order to examine their effect on small scale structure .this brief section presents the method for expanding or shrinking a simulation .one way to implement this idea , which we will not explore further , is to take an existing small - scale simulation to provide the high - resolution field as in equation ( [ del12 ] ) . the small volume , originally with periodic boundary conditions , is then embedded with isolated boundary conditions in a new top grid field .the white noise sample used to generate the existing small - scale simulation is taken to be and a new sample is created for the top grid .this procedure is the same as that described in [ sec : implem ] except that there the top grid sample was given and the subgrid sample was added . hereit is the other way around .the implementation proceeds as in [ sec : implem ] .it is straightforward and need not be elaborated .an alternative method is to change the size of an existing grid while retaining a fixed grid spacing without refinement .this method is easy to implement because no aliasing occurs if the grid is not refined .moreover , periodic boundary conditions are used for all convolutions .we simply change the scale of periodicity .this can be achieved using a modified version of the grafic code called grafic1 which is being distributed along with the mesh - refinement version grafic2 .the procedure is as follows .first , identify a volume ( perhaps a subvolume of an existing simulation ) whose size is to be changed .grafic1 should be run so as to output the white noise field used in constructing the initial conditions .note that the spatial mean noise level vanishes so that the mean density matches that of the background cosmological model .this is a consequence of periodic boundary conditions .the white noise field is now expanded with the addition of new white noise if one wishes to expand the box .if one wishes to shrink the box instead , then some of the noise field is excised .the amplitude of the white noise must be changed according to equation ( [ wiener3 ] ) .for example , if the grid size is doubled in each dimension , the existing sample must be multiplied by and the added noise must have the same variance .these manipulations are easy to perform in real space .the absence of any correlations for the white noise makes the treatment of boundary conditions very simple .note that must vanish on the final grid because of periodic boundary conditions .however , if the volume has been expanded by a factor , the mean value within the original volume can be changed by adding a constant , e.g. 
a normal deviate with variance .finally , the new white noise field is now given as input to a second run of grafic1 , which calculates the density and velocity fields using exact transfer functions .mpc has been extracted to create a new volume with periodic boundary conditions and with the same structures as the sample on the left.,title="fig : " ] mpc has been extracted to create a new volume with periodic boundary conditions and with the same structures as the sample on the left.,title="fig : " ] this procedure is illustrated in figure [ fig : shrinktop ] .one sees that the structures in the left original volume are reproduced in the new sample but that they are modified at the edges by the requirement of periodic boundary conditions .this affects the structure to within a distance of a few coherence lengths .the hot dark matter model ( with and ) was chosen for this test so that the coherence length would be interestingly large .one also sees that the initial conditions codes do not require the volume to be a cube nor the axis lengths to be powers of 2 .grafic1 and grafic2 allow for arbitrary parallelpipeds as long as there is at least one factor of two in each axis length .we have presented an algorithm for adaptive mesh refinement of gaussian random fields .the algorithm provides appropriate initial conditions for multiscale cosmological simulations . aside from small numerical errors ,the density and velocity fields at each refinement level are exact samples of gaussian random fields with the correct correlation functions including all contributions from tides generated at lower - resolution refinement levels .an arbitrary number of refinement levels is allowed in principle , enabling cosmological simulations to be performed which have the correct sampling of fluctuations over arbitrarily large dynamic ranges of length and mass .two convolutions are performed per refinement level for each field component .these convolutions are performed using ffts with the grid doubled in each dimension .thus , the computer memory and time requirements for adaptive mesh refinement are significantly greater than for sampling of gaussian random fields with a single grid .one advantage of the refinement algorithm is that the dynamic range in mass is not limited by the size of the largest fft that can fit into memory .also , it automatically provides the correct initial conditions for multiscale simulations such as that of . adaptive mesh refinement of gaussian random fields is more complicated than refinement of , for instance , the fluid variables in a hydrodynamics solver .the reason for this is that gaussian random fields have long - range correlations .correct refinement within a subvolume can not be done independently of the lower resolution fields outside that subvolume . when the resolution is increased by decreasing the pixel size , a given sample suffers from aliasing .correct sampling requires convolution by an anti - aliasing filter .short - wavelength contributions are then provided by convolution of white noise with the appropriate transfer function .due mainly to imperfect anti - aliasing , numerical errors prevent one from achieving perfect sampling of multiscale initial conditions .however , with careful analysis of the source of errors primarily from tides generated outside the subvolume we have reduced these errors to an acceptable level . 
in testing with a realistic cosmological model ,the rms errors for a four - fold refinement were 0.1% or smaller for the density and 3% for the velocity .we showed that the most accurate results are achieved by refinement factors of two , with each successive subvolume occupying one - eighth the volume ( half the linear extent ) of the parent mesh . for a single refinement level ,the anti - aliasing errors vanish in this case .further testing is advised before the code is run to more than 4 refinement levels or a total refinement greater than 16 .also , some of the same numerical issues ( e.g. refinement of tidal fields ) identified here may arise in the gravity solvers used by nonlinear evolution codes .careful testing of both the initial conditions and the nonlinear simulations codes is advised before workers apply them to dynamic ranges in mass exceeding .unfortunately , it is very difficult to provide exact standards for comparison with grid hierarchies of such large dynamic range .the algorithm described in this paper has been implemented in fortran-77 and released in a publically available code package that can be downloaded from http://arcturus.mit.edu / grafic/. the package has three main codes : 1 .lingers is an accurate linear general relativity solver that calculates transfer functions at a range of redshifts .2 . grafic1 computes single - grid gaussian random field samples with periodic boundary conditions .3 . grafic2 refines gaussian random fields starting with those produced by grafic1 .it may be run repeatedly to recursively refine gaussian random fields to arbitrary refinement levels .lingers is a modification of the linger_syn code from the cosmics package .it produces output at a range of times enabling accurate interpolation to the starting redshift of the nonlinear cosmological simulation .cmbfast could be used instead , although the treatments of normalization and units are different for the two codes .grafic1 is a modification of the grafic code from cosmics that incorporates exact transfer functions for both cdm and baryons at arbitrary redshift from lingers and uses white noise sampled in real space as the starting point for gaussian random fields . as demonstrated in [ sec : tricks ] , sampling of white noise enables one to change the size of the computational volume , or to embed a given realization into a larger volume with different resolution , simply by modifying the noise file . grafic1 and grafic2 also have optional half - mesh cell offsets for the cdm or baryon grids .grafic2 is the multiscale adaptive mesh refinement code .it requires substantial computing requirements for large grids , mainly because of the need to double the extent of each dimension . thus , suppose that one has a top grid and wishes to double the resolution in one - eighth of the volume . computing the sample with grafic1 requires 64 mb of memory . refining it with grafic2requires 1.02 gb of memory and 3.5 gb of scratch disk .the cpu time is also much larger , but is still far less than the time required for the nonlinear evolution .fortunately , these computing resources are now available on desktop machines .much larger grids are possible with parallel supercomputers .i thank my colleagues in the grand challenge cosmology consortium j. p. ostriker , m. l. norman , and l. hernquist for encouragement in this work .special thanks are given to l. 
hernquist and the harvard - smithsonian center for astrophysics for hosting my sabbatical during this work .supercomputer time was provided by the national computational science alliance at the university of illinois at urbana - champaign .financial support was provided by nsf grants aci-9619019 and ast-9803137 .99 abel , t. , bryan , g. l. , & norman , m. l. 2000 , , 540 , 39 bertschinger , e. 1995 , cosmics software release ( astro - ph/9506070 ) bertschinger , e. 1998 , , 36 , 599 colberg , j. m. et al ., , 319 , 209 fukushige , t. & makino , j. 1997 , , 477 , l9 ghigna , s. , moore , b. , governato , f. , lake , g. , quinn , t. , & stadel , j. 2000 , , 544 , 616 hoffman , y. & ribak , e. 1991 , , 380 , l5 katz , n. , quinn , t. , bertschinger , e. , & gelb , j. 1994 , , 270 , l1 ma , c .- p . & bertschinger , e. 1995 , , 455 , 7 pen , u .- l .1997 , , 490 , l127 salmon , j. 1996 , , 460 , 59 seljak , u. & zaldarriaga , m. 1996 , , 469 , 437
this paper describes the generation of initial conditions for numerical simulations in cosmology with multiple levels of resolution, or multiscale simulations. we present the theory of adaptive mesh refinement of gaussian random fields, followed by the implementation and testing of grafic2, a computer code package that performs this refinement. the package is available to the computational cosmology community at http://arcturus.mit.edu/grafic/ or by email from the author.
in just a few years , peer - to - peer content distribution has come to generate a significant portion of the total internet traffic .the widespread adoption of such protocols for delivering large data volumes in a global scale is arguably due to their scalability and robustness properties . understanding the mechanisms that affect the performance of such protocols and overcoming the existing shortcomingswill ensure the continued success of 2p data delivery . to that end , this paper presents a detailed experimental study of the peer selection strategy in bittorrent , one of the most popular 2p content distribution protocols .recently , researchers have formulated analytical models for the problem of efficient data exchange among peers , and measurement studies using actual download traces have attempted to shed light into the success of bittorrent .however , certain properties of these studies have interfered with their accurate evaluation of the dynamics of algorithms and their impact on overall system performance .for example , analytical models can provide valuable insight , but they are typically based on unrealistic assumptions , such as giving all participants global system knowledge ; actual download traces may differ substantially from the their predictions .furthermore , most measurement studies have evaluated peers connected to public _ torrents_bittorrent download sessions .they provide detailed data about the overall behavior of deployed bittorrent systems , however , the inherent limitations in collecting per - peer information in a public torrent obstructs the understanding of individual peer decisions during the download .et al . _ recently attempted to evaluate those decisions , but only from the viewpoint of a single peer . to overcome these limitations, we conduct extensive experiments on a private testbed and collect data from all peers in a controlled environment .in particular , we focus on the so - called _ choking algorithm _ for peer selection , which may be the driving factor behind bittorrent s high performance .this approach allows us to examine the behavior of individual peers under a microscope and observe their decisions and interactions during the download .our main contribution is to demonstrate that the choking algorithm facilitates the formation of clusters of similar - bandwidth peers , ensures effective sharing incentives by rewarding peers who contribute data to the system , and maintains high upload utilization for the majority of the download duration .these properties have been hinted at in previous work ; this study constitutes their first experimental validation .we also show that , if the seed is underprovisioned , all peers tend to complete their download around the same time , independently of how much they upload .clusters are no longer formed , and , interestingly , high - capacity peers assist the seed in disseminating data to low - capacity ones , resulting in everyone maintaining high upload utilization .finally , based on our observations , we provide guidelines for seed provisioning by content providers , and discuss a tracker protocol extension that addresses an identified limitation of the protocol , namely the low upload utilization at the beginning of a torrent s lifetime . 
the rest of this paper is organized as follows .section [ background ] provides a description of the bittorrent protocol and an explanation of the choking algorithm , as implemented in the official bittorrent client .section [ methodology ] describes our experimental methodology and the rationale behind the experiments , while section [ results ] presents our results .section [ discussion ] discusses seed provisioning guidelines and the proposed tracker protocol extension .lastly , section [ related ] sets this study in the context of related work , and section [ conclusion ] concludes .bittorrent is a peer - to - peer content distribution protocol that scales well with the number of participating peers .a system capitalizes on the upload capacity of each peer in order to increase global system capacity as the number of peers increases .a major factor behind s success is a built - in incentives mechanism , implemented by its _ choking algorithm _ , which is designed to encourage peers to contribute data .the rest of this section introduces the terminology used in the paper and describes s operation in detail , with a particular focus on the choking algorithm .the terminology used in the community is not standardized . for the sake of clarity , we define here the terms used throughout this paper . ** torrent*. a _ torrent _ is the set of peers cooperating to download the same content using the protocol . * * tracker*. the _ tracker _ is the only centralized component of the system .it is not involved in the actual distribution of the content , but it keeps track of all peers currently participating in the download , and it collects statistics . * * pieces and blocks*. content transferred using is split into _ pieces _ , with each piece being split into multiple _although blocks are the transmission unit , peers can only share complete pieces with others . ** metainfo file*. the _ metainfo file _ , also called a torrent file , contains all the information necessary to download the content and includes the number of pieces , sha-1 hashes for all the pieces that are used to verify received data , and the ip address and port number of the tracker .* * interested and choked*. we say that peer is _ interested _ in peer when has pieces of the content that does not have . conversely , peer is _ not interested _ in peer when only has a subset of the pieces of .we also say that peer is _ choked _ by peer when decides not to send any data to .conversely , peer is _ unchoked _ by peer when is willing to send data to .note that this does not necessarily mean that peer is uploading data to , but rather that will upload to if issues a data request . ** peer set*. each peer maintains a list of other peers to which it has open tcp connections .we call this list the _ peer set _ , and it is also known as the neighbor set . * * local and remote peers*. when describing the choking algorithm , we take the viewpoint of a single peer , which we call the _ local peer_. we refer to the peers in the local peer s peer set as _ remote peers_. * * leecher and seed*. a peer can be in one of two states : the _ leecher _ state , when it is still downloading pieces of the content , and the _ seed _ state , when it has all the pieces and is sharing them with others . * * initial seed*. the _ initial seed _ is the first peer that offers the content for download. there can be more than one initial seeds . in this paper , however , we only consider the case of a single initial seed . ** rarest - first algorithm*. 
the _ rarest - first algorithm _ is the piece selection strategy used by clients .it is also known as the _ local _ rarest - first algorithm since it bases the selection on the available information locally at each peer .peers independently maintain a list of the pieces each of their remote peers has and build a _ rarest - pieces set _ containing the indices of the pieces with the least number of copies .this set is updated every time a remote peer announces that it acquired a new piece , and is used by the local peer to select the next piece to download . * * choking algorithm*. the _ choking algorithm _ , also known as the _ tit - for - tat algorithm _ , is the peer selection strategy used by clients .we provide a detailed description of this algorithm in section [ choking_algorithm ] . * * official client*. the official client , also known as the _ mainline _ client , was the first implementation and was initially developed by bram cohen , bittorrent s creator . prior to distribution ,the content is divided into multiple pieces , and each piece into multiple blocks .the metainfo file is then created by the content provider . to join a torrent ,a peer retrieves the metainfo file out of band , usually from a well - known website , and contacts the tracker that responds with a peer set of randomly selected peers , possibly including both seeds and leechers . then starts contacting peers in this set and requesting different pieces of the content .most clients nowadays use the rarest - first algorithm for piece selection . in this manner, peer selects the next piece to download from its rarest - pieces set .a local peer determines which pieces its remote peers have based on _ bitfield _ messages exchanged upon establishing new connections , which contain a list of all the pieces a peer has .peers also send _ have _ messages to everyone in their peer set when they successfully receive and verify a new piece .a peer uses the choking algorithm to decide which peers to exchange data with .the algorithm generally gives preference to those peers who upload data at high rates .once per _ rechoke period _ , typically set to ten seconds , a peer re - calculates the data receiving rates from all peers in its peer set .it then selects the fastest ones , a fixed number of them , and uploads only to those for the duration of the period . in bittorrent parlance ,a peer unchokes the fastest uploaders via a _regular unchoke _ , and chokes all the rest .in addition , it unchokes a randomly selected peer via a so - called _ optimistic unchoke_. 
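in outline, a single rechoke round in leecher state can be sketched as follows. this is a simplification for illustration only; it ignores, among other things, the snubbed-peer rule and the exact optimistic-unchoke schedule, which are detailed below, and the peer attributes used here are hypothetical rather than the official client's data structures.

```python
import random

REGULAR_SLOTS = 4   # parallel uploads; the official client ties this number
                    # to the configured upload limit

def leecher_rechoke(peers, optimistic_round):
    """One simplified rechoke round in leecher state (illustrative only).

    peers: objects with .interested and .rate_from (the rate at which the
    local peer has been receiving data from them).  Returns the set of
    peers to unchoke for the next rechoke period.
    """
    interested = [p for p in peers if p.interested]
    # Regular unchokes: the fastest uploaders to the local peer.
    interested.sort(key=lambda p: p.rate_from, reverse=True)
    unchoked = set(interested[:REGULAR_SLOTS])
    # Periodically (every third round in the official client) add one
    # randomly chosen interested peer: the optimistic unchoke, used both to
    # discover better partners and to bootstrap newcomers that have nothing
    # to reciprocate with yet.
    if optimistic_round:
        remaining = [p for p in interested if p not in unchoked]
        if remaining:
            unchoked.add(random.choice(remaining))
    return unchoked
```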
the logic behind this is explained in detail in section [ choking_algorithm ] .seeds , who do not need to download any pieces , follow a different unchoke strategy .most implementations dictate that seeds unchoke those leechers that _ download _ data at the highest rates , in order to better utilize seed capacity in disseminating the content as efficiently as possible .however , the official bittorrent client recently introduced a modified unchoke algorithm in seed state , in version 4.0.0 .we perform the first detailed experimental evaluation of this modified algorithm and show that it enables a more uniform utilization of the seed bandwidth across all leechers .we now describe the choking algorithm in detail as implemented in the official client , version 4.0.2 .the algorithm was initially introduced to foster a high level of data exchange reciprocation and is one of the main factors behind bittorrent s fairness model : peers that contribute data to others at high rates should receive high download throughput , and _ free - riders _ , peers that do not upload , should be penalized by being unable to achieve high download rates .it is worth noting that , although the algorithm has been shown to perform well in a variety of scenarios , it has recently been found that it does not completely eliminate free - riding . in particular , a peer may improve its download rates by downloading from seeds , acquiring a large view of the peers in the torrent , or benefiting from many optimistic unchokes .we discuss this issue further in section [ sec : provi - sharing - incentives ] . as we noted earlier, the choking algorithm is different for leechers and seeds .when in leecher state , a peer unchokes a fixed number of remote peers . unless specified explicitly by the user , this number of parallel uploadsis determined by s upload bandwidth .for example , for an upload limit greater than or equal to 15 kb / s but less than 42 kb / s this number is set to 4 . for generality , in the followingwe assume that the number of parallel uploads is set to . in leecher state , the choking algorithmis executed periodically at every rechoke period , i.e. , every ten seconds , and in addition , whenever an unchoked and interested peer leaves the peer set , or whenever an unchoked peer switches its interest state . as a result , the time interval between two executions of the algorithm can sometime be shorter than a rechoke period . every time the choking algorithm is executed, we say that a new _ round _ starts , and the following steps are taken . 1 .[ step : l1 ] the local peer orders interested remote leechers according to the rates at which it received data from them , and ignores leechers that have not sent any data in the last thirty seconds .these so - called _ snubbed _ peers are excluded from consideration in order to guarantee that only contributing peers are unchoked .[ step : l2 ] the leechers with the highest rates are unchoked via a_ regular unchoke_. 3 .[ step : l3 ] in addition , every three rounds , an interested candidate peer is chosen _ at random _ to be unchoked via an _optimistic unchoke_. if this peer is not unchoked via a regular unchoke , it is unchoked via an optimistic unchoke and the round completes .if this peer is already unchoked via a regular unchoke , a new candidate peer is chosen _ at random_. 
1 .[ step : l3a ] if the candidate peer is interested in the local peer , it is unchoked via an optimistic unchoke and the round completes .[ step : l3b ] otherwise , the candidate peer is unchoked anyway , and step [ step : l3a ] is repeated with a new randomly chosen candidate . the round completes when an interested peer is found or when there are no more peers to choose , whichever comes first .although more than peers can be unchoked by the algorithm , only interested peers can be unchoked in the same round .unchoking non - interested peers improves the reaction time in case one of those peers becomes interested during the following rechoke period ; data transfer can begin right away without waiting for the choking algorithm to be executed .furthermore , optimistic unchokes serve two major purposes .they function as a resource discovery mechanism to continually evaluate the upload bandwidth of peers in the peer set in an effort to discover better partners .they also enable new peers that do not have any pieces yet to bootstrap into the torrent by giving them some initial pieces without requiring any reciprocation . in the seed state , older versions of the official client , as well as many current versions of other clients , perform the same steps as in leecher state , with the only difference being that the ordering in step [ step : l1 ]is based on data transmission rates from the seed , rather than to it .consequently , peers with high download capacity are favored independently of their contribution to the torrent , a fact that could be exploited by free - riders . in version 4.0.0 ,the official client introduced a modified choking algorithm in seed state . according to this modified algorithm, a seed performs the same fixed number of parallel uploads as in leecher state , but with different peer selection criteria .the algorithm is executed periodically at every rechoke period , i.e. , every ten seconds , and in addition , whenever an unchoked and interested peer leaves the peer set , or whenever an unchoked peer switches its interest state .every time the choking algorithm is executed , a new round starts , and the following steps are taken . 1 . [ step : s1 ] the local peer orders the interested and _ unchoked _ remote leechers according to the time it has sent them an unchoke message , most recently unchoked peers first .this is the initial time the local peer had unchoked them ; if the local peer keeps uploading to them for more than one rechoke periods , it does not send them additional unchoke messages .this step only considers leechers to which an unchoke message has been sent recently ( less than twenty seconds ago ) or leechers that have pending requests for blocks ( to ensure that they get the requested data as soon as possible ) . in case of a tie , leechers are ordered according to their download rates from the seed , fastest ones first , just like the old algorithm did .note that , as leechers do not upload anything to seeds , the notion of snubbed peers does not exist in seed state .[ step : s2 ] the number of optimistic unchokes to perform _ over the duration of the next three rechoke periods _ ,i.e. , thirty seconds , is determined using a heuristic .these optimistic unchokes are uniformly spread over this duration , performing optimistic unchokes per rechoke period . 
due to rounding issues, can be different for each of the three rechoke periods .for instance , when the number of parallel uploads is 4 , the heuristic dictates that only two optimistic unchokes be performed in the entire thirty - second period .thus , one optimistic unchoke is performed during each of the first two periods and none during the last .[ step : s3 ] at each rechoke period , the first leechers in the list from step [ step : s1 ] are unchoked via regular unchokes .step [ step : s1 ] includes the key feature of the modified algorithm in seed state . on the one hand , leechers are no longer unchoked based on their observed download rates from the seed , but mainly based on the last time an unchoke message was sent to them .thus , after a seed has been sending data to a leecher for six rechoke periods ( when the number of parallel uploads is 4 ) , it will stop doing so and select another leecher to serve . in this manner ,a seed will provide service to all leechers sooner or later , preventing any single leecher from monopolizing it .on the other hand , according to the official client s version notes , this modified choking algorithm in seed state also aims to reduce the amount of duplicate data a seed needs to upload before it has pushed out a full copy of the content into the torrent .it strives to achieve that by keeping leechers unchoked for six rechoke periods , in order to prevent high leecher turnover from resulting in the transmission of the same pieces to different leechers .interestingly , the most recent version of the official client has reverted back to the original choking algorithm in seed state .although the modified version of the algorithm we described here is more robust to modified free - riding implementations , it might be less efficient in torrents with compliant peers .since the company behind the official client has been targeting legal content distribution , where client alteration would arguably be harder , it may aim to optimize the implementation for this scenario .some other implementations have included a _ super - seeding _ feature with similar goals , in particular to assist a service provider with limited upload capacity in seeding a large torrent .a seed with this feature masquerades as a normal leecher with no data .as other peers connect to it , it will advertise a piece that it has never uploaded before or that is very rare .after uploading this piece to a given leecher , the seed will not advertise any new pieces to that leecher until it sees another peer s have message for the piece , indicating that the leecher has indeed shared the piece with others .this algorithm has anecdotally resulted in much higher seeding efficiencies by reducing the amount of duplicate pieces uploaded by the seed , and limiting the amount of data sent to peers who do not contribute . 
a single seed running in this mode is rumored to be able to upload a full copy of the content after only uploading 105% of the content data volume .since the official client has not implemented this feature , our experiments do not measure its effect on the efficiency of the initial seed .we instead measure the number of duplicate pieces uploaded when employing the modified choking algorithm in seed state .all our experiments were performed in private torrents on the planetlab experimental platform .planetlab s convenient tools for collecting measurements from geographically dispersed clients greatly facilitated our work .for instance , in order to deploy and launch bittorrent clients on planetlab nodes , we utilize the _ pssh _ tools .planetlab nodes are typically not behind nats , so each peer in our experiments can be uniquely identified by its ip address .we chose to experiment on private torrents , as opposed to simulation , in order to examine both individual peer decisions and the resulting impact on the torrent .although simulation would have enabled us to run many more experiments , it would have been a difficult task to accurately model the dynamics of a bittorrent system .private torrents allow us to observe and record the behavior of all peers in real scenarios .we can also vary experimental parameters , such as peers upload rate limits , which helps us distinguish which factors are responsible for the observed behavior .we performed experiments with the different torrent configurations described in section [ configurations ] .there are no agreed - upon parameters in the bittorrent community , so we set our experiment parameters empirically and based on current best practice . during each experiment , leechers download a single file of 113 mb that consists of 453 pieces , 256 kb each . all our experiments were performed with peers that do not change their available upload bandwidth during the download , or disconnect before receiving a complete copy of the file . there is a single initial seed , and in all experiments , all leechers join the torrent at the same time , emulating a flash crowd scenario .although the behavior of the system might be different with other peer arrival patterns , we are interested in examining peer decisions under circumstances of high load .the initial seed stays connected to the torrent for the duration of the experiment , while leechers disconnect immediately after completing their download .we consider both a well - provisioned and an underprovisioned initial seed .seed upload capacity has already been shown to be critical to the performance at the beginning of a torrent s lifetime , before the seed has uploaded a complete copy of the content . however , the impact of an initial seed with limited capacity on system properties is not clear . 
nevertheless , appropriate provisioning of initial seeds is of critical importance to content providers .we attempt to sketch recommendations on this issue in section [ seed - provisioning ] based on our experimental results .the available bandwidth of planetlab nodes is relatively high for typical torrents .we define upload limits on the leechers and seed to model realistic scenarios , but _ do not define any download limits _ , nor do we attempt to match our upload limits to inherent limitations of planetlab nodes .thus , we might end up defining a high upload limit on a node that can not possibly send data that fast , due to network or other problems .our results include the effects of local network fluctuations , but we believe that the conclusions we draw are not predicated on such effects .our experiments utilize 41 planetlab nodes , of which 2 are located in canada and the rest are spread across the continental united states .we conduct all runs of an experiment consecutively in time on the same set of machines .we collect our measurements using a modified version of the official bittorrent implementation , instrumented to record interesting events and peer interactions .our instrumented client , which is based on version 4.0.2 of the official client ( released in may 2005 ) , is publicly available for download .we collect a log of each message sent or received along with the content of the message , a log of each state change , the rate estimates for remote peers used by the choking algorithm , and other relevant information , such as the internal states of the choking algorithm .otherwise specified , we run our experiments with the default client parameters .we experimented with several torrent configurations .the parameters we changed from configuration to configuration are the upload rate limits for the seed and leechers and the upload bandwidth distribution of leechers . as mentioned before , leecher download bandwidth is never artificially limited ,although local network characteristics may impose an effective upload or download limit .we ran experiments with the following configurations . * _ two - class_. leechers are divided into two categories with different upload limits .this configuration enables us to observe system behavior in highly bipolar scenarios .our experiments involve similar numbers of slow peers , with 20 kb / s upload limit , and fast peers , with 200 kb / s upload limit . * _ three - class_. leechers are divided into three categories with different upload limits .this configuration helps us identify the qualitative behavioral differences of more distinct classes of peers .our experiments involve similar numbers of slow peers , with 20 kb / s upload limit ; medium peers , with 50 kb / s upload limit ; and fast peers , with 200 kb / s upload limit .* _ uniform - increase_. 
upload limits are defined on leechers according to a uniform distribution , with a small 5 kb / s step .the slowest leecher has an upload limit of 20 kb / s , the second slowest a limit of 25 kb / s , and so on .this configuration provides insight into the behavior of torrents with more uniform distribution of peer bandwidth .our graphs in section [ results ] correspond to experiments run with the three - class configuration , but the conclusions we draw accord well with the results of other experiments .we stress distinctions where appropriate .we also ran preliminary experiments where the initial seed disconnects after uploading an entire copy of the content , but leechers remain connected after they complete their download , serving as seeds for a short time .peers in these experiments have somewhat lower completion times thanks to the extra help from leechers in content dissemination , but appear otherwise similar .the goal of our experiments is to understand the dynamics of the choking algorithm . to that end , we consider four metrics .clustering : : : the choking algorithm aims to encourage high peer reciprocation by favoring peers who upload .therefore , we expect that peers will more frequently unchoke other peers with similar upload capacities , since those are the ones that can reciprocate with high enough rates . the rules for peer selection by qiu _et al . _ also support this hypothesis .consequently , it is expected that the choking algorithm converges towards good clustering shortly after the beginning of the download by grouping together peers with similar upload capacity .this behavior , however , is not guaranteed and has never been previously verified experimentally .indeed , let s consider a simple example .peer will unchoke peer if has been uploading data at a high rate to . in order for to continue uploading to , should also start sending data to at a high enough rate . the only way to initiate such a reciprocal relationship is via an optimistic unchoke . yet , since optimistic unchokes are performed at random , it is not clear whether and when and will get a chance to interact .therefore , in order to preserve clustering , optimistic unchokes should successfully initiate interactions between peers with similar upload capacities .in addition , such interactions should persist despite potential disruptions , such as optimistic unchokes by others or network bandwidth fluctuations .sharing incentives : : : a major goal of the choking algorithm is to give peers an incentive to share data . the algorithm strives to encourage peers to contribute , since doing so will improve their own download rates .we evaluate the effectiveness of these sharing incentives by measuring how peers upload contributions affect their download completion time .we expect that the more a peer contributes , the sooner it will complete its download .however , we do not expect to observe strict _ data volume fairness _, where all peers contribute the same amount of data ; peers who upload at high rates may end up contributing more data than others .they should be rewarded though , by completing their download sooner .upload utilization : : : upload utilization constitutes a reliable metric of efficiency in 2p content distribution systems , since the total upload capacity of all peers represents the maximum throughput the system can achieve as a whole . 
as a result ,a 2p content distribution protocol should aim at maximizing peers upload utilization .we are interested in measuring this utilization in bittorrent systems , and identifying the factors that can adversely affect it .seed service : : : the modified choking algorithm in seed state bases its decisions on the time peers have been waiting for seed service , in addition to their download rates from the seed .thus , we expect to see uniform sharing of the seed upload bandwidth among all peers. it should also be impossible for fast leechers to monopolize the seed .we now report the results of representative experiments that demonstrate our main observations . for conciseness , we present only results drawn from the three - class torrent configuration , but our conclusions are consistent with our observations from other configurations as well .we first examine a scenario with a well - provisioned initial seed , i.e. , a seed that can sustain high upload rates .we expect this to be common for commercial torrents , whose service providers typically make sure there is adequate bandwidth to initially seed the torrent. an example might be red hat distributing its latest linux distribution .section [ under_provisioned_seed ] shows that peer behavior in the presence of an underprovisioned initial seed can differ substantially .we consider an experiment with a single seed and 40 leechers : 13 slow peers ( 20 kb / s upload limit ) , 14 medium peers ( 50 kb / s upload limit ) , and 13 fast peers ( 200 kb / s upload limit ) .the seed , which is represented as peer 41 in the following figures , is limited to upload 200 kb / s , as fast as a fast peer .different peer upload limits are defined in order to model different levels of contribution .the results we report are based on thirteen experiment runs .although the official bittorrent implementation would set the number of parallel uploads based on the defined upload limit ( 4 for the slow , 5 for the medium , and 10 for the fast peers and the seed ) , we set this number to 4 for all peers , which in fact is what most other clients would do .this ensures homogeneous conditions in the torrent and makes it easier to interpret the results . as explained in section [ rationale ] , we expect to observe clustering based on peers upload capacities .figure [ fig : corr - unchoke - upload - onethird - seed-200 ] demonstrates that peers indeed form clusters .the figure plots the total time peers unchoked each other via a regular unchoke , averaged over all runs of the experiment .it is clear that peers in the same class cluster together , in the sense that they prefer to upload to each other .this behavior becomes more apparent when considering a metric such as the _ clustering index_. 
we define this for a given peer in a given class ( fast , medium , or slow ) as the ratio of the duration of regular unchokes to the peers of its class over the duration of regular unchokes to all peers .a high clustering index indicates a strong preference to upload to peers in the same class .figure [ fig : clustering - index - onethird - seed - fast ] plots this index for all peers and demonstrates that peers prefer to unchoke other peers in their own class , thereby forming clusters .further experiments with upload limits following a uniform distribution also show that peers have a clear preference for peers with similar upload capacities .although from figure [ fig : corr - unchoke - upload - onethird - seed-200 ] it might seem that slow peers show a proportionally stronger preference for their own class , this is an artifact of the experiment .slow peers take longer to complete their download ( as shown in figure [ fig : completion - cdf - onethird - seed-200-class ] ) , and so they perform a higher number of regular unchokes on average than fast peers . also notice that medium peer 27 interacts frequently with slow peers ._ this peer s download capacity is inherently limited _, arguably due to machine or local network limitations , as seen in figure [ fig : download - speed - onethird - seed-200 ] that plots observed peer download speeds over time . as a result, it stays connected to the torrent even after all other peers of its class have completed their download . during that last period it has to interact with slow leechers , since those are the only ones left .figure [ fig : corr - unchoke - upload - onethird - seed-200 ] also shows that reciprocation is not necessarily mutual .slow peers frequently unchoke medium peers , but the favor is not returned . indeed , the slow peers unchoked medium peers for a total of 501,844 seconds , as shown by the relatively dark center - left partition . however , the medium peers unchoked slow peers for only 273,985 seconds , as shown by the lighter bottom - center .this lack of reciprocation is due to the fact that slow peers are of little use to medium ones , since they can not offer high enough upload rates . in summary ,the choking algorithm facilitates clustering , where peers mostly interact with others in the same class , with the occasional exception of random optimistic unchokes .we now examine whether bittorrent s choking algorithm provides effective sharing incentives , in the sense that a peer who contributes more to the torrent is rewarded by completing its download sooner than the rest .figure [ fig : completion - cdf - onethird - seed-200-class ] indeed demonstrates this to be the case .we plot the cumulative distribution of completion time for the three classes of leechers in the previous experiment .the vertical line in the figure represents the _ optimal completion time _ , the earliest possible time that any peer could complete its download .this is the time the seed finished uploading a complete copy of the content . on average ,this time is around 650 seconds for the experiment .fast leechers complete their download soon after the optimal completion time .medium and , especially , slow leechers take significantly longer to finish .contributing to the torrent enables a leecher to enter the fast cluster and receive data at higher rates .this in turn ensures a short download completion time . the choking algorithm does indeed foster reciprocation by rewarding contributing peers . 
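the clustering index defined at the beginning of this section is straightforward to compute from the logged unchoke durations; the sketch below assumes a hypothetical log layout (a nested dictionary of per-pair unchoke times), not the actual format written by our instrumented client.

```python
def clustering_index(unchoke_seconds, peer_class):
    """Clustering index of each peer, as defined above.

    unchoke_seconds[i][j]: total time (s) peer i kept peer j unchoked via
    regular unchokes; peer_class[i]: 'slow', 'medium' or 'fast'.
    Both layouts are hypothetical, for illustration only.
    """
    index = {}
    for i, row in unchoke_seconds.items():
        total = sum(row.values())
        same = sum(t for j, t in row.items()
                   if peer_class[j] == peer_class[i])
        index[i] = same / total if total > 0 else 0.0
    return index
```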
in experiments with upload limits following a uniform distribution ,the peer completion time is also uniform : completion time decreases when a peer s upload contribution increases .this further indicates the algorithm s consistent properties with respect to effective sharing incentives .note , however , that this does not imply any notion of data volume fairness .fast peers end up uploading significantly more data than the rest .figure [ fig : agg - amount - bytes - onethird - seed-200 ] , which plots the actual volume of uploaded data averaged over all runs , demonstrates that fast peers are the major contributors to the torrent .most of their bandwidth is expended on other fast peers , per the clustering principle .interestingly , the slow leechers end up downloading more data from the seed .the seed provides equal service to peers of any class , as we show in section [ seed_service - fast ] , but slow peers have more opportunities than others to download from the seed , since they take longer to complete . in summary, bittorrent provides effective incentives for peers to contribute , as doing so will reward a leecher with significantly higher download rates .recent studies have shown that limited free - riding is possible in bittorrent under specific circumstances , although such free - riders do not appear to severely impact the quality of service for compliant peers. however , these studies do not significantly challenge the effectiveness of sharing incentives enforced by the choking algorithm .although free - riding is possible , such peers typically achieve lower download rates than they could if they followed the protocol . as a result ,if peers wish to obtain the highest possible rates , it is in their best interest to conform to the protocol .we now turn our attention to performance by examining whether the choking algorithm can maintain high utilization of peers upload bandwidth .figure [ fig : up - util - cdf - onethird - seed-200 ] is a scatterplot of such utilization in the aforementioned setup .a utilization of 1 represents taking full advantage of the available upload capacity .average utilization for each of the thirteen runs is plotted once per minute .the metric is torrent - wide : for each minute , we sum the upload bandwidth used by the peers during that minute , and divide by the upload capacity available over that minute for all peers still connected at the minute s end .the total capacity decreases over time as peers complete their downloads and disconnect .utilization is low at the beginning and the end of the session , but close to optimal for the majority of the download .it rises slightly after approximately 15 minutes , which corresponds to when fast peers leave the torrent .perhaps the four - peer limit on parallel uploads restricts fast peers utilization . in any case , utilization is good overall . 
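concretely, the per-minute utilization described above can be computed as in the following sketch; the data layout is hypothetical and stands in for the logs collected by our instrumented client.

```python
def upload_utilization(bytes_up, capacity, connected, minutes):
    """Torrent-wide upload utilization per minute, as described above.

    bytes_up[m][i]: bytes peer i uploaded during minute m;
    capacity[i]: upload limit of peer i in bytes per minute;
    connected[m]: peers still connected at the end of minute m.
    (Hypothetical log layout for illustration.)
    """
    util = []
    for m in range(minutes):
        peers = connected[m]
        used = sum(bytes_up[m].get(i, 0) for i in peers)
        avail = sum(capacity[i] for i in peers)
        util.append(used / avail if avail else 0.0)
    return util
```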
in summary ,the choking algorithm , in cooperation with other bittorrent mechanisms such as rarest - first piece selection , does a good job of ensuring high utilization of the upload capacity of leechers during most of the download .low utilization during the startup period may pose a problem for small contents , for which it could dominate the total download time .we discuss a potential solution to this in section [ tracker - extension ] .the official client introduced a modified choking algorithm in seed state , as described in section [ choking_algorithm ] , although it reverted back to the original in the most recent version .the client s version notes claim that the modified algorithm aims to reduce the amount of duplicate data a seed needs to upload before it has pushed out a full copy of the content into the torrent .we study this modified algorithm for the first time and examine this claim .figure [ fig : seed - sku - sru - seed-200 ] shows the duration of unchokes , both regular and optimistic , performed by the seed in a representative run of the aforementioned setup .leechers are unchoked in a uniform manner , regardless of upload speed .fast peers , those with higher peer ids , complete their download sooner , after which time the seed divides its upload bandwidth among the remaining leechers .leecher 8 is the last to complete ( as shown in figure [ fig : download - speed - onethird - seed-200 ] ) , and receives exclusive service from the seed during the end of its download .we therefore see that the modified choking algorithm in seed state provides uniform service ; this is because it bases its unchoking decisions on the time peers have been waiting for seed service . as a result , the risk of fast leechers downloading the entire content and quickly disconnecting from the torrent is significantly reduced .furthermore , this behavior would mitigate the effectiveness of exploits that attempt to monopolize seeds . according to anecdotal evidence , initial seeds using the old algorithm might have to upload 150% to 200% of the total content size before other peers become seeds .our experiments show that the modified algorithm avoids this problem .figure [ fig : seed - unique - piece - seed-200 ] plots the number of pieces uploaded by the seed during the download session for a representative run .527 pieces are sent out before an entire copy of the content ( 453 pieces ) has been uploaded .thus , the duplicate piece overhead is around 14% , indicating that the modified choking algorithm in seed state avoids unnecessarily uploading duplicate pieces to a certain extent .this number was consistent across all our experiments , ranging from 11 to 15% .however , to the best of our knowledge , there has been no experimental evaluation of the corresponding overhead in the old algorithm , so it is not clear how much of an improvement this is . in any case, 14% duplication represents an opportunity for improvement .the official client always issues requests for pieces in the rarest - pieces set in the same order . as a result, leechers might end up requesting the same piece from the seed at approximately the same time. 
it would be preferable for leechers to request the rarest pieces in random order instead. we now turn our attention to a scenario with an underprovisioned initial seed and demonstrate that the seed upload capacity is critical to performance during the beginning of a torrent's lifetime. the experiment we present here involves a single seed and 39 leechers, 12 slow, 14 medium, and 13 fast. these nodes are different from the nodes used in the previous experiment. the initial seed, represented as peer 27 in the following figures, is in this case limited to 100 kb/s, instead of 200 kb/s. we set the number of parallel uploads again to four for the seed and all the leechers. the results we present are based on eight experiment runs and are consistent with our observations from experiments with other torrent configurations. peer behavior in the presence of an underprovisioned initial seed is substantially different from that with a well-provisioned one. figure [ fig : corr - unchoke - upload - onethird - seed - medium ] shows the total time peers unchoked each other via a regular unchoke, averaged over all runs of the experiment. in contrast to figure [ fig : corr - unchoke - upload - onethird - seed-200 ], there is no discernible clustering among peers in the same class. the lack of clustering in the presence of an underprovisioned initial seed becomes more apparent when considering the clustering index metric defined in section [ clustering - fast ]. figure [ fig : clustering - index - onethird - seed - medium ] shows this metric for all peers. the values are all similar, indicating a lack of preference to unchoke peers in any particular class. figure [ fig : cumul - interest - run1-seed - medium ] attempts to explain this behavior by plotting the peer availability of each peer to every other peer, averaged over all runs of the experiment. we define the _ peer availability _ of a downloading peer to an uploading peer as the ratio of the time the uploading peer was interested in the downloading peer to the time the downloading peer spent in the peer set of the uploading peer. a peer availability of 1 means that the uploading peer was always interested in the downloading peer, while a peer availability of 0 means that the uploading peer was never interested in the downloading peer. we can see that the fast peers have poor peer availability to all other peers. this is because the seed is uploading new pieces at a low rate, so even if it uploaded only to fast peers, those would quickly replicate every piece as it was completed, remaining non-interested for the rest of the time. the same is not true for slow peers, however, since they upload even more slowly than the seed. in addition, when a fast leecher is unchoked by a slow leecher, it will always reciprocate with high rates, and thereby be preferred by the slow leecher. as a result, fast peers will get new pieces even from medium and slow peers. in this manner, fast peers prevent clustering by taking up slower peers' unchoke slots and thus breaking any clusters that might be starting to form. this prevents medium and slow peers from clustering together, even though the seed is fast enough with respect to them. further experiments with other torrent configurations, including one with the initial seed further limited to 20 kb/s, confirm this conclusion. in summary, when the initial seed is underprovisioned, the choking algorithm does not enable peer clustering.
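as a concrete reading of the peer availability definition above, the sketch below computes it from hypothetical interval logs; the data layout (lists of (start, end) intervals per ordered peer pair) is an assumption made for illustration and does not correspond to the actual experiment logs.

```python
# Sketch only; the interval-log layout is an assumption.
# interested[(u, d)] -> (start_s, end_s) intervals in which peer u was
#                       interested in peer d
# in_peerset[(u, d)] -> (start_s, end_s) intervals in which peer d was in the
#                       peer set of peer u

def total_time(intervals):
    return sum(end - start for start, end in intervals)

def peer_availability(d, u, interested, in_peerset):
    """Availability of downloading peer d to uploading peer u: the fraction of
    the time d spent in u's peer set during which u was interested in d."""
    connected = total_time(in_peerset.get((u, d), []))
    if connected == 0:
        return 0.0
    return total_time(interested.get((u, d), [])) / connected
```

averaging this quantity over all runs, for every ordered pair of peers, gives the kind of matrix plotted in figure [ fig : cumul - interest - run1-seed - medium ].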
we study in the next section how this lack of clustering affects the effectiveness of sharing incentives .we now examine how the lack of clustering affects the effectiveness of sharing incentives .in particular , we investigate whether fast peers still complete their download sooner than the rest .figure [ fig : completion - cdf - onethird - seed - medium - class ] shows that this is no longer the case .most peers complete their download at approximately the same time .the points in the tail of the figure are due to a single slow peer , peer 8 , which completed its download last in every run .this planetlab node has a poor effective download speed independently of the choking algorithm , likely due to machine or local network limitations .all other peers , for all runs , complete their download less than 2,000 seconds after the beginning of a run . clearly , seed upload capacity is the performance bottleneck .once the seed finishes uploading a complete copy of the content , all peers complete soon thereafter . since uploading data to othersdoes not shorten a peer s completion time , bittorrent s sharing incentives do not seem to be effective in this situation .fast peers are again the major contributors in the torrent , but in this case their upload bandwidth is expended equally across other fast and slower peers alike .figure [ fig : agg - amount - bytes - onethird - seed - medium ] , which plots the amount of uploaded data between each peer pair , shows that fast peers made the most contributions , distributing their bandwidth evenly to all other peers . in summary ,when the initial seed is underprovisioned , the choking algorithm does not provide effective incentives to contribute . nevertheless , the available upload capacity of fast peers is effectively utilized to replicate the pieces being uploaded by the seed .interestingly , even with a slow seed , upload utilization remains relatively high , as shown in figure [ fig : up - util - cdf - onethird - seed - medium ] .leechers manage to exchange data productively among themselves once new pieces are downloaded from the seed , so that the lack of clustering does not degrade overall performance significantly .the bittorrent design seems to lead the system to do the right thing : fast peers contribute their bandwidth to reduce the burden on the initial seed , helping disseminate the available pieces to slower peers .although this destroys clustering , it improves overall efficiency , which is a reasonable trade - off given the situation .we also experimented with a seed limited to an upload capacity of 20 kb / s .figure [ fig : up - util - cdf - onethird - seed - slow ] shows that , with this extremely low seed capacity , there are few new pieces available to exchange at any point in time , and each new piece gets disseminated rapidly after it is retrieved from the seed .the overall upload utilization is now low .slow peers exhibit slightly higher utilization than the rest , since they do not need many available pieces to use up their available upload capacity . in summary , even in situationswhere the initial seed is underprovisioned , the global upload utilization can be high .however , our experiments only involve compliant clients , who do not try to adapt their upload contributions according to a utility function of the observed download speed . 
on the other hand , in an environment with free - riders and an underprovisioned seed, one might expect a lower upload utilization due to the lack of altruistic peer contributions .we now discuss two limitations of the choking algorithm that we identified through our experiments : the initial seed upload capacity is fundamental to the proper operation of the incentives mechanism , and peers take some time to reach full upload utilization at the beginning of the download session .when the initial seed is underprovisioned , the choking algorithm does not lead to the clustering of similar - bandwidth peers . even without clustering , however , we observed high upload utilization .interestingly , in the presence of a slow initial seed , the protocol mechanisms lead the fast leechers to contribute to the download of all other peers , fast or slow , thereby improving performance . however , whenever feasible , one should engineer adequate initial seed capacity in order to allow fast leechers to achieve optimal performance .our results show that the lack of clustering occurs when fast peers can not maintain their interest in other fast peers . in order to avoid this situation ,the initial seed should _ at least be able to upload data at a speed that matches that of the fastest peers in the torrent_. this suggestion is of course a rule - of - thumb guideline , and assumes that the service provider knows a priori the maximum upload capacity of the peers that may join the torrent in the future . in practice ,reasonable bounds could be derived from measurements or from an analysis of deployed network technologies .further research is needed to evaluate the exact impact of initial seed capacity .we are currently developing an analytical model that attempts to express the effect of this parameter on peer performance .when a new leecher first joins the torrent , it connects to a random subset of already - connected peers that are returned by the tracker .however , in order to reach its optimal bandwidth utilization , this new leecher needs to exchange data with those peers that have a similar upload capacity to itself .if there are few such peers in the torrent , it may take some time to discover them , since this has to be done via random optimistic unchokes that occur only once every 30 seconds .consequently , it might be preferable to utilize the tracker in matching similar - bandwidth leechers . in this manner ,the duration of the discovery period could decrease and the upload utilization would be high even at the beginning of a peer s download . the new leecher could _ report its available upload capacity to the tracker when joining the torrent_. this parameter can be configured in the client software , or may possibly be the actual maximum upload rate measured during previous downloads .the tracker would then reply with a random subset of peers as usual , along with their upload capacities .the new leecher could optionally perform optimistic unchokes first to peers with similar upload capacity , in an effort to discover the best partners sooner . 
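the sketch below illustrates one possible shape of the tracker extension proposed above; the message fields, the size of the returned peer subset, and the capacity-similarity ordering of optimistic-unchoke candidates are illustrative assumptions rather than a specification of an implemented protocol.

```python
import random

class Tracker:
    """Illustrative sketch of the proposed extension: peers report an upload
    capacity when joining, and the tracker returns a random subset of peers
    annotated with the capacities they reported."""

    def __init__(self):
        self.reported_capacity = {}        # peer_id -> reported kB/s

    def announce(self, peer_id, capacity_kbps, want=80):
        self.reported_capacity[peer_id] = capacity_kbps
        others = [p for p in self.reported_capacity if p != peer_id]
        sample = random.sample(others, min(want, len(others)))
        # The reply stays a *random* subset, so disconnected clusters of
        # similar-capacity peers cannot form.
        return [(p, self.reported_capacity[p]) for p in sample]

def optimistic_unchoke_order(own_capacity, tracker_reply):
    """Order optimistic-unchoke candidates by how close their advertised
    capacity is to our own, so similar-bandwidth partners are tried first."""
    return [peer for peer, cap in sorted(tracker_reply,
                                         key=lambda pc: abs(pc[1] - own_capacity))]
```

a reciprocation check against the advertised capacity, as discussed next, could be layered on top of this ordering to limit the benefit of lying to the tracker.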
using this new tracker protocol extension ,if the peer set contains only a few leechers with similar upload capacity , they will discover each other quickly .leechers should employ some means of detecting and punishing others who lie about their available upload capacity .for instance , if a leecher does not respond to an optimistic unchoke with an upload rate close to the one it announced to the tracker , that leecher will not be unchoked again for some period of time . in this manner ,the possibility of a remote leecher initiating a new interaction is left open , yet the benefit from free - riding behavior is limited since free - riders will eventually end up choked by most peers . since the tracker still returns a random subset of peers , independently of the advertised upload capacity , there is no risk of creation of disconnected clusters . in a collaborative environment , however , the tracker might even want to return peers based on their capacity , as previously proposed , in order to speed up cluster creation even more .of course , although the proposed tracker extension is promising , further investigation is required to verify that it will work as expected .there has been a fair amount of work on the performance and behavior of bittorrent systems .bram cohen , the protocol s creator , has described bittorrent s main mechanisms and their design rationale .there have been several measurement studies examining real bittorrent traffic .et al . _ measure several peer characteristics derived from the tracker log for the redhat linux 9 iso image , including the number of active peers , the proportion of seeds and leechers , and the geographical spread of peers .they observe that while there is a correlation between upload and download rates , indicating that the choking algorithm is working , the majority of content is contributed by only a few leechers and the seeds .et al . _ study the content availability , integrity , and download performance for torrents on an once - popular tracker website .they observe that the centralized tracker component could potentially be a bottleneck .et al . _ study bittorrent sharing communities .they find that sharing - ratio enforcement and the use of rss feeds to advertise new content may improve peer contributions , yet torrents with a large number of seeds present ample opportunity for free - riding .furthermore , guo _ et al . _ demonstrate that the peer arrival and departure rate is exponential , and that performance fluctuates widely in small torrents .inter - torrent collaboration is proposed as an alternative to providing extra incentives for leechers to stay connected after the completion of their download . a more recent study by legout _et al . _ presents the results of extensive experiments on real torrents .they show that the rarest - first and choking algorithms play a critical role in bittorrent s performance , and claim that the replacement with a volume - based tit - for - tat algorithm , as proposed by other researchers , is not appropriate .however , they do not identify the reasons behind the properties of the choking algorithm and fail to examine its dynamics due to the single - peer viewpoint .several analytical studies have formulated models for bittorrent - like protocols .et al . 
_ provide a solution to a fluid model of bittorrent , where they study the choking algorithm and its effect on performance .they observe that optimistic unchoking may provide a way for peers to free - ride on the system .their model assumes peer selection based on global knowledge of all peers in the torrent , as well as uniform distribution of pieces .et al . _ introduce a probabilistic model of bittorrent - like systems and argue that overall system performance does not depend critically on either altruistic peer behavior or the rarest - first piece selection strategy .et al . _ characterize the complete design space of bittorrent - like protocols by providing a model that captures the fundamental trade - off between performance and fairness .whereas all these models provide valuable insight into the behavior of bittorrent systems , unrealistic assumptions limit their applicability in real scenarios .other researchers have relied on simulations to understand bittorrent s properties .et al . _ conducted an initial investigation of the impact of different peer arrival rates , peer capacities , and peer and piece selection strategies .et al . _ utilize a discrete event simulator to evaluate the impact of bittorrent s core mechanisms and observe that the rate - based tit - for - tat strategy is ineffective in preventing unfairness in peer contributions .they also find that the rarest - first algorithm outperforms alternative piece selection strategies .however , they do not evaluate a peer set larger than 15 peers , whereas the official implementation has a default value of 80 .this may affect the results since the accuracy of the piece selection strategy is affected by the peer set size .furthermore , tian _ et al . _ study peer performance towards the end of the download and propose a new peer selection strategy which enables more clients to complete their download after the departure of all the seeds .researchers have also looked into the feasibility of selfish behavior , when peers attempt to circumvent bittorrent mechanisms to gain unfair benefit .et al . _ were the first to demonstrate that bittorrent exploits are feasible .they briefly describe an attack to the tracker and an exploit involving leechers lying about the pieces they have .et al . _ argue that the choking algorithm is not sufficient to prevent free - riding and propose a new algorithm to enforce fairness in peers data exchanges .et al . _ design and implement three exploits that allow a peer who does not contribute to maintain high download rates under specific circumstances . even though such selfish peers can obtain more bandwidth ,there is no considerable degradation of the overall system s quality of service .et al . _ extend the work in and demonstrate that limited free - riding is feasible even in the absence of seeds .they also describe selfish behavior in bittorrent sharing communities .in addition , sirivianos _et al . _ evaluate an exploit based on maintaining a larger - than - normal view of the torrent .et al . 
_ observe that high - capacity peers typically provide low - capacity ones with an unfair share of the data .they design a choking algorithm optimization that reallocates the superfluous upload bandwidth to others in order to maximize peer download rates .our work differs from all previous studies in its approach and results .we perform the first extensive experimental study of bittorrent in a controlled environment , by monitoring all peers in the torrent and examining peer behavior in a variety of scenarios .our results validate protocol properties that have not been previously demonstrated experimentally , and identify new properties related to the impact of the initial seed on clustering and sharing incentives .in this paper we presented the first experimental investigation of bittorrent systems that links per - peer decisions and overall torrent behavior .our results validate three bittorrent properties that , though believed to hold , have not been previously demonstrated experimentally .we show that the choking algorithm enables clustering of similar - bandwidth peers , fosters effective sharing incentives by rewarding peers who contribute , and achieves high peer upload utilization for the majority of the download duration .we also examined the properties of the modified choking algorithm in seed state and the impact of initial seed capacity on the overall system performance .in particular , we showed that an underprovisioned initial seed does not facilitate the clustering of peers and does not provide effective sharing incentives .however , even in such a case , the choking algorithm facilitates efficient utilization of the available resources by having fast peers help others with their download . based on our observations , we offered guidelines for content providers regarding seed provisioning , and discussed a proposed tracker protocol extension that addresses an identified limitation of the protocol .this work opens up many avenues for future research .we are currently developing an analytical model to express the impact of seed capacity on peer performance .it would also be interesting to run experiments with the old choking algorithm in seed state and compare its properties to the modified algorithm , especially with respect to the upload of duplicate pieces .in addition , we would like to investigate the impact of different numbers of regular and optimistic unchokes on the protocol s properties .it has recently been argued that there is a fundamental trade - off between these two kinds of unchokes .the current values used by the protocol are intuition - based engineering choices ; we would like to conduct a systematic evaluation of system behavior under different parameter values .we wish to thank the anonymous reviewers and michael sirivianos for their invaluable feedback .n. andrade , m. mowbray , a. lima , g. wagner , and m. ripeanu .influences on cooperation in bittorrent communities . in _ proc . of the workshop on economics of peer - to - peer systems ( p2pecon05 ) _ , philadelphia , pa , august 2005 .a. felber and e. w. biersack .self - scaling networks for content distribution . in _ proc .of the international workshop on self- * properties in complex information systems ( self-*04 ) _ , bertinoro , italy , may 31june 2 , 2004 .m. izal , g. urvoy - keller , e. w. biersack , p. felber , a. a. hamra , and l. garcs - erice .dissecting bittorrent : five months in a torrent s lifetime . in _ proc .of pam04 _ , antibes juan - les - pins , france , april 2004 .j. 
shneidman , d. parkes , and l. massoulie .faithfulness in internet algorithms . in _ proc . of the workshop on practice and theory of incentives and game theory in networked systems ( pins04 ) _ , portland , or ,september 2004 .
peer - to - peer protocols play an increasingly instrumental role in internet content distribution . it is therefore important to gain a complete understanding of how these protocols behave in practice and how their operating parameters affect overall system performance . this paper presents the first detailed experimental investigation of the peer selection strategy in the popular bittorrent protocol . by observing more than 40 nodes in instrumented private torrents , we validate three protocol properties that , though believed to hold , have not been previously demonstrated experimentally : the clustering of similar - bandwidth peers , the effectiveness of bittorrent s sharing incentives , and the peers high uplink utilization . in addition , we observe that bittorrent s modified choking algorithm in seed state provides uniform service to all peers , and that an underprovisioned initial seed leads to absence of peer clustering and less effective sharing incentives . based on our results , we provide guidelines for seed provisioning by content providers , and discuss a tracker protocol extension that addresses an identified limitation of the protocol .
the web ontology language (owl) is a semantic web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. there are mainly two modeling paradigms for the semantic web. the first paradigm is based on the notion of classical logics, such as the description logics on which owl is based. the other paradigm is based on the datalog paradigm: a subset of the owl semantics is transformed into rules that are used by a rule engine in order to infer implicit knowledge. this paper focuses on the second paradigm, that is, rule-based owl reasoning using owl-horst rules. owing to the explosion of semantic data, the number of rdf triples in large public knowledge bases, e.g., dbpedia, has increased to billions. therefore, improving the performance of owl reasoning has become a core problem. the traditional single-node approaches are no longer viable for such large-scale data. some existing ontology reasoning systems are based on the mapreduce framework; the owl reasoning systems in and perform reasoning over mapreduce with a rule execution mechanism. however, the mapreduce-based approaches are not very efficient due to the data communication between memory and disk. to further improve the performance of reasoning, some researchers have implemented owl reasoning on spark, which is an in-memory, distributed cluster computing framework. recently, cichlid has greatly improved the performance of owl reasoning on spark as compared to the state-of-the-art distributed reasoning systems, but it only considers part of the owl rules and does not analyze the interdependence of the rules. reasoning based on owl-horst rules can infer much more implicit information, and different rule execution strategies will influence the reasoning performance. for instance, let s be the triple set of an ontology, where s = { (a, _subclassof_, b), (b, _subclassof_, c) }. r1 and o12 are two of the owl-horst rules, where r1 = { (x, _rdfs:subclassof_, y), (y, _rdfs:subclassof_, z) -> (x, _rdfs:subclassof_, z) } and o12 = { (x, _owl:equivalentclass_, y) -> (x, _rdfs:subclassof_, y) }. by implementing the r1 entailment rule for subclass closure, we get s = { (a, _subclassof_, b), (b, _subclassof_, c), (a, _subclassof_, c) }. if the ontology also contains equivalent-class triples and the o12 entailment rule for equivalent class is executed before r1, s will contain more new triples after each pass, which reduces the number of iterative operations. therefore, it is desirable to optimize reasoning by adjusting the rule order. although kim & park have also implemented parallel reasoning algorithms with an executable rule order, they lacked the evidence to prove that the strategy is optimal. to find the optimal executable strategy, we use a depth-first algorithm to enumerate all possible executable strategies, which are based on the dependency of the rules. there are 259367372 possible strategies among the 27 rules in table [ tab : rules ]. due to the very large number of strategies, it is challenging to find the optimal strategy by testing every strategy. in this paper, we present an approach to enhancing the performance of rule-based owl reasoning based on a locally optimal executable strategy and implement the new rule execution strategy on spark in a prototype called rors. the major contributions and novelties of our work are summarized as follows: we analyze the characteristics of the dataset and divide it into three classes (spo triples, sameas triples and type triples), analysing the proportion of each of the three classes in the dataset.
according to the data partition, we divide the owl-horst rules into four classes. we respectively analyze the rule interdependence of each class and find the optimal executable strategies. based on the locally optimal strategies, we pick out an optimal rule execution order for each class, combine them into a new rule execution strategy for all rules, and implement the new rule execution strategy on spark. the rest of this paper is organized as follows. section [ sec : pre ] gives a brief introduction to preliminary knowledge about owl and spark. section [ sec : rda ] presents our locally optimal strategies. section [ sec : rea ] implements our proposed strategy on spark and section [ sec : eva ] evaluates the experiments on the lubm dataset. in section [ sec : dis ], we discuss related work and summarize this paper.

in this section, we briefly recall the ontology language owl and the framework spark, largely following the excellent expositions in the literature.

* owl * an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. ontology is part of the w3c standards stack for the semantic web. the language owl is a family of knowledge representation languages for authoring ontologies. there are three variants of owl, with different levels of expressiveness: owl lite, owl dl and owl full (ordered by increasing expressiveness). each of these sublanguages is a syntactic extension of its simpler predecessor. owl dl is designed to preserve some compatibility with rdf schema (or rdfs). however, owl full is undecidable, so no reasoning software is able to perform complete reasoning for it. owl dl is designed to provide the maximum expressiveness possible while retaining computational completeness, decidability, and the availability of practical reasoning algorithms. owl lite was originally intended to support those users primarily needing a classification hierarchy and simple constraints. each of the three languages is a subset of the more expressive one that follows it.

* spark : distributed computing framework * spark is an open-source cluster computing framework, which was developed at the university of california, berkeley's amplab. one of its main features is the in-memory parallel computing model, in which data can be loaded into memory across the cluster. spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (rdd), a read-only multiset of data items distributed over a cluster of machines that is maintained in a fault-tolerant way. each rdd is divided into multiple partitions that reside on different computing nodes, and spark provides a variety of operations to transform one rdd into another. there are two kinds of operations. transformations are lazy operations that define a new rdd (e.g.
, _map_, _filter_, and _join_), while actions launch a computation to return a value to the program or write data to external storage, such as _collect_, _count_, _saveastextfile_, etc. rdd achieves fault tolerance through a notion of lineage based on logging the transformations: if a partition of an rdd is lost, the rdd has enough information about how it was derived from other rdds to recompute just that partition. for more details about spark, please see the official web site http://spark.apache.org/.

in this section, we propose a locally optimal strategy based on the dependency among the rules. (table [ tab : rules ]: the owl-horst rules; the table itself is not reproduced here.)

we evaluate the performance of our method against kp and cichlid, where kp adopts the executable strategy of kim & park. all experiments were run three times and the average values are reported below. (figure [ fig : time ] plots the reasoning time against the dataset scale, from lubm10 to lubm200, for rors and kp; figure [ fig : expres ] plots the number of inferred triples per second for rors and cichlid.) figure [ fig : time ] displays the reasoning time of our approach and kp for different scales of data sets. the reasoning time includes the time for dividing the input data and eliminating duplicated triples. we can see that the reasoning times of rors and kp increase almost linearly with the growth of the data size. the results show that our approach is better than kp under the same environment; the performance of reasoning is improved by approximately 30%. figure [ fig : expres ] shows the number of inferred triples per second. our method can infer more implicit triples than cichlid, and the performance is improved by approximately 26%.

in this paper, we present an approach to enhancing the performance of rule-based owl reasoning on spark based on a locally optimal executable strategy. our method performs better than kp in its reasoning strategy. although the approach does not find the optimal executable strategy for the global rule set, our method can be used as a valuable foundation for future research in rule-based owl reasoning. therefore, in future work, we plan to design algorithms to find the optimal strategy for the global rules.
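as a minimal illustration of the rule-order effect that motivates the locally optimal strategy (recall the subclass / equivalent-class example given in the introduction), the following plain-python sketch runs forward chaining to a fixpoint under two different rule orders and counts the passes. the toy ontology and the restriction to two rules are assumptions made only for illustration; this is not the spark / rdd implementation used in rors.

```python
# Toy forward chaining with two OWL-Horst-style rules; illustration only.
SCO, EQC = "rdfs:subClassOf", "owl:equivalentClass"

def rule_r1(triples):
    """subClassOf transitivity: (x sco y), (y sco z) -> (x sco z)."""
    sub = [(s, o) for s, p, o in triples if p == SCO]
    new = {(a, SCO, d) for a, b in sub for c, d in sub if b == c}
    return new - triples

def rule_o12(triples):
    """equivalentClass to subClassOf, in both directions."""
    new = set()
    for s, p, o in triples:
        if p == EQC:
            new.update({(s, SCO, o), (o, SCO, s)})
    return new - triples

def closure(triples, rule_order):
    """Apply the rules in the given order until no new triples are derived."""
    triples, passes = set(triples), 0
    while True:
        passes += 1
        derived = set()
        for rule in rule_order:
            derived |= rule(triples | derived)
        if not derived - triples:
            return triples, passes
        triples |= derived

ontology = {("a", SCO, "b"), ("b", EQC, "c"), ("c", SCO, "d")}
print(closure(ontology, [rule_r1, rule_o12])[1])   # 4 passes on this toy input
print(closure(ontology, [rule_o12, rule_r1])[1])   # 3 passes on this toy input
```

on this toy input, executing the equivalent-class rule before the subclass-transitivity rule reaches the fixpoint in one fewer pass; the rule execution order implemented on spark exploits the same effect at a much larger scale.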
there are many works on developing owl reasoning systems, including early systems such as pellet, jena, and sesame. these reasoners use a composition tree model of the ontology to infer implicit information and exhibit both large time and space complexity. due to the limitations of computing resources and running speed, such systems can hardly achieve excellent performance. therefore, many distributed reasoning systems emerged. in and , parallel reasoning methods were proposed in which the reasoning rules are executed repeatedly until no extra data is generated, but they incur a much higher data communication cost. in , weaver and hendler proposed a data partitioning model based on mpi, but this method does not filter duplicate data. another work presented a distributed reasoning system based on mapreduce; it analyzed the dependency between rules and built a dependence graph, but it generated a large amount of useless intermediate data and a huge data communication cost. urbani then proposed a mapreduce-based parallel reasoning system with owl-horst rules called webpie, which can deal with large-scale ontologies on a distributed computing cluster; however, webpie exhibits poor reasoning time. other rule-based owl reasoners can infer large-scale sets of triples, but they also cost too much reasoning time. a further work proposed a rule-based reasoner that used massively parallel hardware to derive new facts based on a given set of rules, but that implementation was limited by the size of the processable input data as well as by the number of parallel hardware devices used. seitz presented an owl reasoner for embedded devices based on clips. urbani proposed a hybrid rule-based reasoning method that combines forward and backward chaining, and implemented a prototype named querypie; terminological triples are pre-computed before querying, which speeds up backward chaining at query time. in , although the author improved the reasoning time using spark, the work ignores the analysis of the interdependence among the rules and does not give an optimal executable strategy. besides, mppie recently implemented rdfs reasoning on giraph. this work is supported by the program of the national natural science foundation of china (nsfc) under 61502336 and 61373035 and the national high-tech r&d program of china (863 program) under 2013aa013204. zaharia m., chowdhury m., das t., dave a., ma j., mccauley m., franklin m. j., shenker s., & stoica i. (2012) resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. in: _proc. of nsdi 2012 at usenix_, pp. 15-28.
the rule - based owl reasoning is to compute the deductive closure of an ontology by applying rdf / rdfs and owl entailment rules . the performance of the rule - based owl reasoning is often sensitive to the rule execution order . in this paper , we present an approach to enhancing the performance of the rule - based owl reasoning on spark based on a locally optimal executable strategy . firstly , we divide all rules ( 27 in total ) into four main classes , namely , _ spo rules _ ( 5 rules ) , _ type rules _ ( 7 rules ) , _ sameas rules _ ( 7 rules ) , and _ schema rules _ ( 8 rules ) since , as we investigated , those triples corresponding to the first three classes of rules are overwhelming ( e.g. , over 99% in the lubm dataset ) in our practical world . secondly , based on the interdependence among those entailment rules in each class , we pick out an optimal rule executable order of each class and then combine them into a new rule execution order of all rules . finally , we implement the new rule execution order on spark in a prototype called rors . the experimental results show that the running time of rors is improved by about 30% as compared to kim & park s algorithm ( 2015 ) using the lubm200 ( 27.6 million triples ) .
there is an extensive literature on the topic of graph partitioning and community detection in networks .this literature studies methods for partitioning the nodes in a network into a number of groups , often referred to as communities or clusters .the general idea is that nodes belonging to the same cluster should be relatively strongly connected to each other , while nodes belonging to different clusters should be only weakly connected . which methods for graph partitioning and community detection perform best in practice ?the literature does not provide a clear answer to this question , and if the question can be answered at all , then most likely the answer will be dependent on the type of network that is being studied and on the type of partitioning that one is interested in . in this paper , we therefore address the above question in one specific context .we are interested in grouping scientific publications into clusters and we expect each cluster to represent a set of publications that are topically related to each other . clustering scientific publications is a problem that has received a lot of attention in the bibliometric literature . in this literature ,publications have for instance been clustered based on co - occurring words in titles , abstracts , or full text , based on co - citation or bibliographic coupling relations , and sometimes even based on a combination of different types of relations . following waltman and van eck and boyack and klavans , our interest in this paper is in clustering publications based on direct citation relations .direct citation relations are of special interest because they allow large sets of publications to be clustered in an efficient way .waltman and van eck for instance cluster ten million publications from the period 2001 - 2010 based on about hundred million citation relations between these publications . in this way, they obtain a highly detailed classification system of scientific literature covering all fields of science .the analysis presented in this paper focuses on systematically comparing the performance of a large number of clustering methods when applied to the problem of clustering scientific publications based on citation relations .the following clustering methods are included in the analysis : spectral methods , modularity optimization , map equation methods , matrix factorization , statistical methods , link clustering , label propagation , random walks , clique percolation and expansion , and selected other methods .these are all methods that have been proposed during the past years in the literature on graph partitioning and community detection . to evaluate the performance of the different clustering methods , we perform an in - depth analysis of the statistical properties of the clusterings obtained by each method . on the one hand we focus on general properties of the clusterings , but on the other hand we also consider a number of properties that are of special relevance in the context of citation networks of publications .however , to obtain a deep understanding of the differences between clustering methods , we believe that analyzing the statistical properties of clusterings is not sufficient . 
understanding the differences between clustering methods also requires an expert-based assessment of different clusterings. this is a challenging task that involves a number of practical difficulties, but in this paper we nevertheless make an attempt to perform such an expert-based assessment. the expert-based assessment is performed for publications in the field of library and information science, focusing on the subfield of scientometrics. this paper is organized as follows. we first discuss the data and methods included in our analysis. we then present the results of the analysis. we conclude the paper by providing a detailed discussion of our findings. below we first discuss the citation networks of publications that we consider in our analysis. we then discuss the clustering methods included in the analysis. finally, we discuss the criteria that we use for comparing the clustering methods. these criteria relate to the following four properties of a clustering method:

cluster sizes: ideally the differences in the size of clusters should not be too large. for instance, the largest cluster preferably should be no more than an order of magnitude larger than the smallest cluster.

small clusters: for practical purposes, it is usually inconvenient to have a large number of very small clusters. therefore the number of very small clusters should be minimized as much as possible.

clustering stability: running the same clustering method multiple times may yield different results (due to random elements in many clustering methods), but the results should be reasonably similar. likewise, when small changes are made to a citation network, this should not have too much effect on the results of a clustering method.

computing time: preferably, a clustering method should be fast. especially in applications to large citation networks the issue of computing time is of significant importance.

in addition to the above four properties, a fifth property for comparing clustering methods is the intuitive sensibility of the results provided by a method. experts should be able to interpret the clusters obtained from a clustering method in terms of meaningful research topics. we do not evaluate this fifth property using quantitative criteria. instead, our expert-based assessment of the results of different clustering methods is focused on this criterion.

citation networks of scientific publications. citation relations between scientific publications are represented as a simple undirected and unweighted graph by first discarding the directions of citations, any multiple citations, and citations from a publication to itself. publications neither citing nor cited by any other are also discarded. let n be the number of nodes and m the number of links in such a citation network. denote by k the average node degree, i.e. the number of links incident to a node (k = 2m/n), and by lcc the largest connected component, i.e. the largest subset of mutually reachable nodes. we analyze four citation networks representing publications in the fields of scientometrics, library & information science and physics, as well as the entire body of science (see table [ tbl : nets ]). publications and their citations were collected from the web of science bibliographic database produced by thomson reuters. more specifically, we used the in-house version of the web of science database of the centre for science and technology studies of leiden university. this version of the web of science database is very similar to the one available online at http://www.webofscience.com. however, there are some differences, notably in the identification of citations between publications. data collection was restricted to the science citation index expanded, the social sciences citation index and the arts & humanities citation index, while only publications of the web of science document types 'article' and 'review' were included in the data collection. (table [ tbl : wos ]: statistics of citation networks of scientific publications in web of science. we consider three scientific fields and the entire web of science. see text for the definitions of the statistics and the details of the data collection procedure.)

to better understand the nature of different clusterings and the effects of the post-processing approach, fig. [ fig : wos ] shows the sizes and coverage of the largest clusters returned by the selected methods (see methods). the coverage of an individual cluster is defined as the average internal degree of the nodes in the cluster divided by the total degree of these nodes. as already discussed at length above, the spectral analysis approach metilus returns clusters with very low coverage (see left-hand side of fig. [ fig : wos ], panel * b *), while the modularity optimization and label propagation methods (i.e. louvain and bpa) give clusters with very high coverage (see right-hand side of fig. [ fig : wos ], panel * b *). for the map equation algorithm metimap, the coverage of the largest clusters falls between these two extremes. one can also observe that, in the case of the label propagation algorithm bpa, the post-processing approach fails to further partition the largest clusters whose size exceeds the threshold, which is represented by the horizontal lines in fig. [ fig : wos ], panel * a *. on the contrary, the post-processing does partition the largest clusters in the case of the modularity optimization method louvain. however, the results are far from satisfactory. each cluster whose size exceeds the threshold is indeed split into smaller clusters, but the number of such clusters thus actually increases (see middle of fig. [ fig : wos ], panel * a *). (figure [ fig : wos ], panels * a * and * b *: sizes and coverage of the largest clusters, respectively. horizontal lines in panel * a * represent the threshold size; see text for the definition of cluster coverage.)

which methods for graph partitioning and community detection perform best for the purpose of grouping scientific publications into clusters? in this paper, we have carried out an extensive analysis comparing the performance of a large number of methods. the methods have been applied to a number of networks of publications connected by direct citation relations. we have studied the statistical properties of the results provided by the different methods, and we have also performed an expert-based assessment of the results.
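for concreteness, the basic network statistics and the cluster coverage used above can be computed as in the following plain-python sketch; the input format (an edge list of citation links and a node-to-cluster mapping) is an assumption for illustration, and this is not the code used in the reported analysis.

```python
from collections import defaultdict

# Sketch only; assumes an edge list of citation links and a complete
# node -> cluster-label mapping.

def build_network(edges):
    """Simple undirected graph: drop directions, self-citations, multi-links."""
    adj = defaultdict(set)
    for i, j in edges:
        if i != j:
            adj[i].add(j)
            adj[j].add(i)
    n = len(adj)                                  # isolated nodes are discarded
    m = sum(len(nb) for nb in adj.values()) // 2
    k = 2.0 * m / n if n else 0.0                 # average node degree
    return adj, n, m, k

def lcc_size(adj):
    """Number of nodes in the largest connected component."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

def cluster_coverage(adj, clusters):
    """Internal degree of a cluster's nodes divided by their total degree."""
    internal, total = defaultdict(int), defaultdict(int)
    for i, neighbours in adj.items():
        c = clusters[i]
        for j in neighbours:
            total[c] += 1
            if clusters[j] == c:
                internal[c] += 1
    return {c: internal[c] / total[c] for c in total}
```

computing the coverage per cluster in this way also makes it easy to restrict the statistic to the largest clusters of each method, as is done in fig. [ fig : wos ], panel * b *.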
from a bibliometric point of view ,a good clustering of publications ideally should have a number of properties .first of all , although it is natural to expect that there will be larger and smaller clusters , it is inconvenient for practical purposes if there are very large differences in the size of clusters . as a rule of thumb , we ideally would like the difference in size between the largest and the smallest clusters to be no more than an order of magnitude .second , if it turns out to be inevitable that some publications end up in very small clusters , for instance because these publications have almost no citation relations with other publications , then at least we would prefer the number of publications assigned to these insignificant clusters to be as limited as possible .third , we would like the results of a clustering method to be reasonably stable .many methods include a random element , in which case different runs of a method may yield different results .however , running the same method multiple times should not affect the results too much , and the results should also be reasonably robust to small changes in a citation network of publications .fourth , the computing time of a clustering method should not be excessive .this is especially important when one aims to apply a method to networks consisting of large numbers of publications and citation relations . finally , and perhaps most importantly, the results produced by a clustering method should make intuitive sense .experts should be able to recognize the scientific topics represented by clusters of publications .our analysis shows that most clustering methods yield results with large differences in the size of clusters .the larger clusters are typically several orders of magnitude larger than the smaller clusters .sometimes more than half of the publications in a citation network are all assigned to the same cluster .this was for instance observed for the results obtained from the links and scp methods in the library & information science citation network .the only methods that yield clusters of more or less similar size are the spectral methods ( e.g. graclus ) .these methods produce results that are characterized by a much more uniform cluster size distribution . depending on the cluster size distribution and also on the resolution of a clustering, there can be large differences in the share of all citation relations that are covered by clusters .coverage for instance ranges from less than to more than in the library & information science citation network .clustering methods also often assign a significant share of the publications in a citation network to very small clusters . in the library & information science citation network ,the graclus and infomap methods for instance assign more than of the publications to clusters consisting of fewer than publications .the stability or robustness of the results obtained from a clustering method also partly depends on the size of the clusters produced by the method .not surprisingly , methods that produce one or more very large clusters tend to yield relatively robust results .furthermore , in the library & information science citation network , spectral and statistical methods ( e.g. graclus and oslom ) produce results with a relatively low robustness , while infomap and modularity optimization yield quite robust results . in terms of computing time , there are substantial differences between the various methods . 
for instance , clustering the publications in the library & information science citation network takes more than times longer for the slowest method than for the fastest method .modularity optimization methods ( e.g. louvain ) , label propagation ( e.g. bpa ) , and spectral analysis methods ( e.g. graclus ) perform best in terms of computing time .other methods require a more significant amount of computing time , making them less suitable for applications on large citation networks .turning now to the expert - based assessment of the results produced by different clustering methods for the scientometrics subfield within the library & information science citation network , we find that the infomap and metimap ( i.e. infomap combined with spectral method metis ) methods give the most satisfactory results , with a slight preference for the infomap results over the results obtained from metimap .other methods , such as oslom and louvain , provide less satisfactory results .our analysis seems to provide most support for the use of infomap and related methods such as metimap to cluster the publications in a citation network .infomap has the best performance in our expert - based assessment , and it yields quite robust results . compared with some of the other methods , infomap has a relatively high computing time , but this can be overcome by using metimap in larger citation networks .the price that we pay for the good performance of infomap seems to be the assignment of a relatively large number of publications to small clusters .paying this price seems necessary to obtain high - quality clustering results . in large citation networks, a post - processing procedure can be applied to minimize the number of small clusters , but the effect of the use of such a procedure on the quality of the clustering results is not clear .the promising results obtained for infomap are in line with earlier findings reported in the network science literature .although infomap has been introduced in the bibliometric literature and has been applied to citation networks in a number of studies , the method has not yet gained a widespread popularity in the bibliometric community , where researchers seem to prefer the use of modularity - based methods .our findings suggest that the bibliometric community could benefit from exploring the use of other clustering methods in addition to modularity - based methods .infomap seems to be of particular interest .future studies should reveal whether infomap indeed consistently performs well in applications to citation networks .[ [ limitations - of - the - analysis . ] ] limitations of the analysis .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + it is important to emphasize that our results should be interpreted cautiously because of a number of limitations of our analysis .one obvious limitation is that , despite the large number of clustering methods included in our analysis , we did not exhaustively cover all methods proposed in the literature .the selection of the methods included in our analysis was made based on the popularity of a method and to some degree also on our familiarity with a method .in addition , the availability of source code played a role as well .many methods discussed in the literature are not included in our analysis . 
in particular , methods that produce overlapping clusters or clusters at multiple levels of resolution not covered .also , we for instance do not cover some recently developed principled methods based on statistical inference .a second limitation is that each clustering method was applied using the default parameter settings .we did not try to optimize the parameter values of the different methods .so the performance of some methods may have been better if we had used optimized parameter values for these methods .some methods for instance have a parameter that can be used to fine - tune the level of granularity of the clustering results .one could use such a parameter to try to obtain results at similar levels of granularity for different methods , and in that way a more accurate comparison between different methods may be possible .we did not explore this possibility in our analysis , but we do consider this an interesting direction for future research .we note that the clustering method proposed by two of us in an earlier paper requires a careful choice of parameter values . for this reason ,this method was not included in our present analysis .a third limitation is our exclusive focus on undirected and unweighted networks of direct citation relations between publications .we did not consider the possibility of taking into account the direction of a citation relation , and we did not test the effect of assigning weights to citation relations .we also did not study the use of indirect citation relations between publications , in particular co - citation and bibliographic coupling relations .finally , we should emphasize the limitations of our expert - based assessment of the clustering results obtained for the scientometrics subfield within the library & information science citation network .the expert - based assessment was carried out at a high level of detail by two experts with an extensive expertise in the field of scientometrics .nevertheless , any expert - based assessment will necessarily be of a subjective nature , and different experts therefore may not always reach the same conclusions .moreover , experts typically have a deep understanding of the literature only in a relatively small area of science .this for instance explains why in our expert - based assessment we could not cover the entire field of library and information science but only the subfield of scientometrics .unfortunately , it is difficult to say to what extent conclusions reached for such a relatively small area of science can be expected to generalize to other areas .for this reason , the findings of our expert - based assessment should be interpreted with some caution .we thank numerous authors for kindly providing the source code of their methods . this work has been supported in part by the slovenian research agency program no .p2 - 0359 .boyack kw , newman d , duhon rj , klavans r , patek m , biberstine jr , et al .clustering more than two million biomedical publications : comparing the accuracies of nine text - based similarity approaches .plos one . 2011;6(3):e18029 .boyack kw , klavans r. co - citation analysis , bibliographic coupling , and direct citation : which citation approach represents the research front most accurately ?j am soc inf sci tec. 2010;61(12):23892404 .yang j , leskovec j. overlapping community detection at scale : a nonnegative matrix factorization approach . in : proceedings of the acm international conference on web search and data mining .rome , italy ; 2013 .p. 
587596 .lee c , reid f , mcdaid a , hurley n. detecting highly overlapping community structure by greedy clique expansion . in : proceedings of the acm sigkdd workshop on social network mining and analysis .washington , dc , usa ; 2010 .p. 3342 .coscia m , rossetti g , giannotti f , pedreschi d. : a local - first discovery method for overlapping communities . in : proceedings of the acm sigkdd international conference on knowledge discovery and data mining .beijing , china ; 2012 .p. 615623 .yang j , mcauley j , leskovec j. detecting cohesive and 2-mode communities in directed and undirected networks . in : proceedings of the acm international conference on web search and data mining .new york , ny , usa ; 2014 .p. 323332 .flake gw , lawrence s , giles cl .efficient identification of web communities . in : proceedings of the acm sigkdd international conference on knowledge discovery and data mining .boston , ma , usa ; 2000 .p. 150160 .macqueen jb . some methods for classification and analysis of multivariate observations . in : proceedings of berkeley symposium on mathematical statistics and probability .berkeley , ca , usa ; 1967 .p. 281297 .bohlin l , edler d , lancichinetti a , rosvall m. community detection and visualization of networks with the map equation framework . in : ding y , rousseau r , wolfram d , editors .measuring scholarly impact .switzerland : springer international publishing ; 2014 .
clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields . these methods are for instance used to group publications into clusters based on their relations in a citation network . in the network science literature , many clustering methods , often referred to as graph partitioning or community detection techniques , have been developed . focusing on the problem of clustering the publications in a citation network , we present a systematic comparison of the performance of a large number of these clustering methods . using a number of different citation networks , some of them relatively small and others very large , we extensively study the statistical properties of the results provided by different methods . in addition , we also carry out an expert - based assessment of the results produced by different methods . the expert - based assessment focuses on publications in the field of scientometrics . our findings seem to indicate that there is a trade - off between different properties that may be considered desirable for a good clustering of publications . overall , map equation methods appear to perform best in our analysis , suggesting that these methods deserve more attention from the bibliometric community .
models of opinion formation have been studied by physicists since the 80 s and are now part of the new branch of physics called sociophysics .this recent research area uses tools and concepts of statistical physics to describe some aspects of social and political behavior . from the theoretical point of view , opinion models are interesting to physicists because they present order - disorder transitions , scaling and universality , among other typical features of physical systems , which called the attention of many groups throughout the world .the basic ingredient of models of opinion dynamics is conformity , an important behavior of individuals that emerges as a result of their interactions with other individuals in the population . as examples : ( i ) an individual may copy the state ( opinion ) of one of his / her neighbors ( the voter model ) , or ( ii ) he / she can consider the majority or the minority opinion inside a small group ( the majority - rule models ) , or ( iii ) a given pair of individuals interact throught kinetic exchanges like an ideal gas . among these models ,we highlight the galam s majority - rule model . indeed , influence of majority opinions against minorities have been studied by social scientists since the 50 s . however , recently the impact of nonconformity in opinion dynamics has attracted attention of physicists .there are two kinds of nonconformity , namely anticonformity and independence , and it is important to distinguish between them .the anticonformists are similar to conformists , since both take cognizance of the group norm .thus , conformists agree with the norm , anticonformists disagree .as discussed in , an anticonformist actively rebels against influence .this is the case , for example , of the galam s contrarians , individuals that known the opinion of the individuals in a group of discussion , and adopt the choice opposite to the prevailing choice of the others , whatever this choice is . on the other hand, we have the independent behavior . in this case , the agent also take cognizance of the group norm , but he / she decides to take one of the possible opinions independently of the majority or the minority opinion in the group . as stated by willis in , _ `` the completely independent person may happen to behave in ways which are prescribed or proscribed by the norms of his group , but this is incidental .it should also be noted that pure anticonformity behavior , like pure conformity behavior , is pure dependent behavior''_. in terms of the statistical physics of opinion dynamics , independence acts on an opinion model as a kind of stochastic driving that can lead the model to undergo a phase transition . in fact , independence plays the role of a random noise similar to social temperature . finally , another interesting and realistic kind of social behavior is usually called inflexibility .individuals with such characteristic are averse to change their opinions , and the presence of those agents in the population affects considerably the opinion dynamics . from the theoretical point of view , the introduction of inflexible agents works in the model as the introduction of a quenched disorder , due to the frozen character of the opinions of such agents . in this workwe study the effects of conformity and nonconformity in opinion dynamics . 
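As a minimal sketch of the conformity ingredient discussed above, Galam's majority rule acting on groups of three, the following function performs one elementary update on a fully connected population of ±1 opinions. The function name and the use of a plain list of ±1 integers are illustrative choices, not the paper's notation; the nonconformist ingredients (independence, inflexibility) are added to this basic step in the sketches further below.

```python
import random

def majority_rule_step(opinions, rng=random.Random(1)):
    """Pick a random group of 3 agents; on a 2-1 split the local minority
    adopts the local majority opinion (Galam's majority rule, group size 3)."""
    i, j, k = rng.sample(range(len(opinions)), 3)
    s = opinions[i] + opinions[j] + opinions[k]   # opinions are +1 / -1
    if abs(s) == 1:                               # no unanimity: a 2-1 split
        majority = 1 if s > 0 else -1
        for a in (i, j, k):
            opinions[a] = majority
    return opinions
```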
for this purpose, we consider groups of 3 or 5 agents that can interact through the majority rule , but with the inclusion of disorder ( inflexibility ) and/or noise ( independence ) .we analyze these effects separately in the standard majority - rule model , and all together , in order to study the critical behavior of the system induced by the mentioned effects .this work is organized as follows . in sectionii we present separately in three subsections the microscopic rules that define the distinct formulations of the model , as well as the numerical results .these numerical results are connected with the analytical considerations presented in the appendix .finally , our conclusions are presented in section iii .our model is based on the galam s majority - rule model .we consider a fully - connected population of agents with opinions or concerning a given subject . in this sense, we are considering a mean - field - like limit , since each agent can interact with all others . in this case , the microscopic dynamics disregards correlations , that will be taken into account after , when we will consider the model on regular lattices .the opinions are represented by ising - like variables ( ) , and the initial concentration of each opinion is ( disordered state ) .we will consider three distinct mechanisms in the formulation of our model , namely majority - rule dynamics , inflexibility and independence .our objective is to analyze the critical behavior of the system , and in this case we will consider separately in the following subsections three distinct cases : ( i ) the majority - rule model with independent behavior , ( ii ) the majority - rule model with inflexible agents , and ( iii ) the majority - rule model with inflexible and independent individuals . in this case, we consider that some individuals in the population can show a nonconformist behavior called independence .the following microscopic rules govern the dynamics : 1 .a group of agents , say , is randomly chosen ; 2 . with probability the three agents in the group will act independently of the opinions of the group s individuals , i.e. , independent of the majority / minority opinion inside the group . in this case , with probability all the three agents flip their opinions and with probability nothing occurs ; 3 . on the other hand , with probability group follows the standard majority rule . in this case, all agents in the group follow the local majority opinion ( if the opinion of one agent is different from the other two , the former flips alone ) . in the case where the 3 agents do not act independently , which occurs with probability , the change of the states of the agents inside the group will occur according to the galam s majority - rule model .the parameter can be related to the agents flexibility . as discussed in , independence is a kind of nonconformity , and it acts on an opinion model as a kind of stochastic driving that can lead the model to undergo a phase transition .in fact , independence plays the role of a random noise similar to social temperature .we analyze the critical behavior of the system , in analogy to magnetic spin systems , by computing the order parameter where stands for time averages taken in the steady state .in addition to the time average , we have also considered configurational averages , i.e. 
, averages over different realizations .the order parameter is sensitive to the unbalance between the two distinct opinions , and it plays the role of the `` magnetization per spin '' in magnetic systems .in addition , we also consider the fluctuations of the order parameter ( or `` susceptibility '' ) and the binder cumulant , defined as as we are considering a mean - field formulation of the model , one can follow refs . to derive analytically the behavior of the stationary order parameter .the behavior of is given by ( see appendix 1 ) or in the usual form , where and we found a typical mean - field exponent , as expected due to the mean - field character of the model . the comparison of eq .( [ eq4 ] ) with the numerical simulations of the model is given in fig .[ fig1 ] , for typical values of the flexibility .one can see an excellent agreement among the two results .( [ eq5 ] ) also predicts that there is an order - disorder transition for all values of , which was confirmed numerically , see fig .[ fig2 ] ( a ) . versus the independence probability for typical values of the flexibility , for the mean - field formulation of the model ( no lattice ) .the symbols correspond to numerical simulations for population size ( averaged over simulations ) and the full lines represent the analytical prediction , eq .( [ eq4]).,scaledwidth=55.0% ] versus for typical values of and population size .it is also exhibited the finite - size scaling analysis for ( pannels b , c and d ) .we obtained , , and .data are averaged over simulations.,title="fig:",scaledwidth=48.0% ] versus for typical values of and population size .it is also exhibited the finite - size scaling analysis for ( pannels b , c and d ) .we obtained , , and .data are averaged over simulations.,title="fig:",scaledwidth=46.0% ] + versus for typical values of and population size .it is also exhibited the finite - size scaling analysis for ( pannels b , c and d ) .we obtained , , and .data are averaged over simulations.,title="fig:",scaledwidth=46.0% ] versus for typical values of and population size .it is also exhibited the finite - size scaling analysis for ( pannels b , c and d ) .we obtained , , and .data are averaged over simulations.,title="fig:",scaledwidth=48.0% ] we also estimated the critical exponents for many values of .as a typical example , we exhibit in fig .[ fig2 ] the finite - size scaling ( fss ) analysis for ( see pannels b , c and d ) .the critical values were identified by the crossing of the binder cumulant curves , as can be seen in the inset of fig .[ fig2 ] ( b ) , and the critical exponents , and were found by the best collapse of data . for all values of we found , and , which suggests a universality of the order - disorder phase transition .in particular , the numerical estimates of the exponent agree with eq .( [ eq4 ] ) , that predicts for all values of .notice that the exponents and are typical ising mean - field exponents , which is not the case for . this same discrepancy was observed in other discrete opinion models , and was associated with a superior critical dimension , that leads to an effective exponent , obtained from . in this case, one can say that our model is in the same universality class of the kinetic exchange opinion models with two - agent interactions , as well as in the mean - field ising universality class . 
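The quantities used above, the fluctuations of the order parameter ("susceptibility") and the Binder cumulant, can be computed from steady-state samples of the magnetisation with the short helpers below; Binder curves for different population sizes cross near the critical point, which is how the critical values were located here. Since the formulas are garbled in this extraction, the standard definitions chi = N(<m^2> - <|m|>^2) and U = 1 - <m^4>/(3<m^2>^2) are assumed.

```python
import numpy as np

def binder_cumulant(m_samples):
    """U = 1 - <m^4> / (3 <m^2>^2) from steady-state order-parameter samples."""
    m = np.asarray(m_samples)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2)**2)

def susceptibility(m_samples, n_agents):
    """chi = N (<m^2> - <|m|>^2), the fluctuation of the order parameter."""
    m = np.abs(np.asarray(m_samples))
    return n_agents * (np.mean(m**2) - np.mean(m)**2)
```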
versus the independence probability for the 2d ( triangular lattice , ) and 3d ( bcc lattice , ) cases , considering ( a ) .one can see the typical behavior of a phase transition .we also shown versus for the model defined on a 1d ring with sites ( b ) . in this case , the results for distinct values of suggest the absence of a phase transition .all data are averaged over simulations.,title="fig:",scaledwidth=48.0% ] versus the independence probability for the 2d ( triangular lattice , ) and 3d ( bcc lattice , ) cases , considering ( a ) .one can see the typical behavior of a phase transition .we also shown versus for the model defined on a 1d ring with sites ( b ) . in this case , the results for distinct values of suggest the absence of a phase transition .all data are averaged over simulations.,title="fig:",scaledwidth=48.0% ] to test the universality of the model under the presence of a topology , we simulated the dynamics on two distinct lattices , namely a two - dimensional triangular lattice and a three - dimensional body - centered cubic ( bcc ) lattice . in this case , the presence of a topology will introduce correlations in the system , and we expected that the mean - field results are not valid anymore . the lattices were built as folows . the triangular lattice was built from a finite square lattice with extra bonds along one diagonal direction . in this case , each group of 3 agents is chosen as follows .first , we choose an agent at random , say .then , we choose at random two nearest neighbors of ( say and ) , in a way that each one of the 3 agents ( , and ) is a neighbor of the other two agents , forming a triangle . on the other hand , the bcc lattice was built from a cubic structure with linear size , and each group contains 5 agents that were chosen as follows .first , we choose a random plaquette of 4 neighbor sites , forming a square .the fifth site is randomly chosen between the 2 possible sites in order to form a pyramid . a typical behavior of the order parameter as a function of is shown in fig .[ fig3_new ] ( a ) for both cases ( 2d and 3d , considering ) , where one can observe a typical behavior of an order - disorder transition . considering distinct values of , we performed a fss analysis ir order to estimate the critical exponents ( not shown ) .thus , for the 2d lattice we obtained the same critical exponents of the 2d ising model for all values of , i.e. , , and , and for the 3d lattice we obtained the same critical exponents of the 3d ising model for all values of , i.e. , , and .these results suggest that considering a bidimensional ( tridimensional ) system the model is in the universality class of the 2d ( 3d ) ising model . finally , we simulated the model on an one - dimensional ring , where each 3-agents group was formed by a randomly chosen site and its two nearest neighbors .typical results for the order parameter as a function of are exhibited in fig .[ fig3_new ] ( b ) . in this case, the results suggest that there is no order - disorder transition , as in the 1d ising model . in this case , considering the results for 1d , 2d and 3d lattices and also for the mean - field case , one can say that the majority - rule model with independent behavior is in the ising model universality class . the comparative phase diagram ( mean field x 2d x 3d ) is exhibited in fig . 
[ fig3 ] , where we plot for the fully - connected case the analytical solution for , eq .( [ eq5 ] ) .the behavior is qualitatively similar in all cases , suggesting a frontier of the form where for the mean - field case , for the triangular lattice and for the bcc lattice ( these last two values were obtained by a fit of the data exhibited in fig . [ fig3 ] ) . ) , with for the mean - field case [ full line , according to eq .( [ eq5 ] ) ] , ( dashed line ) for the triangular lattice and ( dotted line ) for the bcc lattice , the last two values obtained from data fits.,scaledwidth=55.0% ] notice that the mean - field analytical calculation , eq . ( [ eq5 ] ) , overestimates the critical points , as it is common in mean - field approximations . however , our calculations predict the occurrence of order - disorder phase transitions , as well as correctly predicts the form between and . from the phase diagram of this formulation of the model we can see that the increase of the flexibility parameter leads to the decrease of , as also indicated in eq .( [ eq5 ] ) . this can be understood as follows .the increase of leads the agents to perform more independent opinion changes or spin flips ( which represents a nonconservative society ) .this action tends to disorder the system even for a small value of the independence probability , which decrease the critical point .notice that we obtained here for the mean - field case the same result for obtained in ref . , where the independent behavior was considered in the sznajd model .indeed , in the mean - field formulation of the sznajd model , the dynamics is very similar to the mean - field majority - rule dynamics for groups of size , which explains the identical result .however , in the mentioned reference , the model was not mapped in any universality class . as a second formulation of our model , we consider the majority - rule dynamics with the presence of some agents with the inflexibility characteristic , individuals whose stubbornness makes them reluctant to change their opinions . as in , we have considered a fraction of agents that are averse to change their opinions .the following microscopic rules govern the dynamics : 1 .a group of agents , say , is randomly chosen ; 2 .we verify if there is a majority of 2 ( say and ) in favor of a given opinion or , and in this case the other ( say ) is a supporter of the minority opinion ; 3 .if agent is a flexible individual , he / she will follow the local majority and flip his state , otherwise nothing occurs . in this case , the frozen states of the inflexible agents work in the model as the introduction of a quenched disorder . as in magnetic systems , one can expect that a disorder can induce / suppress a phase transition , as was also observed in the kinetic exchange opinion model with the presence of inflexibles . 
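A minimal sketch of the inflexibility variant just defined: the minority member of a randomly chosen group of three adopts the majority opinion only if it is not inflexible, so the frozen states play the role of quenched disorder. Names and data structures are illustrative.

```python
import random

def majority_rule_step_with_inflexibles(opinions, inflexible, rng=random.Random(1)):
    """Groups of 3: on a 2-1 split the minority agent follows the majority
    only if it is flexible; inflexible states stay frozen."""
    group = rng.sample(range(len(opinions)), 3)
    s = sum(opinions[a] for a in group)
    if abs(s) == 1:                               # 2-1 split
        majority = 1 if s > 0 else -1
        for a in group:
            if opinions[a] != majority and not inflexible[a]:
                opinions[a] = majority
    return opinions
```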
versus the fraction of inflexible individuals for the mean - field formulation of the model ( main plot ) .the squares are the numerical results for population size and the full line is the analytical prediction , eq .( [ eq7 ] ) .it is also exhibited in the inset the binder cumulant curves for different sizes , showing the crossing of the curves for , in agreement with eqs .( [ eq7 ] ) and ( [ eq8 ] ) .data are averaged over simulations.,scaledwidth=55.0% ] as in the previous case ( subsection a ) , one can derive analytically the behavior of the order parameter in this mean - field formulation of the model .the dependence of the order parameter with the the fraction of inflexibles is given by ( see appendix 2 ) or in the usual form , where and again we found a typical mean - field exponent , as expected due to the mean - field character of the model .the comparison of eq .( [ eq7 ] ) with the numerical simulations of the model is given in fig .in addition , we also show in the inset of fig .[ fig4 ] the binder cumulant curves for different population sizes , where one can observe the crossing of the curves at , in agreement with eqs .( [ eq7 ] ) and ( [ eq8 ] ) .furthermore , a complete fss analysis ( not shown ) give us , and , i.e. , the same values obtained for the model presented in the subsection a. thus , the presence of intransigents in the population leads the system to an order - disorder transition at a critical density , and this transition is in the mean - field ising model universality class . versus the fraction of inflexible individuals for the model with no independence defined on triangular lattices of distinct sizes ( main plot ) .it is also exhibited in the inset the binder cumulant curves for different sizes , showing no crossing of the curves .data are averaged over simulations.,scaledwidth=55.0% ] as a final observation of this subsection , we also simulated ( as in the previous section ) the majority - rule model with inflexible agents on a two - dimensional triangular lattice , in order to test the universality of the model in comparison with the ising model .the results are exhibited in fig .one can see that the order parameter ( at least for the larger sizes ) does not present the typical behavior of a phase transition , i.e. , the usual change of concavity of the curves . in addition , in the inset of fig .[ fig5 ] one can see that the binder cumulant curves do not cross .a similar behavior was also reported in for another opinion model with inflexibility . in this case, those results suggest that the inclusion of inflexibility as we done here works in the model as a quenched disorder , and it destroys the phase transition in small dimensions like . in order to verify this hypothesis , we simulated the model on square lattices .in such case , we randomly choose a lattice site , and the group is formed by this random individual and his / her four nearest neighbors , forming a group of size 5 as in .the behavior of and are very similar to the ones observed for the triangular lattice ( not shown ) , suggesting that there is no order - disorder transition for .in some magnetic models such type of destruction due to quenched disorder was also observed . as a third formulation of our model ,we consider the majority - rule dynamics where agents can exhibit the independent behavior , as well as inflexibility . in this case, the model carries the rules of the two previous models ( subsections a and b ) , namely : 1 .a group of agents , say , is randomly chosen ; 2 . 
with probabilityeach one of the three agents in the group will act independently of the opinions of the group s individuals , provided he / she is not an inflexible individual .thus , with probability all flexible agents flip their opinions and with probability nothing occurs ; 3 . on the other hand , with probability group follows the standard majority rule . in this case, each flexible agent follows the local majority opinion .notice that , even if the agents decide to act independently of the group s opinions , we will not see necessarily 3 changes of opinions , as in the model of subsection b. indeed , the 3 agents can change their opinions , but we can have two , one or even zero spin flips due to the frozen states of the inflexible agents . versus the independence probability for and typical values of the flexibility , for the mean - field formulation of the model .the symbols correspond to numerical simulations for population size ( averaged over simulations ) and the full lines represent the analytical prediction , eq .( [ eq9]).,scaledwidth=55.0% ] as in the previous cases , one can derive analytically the behavior of the order parameter as a function of the fraction of inflexibles and the independence probability , in the mean - field formulation of the model .the calculations give us ( see appendix 3 ) ^{1/2 } ~,\ ] ] where is given by writing the order parameter in the the usual form , one obtains and again we found a typical mean - field exponent , as expected due to the mean - field character of the model .notice that we recover the results of eqs .( [ eq5 ] ) and ( [ eq8 ] ) for ( no inflexibility ) and ( no independence ) , respectively . the comparison of eq .( [ eq9 ] ) with the numerical simulations of the model for and typical values of is given in fig .in addition , a complete fss analysis ( not shown ) for many values of give us , and , i.e. , the same values obtained for the model presented in the previous subsections .thus , this formulation of the model also leads the system to undergoes phase transitions in the same universality class of the mean - field ising model .we also obtained from the fss analysis the critical points for typical values of and .the comparison among the numerical estimates and the analytical prediction of eq .( [ eq11 ] ) is shown in the phase diagram of fig .[ fig7 ] . versus for typical values of .the symbols are the numerical estimates of the critical points , obtained from the crossing of the binder cumulant curves for different population sizes .the full lines are the analytical form given by eq .( [ eq11]).,scaledwidth=55.0% ] from the phase diagram of this formulation of the model we can see that the decrease of the flexibility parameter , related to the independent behavior , makes the ordered phase greater , for a given value of . as in the case with no inflexibility ,the increase of leads the agents to perform more independent spin flips , and this action tends to disorder the system even for a small value of the independence probability , which decrease the critical point .the presence of intransigent agents reinforces this behavior , leading to the decrease of the ordered phase for increasing values of .in this work , we have studied a discrete - state opinion model where each agent carries one of two possible opinions , . 
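Before turning to the conclusions, here is a compact Monte Carlo sketch of the combined dynamics of the last subsection: independence with probability q and flip probability f for flexible agents, plus a fraction d of inflexible agents, returning the steady-state order parameter <|m|> for one parameter choice. Since the probability symbols are garbled in this extraction, the rules coded here follow the verbal description above and should be read as an assumed interpretation.

```python
import random
import numpy as np

def simulate(n=1000, q=0.1, f=0.5, d=0.1, sweeps=2000, seed=2):
    """Mean-field majority rule with independence (prob. q, flip prob. f for
    flexible agents) and a fraction d of inflexible agents; returns <|m|>."""
    rng = random.Random(seed)
    opinions = [rng.choice((-1, 1)) for _ in range(n)]
    inflexible = [rng.random() < d for _ in range(n)]
    samples = []
    for sweep in range(sweeps):
        for _ in range(n // 3):
            group = rng.sample(range(n), 3)
            if rng.random() < q:                  # independent behaviour
                if rng.random() < f:
                    for a in group:
                        if not inflexible[a]:
                            opinions[a] = -opinions[a]
            else:                                 # standard majority rule
                s = sum(opinions[a] for a in group)
                if abs(s) == 1:
                    maj = 1 if s > 0 else -1
                    for a in group:
                        if not inflexible[a]:
                            opinions[a] = maj
        if sweep > sweeps // 2:                   # discard the transient
            samples.append(abs(sum(opinions)) / n)
    return np.mean(samples)
```

Sweeping q at fixed f and d and locating the Binder cumulant crossings for several population sizes would reproduce phase diagrams of the kind shown in the figures.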
for this purpose, we considered three distinct mechanisms to model the social behavior of the agents : majority - rule dynamics , inflexibility and independence .our target was to study the critical behavior of the opinion model under the presence of the mentioned mechanisms .thus , we performed computer simulations of the model , and some analytical calculations complemented the numerical analysis .let us remember that the original majority - rule model presents only absorbing consensus states with all opinions or .first we considered the majority - rule model with independence in a fully - connected population . in this case, there is a probability that the 3 agents forming a group behave as independent individuals , changing opinion with probability and keeping opinion with probability .this mechanism acts in the system as a social temperature . in this case, we showed that independence induces an order - disorder transition for all values of , with the critical points being a function of .in addition , the model is in the same universality class of the mean - field ising model and of the kinetic exchange opinion models . in the ordered phasethere is a coexistence of both opinions , but one of them is a majority in the population .we observed that the larger the flexibility ( nonconservative societies ) , the smaller the value of needed to disorder the system .in other words , for a population debating a subject with two distinct choices , it is easier to reach a final decision for a small flexibility concerning the independent behavior , as observed in conservative societies .consensus states were obtained only for or . as a test to the universality of the model , we simulated it on triangular and on bcc lattices , and we found the same critical exponents of the 2d and the 3d ising models , respectively .in addition , simulations on 1d lattices suggest the absence of a phase transition .all those results suggest the majority - rule model with independence is in the ising model universality class .after that , we considered the majority - rule model in a fully - connected population with a fraction of agents with the inflexibility characteristic . in this case , these agents present frozen states and can not be persuaded to change opinion . in the language of magnetic systems , those special agents behave as the introduction of quenched disorder in the system .we showed that there is a critical fraction above which there is no order in the system , i.e. , there is no decision or majority opinion .this order - disorder phase transition is also in the universality class of the mean - field ising model .we observed consensus in the population only in the absence of inflexible agents , i.e. , for .in other words , the presence of intrasigents in the population makes the model more realistic , since there is a clear decision in the public debate for . again , as a test for the universality of the model , we simulated it on triangular and square lattices . in this case , we did not observe a phase transition for both lattices , suggesting that the model with quenched disorder does not undergo a phase transition in small dimensions like . finally , we considered both effects , independence and inflexibility , in the majority - rule model . in this casewe also observed a phase transition at mean - field level , and the critical points depend on and .consensus states were only obtained for , and the critical exponents are the same as observed before , i.e. 
, we found again the universality class of the mean - field ising model . from the phase diagram of this formulation of the model we observed that the increase of the flexibility parameter , related to the independent behavior , makes the ordered phase smaller .it was recently discussed that the majority - rule model with limited persuasion can lead to the victory of the initial minority , provided it is sufficiently small .thus , as a future extension of the present work , it may be interesting to analyze how different initial concentrations of the opinions affect the dynamics , as well as mechanisms of limited persuasion in the majority - rule dynamics . [ [ section ] ] let us consider the model with independent behavior in the mean - field formulation . following the approach of ref . , we computed the stationary order parameter , as well as the critical values .let us first define and as the stationary probabilities of each possible state ( or , respectively ) .we have to calculate the probability that a given agent suffers the change or .we are considering groups of 3 agents , so one can have distinct variations of the magnetization , depending on the states of the 3 agents .for example , the probability to choose at random 3 agents with opinions , i.e , a configuration , is . with probability the configuration remains , which does not affect the magnetization of the system .with probability the configuration also remains , and with probability the configuration changes to , which cause a variation of in the magnetization . in other words ,the magnetization decreases units with probability .one can denote this probability as , i.e. , the probability that the magnetization variation is equal to .generalizing , one can define , with in this case , as the probability that the magnetization variation is after the application of the models rules . as the order parameter ( magnetization ) stabilizes in the steady states, we have that its average variation must vanish in those steady states , namely , + 2\,[r(+2)-r(-2)]=0 \,.\ ] ] in this case , we have thus , the null average variation condition eq .( [ nullshift1 ] ) give us =0\ ] ] which give us the solution ( disordered state ) or where we used the normalization condition .( [ ap1 ] ) give us two solutions for , and the order parameter can be obtained from , which give us the critical points can be obtained by taking , that is the eq .( [ eq5 ] ) of the text .now we consider the model with inflexibility .let us now denote the fraction of agents who have opinion and are non - inflexibles by , and similarly for .notice that the total fraction of inflexibles is , and the fraction of inflexibles with opinion is , as well as the fraction of inflexibles with opinion . in this casethe normalization condiction becomes since the complementary fraction represents the agents that have frozen states or .following the same approach of the previous appendix , the null average variation condition becomes =0 \,,\ ] ] where the probabilities are given by thus , the null average variation condition eq .( [ nullshift2 ] ) give us which give us the solution ( disordered state ) or where we used the normalization condition , eq .( [ ap3 ] ) .( [ ap5 ] ) give us two solutions for , and the order parameter can be obtained from , which give us the critical point can be obtained by taking , that is the eq .( [ eq8 ] ) of the text .now we consider the model with inflexibility and independence . 
as in the previous section ,let us now denote the fraction of agents who have opinion and are non - inflexibles by , and similarly for .notice that the total fraction of inflexibles is , and the fraction of inflexibles with opinion is , as well as the fraction of inflexibles with opinion .the normalization condition is given by eq .( [ ap3 ] ) following the same approach of the previous appendices , the null average variation condition becomes + 4\,[r(+4)-r(-4)]+2\,[r(+2)-r(-2)]=0 \,,\ ] ] where the probabilities are given by thus , the null average variation condition eq .( [ nullshift3 ] ) give us =0 ~,\end{aligned}\ ] ] which give us the solution ( disordered state ) or =0 ~,\ ] ] where we used the normalization condition , eq . ( [ ap3 ] ) .( [ ap8 ] ) give us two solutions for , and the order parameter can be obtained from , which give us ^{1/2 } ~,\ ] ] where is given by the critical points can be obtained by taking , that is the eq .( [ eq11 ] ) of the text .notice that we recover the results ( [ ap2 ] ) and ( [ ap6 ] ) for ( no inflexibility ) and ( no independence ) , respectively .the authors acknowledge financial support from the brazilian scientific funding agencies cnpq and capes .
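To make the appendix's null-average-variation argument concrete, the sketch below enumerates the possible group compositions for the simplest case (independence only, no inflexibles), computes the expected change of the total magnetisation per update, and finds the non-trivial stationary solution numerically. The parameters q and f stand for the (garbled) independence and flexibility probabilities, with the same assumed reading of the rules as in the simulation sketch above.

```python
import numpy as np
from math import comb

def expected_dM(m, q, f):
    """E[change of total magnetisation per update] for the assumed rules:
    with prob. q the whole group flips with prob. f; otherwise majority rule."""
    p_plus = (1 + m) / 2
    total = 0.0
    for n_plus in range(4):                       # composition of the 3-agent group
        w = comb(3, n_plus) * p_plus**n_plus * (1 - p_plus)**(3 - n_plus)
        flip_all = -2 * (n_plus - (3 - n_plus))   # every agent flips its spin
        majority = 2 if n_plus == 2 else (-2 if n_plus == 1 else 0)
        total += w * (q * f * flip_all + (1 - q) * majority)
    return total

def stationary_m(q, f, grid=np.linspace(1e-4, 1.0, 2000)):
    """Largest m > 0 with E[dM] = 0 (ordered solution); 0 if none exists."""
    vals = np.array([expected_dM(m, q, f) for m in grid])
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return grid[crossings[-1]] if len(crossings) else 0.0

# e.g. stationary_m(0.05, 0.5) is about 0.95 (ordered), stationary_m(0.4, 0.5) = 0.0
```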
in this work we study opinion formation in a population participating in a public debate with two distinct choices. we consider three distinct mechanisms of social interaction and individual behavior: conformity, nonconformity and inflexibility. conformity is ruled by the majority-rule dynamics, whereas nonconformity is introduced in the population as an independent behavior, implying a failure to comply with attempted group influence. finally, inflexible agents are introduced in the population with a given density. these individuals exhibit a singular behavior: their stubbornness makes them reluctant to change their opinions. we consider these effects separately and all together, with the aim of analyzing the critical behavior of the system. we performed numerical simulations on several lattice structures and for distinct population sizes, and our results suggest that the different formulations of the model undergo order-disorder phase transitions in the same universality class as the ising model. some of our results are complemented by analytical calculations.
the glioblastoma , also known as glioblastoma multiforme ( gbm ) , is a highly invasive glioma in the brain .it is the most common and most aggressive brain tumour in humans . from a medical point of view, gbm is a fast growing tumour made up of an heterogeneous mixture of poorly differentiated astrocytes , with pleomorphism , necrosis , vascular proliferation and high rate mitosis .this glioma can appear at any age but is more frequent among adults older than 45 years .usually , they appear in the cerebral hemispheres but they could also appear in the cerebellum . from a mathematical point of view , they can be considered to have a spherical geometry as it is illustrated in figure [ f1 ] see . in 1997p. k. burgess _ et ._ proposed a 3-dimensional mathematical model that describes the growth of a glioblastoma free of any medical treatment that could grow with no restrictions .this model provides information about the density change of the tumour in any spatiotemporal point but does not give any information about the case in which some annihilation of tumour cells could appear due , possibly , to the administration of cancericidal substances and hence does not study the dynamics of proliferation - annihilation of gliomas .it is worthy to say that some bi - dimensional mathematical models preceded the burgess model as the ones formulated in y . in the present work , and taking the burgess model as an starting point , we will formulate a mathematical model that takes into account the action of some cancericidal substances ( as temozolomide and chemotherapy ) and hence the possibility to annihilate or diminish the growth of the gliomas .our resulting model , in agreement with clinical data , is expressed in terms of a partial non - linear differential equation that is solved using the adomian decomposition method , .the model proposed also allows to compare the profile of a tumour growing without any treatment with the profile of a tumour subject to treatment , i , e . , our model includes a term that gives the difference between the growth and annihilation of the glioma .calibration of doses using this model as a basis could result in the lengthening of life for glioma patients .[ h ] , scaledwidth=40.0% ]adomian decomposition method ( adm ) is a technique to solve ordinary and partial nonlinear differential equations . using this method , it is possible to express analytic solutions in terms of a rapidly converging series . in a nutshell ,the method identifies and separates the linear and nonlinear parts of a differential equation . by inverting and applying the highest order differential operator that is contained in the linear part of the equation , it is possible to express the solution in terms of the the rest of the equation affected by this inverse operator . at this point, we propose to express this solution by means of a decomposition series with terms that will be well determined by recursion and that gives rise to the solution components .the nonlinear part is expressed in terms of the adomian polynomials . the initial or the boundary condition andthe terms that contain the independent variables will be considered as the initial approximation . 
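A small symbolic sketch of the decomposition just described: the Adomian polynomials A_n = (1/n!) d^n/dλ^n N(Σ_k u_k λ^k)|_{λ=0} can be generated with SymPy for any analytic nonlinearity. The helper name and the test nonlinearity N(u) = u² are illustrative.

```python
import sympy as sp

def adomian_polynomials(N, u_syms, order):
    """A_0..A_order for the nonlinearity N(u), using
    A_n = (1/n!) d^n/dlam^n N(sum_k u_k lam^k) evaluated at lam = 0."""
    lam = sp.symbols('lambda')
    u_series = sum(u_syms[k] * lam**k for k in range(order + 1))
    return [sp.simplify(sp.diff(N(u_series), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(order + 1)]

u = sp.symbols('u0:4')                              # u_0 .. u_3
A = adomian_polynomials(lambda x: x**2, u, 3)       # test nonlinearity N(u) = u^2
# A[0] = u0**2, A[1] = 2*u0*u1, A[2] = u1**2 + 2*u0*u2, A[3] = 2*u0*u3 + 2*u1*u2
```

For N(u) = u² this reproduces the familiar first few Adomian polynomials, which is a quick way to check the recursion used below.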
in this way and by means of a recurrence relations , it is possible to calculate the terms of the series by recursion that gives the approximate solution of the differential equation .+ given a partial ( or ordinary ) differential equation with the initial condition where is differential operator that could itself , in general , be nonlinear and therefore includes linear and nonlinear terms .+ in general , equation ( [ eq : y1 ] ) is be written as where , is the linear remainder operator that could include partial derivatives with respect to , is a nonlinear operator which is presumed to be analytic and is a non - homogeneous term that is independent of the solution .+ solving for , we have as is presumed to be invertible , we can apply to both sides of equation ( [ eq : y4 ] ) obtaining an equivalent expression to ( [ eq : y5 ] ) is where is the constant of integration with respect to that satisfies . in equationswhere the initial value , we can conveniently define .+ the adm proposes a decomposition series solution given as the nonlinear term is given as where is the adomian polynomials sequence given by ( see deduction in appendix at the end of this paper ) \rvert_{\lambda=0}.\label{eq : y9}\ ] ] substituting ( [ eq : y7 ] ) , ( [ eq : y8 ] ) y ( [ eq : y9 ] ) into equation ( [ eq : y6 ] ) , we obtain with identified as , and therefore , we can write from which we can establish the following recurrence relation , that is obtained in a explicit way for instance in reference , using ( [ eq : y11 ] ) , we can obtain an approximate solution of ( [ eq : y1 ] ) , ( [ eq : y2 ] ) as this method has been successfully applied to a large class of both linear and nonlinear problems .the adomian decomposition method requires far less work in comparison with traditional methods .this method considerably decreases the volume of calculations .the decomposition procedure of adomian easily obtains the solution without linearising the problem by implementing the decomposition method rather than the standard methods . in this approach, the solution is found in the form of a convergent series with easily computed components ; in many cases , the convergence of this series is extremely fast and consequently only a few terms are needed in order to have an idea of how the solutions behave .convergence conditions of this series have been investigated by several authors , e.g. , .mathematical modelling of the spread of aggressive brain cancers such as glioblastoma multiforme has been discussed by several authors , , .it is noteworthy to say that some authors like have included a killing term . in any case , they describe tumour - growth by using spatiotemporal models that can be read as rate of change of tumour cell density + = ( diffusion of tumour cells ) + + ( growth of tumour cells ) + -(killing rate of the same cells ) in mathematical terms , in this equation , is the concentration of tumour cells at location at time , is the diffusion coefficient , _i.e. _ a measure of the areal speed of the invading glioblastoma cells , is the rate of reproduction of the glioblastoma cells , and the killing rate of the same cells .the last term has been used by some authors to investigate the effects of chemotherapy , . in this model ,the tumour is assumed to have spherical symmetry and the medium through which it is expanding , to be isotropic and uniform .we can assume that at the beginning of time ( diagnostic time ) , the density of cancer cells is , _ i. e. 
_ , and so the equation ( [ eq : def3 ] ) is \eta(r , t),\\ \eta(r_{0},t_{0})=n_0 .\end{array } \right.\label{eq : y13}\ ] ] the solution of ( [ eq : y13 ] ) is given , without many details , in and in .they solve this equation for the -non - very realistic- case in which and and are constants .the solution for this case is given by using this equation ( [ eq : y14 ] ) , the mentioned authors calculate the expected survival time ( in months ) for a person that has a brain tumour modelled using equation ( [ eq : y13 ] ) .following , we propose the change of variables , and .using this change of variables and keeping constant , equation ( [ eq : y13 ] ) is given by in , a medical study is presented that stresses the advantages of using combined therapies such as chemotherapy and radiotherapy in the treatment of brain cancer .concretely , they present the results of using temozolomide in combination with radiotherapy .the results show a lengthening in the patient life as a consequence of the tumour size decrease .mathematically this is traduced as a negative growth of the tumour , in other words , the term ( growth of the cancer cells minus eliminations of cancer cells ) is negative . in present work, we will study the case presented by roger stupp _ et ._ in .our model will make use of equation ( [ eq : y15 ] ) taking constant and \times [ 0,1]][f2 ] , scaledwidth=80.0% ] the adm has been used by several authors to solve linear and non - linear diffusion equations as well as fractional diffusion equations , some important references can be found in . in the present work we are interested in the solution ofthe diffusion equation ( [ eq : y15 ] ) in which a non - linear source is modelling the effects of the combined use of radiotherapy and chemotherapy treatment with _temozolomide _ as is reported in .considering the equation ( [ eq : y15 ] ) , with and our model will be given by the following non - linear partial differential equation in equation ( [ eq : y16 ] ) we have made the _ a priori _ assumption that the initial condition is .this assumption considers that the initial tumour growth profile is given by in the time we start the annihilation or attenuation of the gliomas by means of some treatment ( as chemotherapy ) .the initial growth profile is illustrated in figure [ f3 ] .[ h ] [ f3 ] , scaledwidth=80.0% ] using \rvert_{\alpha=0}\quad n\geq 0\ ] ] to calculate the adomian polynomials , we have : using the sequence for and the recurrence relation given in ( [ eq : y11 ] ) we can calculate , in this way : \hat{t}=\frac{t}{r+2}\ ] ] the partial sums of the adomian series are and taking into account the equation ( [ eq : y12 ] ) , we have taking the sum of the first terms , we can see that the above series converges to .then , using ( [ eq : y18 ] ) we have [ h ] with \times [ 0,1] ] . 
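Since the ADM series is only useful if it converges to the true solution, a direct numerical integration of the model equation is a natural cross-check. The sketch below integrates a radially symmetric reaction-diffusion equation of the form ∂_tη = D(∂²_rη + (2/r)∂_rη) + ρη − κη² with an explicit finite-difference scheme; the quadratic killing term, the parameter values and the crude zero-flux boundaries are illustrative stand-ins, since the paper's exact nonlinear treatment term and constants are garbled in this extraction.

```python
import numpy as np

D, rho, kappa = 1e-3, 1.2e-2, 5e-2      # diffusion, growth and killing (illustrative units)
R, nr, T = 5.0, 200, 365.0              # domain radius, grid points, total time (days)
dr = R / nr
r = np.linspace(dr, R, nr)              # radial grid (r = 0 avoided)
dt = 0.2 * dr**2 / D                    # explicit-scheme stability margin
eta = np.exp(-r**2 / 0.1)               # initial tumour-density profile

t = 0.0
while t < T:
    g = np.concatenate(([eta[1]], eta, [eta[-2]]))          # crude zero-flux boundaries
    lap = (g[2:] - 2*g[1:-1] + g[:-2]) / dr**2 \
          + (2.0 / r) * (g[2:] - g[:-2]) / (2 * dr)         # spherical Laplacian of eta
    eta = eta + dt * (D * lap + rho * eta - kappa * eta**2)  # growth minus killing
    t += dt

burden = 4 * np.pi * np.sum(eta * r**2) * dr                 # integrated tumour burden
print(f"integrated tumour burden after {T:.0f} days: {burden:.3e}")
```

Setting κ = 0 recovers the untreated, exponentially growing case of the Burgess model, while a sufficiently strong killing term makes the integrated burden decrease, which is the regime reported for the combined therapy.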
in figure[ f4 ] , we can observe the approximate - linear tumour growth - profile after the patient is under chemotherapy treatment with _temozolomide _ in contrast with the fast exponential growth given by ( [ eq : y14 ] ) and corresponding to free - growth tumour .the free - growth tumour profile was shown in , in which the value of is given for different values of the parameters , and .+ in order to see the effect of medical treatment , we can compare the radius of the tumour under medical treatment versus the radius of the untreated tumour .using the solution ( [ eq : y14 ] ) of the burgess linear partial equation and solving for ( also see ) that accounts for free growth of an untreated tumour , we obtain \right|}.\ ] ] solving for in the solution of the non - linear equation ( [ eq : y19 ] ) that accounts for case in which we have proliferation and annihilation of tumour cells due to medical treatment , we have if we take , and ( see ) and recalling that , then . using this values in equation ( [ eq : y-20 ] ) and ( [ eq : y-21 ] ) we obtain the following table .radius growth in untreated tumour versus radius growth in treated tumour . [ cols="<,<,<",options="header " , ] using the data of table [ t1 ] we obtain figures [ f5 ] and [ f6 ] . from the figure [ f5 ] , we observe that the radius of the untreated tumour grows as predicted in ( [ eq : y14 ] ) meanwhile we observe , in figure [ f6 ] , that the treated tumour s radius decreases with time .[ h ! ] versus time ] for equation ( 21 ) [ f6 ] , width=453,height=226 ]in this work we have proposed a model for cerebral tumour ( glioblastoma ) growth under medical treatment . taking the burgess equation as departing point , we considered additional non - linear terms that represent the dynamics of proliferation and annihilation of gliomas resulting from medical care as suggested in the clinical study done by r. stupp .the effect of the medical treatment on the tumour is represented by a non - linear term .the final model that describes the proliferation and annihilation of tumour cells is represented by a partial non - linear differential equation that is solved using the adomian decomposition method ( adm ) . finally , as is observed in table [ t1 ] and figures[ f5 ] and [ f6 ] our model and the solution given using the adm appropriately models the effects of the combined therapies . by means of a proper use of parameters, this model could be used for calculating doses in radiotherapy and chemotherapywe would like to thank anonymous referees for their constructive comments and suggestions that helped to improve the paper .in this appendix we will deduce equation ( [ eq : y9 ] ) that accounts for every term in the succession of the adomian polynomials assuming the following hypotheses stated in : * the series solution , , of the problem given in equation ( [ eq : y1 ] ) is absolutely convergent , * the non - linear function can be expressed by means of a power series whose radio of convergence is infinite , that is is worthy to note that ( [ 2.2 ] ) is a rearranged expression of the series ( [ 2.1 ] ) , and note that , due to hypothesis , this series is convergent .consider now , the parametrisation proposed by g. adomian in given by where is a parameter in and is a complex - valued function such that . 
with this choosing of and using the hypotheses above stated , the series ( [ 2.3 ] ) is absolutely convergent .+ substituting ( [ 2.3 ] ) in ( [ 2.2 ] ) , we obtain due to the absolute convergence of we can rearrange in order to obtain the series of the form . using ( [ 2.5 ] ) we can obtain the coefficients de , and finally we deduce the adomian s polynomials .that is , using equation ( [ 2.6 ] ) making and taking derivative at both sides of the equation , we can make the following identification + + + + + + + hence we have obtain equation ( [ eq : y9 ] ) : \rvert_{\lambda=0}.\label{2.7}\ ] ] burgess , p. k. , _ et .al . _ : the interaction of growth rates and diffusion coefficients in a three - dimensional mathematical model of gliomas . j. of neuropathology and experimental neurology . * 56 * , 704 - 713 ( 1997 ) das , s. : solution of extraordinary differential equation with physical reasoning by obtaining modal reaction .series modelling and simulation in engineering . *2010 * , 1 - 19 ( 2010 ) .doi:10.1155/2010/739675 murray , j. d. : glioblastoma brain tumors : estimating the time from brain tumor initiation and resolution of a patient survival anomaly after similar treatment protocols .j. of biological dynamics .* 6 * , suppl . 2 , 118 - 127 ( 2012 ) .doi : 10.1080/17513758.2012.678392 saha ray , s. : a new approach for the application of adomian decomposition method for solution of fractional space diffusion equation with insulated ends . applied math . and computations .* 202*(2 ) , 544 - 549 ( 2008 ) sardar , t. , saha ray , s. , bera , r. k. , biswas b. b. , das , s. : the solution of coupled fractional neutron diffusion equations with delayed neutron .j. of nuclear energy science and tech .* 5*(2 ) , 105 - 113 ( 2010 )
in the present work we consider a mathematical model that describes brain tumour (glioblastoma) growth under medical treatment. based on the medical study presented by r. stupp et al. (new engl journal of med 352: 987-996, 2005), which shows that combined therapies such as radiotherapy and chemotherapy produce negative tumour growth, and using the mathematical model of p. k. burgess et al. (j neuropath and exp neur 56: 704-713, 1997) as a starting point, we present a model for tumour growth under medical treatment, represented by a non-linear partial differential equation that is solved using the adomian decomposition method (adm). it is also shown that the non-linear term efficiently models the effects of the combined therapies. by means of a proper choice of parameters, this model could be used for calculating doses in radiotherapy and chemotherapy. keywords: burgess equation, adomian polynomials, glioblastoma, temozolomide.
since its introduction in a centralized context , the minimum spanning tree ( or mst ) problem gained a benchmark status in distributed computing thanks to the seminal work of gallager , humblet and spira . the emergence of large scale and dynamic systems revives the study of scalable algorithms .scalable _ algorithm does not rely on any global parameter of the system ( e.g. upper bound on the number of nodes or the diameter ) . in the context of dynamic systems , after a topology change a minimum spanning tree previously computed is not necessarily a minimum one ( e.g. , an edge with a weight lower than the existing edges can be added ) .a mechanism must be used to replace some edges from the constructed tree by edges of lower weight .park et al . proposed a distributed algorithm to maintain a mst in a dynamic network using the gallager , humblet and spira algorithm . in their approach , each node know its ancestors and the edges weight leading to the root in the tree .moreover , the common ancestor between two nodes in the tree can be identified . for each non - tree edge ,the tree is detected as not optimal by and if there exist a tree edge with a higher weight than between ( resp . ) and the common ancestor of and . in this case, the edge of maximum weight on this path is deleted .this yields to the creation of several sub - trees , from which a new mst can be constructed following the merging procedure given by gallager et al .flocchini et al . considered another point of view to address the same problem .the authors were interested to the problem of precomputing all the replacement minimum spanning trees when a node or an edge of the network fails .they proposed the first distributed algorithms to efficiently solve each of these problems ( i.e. , by considering either node or edge failure ) .additional techniques and algorithms related to the construction of light weight spanning structures are extensively detailed in .large scale systems are often subject to transient faults . _ self - stabilization _ introduced first by dijkstra in and later publicized by several books deals with the ability of a system to recover from catastrophic situation ( i.e., the global state may be arbitrarily far from a legal state ) without external ( e.g. human ) intervention in finite time .although there already exist self - stabilizing solutions for the mst construction , none of them considered the extension of the gallager , humblet and spira algorithm ( ghs ) to self - stabilizing settings . interestingly , this algorithm unifies the best properties for designing large scale msts : it is fast and totally decentralized and it does not rely on any global parameter of the system .our work proposes an extension of this algorithm to self - stabilizing settings .our extension uses only poly - logarithmic memory and preserves all the good characteristics of the original solution in terms of convergence time and scalability .antonoiu and srimani , and gupta and srimani presented in the first self - stabilizing algorithm for the mst problem .the mst construction is based on the computation of all shortest paths ( for a certain cost function ) between all pairs of nodes . while executing the algorithm , every node stores the cost of all paths from it to all the other nodes . to implement this algorithm, the authors assume that every node knows the number of nodes in the network , and that the identifiers of the nodes are in .every node stores the weight of the edge placed in the mst for each node . 
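The replacement step sketched above, deleting the heaviest edge of the fundamental cycle created by a non-tree edge whenever that edge is heavier, is exactly the "red rule" used later in this paper. Here is a centralized, non-distributed sketch with NetworkX (the function name is illustrative):

```python
import networkx as nx

def improve_spanning_tree(G, T):
    """One pass of the 'red rule' repair: for every non-tree edge (u, v),
    swap out a heavier edge of the tree path u..v if one exists."""
    improved = False
    for u, v, w in G.edges(data="weight"):
        if T.has_edge(u, v):
            continue
        path = nx.shortest_path(T, u, v)              # unique tree path u..v
        path_edges = list(zip(path, path[1:]))
        a, b = max(path_edges, key=lambda e: T[e[0]][e[1]]["weight"])
        if T[a][b]["weight"] > w:
            T.remove_edge(a, b)                       # drop the heaviest cycle edge
            T.add_edge(u, v, weight=w)
            improved = True
    return improved
```

Starting from any spanning tree T of a connected weighted graph G and repeating this pass until it returns False yields a minimum spanning tree, which is how the red rule complements the blue-rule merging of fragments.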
therefore the algorithm requires bits of memory at node .since all the weights are distinct integers , the memory requirement at each node is bits .the main drawback of this solution is its lack of scalability since each node has to know and maintain information for all the nodes in the system .note that the authors introduce a time complexity definition related to the transmission of beacon in the context of ad - hoc networks . in a round ,each node receives a beacon from all its neighbors .so , the time complexity announced by the authors stays only in the particular synchronous settings . in asynchronoussetting , a node is activated at the reception of a beacon from each neighbor leading to a time complexity .a different approach for the message - passing model was proposed by higham and liang .the algorithm works roughly as follows : every edge checks whether it should belong to the mst or not . to this end , every non tree - edge floods the network to find a potential cycle , and when receives its own message back along a cycle , it uses the information collected by this message ( i.e , the maximum edge weight of the traversed cycle ) to decide whether could potentially be in the mst or not .if the edge has not received its message back after the time - out interval , it decides to become tree edge .the memory used by each node is bits , but the information exchanged between neighboring nodes is of size bits , thus only slightly improving that of .this solution also assumes that each node has access to a global parameter of the system : the diameter .its computation is expensive in large scale systems and becomes even harder in dynamic settings .the time complexity of this approach is rounds where and are the number of edges and the upper bound of the diameter of the network respectively , i.e. , rounds in the worst case . 
+ in we proposed a self - stabilizing loop - free algorithm for the mst problem .contrary to previous self - stabilizing mst protocols , this algorithm does not make any assumption on the network size ( including upper bounds ) or the uniqueness of the edge weights .the proposed solution improves on the memory space usage since each participant needs only bits while preserving the same time complexity as the algorithm in .clearly , in the self - stabilizing implementation of the mst algorithms there is a trade - off between the memory complexity and their time complexity ( see table [ tableresume ] ) .the challenge we address in this paper is to design fast and scalable self - stabilizing mst with little memory .our approach brings together two worlds : the time efficient mst constructions and the memory compact informative labeling schemes .we do this by extending the ghs algorithm to the self - stabilizing setting while keeping it memory space compact , but using a self - stabilizing extension of the nearest common ancestor labeling scheme .note that labeling schemes have already been used in order to maintain compact information linked with vertex adjacency , distance , tree ancestry or tree routing , however none of these schemes have been studied in self - stabilizing settings ( except for the tree routing ) .our contribution is therefore twofold .we propose for the first time in self - stabilizing settings a bits scheme for computing the nearest common ancestor .furthermore , based on this scheme , we describe a new self - stabilizing algorithm for the mst problem .our algorithm does not make any assumption on the network size ( including upper bounds ) or the existence of an a priori known root .the convergence time is asynchronous rounds and the memory space per node is bits .interestingly , our work is the first to prove the effectiveness of an informative labeling scheme in self - stabilizing settings and therefore opens a wide research path in this direction .the description of our algorithm is _ explicit _ , in the sense that we describe all procedures using the formal framework the recent paper announces an improvement of our results , by sketching the _ implicit _ description of a self - stabilizing algorithm for mst converging in rounds , with a memory of bits per node .this algorithm is also based on an informative labeling scheme .the approach proposed by korman et al . is based on the composition of many sub - algorithms ( some of them not stabilizing ) presented in the paper as black boxes and the composition of all these modules was not proved formally correct in self - stabilizing settings up to date .the main feature of our solution in comparison with is its straightforward implementation .we consider an undirected weighted connected network where is the set of nodes , is the set of edges and is a positive cost function .nodes represent processors and edges represent bidirectional communication links .the processors asynchronously execute their programs consisting of a set of variables and a finite set of rules .we consider the local shared memory model of computation bits per node by considering also the local copies of neighbors variables , with the maximum degree of a node in the network . ] .the variables are part of the shared register which is used to communicate with the neighbors. 
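As a toy illustration of this local shared-memory model with guarded rules (whose semantics are spelled out next), the sketch below runs a central-daemon scheduler on a classical self-stabilizing example, the correction of BFS distances towards a fixed root: starting from arbitrary distance values, the two rules converge to the legitimate configuration. All names are illustrative; this is not the paper's algorithm.

```python
import random

def run(graph, root, dist, rng=random.Random(0), max_steps=100_000):
    """Central-daemon execution of two guarded rules on arbitrary initial 'dist':
    R1: the root with dist != 0 resets it to 0;
    R2: a non-root with dist != 1 + min(neighbour dists) corrects its value."""
    for _ in range(max_steps):
        enabled = []
        for v in graph:
            if v == root:
                if dist[v] != 0:
                    enabled.append((v, 0))
            else:
                target = 1 + min(dist[u] for u in graph[v])
                if dist[v] != target:
                    enabled.append((v, target))
        if not enabled:
            return dist                      # no rule enabled: legitimate configuration
        v, new = rng.choice(enabled)         # the daemon activates one enabled node
        dist[v] = new
    raise RuntimeError("did not stabilize within max_steps")

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(run(graph, root=0, dist={0: 7, 1: -3, 2: 9, 3: 4}))   # -> {0: 0, 1: 1, 2: 1, 3: 2}
```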
a processor can read and write its own registers and can read the shared registers of its neighbors .each processor executes a program consisting of a sequence of guarded rules .each _ rule _ contains a _ guard _ ( boolean expression over the variables of a node and its neighborhood ) and an _ action _ ( update of the node variables only ) .any rule whose guard is _ true _ is said to be _enabled_. a node with one or more enabled rulesis said to be _ enabled _ and may execute the action corresponding to the chosen enabled rule .a _ local state _ of a node is the value of the local variables of the node and the state of its program counter .configuration _ of the system is the cross product of the local states of all nodes in the system .the transition from a configuration to the next one is produced by the execution of an action at a node .computation _ of the system is defined as a _ weakly fair , maximal _ sequence of configurations , , where each configuration follows from by the execution of a single action of at least one node . during an execution step ,one or more processors execute an action and a processor may take at most one action . _ weak fairness _ of the sequence means that if any action in is continuously enabled along the sequence , it is eventually chosen for execution ._ maximality _ means that the sequence is either infinite , or it is finite and no action of is enabled in the final global state . in this context , a _ round _ is the smallest portion of an execution where every process has the opportunity to execute at least one action . in the sequel we consider the system can start in any configuration .that is , the local state of a node can be corrupted .we do nt make any assumption on the number of corrupted nodes . in the worst case all the nodes in the system may start in a corrupted configuration . in order to tackle these faults we use self - stabilization techniques .the definition hardly uses the legitimate predicate .a legitimate predicate is defined over the configurations of a system and describes the set of correct configurations .let be a non - empty _ legitimate predicate _ of an algorithm with respect to a specification predicate such that every configuration satisfying satisfies .algorithm is _ self - stabilizing _ with respect to iff the following two conditions hold : + every computation of starting from a configuration satisfying preserves and verifies ( _ closure _ ) . +every computation of starting from an arbitrary configuration contains a configuration that satisfies ( _ convergence _ ) . to compute the time complexity, we use the definition of _ round _given a computation ( ) , the _ first round _ of ( let us call it ) is the minimal prefix of containing the execution of one action ( an action of the protocol or a disabling action ) of every enabled processor from the initial configuration .let be the suffix of such that .the _ second round _ of is the first round of .we propose to extend the gallager , humblet and spira ( ghs ) algorithm , to self - stabilizing settings via a compact informative labeling scheme .thus , the resulting solution presents several advantages appealing to large scale systems : it is compact since it uses only memory whose size is poly - logarithmic in the size of the network , it scales well since it does not rely on any global parameter of the system . the notion of a _fragment _ is central to the ghs approach .a fragment is a sub - tree of the graph , i.e. 
, a fragment is a tree which spans a subset of nodes .note that a fragment can be limited to a single node .an outgoing edge of a fragment is an edge with a single endpoint in .the minimum - weight outgoing edge of a fragment is an outgoing edge of with minimum weight among outgoing edges of , denoted in the following as . in the ghs construction , initially each node is a fragment . for each fragment , the ghs algorithm in identifies the and merges the two fragments endpoints of .it is important to mention that with this scheme , more than two fragments may be merged concurrently .the merging process is repeated in a iterative fashion until a single fragment remains .the result is a mst .the above approach is often called _ blue rule _ for mst construction .this approach is particularly appealing when transient faults create a forest of fragments ( which are sub - trees of a mst ) .the direct application of the blue rule allows the system to reconstruct a mst and to recover from faults which have divided the existing mst .however , when more severe faults hit the system the process states may be corrupted leading to a configuration of the network where the set of fragments are not sub - trees of some mst .this may include , a spanning tree but not a mst or spanning structure containing cycles .in these different types of spanning structures , the application of the _ blue rule _ is not always sufficient to reconstruct a mst . to overcome this difficulty, we combine the _ blue rule _ with another method , referred in the literature as the _ red rule _the _ red rule _ considers all the possible cycles in a graph , and removes the heaviest edge from every cycle , the resulting is a mst . to maintain a mst regardless of the starting configuration , we use the _ red rule _ as follows .let denote a spanning tree of graph , and an edge in but not in .clearly , if is added to , this creates a ( unique ) cycle composed by and some edges of .this cycle is called a _ fundamental cycle _ , and denoted by . according to the _ red rule _, if is not the edge of maximum weight in , then there exists an edge in , such that . in this case , can be removed since it is not part of any mst .our solution , called in the following algorithm , combines both the _ blue rule _ and _ red rule_. the application of the _ blue rule _ needs that each node identifies the fragment it belongs to .the _ red rule _ also requires that each node can identify the fundamental cycle associated to each of its adjacent non - tree - edges .note that a simple scheme broadcasting the root identifier in each fragment ( of memory size bits per node ) can be used to identify the fragments , but this can not allow to identify fundamental cycles . in order to identify fragments or fundamental cycles, we use a self - stabilizing labeling scheme , called .this scheme provides at each node a distinct label . for two nodes and in the same fragment , the comparison of their labels provides to these two nodes their * nearest common ancestor * in a tree ( see section [ sec : label ] ) .thus , the advantage of this labeling is twofold .first the labeling scheme helps each node to identify the fragment it belongs to .second , given any non - tree edge , the path in the tree going from to the nearest common ancestor of and , then from there to , and finally back to by traversing , constitute the fundamental cycle . 
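a compact, centralized sketch of the two rules just described may help fix ideas; the distributed algorithm never gathers the whole graph at a single node, and the data structures and function names below are ours.

```python
# Centralised illustration of the two rules combined by the algorithm:
#  - blue rule: each fragment selects its minimum-weight outgoing edge;
#  - red rule: a tree edge that is heavier than the non-tree edge closing its
#    fundamental cycle cannot belong to any MST and can be removed.

def fragments(nodes, tree_edges):
    """Union-find over the current tree edges (a forest of fragments)."""
    comp = {v: v for v in nodes}
    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v
    for (u, v, _) in tree_edges:
        comp[find(u)] = find(v)
    return find

def blue_rule(nodes, edges, tree_edges):
    """Return, for every fragment, its minimum-weight outgoing edge."""
    find = fragments(nodes, tree_edges)
    best = {}
    for (u, v, w) in edges:
        fu, fv = find(u), find(v)
        if fu == fv:
            continue                      # internal edge, not outgoing
        for f in (fu, fv):
            if f not in best or w < best[f][2]:
                best[f] = (u, v, w)
    return set(best.values())

def red_rule(tree_path_weights, internal_edge_weight):
    """For one fundamental cycle: True iff some tree edge on the cycle is
    heavier than the internal (non-tree) edge and must be deleted."""
    return max(tree_path_weights) > internal_edge_weight

if __name__ == "__main__":
    nodes = [0, 1, 2, 3]
    edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4), (0, 2, 5)]
    print(blue_rule(nodes, edges, tree_edges=[]))        # singleton fragments
    print(red_rule([1, 2, 3], internal_edge_weight=4))   # False: no heavier tree edge
```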
to summarize , algorithm will use the _ blue rule _ to build a spanning tree , and the _ red rule _ to recover from invalid configurations . in both cases , it uses our algorithm to identify both fragments and fundamental cycles .note that , in distributed algorithms using the _ blue _ and _ red rules _ to construct a mst in a dynamic network are proposed , however these algorithms are not self - stabilizing .in this section we fix some general assumptions used in the current paper .let be an undirected weighted graph , where is the set of nodes , is the set of edges and the weight of each edge is given by a positive cost function .we consider w.l.o.g . that the edges weight are polynomial in .moreover , the nodes are allowed to have unique identifiers denoted by encoded using bits where .no assumption is made about the fact that edges weight must be distinct . in the current paper the set of all neighbors of in , for any node .each node maintains several information a pointer to one of its neighbor node called _ the parent_. the set of these pointers induces a spanning tree if the spanning structure is composed with all the nodes and contains no cycle .we denote by the path from to in the tree . for handling the nearest common ancestor labeling scheme we will define some notations .let be the label of a node composed by a list of pairs of integers , where each pair is an identifier and a distance . ] and the second one by [1] ] , while the second integer by ] ) , then the node takes the label of its parent but it increases by one the distance of the last pair of the parent label ( see predicate in figure [ fig : predicates1 ] ) . + otherwise , a node is tagged by its parent as a light node ( i.e. , \neq { \mbox{\sf id}}_u ] and \ell ] then on this example .this subsection is dedicated to the correctness of the self - stabilizing nearest common ancestor labeling scheme .let be the set of all possible configurations of the system .in order to prove the correctness of the algorithm , we denote the set of configurations in such that variables are correct in the system .more precisely , we define the following function : be the function defined by - 1)-\mathlarger{\sum}_{\mathsmaller{u\in \mathcal{c}(v ) } } { \mbox{\it size}}_u[0]\big)|.\ ] ] note that , and the variable has a correct value at node if and only if . in the following ,we show that any execution of the system converges to a configuration in , and the set of configurations is closed .the following lemma establishes the former property .we assume that all the nodes of the system belongs to the tree and we define below a legitimate configuration for the informative labeling scheme considered in this section .a configuration is called legitimate if the following conditions are satisfied : 1 .the root node of the tree has label equal to , 2 .every _ heavy _node has a label equal to , { \mbox{\rm }}^{-1}_{{\mbox{\it p}}_v}[1]+1) \ell \ell \ell \ell \ell \ell \ell \ell \ell \ell \ell ] of their parent ( see rule ) .let be a node at depth .the parent of is at depth .thus , its variable have not changed , and therefore becomes zero after round . as a consequence , .therefore , we get and thus the system will eventually reach a legitimate configuration for algorithm . 
to measurethe number of rounds it takes to get into a legitimate configuration for algorithm , observe that decreases by at least one at each round .since for every , we get that , starting from any configuration in configuration , the system reaches a legitimate configuration for algorithm in rounds . using lemma [ lem : size_convergence ] and lemma [lem : size_closure ] , we can conclude starting from any arbitrary configuration , the system reaches a legitimate configuration for algorithm in rounds .[ lem : label_closure ] the set of legitimate configurations for is closed .that is , starting from any legitimate configuration , the system remains in a legitimate configuration . according to algorithm ,the labeling procedure is done using only rule .let a legitimate configuration .for each node in , we have and .moreover in , predicates and are true and rules and can not be executed by any node . in conclusion ,starting from a legitimate configuration for algorithm the system remains in a legitimate configuration .the following theorem is a direct consequence from lemmas [ lem : label_convergence ] and [ lem : label_closure ] .[ thm : self - stab_lab ] algorithm is self - stabilizing for the informative nearest common ancestor labeling scheme .] in this section we describe our self - stabilizing algorithm for constructing the minimum spanning tree , called algorithm .our algorithm uses the blue rule " to construct a spanning tree and the red rule " to recover from invalid configurations ( see section [ sec : over ] ) . in both cases , it uses algorithm to identify fragments and fundamental cycles .we assume in the following that the _ merging _ phases have a higher priority than the _ recovering _ phases .that is , the system recovers from an invalid configuration if and only if no merging is possible .unfortunately , due to arbitrary initial configuration , the structure induced by the parent pointer of all nodes may contain cycles .we use first a well known approach to break cycles before giving a detailed description of merging and recovering phases .figure [ fig : schemarules ] illustrates the different phases of algorithm . starting from an arbitrary configuration , first all the cyclesare destroyed then fragments are defined and correctly labeled using the parent pointers . based on the label of nodes , the minimum _ outgoing edge _( i.e. , edge whose extremities belong to different fragments ) of each fragment is computed in a bottom - up fashion , and allowing to a pair of fragments which have selected the same outgoing edge to be merged together through this edge . a _ merging step _gives a new fragment which is the result of the merging of a pair of fragments .when a new fragment is created , the nodes of this fragment have to compute their new label .this process is repeated until there is only one remaining fragment spanning all the nodes of the network . in this case, the recovering phase can begin by detecting that no outgoing edge can be selected . to handle this phaseeach fragment has to compute its _ internal edges _ ( i.e. 
, edges whose extremities belong to the same fragment ) and to identify the _ nearest common ancestor _based on the labels of the edge extremities .the weight of the internal edges are broadcasted up in the tree from the leaves to the root .let an internal edge of tree , due to the red rule " if an edge of the path in has a weight bigger than , then is an _ valid edge _ since is part of an ( by `` red rule '' ) .more precisely , if during the bottom - up transmission of the weight of , a node has a parent link edge such that then is deleted from the tree and becomes the root of a new fragment .we present first the variables used by algorithm , then we describe the approach used to delete the cycles , followed by the merging and recovering phases .finally , we show the correctness and the time and memory complexities of the algorithm .we list below the eight variables maintained at each node : * the three variables described in section [ sec : label ] are used , i.e. , variables and . *the distance of each node from the root of the fragment is stored in variable .* for handling the _ blue rule _ mentioned in section [ sec : over ] , the minimum outgoing edge of each fragment is stored in variable .this edge is composed of three elements : the edge weight , and the identifiers of the edge extremities . the -th element of is accessed by ] with .the previous section was dedicated to the labeling procedure for an unique tree , due to the arbitrary starting configuration , the network can contain a forest of subtrees ( several fragments ) and cycles .therefore , the labeling procedure described in previous section ( using rules and ) is executed separately in each subtree in algorithm .however , to apply this procedure it is crucial to detect the cycles in the fragments induced by the parent pointers . to this end, we use a common approach used to break cycles in a spanning structure .each node computes its distance ( in hops ) to the root by using the distance of its parent plus one . by following this procedure ,there is at least a node which has a distance higher or equal than the distance of its parent in the fragment .therefore , this condition is used at each node to detect a cycle . in this case , a node deletes its parent pointer by selecting no parent and a new fragment rooted at is created .unfortunately , due to the arbitrary initial configuration a cycle can be falsely detected because of erroneous distances values at and its parent .this mechanism based on distances ensures that after rounds the network is cycle free .the destruction of cycles is managed by rule . when all the cycles have been deleted , the labeling procedure is applied in algorithm .note that the cycle detection must have a higher priority over the labeling procedure . to thisend , rule is the first rule to execute and in exclusion with rules and in algorithm .furthermore , the labeling scheme must also have a higher priority over the merging and recovering phases .indeed , the label of the nodes are used to identify the internal and outgoing edges of a fragment ( see figure [ fig : fusion ] ) . to guarantee the execution priority, the rules of the labeling scheme can only be executed when predicate is satisfied at node .in the same way , the rules of merging and recovering phases can only be executed at a node when predicate is satisfied at . * if * * then * + + * if * * then * + * if * * then * + * if * * then * we give below the rules associated with the labeling encoder ( given in the previous section ) . 
in order to use these two rules for the mst construction , we add predicate in the guards .this allow to disable these rules when a cycle is detected with rule . * if * * then * + * if * * then * + * else * * if * * then * + * if * ={\mbox{\sf id}}_v \ell ] + * else * when the graph induced by the parent pointers is cycle free and every node of a fragment has a correct label ( see predicate ) , then every node is able to determine if spans all the nodes of the network or not. this knowledge is given by the label of the nodes , more precisely using the decoder given in subsection [ subsec : decoder ] .indeed , given a non - tree edge , if the nodes and have no common ancestor then and are in two distinct fragments .in this case , the merging phase can be executed at and . a merging phase is composed of several _ merging steps _ in which at least two fragments are merged .each merging step is performed following four steps : 1 .the root of each fragment identifies the minimum - weight outgoing edge of its fragment ( see rule ) .2 . after the computation of each node on the path between the root of and computes in variable its future parent ( see rule ) . the nodes in the sub - tree rooted at every node executes also rule .3 . when the two merging fragments have finished the two first steps , then each node of these two fragments can compute their future distance ( see rule ) .4 . finally , every node belonging to these two fragments copies the content of its variables ( resp . ) into variable ( resp . ) . let us proceed with a more detailed description of the these steps .we process the computation of the minimum - weight outgoing edge of each fragment in a bottom - up manner ( see rule ) .each node can identify its adjacent outgoing edges by computing locally that has no nearest common ancestor using the labels of and .this is done via the decoder given in subsection [ subsec : decoder ] and macro at .each node computes the minimum - weight outgoing edge of its sub - tree ( given by macro ) by selecting the edge of minimum - weight among its adjacent outgoing edges ( given by macro ) and the one given by its children ( given by macro ) .the weight and the identifier of the extremities of the minimum - weight outgoing edge are stored in variable at .all these information will be used for the merging step .figure [ fig : fusion ] depicts the selection of the minimum outgoing edge for two fragments. corresponds to the label of the node .the black bubble at each node represent the selection of minimum outgoing edge .the information under the node corresponds to the variable and the information on top of the node represent the distance of the node from the root . ] when the computation of the minimum - weight outgoing edge is finished at the root of a fragment ( i.e. , ) , then can start the computation of the future parent pointers in ( predicate is satisfied ) , done in a top - down manner ( see rule ) .let be the extremity of of minimum identity between and .if is selected as the minimum - weight outgoing edge of two fragments and , then will become the new root of fragment resulting from the merging between and .otherwise , is the minimum - weight outgoing edge selected only by a single fragment , w.l.o.g .let . in this case , will wait for that is selected as the minimum - weight outgoing edge of . 
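the exact encoding and decoding are given by the rules above and by the formal description of the scheme; the sketch below implements a labeling of the same flavour (lists of (identifier, distance) pairs, heavy children chosen by subtree size) purely to illustrate how two labels let the endpoints of an edge decide locally whether they lie in the same fragment and, if so, name their nearest common ancestor. all names and the heavy-child tie-breaking are our own simplifications, not the paper's exact scheme.

```python
# Hedged sketch: each node of a rooted fragment gets a label made of
# (identifier, distance) pairs, one pair per heavy path met on the way down
# from the root.  The child with the largest subtree ("heavy") extends its
# parent's last pair by one hop; any other ("light") child starts a new pair.

def subtree_sizes(children, root):
    size = {}
    def dfs(v):
        size[v] = 1 + sum(dfs(c) for c in children.get(v, []))
        return size[v]
    dfs(root)
    return size

def assign_labels(children, root):
    size = subtree_sizes(children, root)
    labels = {root: [(root, 0)]}
    stack = [root]
    while stack:
        v = stack.pop()
        kids = children.get(v, [])
        heavy = max(kids, key=lambda c: size[c], default=None)
        for c in kids:
            lab = list(labels[v])
            if c == heavy:
                head, dist = lab[-1]
                lab[-1] = (head, dist + 1)          # continue the heavy path
            else:
                lab.append((c, 0))                  # start a new (light) pair
            labels[c] = lab
            stack.append(c)
    return labels

def nearest_common_ancestor(lu, lv):
    """Return (heavy-path head, distance) naming the NCA, or None when the
    labels come from different fragments (different roots)."""
    if lu[0][0] != lv[0][0]:
        return None                                 # outgoing edge: no NCA
    k = 0
    while k + 1 < min(len(lu), len(lv)) and lu[k + 1][0] == lv[k + 1][0]:
        k += 1
    return (lu[k][0], min(lu[k][1], lv[k][1]))

if __name__ == "__main__":
    children = {0: [1, 4], 1: [2, 3]}               # fragment rooted at 0
    labels = assign_labels(children, 0)
    print(labels[3], labels[4])
    print(nearest_common_ancestor(labels[3], labels[4]))   # (0, 0): the root
    print(nearest_common_ancestor(labels[3], [(9, 0)]))    # None: other fragment
```

an edge whose endpoints carry labels with different first identifiers is classified as outgoing, which is exactly the test used by the merging phase described next.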
in the two cases , every node of a fragment in a merging step computes its future parent pointer in variable .each node on the path from the root of the fragment leading to the minimum - weight outgoing edge selects its child on this path as its future parent , while the other nodes select their actual parent .when is selected as the minimum - weight outgoing edge by and and the computation of the future parent is done ( i.e. , is satisfied ) , then the future distance is computed in variable by each node in ( predicate is satisfied ) , in a top - down manner following the parent relation given by variable ( see rule ) . note that the extremity of with the minimum identifier becomes the root of the new fragment with a zero distance .finally , when the future parent and distance are computed by every node in then can execute rule ( see predicate ) to copy the content of variable ( resp . ) into variable ( resp . ) . note that this is done in a bottom - up fashion following the parent relation given by variable in order to not destabilize fragment or . : [ * minimum computation * ] + * if * \neq { \mbox{\rm candidate}}(v ) \neq\emptyset \big ) ] * then * + + * if * * then * + * else * + : [ * new distance * ] + * if * * then * + + : [ * end of merging * ] + * if * * then * + + * if * * then * this subsection is dedicated to the description of the recovering phase .recall that , since the system can start from an arbitrary configuration , edges which do not belong to any mst can be part a fragment in . given a fragment ,the addition of an edge which do not belong to creates a unique cycle , called _ fundamental cycle _ related to and denoted ( i.e. , ) .thus , the `` red rule '' may not be satisfied for every constructed fragment , i.e. , for some fundamental cycle defined by an internal edge of a fragment the maximum edge weight belong to the fragment .to identify these edges , we verify that for each internal edge there is no edge in the fundamental cycle with a higher weight than . to this end , in a fragment the label of the nodes are used to identify the edges which do not belong to such that and have a common ancestor .let us consider a fragment and an edge belonging to such that .if then must become an edge of .consequently , we need to verify all the edge weights of . to achieve this task ,the weight of is sent up in along the two paths and . clearly , to maintain low space complexity , the nodes can not store the information about all internal edges . consequently , we decide that each node stores only the information of a single internal edge at a time .specifically , we need to organize the circulation of the internal edges .a natural question to ask at this point is whether the information of all non - tree edges are needed . to answer to this question , we first make some observations .first , suppose the following case ( see figure [ fig : idemnca ] ) : let and be internal edges such that , and and are closer to than and . on and only the internal edge with the smallest weight is needed . to justify this assertion ,let us consider without loss of generality that and is a tree edge such that . moreover, suppose that all edges in a and have a weight smaller than .consequently , is not part of the mst , and if we delete , the minimum outgoing edge of the fragment composed by the is edge .consider now , the case when several adjacent edges of node have the same common ancestor ( see figure [ fig : idemnca2 ] ) . 
in this caseonly the internal edge with the smallest weight is relevant on the to avoid the maximum weight of the fundamental cycles .the last case considered is the following ( see figure [ fig : upedges ] ) .consider a path between two nodes and , and such that and .let be an edge such that and an edge such that . if , the weight of is needed to verify if the weights of the edges on have a higher weight than .however , the weight of is needed to verify the weight of edge .consequently , we need to collect all the outgoing edges from the leaves to the root , from the farthest to the nearest of the root .the rule for collecting the relevant internal edges is based on the above observations ( see rule ) .the internal edges are sent up in the fragment from the leaves to the root using variable at every node .the internal edges are collected locally by beginning from the edge with the farthest nearest common ancestor to the edge with the nearest common ancestor , i.e. , following the lexicographical order on the nearest common ancestor labels and beginning by the smallest one . in case thereexist several edges with the same nearest common ancestor , only the edge with the smallest weight is kept . the list of the ordered internal edges at node is given by macro .this list is computed by different predicates ( see macros in figure [ fig : predicatesin ] ) .each node compares the weight stored in variable ] then knows that the internal edge indicated by must belong to the mst .consequently deletes the edge from the fragment ( only if is not the nearest common ancestor of the internal edge given by ) , and becomes the root of the new fragment ( see rule ) .a node can select a new internal edge by executing rule in the following case ( i.e. , predicate is satisfied ) : ( i ) the internal edge of is propagated up by its parent and has no more child propagating the same internal edge ( see predicate ) , ( ii ) is the nearest common ancestor of the adjacent internal edge actually selected ( see predicate ) , or ( iii ) is neither the root of the fragment nor the nearest common ancestor of the selected internal edge and its parent propagates an internal related with the same common ancestor but ( see predicate ) .this allows to obtain a piplined propagation of the internal edges .figure [ fig : recovering ] illustrates the bottom - up spread of the internal edges . * if * * then * * if * ,{\mbox{\it in}}_v[2 ] ) \neq { \mbox{\rm }}_v \wedge w(v,{\mbox{\it p}}_v)>{\mbox{\it in}}_v[0] ] ) , then rule is enabled at , a contradiction .otherwise , in each fragment in we have ={\mbox{\rm candidate}}(v) ] ) .thus , becomes the root of a new fragment when executes rule , a contradiction .we denote by the set of configurations in such that there are no cycles in the subgraph induced by parent link relations ( i.e. , for every we have ) .[ lem : time_cf ] starting from any arbitrary configuration , the system reaches in rounds a configuration in .[ lem : cycle free ] this lemma can be proved using the same arguments given in .+ we now define some notations and predicates which will be used in the following proofs . given a configuration , we note the set of all fragments in by . 
moreover , we define below several sets of fragments with different properties and the notion of _ attractor _ , introduced by gouda and multari , will be used to show that during the convergence of algorithm each fragment gains additional properties .we define five sets of fragments in a configuration : * let be the set of fragments in in which all the nodes are correctly labeled .* let ={\mbox{\rm candidate}}(v ) \neq \emptyset)\} ] . in base case .consider the root of fragment ( i.e. , ) . if then rule is enabled at in round 0 , since .therefore , since the daemon is weakly fair then in the first configuration of round 1 , executes rule and we have at which verifies the proposition .induction case : we assume that in round we have \vee newp_v={\mbox{\it p}}_v) ] . consider any node of height in . by induction hypothesis, we have either or . in the former case ,if and then rule is enabled at in round ( because ) . in the latter case , if and then rule is enabled at in round ( because ) . thus , since the daemon is weakly fair then in the first configuration of round executes rule .so , we have \vee newp_v={\mbox{\it p}}_v) ] and this implies that . [lem : merging_two_frag_dist ] let any two fragments and , of a configuration . if and the same minimum - weight outgoing edge is selected by the two fragments then every computation suffix starting from contains a configuration such that .assume , by the contradiction , that there exists a suffix starting from with no configuration such that in computation .as rule is the only rule to modify variable such that is satisfied when executed , this implies that there exists a node which never executes rule in the computation suffix .consider the configuration . according to the formal description of algorithm ,rules , and are disabled for any node since . moreover , rule is disabled at any node as we have for every node , because and by the execution of rule .so , only rule could be enabled at every node .this implies that rule is disabled for every node ( i.e. , we have ) .consider without loss of generality fragment .note that and we have due to the execution of rule . if is the future root of ( after the merging phase ) then either , a contradiction because and have selected the same minimum - weight outgoing edge , or , a contradiction since .if is any other node in then we have , a contradiction since .therefore , the system reaches a configuration in which for every node we have , so .finally , we can observe that the set of fragments is included in the set by definition in a configuration .[ lem : time_f4 ] let any fragment in a configuration . in rounds , we have , with and the height of . we can show by induction on the height of that in rounds every node satisfies ) using the same method as in proof of lemma [ lem : time_mwoe ] .[ lem : merging_two_frag_end ] let any fragment of a configuration . if then every computation suffix starting from contains a configuration such that .assume , by the contradiction , that there exists a suffix starting from with no configuration such that in computation .consider the configuration .since , only rule could be enabled at a node ( by definition of and according to the guards of rules given in the formal description of algorithm ) . 
moreover , as there exists at least one node such that predicate is satisfied at .consider a computation step of .assume that rule is enabled at in and not in but did not execute rule .if is a leaf and is not adjacent to the minimum - weight outgoing edge of then implies that ( because has no neighbor ) , a contradiction since rule is the only rule which could copy the value of ( resp . ) in ( resp . ) .otherwise for every other node , implies that either or such that . in the former case, there is a contradiction since rule is the only rule which could copy the value of ( resp . ) in ( resp . ) . in the latter case ,there is a neighbor which modified its variable or by executing rule or , a contradiction since only rule could be enabled at a node in . by weakly - fairness assumption on the daemon ,every node executes rule and satisfies .finally , we can observe that the set of fragments is included in the set by definition in a configuration .[ lem : time_f5 ] let any fragment in a configuration . in rounds , we have , with and the height of . we can show by induction on the height of that in rounds every node satisfies ) using the same method as in proof of lemma [ lem : time_mwoe ] .[ lem : nb_frag_diminution ] let a configuration such that .we have for any configuration obtained after a merging step in every computation suffix starting from .assume , by the contradiction , that there exists a suffix starting from with a configuration obtained after a merging step for which in computation . this implies that in either there are no two fragments which can merge together using the same minimum - weight outgoing edge , or and does not belong to the same fragment after a merging step .first of all , by lemmas [ lem : labelmst ] and [ lem : min ] every fragment which does not satisfies and ={\mbox{\rm candidate}}(v) ] . by description of rule , does not delete the edge only if ,{\mbox{\it in}}_v[2])={\mbox{\rm }}_v)$ ] , which is a contradiction with the hypothesis of the lemma .[ lem : time_recover ] starting from any configuration which contains a single spanning tree , the recovering phase is performed in rounds , the network size .first of all , every node sends up in the internal edge of minimum weight associated to each common ancestor , given by macro based on macro .moreover , by lemma [ lem : end_propag_ie ] every internal edge is not propagated by the ancestors of the common ancestor in . by lemma [ cor : propag_internal_edge ], every node sends up in the tree the internal edges selected by and its descendants ordered locally on the nearest common ancestors , that is following the lexicographical order on the label of nearest common ancestors .observe that every node is the common ancestor of at most internal edges selected to be propagated up in , with the height of .furthermore , each propagated internal edge reaches its related nearest common ancestor in rounds .however , the propagation of the internal edges is pipelined in , since a node can execute rule when its parent propagates its internal edge or the nearest common ancestor is reached ( see predicate ) .thus , for every nearest common ancestor the propagation of the internal edges related to is performed in rounds . 
finally ,there are at most nearest common ancestors in the spanning tree , so the propagation of all the internal edges of is performed in rounds .[ lem : mst_preserved ] starting from every configuration satisfying definition [ def : mst ] , the system can only reach a configuration which satisfies definition [ def : mst ] . by lemma [ lem : disable_merge_rules ] , in every configuration every rule of algorithm , except rule , is disabled at .consider any configuration which satisfies definition [ def : mst ] .this implies that there is only a single spanning tree in and in every fundamental cycle defined by each internal edge of lemma [ lem : create_new_frag ] can not be applied .therefore , by executing rule at any node no new fragment is created and the constructed minimum spanning tree is preserved .algorithm is a self - stabilizing algorithm for specification [ spec : mst ] under a weakly fair daemon with a convergence time of rounds and memory complexity of bits per node , with the network size .we have to show first that starting from any configuration the execution of algorithm verifies property [ tc1 ] and [ tc2 ] of specification [ spec : mst ] .first of all , by theorem [ thm : no_deadlock ] while the system does not reach a configuration satisfying definition [ def : mst ] , there is a rule enabled , except rule , at a node . according to lemmas [ lem : nb_frag_diminution ] , [ lem : create_new_frag ] ,[ lem : time_merging ] and [ lem : time_recover ] , from any configuration algorithm reaches a configuration satisfying definition [ def : mst ] in finite time , which verifies property [ tc1 ] .moreover , according to lemma [ lem : mst_preserved ] from a configuration satisfying definition [ def : mst ] algorithm can only reach a configuration in satisfying definition [ def : mst ] , which verifies property [ tc2 ] of specification [ spec : mst ] .we consider now the convergence time and memory complexity of algorithm .according to lemmas [ lem : time_merging ] and [ lem : time_recover ] , each part of the algorithm ( merging and recovering part ) have a convergence time of at most rounds to construct a minimum spanning tree .moreover , algorithm maintains height variables at every node , composed of six variables of size bits ( variables , and ) and two variables of size bits used to stored labels of nodes ( variables and ) . according to , variable necessitates bits of memory at every node .therefore , no more than bits per node are necessary .we extended the gallager , humblet and spira ( ghs ) algorithm , , to self - stabilizing settings via a compact informative labeling scheme .thus , the resulting solution presents several advantages appealing for large scale systems : it is compact since it uses only poly - logarithmic in the size of the network memory space ( bits per node ) and it scales well since it does not rely on any global parameter of the network .the convergence time of the proposed solution is rounds .quite recently , another self - stabilizing algorithm was proposed by korman et al . for the mst problem with a convergence time of rounds and memory complexity of bits .however , this approach requires the use of several sub - algorithms leading to a complex solution to be used in a practical situation , comparing to our algorithm .llia blin , shlomi dolev , maria gradinariu potop - butucaru and stephane rovedakis fast self - stabilizing minimum spanning tree construction - using compact nearest common ancestor labeling scheme . 
volume 6343 of _lecture notes in computer science_, pages 480-494, 2010.
lélia blin, maria potop-butucaru, stéphane rovedakis and sébastien tixeuil. a new self-stabilizing minimum spanning tree construction with loop-free property. volume 5805 of _lecture notes in computer science_, pages 407-422. springer, 2009.
gheorghe antonoiu and pradip k. srimani. distributed self-stabilizing algorithm for minimum spanning tree construction. volume 1300 of _lecture notes in computer science_, pages 480-487. springer, 1997.
lisa higham and zhiying liang. self-stabilizing minimum spanning tree construction on message-passing networks. in _15th international conference on distributed computing (disc)_, volume 2180 of _lecture notes in computer science_, pages 194-208, 2001.
amos korman, shay kutten and toshimitsu masuzawa. fast and compact self stabilizing verification, computation, and fault detection of an mst. in _30th annual acm symposium on principles of distributed computing (podc)_, pages 311-320, 2011.
paola flocchini, toni mesa enriquez, linda pagli, giuseppe prencipe and nicola santoro. distributed computation of all node replacements of a minimum spanning tree. volume 4641 of _lecture notes in computer science_, pages 598-607. springer, 2007.
we present a novel self-stabilizing algorithm for minimum spanning tree (mst) construction. the space complexity of our solution is bits and it converges in rounds. thus, this algorithm improves the convergence time of previously known self-stabilizing asynchronous mst algorithms by a multiplicative factor, at the price of increasing the best known space complexity by a factor. the main ingredient used in our algorithm is the design, for the first time in self-stabilizing settings, of a labeling scheme for computing the nearest common ancestor with only bits.
large - scale complex systems , such as , for instance , the electrical power grid and the telecommunication system , are receiving increasing attention from researchers in different fields .the wide spatial distribution and the high dimensionality of these systems preclude the use of centralized solutions to tackle classical estimation , control , and fault detection problems , and they require , instead , the development of new decentralized techniques .one possibility to overcome these issues is to geographically deploy some monitors in the network , each one responsible for a different subpart of the whole system .local estimation and control schemes can successively be used , together with an information exchange mechanism to recover the performance of a centralized scheme .power systems are operated by system operators from the area control center .the main goal of the system operator is to maintain the network in a secure operating condition , in which all the loads are supplied power by the generators without violating the operational limits on the transmission lines . in order to accomplish this goal , at a given point in time , the network model and the phasor voltages at every system bus need to be determined , and preventive actions have to be taken if the system is found in an insecure state . for the determination of the operating state , remote terminal units and measuring devicesare deployed in the network to gather measurements .these devices are then connected via a local area network to a scada ( supervisory control and data acquisition ) terminal , which supports the communication of the collected measurements to a control center . at the control center, the measurement data is used for control and optimization functions , such as contingency analysis , automatic generation control , load forecasting , optimal power flow computation , and reactive power dispatch . a diagram representing the interconnections between remote terminal units and the control centeris reported in fig .[ fig : control_center ] . various sources of uncertainties , e.g. , measurement and communication noise , lead to inaccuracies in the received data , which may affect the performance of the control and optimization algorithms , and , ultimately , the stability of the power plant .this concern was first recognized and addressed in by introducing the idea of ( static ) state estimation in power systems .power network state estimators are broadly used to obtain an optimal estimate from redundant noisy measurements , and to estimate the state of a network branch which , for economical or computational reasons , is not directly monitored . for the power system state estimation problem ,several centralized and parallel solutions have been developed in the last decades , e.g. , see . being an online function , computational issues , storage requirements , and numerical robustness of the solution algorithm need to be taken into account . within this regard , distributed algorithms based on networkpartitioning techniques are to be preferred over centralized ones .moreover , even in decentralized setting , the work in on the blackout of august 2003 suggests that an estimation of the entire network is essential to prevent networks damages .in other words , the whole state vector should be estimated by and available to every unit .the references explore the idea of using a global control center to coordinate estimates obtained locally by several local control centers . 
in this work ,we improve upon these prior results by proposing a fully decentralized and distributed estimation algorithm , which , by only assuming local knowledge of the network structure by the local control centers , allows them to obtain in finite time an optimal estimate of the network state . being the computation distributed among the control centers , our procedure appears scalable against the power network dimension , and , furthermore , numerically reliable and accurate .a second focus of this paper is false data detection and cyber attacks in power systems . because of the increasing reliance of modern power systems on communication networks , the possibility of cyber attacks is a real threat .one possibility for the attacker is to corrupt the data coming from the measuring units and directed to the control center , in order to introduce arbitrary errors in the estimated state , and , consequently , to compromise the performance of control and optimization algorithms . this important type of attack is often referred in the power systems literature to as _ false data injection attack_. recently , the authors of show that a false data injection attack , in addition to destabilizing the grid , may also lead to fluctuations in the electricity market , causing significant economical losses .the presence of false data is classically checked by analyzing the statistical properties of the _ estimation residual _ , where is the measurements vector , is a state estimate , and is the state to measurements matrix . for an attack to be successful ,the residual needs to remain within a certain confidence level . accordingly, one approach to circumvent false data injection attacks is to increase the number of measurements so as to obtain a more accurate confidence bound . clearly , by increasing the number of measurements , the data to be transmitted to the control center increases , and the dimension of the estimation problem grows . by means of our estimation method, we address this dimensionality problem by distributing the false data detection problem among several control centers . starting from the eighties ,the problem of distributed estimation has attracted intense attention from the scientific community , generating along the years a very rich literature .more recently , because of the advent of highly integrated and low - cost wireless devices as key components of large autonomous networks , the interest for this classical topic has been renewed . for a wireless sensor network ,novel applications requiring efficient distributed estimation procedures include , for instance , environment monitoring , surveillance , localization , and target tracking .considerable effort has been devoted to the development of distributed and adaptive filtering schemes , which generalize the notion of adaptive estimation to a setup involving networked sensing and processing devices . 
in this context ,relevant methods include incremental least mean - square , incremental recursive least - square , diffusive least mean - square , and diffusive recursive least - square .diffusion kalman filtering and smoothing algorithms are proposed , for instance , in , and consensus based techniques in .we remark that the strategies proposed in the aforementioned references could be adapted for the solution of the power network static estimation problem .their assumptions , however , appear to be not well suited in our context for the following reasons .first , the convergence of the above estimation algorithms is only asymptotic , and it depends upon the communication topology . as a matter of fact , for many communication topologies , such as cayley graphs and random geometric graphs , the convergence rate is very slow and scales badly with the network dimension .such slow convergence rate is clearly undesirable because a delayed state estimation could lead the power plant to instability .second , approaches based on kalman filtering require the knowledge of the global state and observation model by all the components of the network , and they violate therefore our assumptions .third and finally , the application of these methods to the detection of cyber attacks , which is also our goal , is not straightforward , especially when detection guarantees are required . an exception is constituted by , where a estimation technique based on local kalman filters and a consensus strategy is developed .this latter method , however , besides exhibiting asymptotic convergence , does not offer guarantees on the final estimation error .our estimation technique belongs to the family of kaczmarz ( row - projection ) methods for the solution of a linear system of equations .see for a detailed discussion .differently from the existing row - action methods , our algorithms exhibit finite time convergence towards the exact solution , and they can be used to compute any weighted least squares solution to a system of linear equations . the contributions of this work are threefold .first , we adopt the static state network estimation model , in which the state vector is linearly related to the network measurements .we develop two methods for a group of interconnected control centers to compute an optimal estimate of the system state via distributed computation .our first estimation algorithm assumes an _ incremental _ mode of cooperation among the control centers , while our second estimation algorithm is based upon a _ diffusive _ strategy .both methods are shown to converge in a finite number of iterations , and to require only local information for their implementation . 
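for background on the row-projection family mentioned above, the classical (block) kaczmarz iteration projects the current iterate onto the solution set of one block of equations at a time; unlike the two algorithms of this paper it converges only asymptotically. the sketch below, with synthetic data, shows that classical scheme and is not the incremental algorithm developed later.

```python
# Classical (block) Kaczmarz row-projection iteration for H x = z, shown only
# as background for the row-projection family referred to above.

import numpy as np

def block_kaczmarz(blocks, x0, sweeps=50):
    """blocks: list of (H_i, z_i) pairs held by the different monitors."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for H_i, z_i in blocks:                     # incremental pass over the blocks
            # project x onto the affine set {y : H_i y = z_i}
            x = x + np.linalg.pinv(H_i) @ (z_i - H_i @ x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.normal(size=(12, 5))
    x_true = rng.normal(size=5)
    z = H @ x_true                                   # consistent, noise-free system
    blocks = [(H[i:i + 3], z[i:i + 3]) for i in range(0, 12, 3)]
    x = block_kaczmarz(blocks, x0=np.zeros(5))
    print(np.linalg.norm(x - x_true))                # small after a few sweeps
```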
differently than ,our estimation procedures assume neither the measurement error covariance nor the measurements matrix to be diagonal .furthermore , our algorithms are advantageous from a communication perspective , since they reduce the distance between remote terminal units and the associated control center , and from a computational perspective , since they distribute the measurements to be processed among the control centers .second , as a minor contribution , we describe a finite - time algorithm to detect via distributed computation if the measurements have been corrupted by a malignant agent .our detection method is based upon our state estimation technique , and it inherits its convergence properties .notice that , since we assume the measurements to be corrupted by noise , the possibility exists for an attacker to compromise the network measurements while remaining undetected ( by injecting for instance a vector with the same noise statistics ) .with respect to this limitation , we characterize the class of corrupted vectors that are guaranteed to be detected by our procedure , and we show optimality with respect to a centralized detection algorithm .third , we study the scalability of our methods in networks of increasing dimension , and we derive a finite - memory approximation of our diffusive estimation strategy . for this approximation procedurewe show that , under a reasonable set of assumptions and independent of the network dimension , each control center is able to recover a good approximation of the state of a certain subnetwork through little computation .moreover , we provide bounds on the approximation error for each subnetwork .finally , we illustrate the effectiveness of our procedures on the ieee 118 bus system . the rest of the paper is organized as follows . in section[ sec : setup ] we introduce the problem under consideration , and we describe the mathematical setup .section [ sec : solver ] contains our main results on the state estimation and on the detection problem , as well as our algorithms .section [ sec : approximate ] describes our approximated state estimation algorithm . in section[ sec : numerical ] we study the ieee 118 bus system , and we present some simulation results .finally , section [ sec : conclusion ] contains our conclusion .for a power network , an example of which is reported in fig .[ fig : power_network118 ] , the state at a certain instant of time consists of the voltage angles and magnitudes at all the system buses .the ( static ) state estimation problem introduced in the seminal work by schweppe refers to the procedure of estimating the state of a power network given a set of measurements of the network variables , such as , for instance , voltages , currents , and power flows along the transmission lines . 
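before the formal model introduced in the next paragraph, a toy numerical illustration of the simplest linearized ("dc") flavour of such a measurement model may be useful; all bus and line parameters below are invented for the example.

```python
# Toy illustration (all numbers invented): the state collects the bus voltage
# angles, and each measured line flow is a linear function of the angle
# difference across the line, so that z = H x + noise.

import numpy as np

lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]     # (from bus, to bus, susceptance)
n_bus, slack = 3, 0                                   # angle of the slack bus fixed to 0

def flow_measurement_matrix(lines, n_bus, slack):
    H = []
    for (i, j, b) in lines:                           # measured flow on line (i, j)
        row = np.zeros(n_bus)
        row[i], row[j] = b, -b                        # P_ij = b_ij (theta_i - theta_j)
        H.append(row)
    return np.delete(np.array(H), slack, axis=1)      # drop the slack angle from the state

H = flow_measurement_matrix(lines, n_bus, slack)
theta = np.array([0.02, -0.01])                       # angles of buses 1 and 2 (radians)
z = H @ theta + 0.001 * np.random.default_rng(4).normal(size=len(lines))
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
print(x_hat)                                          # close to the chosen angles
```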
to be more precise ,let and be , respectively , the state and measurements vector .then , the vectors and are related by the relation where is a nonlinear measurement function , and where , which is traditionally assumed to be a zero mean random vector satisfying = \sigma_{\eta } = \sigma_{\eta}^{\mathsf{t}}>0 ] and = \sigma = \sigma^{\mathsf{t}}>0 ] ; , ; receive and from monitor ; ; ; transmit and to monitor ; ; let , and let , where denotes the range space spanned by the matrix .consider the system of linear equations , and recall that the unique minimum norm solution to coincides with the vector such that and is minimum .it can be shown that being minimum corresponds to being orthogonal to the null space of .let and be partitioned in blocks as in , and let be a directed graph such that corresponds to the set of monitors , and , denoting with the directed edge from to , .our incremental procedure to compute the minimum norm solution to is in algorithm [ algo : pseudoinverse ] , where , given a subspace , we write to denote any full rank matrix whose columns span the subspace .we now proceed with the analysis of the convergence properties of the _ incremental minimum norm solution _ algorithm . _( convergence of algorithm [ algo : pseudoinverse])_[thm : algo : pseudo ] let , where and are partitioned in row - blocks as in . in algorithm[ algo : pseudoinverse ] , the -th monitor returns the vector such that and .see section [ pf_algopseudo ] . it should be observed that the dimension of decreases , in general , when the index increases . in particular , and . to reduce the computational burden of the algorithm , monitor could transmit the smallest among and , together with a packet containing the type of the transmitted basis . _ * ( computational complexity of algorithm [ algo : pseudoinverse ] ) * _ in algorithm [ algo : pseudoinverse ] , the main operation to be performed by the -th agent is a singular value decomposition ( svd ) .is usually very sparse , since it reflects the network interconnection structure .efficient svd algorithms for very large sparse matrices are being developed ( cf ._ svdpack _ ) .] indeed , since the range space and the null space of a matrix can be obtained through its svd , both the matrices and can be recovered from the svd of .let , , and assume the presence of monitors , .recall that , for a matrix , the singular value decomposition can be performed with complexity .hence , the computational complexity of computing a minimum norm solution to the system is . in table[ table : complexity ] we report the computational complexity of algorithm [ algo : pseudoinverse ] as a function of the size . .computational complexity of algorithm [ algo : pseudoinverse ] . [ cols="^,^,^,^",options="header " , ] [ table : complexity ] the following observations are in order .first , if , then the computational complexity sustained by the -th monitor is much smaller than the complexity of a centralized implementation , i.e. , .second , the complexity of the entire algorithm is optimal , since , in the worst case , it maintains the computational complexity of a centralized solution , i.e. , .third and finally , a compromise exists between the blocks size and the number of communications needed to terminate algorithm [ algo : pseudoinverse ] .in particular , if , then no communication is needed , while , if , then communication rounds are necessary to terminate the estimation algorithm .communication rounds are needed to transmit the estimation to every other monitor . 
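a compact numerical sketch of the incremental update just described (one way to realize it; variable names are ours, and the null-space bases are obtained from the svd as in the text): each monitor enforces its own block of equations inside the subspace left unconstrained by the blocks already processed, and forwards the updated estimate together with a basis of the reduced subspace.

```python
# Sketch of the incremental minimum-norm computation on a block-partitioned,
# consistent system H x = z.

import numpy as np

def null_space_basis(A, tol=1e-10):
    """Orthonormal basis of ker(A) via the SVD."""
    u, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * max(A.shape) * (s[0] if s.size else 1.0)))
    return vt[rank:].T                        # columns span ker(A)

def incremental_min_norm(blocks):
    """blocks: list of (H_i, z_i).  Returns the minimum-norm x with H x = z."""
    n = blocks[0][0].shape[1]
    x_hat = np.zeros(n)
    K = np.eye(n)                             # nothing constrained yet
    for H_i, z_i in blocks:                   # monitor i's local step
        Hi_K = H_i @ K
        x_hat = x_hat + K @ np.linalg.pinv(Hi_K) @ (z_i - H_i @ x_hat)
        N = null_space_basis(Hi_K)
        K = K @ N if N.shape[1] else np.zeros((n, 0))
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    H = rng.normal(size=(6, 10))              # under-determined: many solutions
    z = H @ rng.normal(size=10)
    blocks = [(H[0:2], z[0:2]), (H[2:4], z[2:4]), (H[4:6], z[4:6])]
    x_inc = incremental_min_norm(blocks)
    x_ref = np.linalg.pinv(H) @ z             # centralised minimum-norm solution
    print(np.linalg.norm(x_inc - x_ref))      # ~1e-12: the two coincide
```

on a consistent system the final estimate coincides with the centralized minimum-norm solution, as stated by the convergence theorem above.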
]we now focus on the computation of the weighted least squares solution to a set of linear equations .let be an unknown and unmeasurable random vector , with and .consider the system of equations and assume .notice that , because of the noise vector , we generally have , so that algorithm [ algo : pseudoinverse ] can not be directly employed to compute the vector defined in .it is possible , however , to recast the above weighted least squares estimation problem to be solvable with algorithm [ algo : pseudoinverse ] .note that , because the matrix is symmetric and positive definite , there exists , where is a basis of eigenvectors of and is the corresponding diagonal matrix of the eigenvalues . ]a full row rank matrix such that .then , equation can be rewritten as \left [ \begin{array}{c } x \\\bar v \end{array } \right],\end{aligned}\ ] ] where , =0 ] .observe that , because has full row rank , the system is underdetermined , i.e. , ) ] .let = \left [ \begin{array}{cc } h & \varepsilon b \end{array } \right]^\dag z.\end{aligned}\ ] ] the following theorem characterizes the relation between the minimum variance estimation and . _( convergence with )_[existence_limit ] consider the system of linear equations .let and for a full row rank matrix .let ^{-1 } b^{\mathsf{t}}(hh^{\mathsf{t}})^\dag ( i -\varepsilon b c^\dag ) . \end{split}\end{aligned}\ ] ] then ^\dag = \left [ \begin{array}{c } h^\dag - \varepsilon h^\dag b ( c^\dag + d ) \\c^\dag + d \end{array } \right];\end{aligned}\ ] ] and see section [ proof_existence_limit ] . throughout the paper ,let be the vector defined in , and notice that theorem [ existence_limit ] implies that [ remark : incremental ] for the system of equations , let be the covariance matrix of the noise vector , and let ,\quad b=\left [ \begin{array}{c } b_1\\ b_2\\ \vdots\\ b_m \end{array } \right],\quad z=\left [ \begin{array}{c } z_1\\ z_2\\ \vdots\\ z_m \end{array } \right ] , \end{aligned}\ ] ] where , , , and . for , the estimate of the weighted least squares solution to can be computed by means of algorithm [ algo : pseudoinverse ] with input ] for a full row rank matrix .then where is as in theorem [ existence_limit ] . with the same notation as in the proof of theorem [ existence_limit ] , for every value of , the difference equals .since for every , it follows .therefore , for the solution of system by means of algorithm [ algo : pseudoinverse ] , the parameter is chosen according to corollary [ approx_error ] to meet a desired estimation accuracy .it should be observed that , even if the entire matrix needs to be known for the computation of the exact parameter , the advantages of our estimation technique are preserved .indeed , if the matrix is unknown and an upper bound for is known , then a value for can still be computed that guarantees the desired estimation accuracy . on the other hand ,even if is entirely known , it may be inefficient to use to perform a centralized state estimation over time .instead , the parameter needs to be computed only once . to conclude this section, we characterize the estimation residual .this quantity plays an important role for the synthesis of a distributed false data detection algorithm . 
_( estimation residual)_[residual ] consider the system , and let = \sigma = \sigma^{\mathsf{t } } > 0 ] ; )) ] ; ; transmit and ; consider the system of linear equations , where = 0 ] .let , and be partitioned as in , and let .let the monitors communication graph be connected , let be its diameter , and let the monitors execute the diffusive state estimation algorithm .then , each monitor computes the estimate of in steps .let be the estimate of the monitor , and let be such that , where denotes the network state , and .notice that \hat x_i ] and \hat x_i^+ ] , and hence ^{\mathsf{t}}\not\perp { \operatorname{ker}}([-k_i \ ; k_j]) ] , and , since , it follows .the theorem follows from the fact that after a number of steps equal to the diameter of the monitors communication graph , each vector verifies all the measurements , and . as a consequence of theorem [ existence_limit ] , in the limit for to zero , algorithm[ algo : finite_solver ] returns the minimum variance estimate of the state vector , being therefore the diffusive counterpart of algorithm [ algo : pseudoinverse ] .a detailed comparison between incremental and diffusive methods is beyond the purpose of this work , and we refer the interested reader to and the references therein for a thorough discussion . herewe only underline some key differences .while algorithm [ algo : pseudoinverse ] requires less operations , being therefore computationally more efficient , algorithm [ algo : finite_solver ] does not constraint the monitors communication graph .additionally , algorithm [ algo : finite_solver ] can be implemented adopting general asynchronous communication protocols .for instance , consider the _ asynchronous ( diffusive ) state estimation _ algorithm , where , at any given instant of time in , at most one monitor , say , sends its current estimates to its neighbors , and where , for , monitor performs the following operations : 1 . [ -k_i \enspace k_j]^\dag ( \hat x_i - \hat x_j) ] and = b b^{\mathsf{t}} ] for all . based on this considerations ,our distributed detection procedure is in algorithm [ algo : detection ] , where the matrices and are as defined in equation , and is a predefined threshold . , , ; collect measurements ; estimate network state via algorithm [ algo : pseudoinverse ] or [ algo : finite_solver ] false data detected ; in algorithm [ algo : detection ] , the value of the threshold determines the false alarm and the misdetection rate . clearly ,if and is sufficiently small , then no false alarm is triggered , at the expenses of the misdetection rate . by decreasing the value of the sensitivity to failures increases together with the false alarm rate .notice that , if the magnitude of the noise signals is bounded by , then a reasonable choice of the threshold is , where the use of the infinity norm in algorithm [ algo : detection ] is also convenient for the implementation . indeed , since the condition is equivalent to for some monitor , the presence of false data can be independently checked by each monitor without further computation .notice that an eventual alarm message needs to be propagated to all other control centers .a different strategy for the detection of false data relies on statistical techniques , e.g. , see . 
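the core of algorithm [ algo : detection ] is an infinity-norm test on the estimation residual that every monitor can run on its own rows. the exact residual and threshold used in the paper are not fully recoverable from the text above, so the sketch below is a deliberately simplified, conservative variant: with a linear estimator gain k (so that the estimate is k z) and noise entries bounded in magnitude, no alarm fires on attack-free data when the threshold is the induced infinity norm of ( i - h k ) times the noise bound.

....
import numpy as np

def residual_alarm(H, K, z, noise_bound):
    """simplified residual test: without attacks r = (I - H K) v, so the
    threshold below is conservative. it is a placeholder for the threshold
    actually used in the paper."""
    r = z - H @ (K @ z)                               # estimation residual
    M = np.eye(H.shape[0]) - H @ K
    threshold = np.linalg.norm(M, ord=np.inf) * noise_bound
    alarms = np.abs(r) > threshold                    # entrywise test
    return bool(np.any(alarms)), alarms
....

because the test is entrywise, each monitor can check its own measurements independently and only an eventual alarm message needs to be propagated, as remarked above.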
in the interest of brevity , we do not consider these methods , and we only remark that , once the estimation residual has been computed by each monitor , the implementation of a ( distributed ) statistical procedure , such as , for instance , the ( distributed ) -test , is a straightforward task .the procedure described in algorithm [ algo : pseudoinverse ] allows each agent to compute an optimal estimate of the whole network state in finite time . in this section ,we allow each agent to handle only local , i.e. , of small dimension , vectors , and we develop a procedure to recover an estimate of only a certain subnetwork .we envision that the knowledge of only a subnetwork may be sufficient to implement distributed estimation and control strategies .we start by introducing the necessary notation .let the measurements matrix be partitioned into , being the number of monitors in the network , blocks as , \end{aligned}\ ] ] where for all .the above partitioning reflects a division of the whole network into competence regions : we let each monitor be responsible for the correct functionality of the subnetwork defined by its blocks .additionally , we assume that the union of the different regions covers the whole network , and that different competence regions may overlap . observe that , in most of the practical situations , the matrix has a sparse structure , so that many blocks have only zero entries .we associate an undirected graph with the matrix , in a way that reflects the interconnection structure of the blocks .to be more precise , we let , where denotes the set of monitors , and where , denoting by the undirected edge from to , it holds if and only if or . noticed that the structure of the graph , which reflects the sparsity structure of the measurement matrix , describes also the monitors interconnections . by using the same partitioning as in, the moore - penrose pseudoinverse of can be written as ,\end{aligned}\ ] ] where .assume that has full row rank , and observe that .consider the equation , and let ] , with , be the smallest interval containing the spectrum of .then , for and , there exists and such that before proving the above result , for the readers convenience , we recall the following definitions and results . given an invertible matrix of dimension , let us define the _ support sets _ being the -th entry of , and the _ decay sets _ [ decay_rate ]let be of full row rank , and let ] be the resulting matrix . accordingly , let ^{\mathsf{t}} ] , where .because has full row rank , we have \left [ \begin{array}{ccc } p_{11 } & p_{12 } & p_{13}\\ p_{21 } & p_{22 } & p_{23}\\ p_{31 } & p_{32 } & p_{33 } \end{array } \right ] = \left [ \begin{array}{ccc } i_1 & 0 & 0\\ 0 & i_2 & 0\\ 0 & 0 & i_3 \end{array } \right],\qquad h^\dag = \left [ \begin{array}{ccc } p_{11 } & p_{12 } & p_{13}\\ p_{21 } & p_{22 } & p_{23}\\ p_{31 } & p_{32 } & p_{33 } \end{array } \right ] , \end{aligned}\ ] ] where , , and are identity matrices of appropriate dimension . for a matrix ,let denote the number of columns of .let , ) \} ] . in order to prove the theorem, we need to show that there exists and such that notice that , for to hold , the matrix can be any basis of ^{\mathsf{t}}) ] . 
because every entry of decays exponentially , the theorem follows .in section [ sec : simulation_approx ] we provide an example to clarify the exponential decay described in theorem [ thm : local_estimation ] .the effectiveness of the methods developed in the previous sections is now shown through some examples .the ieee 118 bus system represents a portion of the american electric power system as of december , 1962 .this test case system , whose diagram is reported in fig . [fig : power_network118 ] , is composed of 118 buses , 186 branches , 54 generators , and 99 loads .the voltage angles and the power injections at the network buses are assumed to be related through the linear relation where the matrix depends upon the network interconnection structure and the network admittance matrix .for the network in fig .[ fig : power_network118 ] , let be the measurements vector , where = 0 ] , .then , following the notation in theorem [ existence_limit ] , the minimum variance estimate of can be recovered as ^\dag z.\end{aligned}\ ] ] in fig .[ fig : error_epsilon ] we show that , as decreases , the estimation vector computed according to theorem [ existence_limit ] converges to the minimum variance estimate of . in order to demonstrate the advantage of our decentralized estimation algorithm, we assume the presence of control centers in the network of fig . [fig : power_network118 ] , each one responsible for a subpart of the entire network .the situation is depicted in fig .[ fig : network_118_partitioned ] .assume that each control center measures the real power injected at the buses in its area , and let , with = 0 ] , be the measurements vector of the -th area .finally , assume that the -th control center knows the matrix such that .then , as discussed in section [ sec : solver ] , the control centers can compute an optimal estimate of by means of algorithm [ algo : pseudoinverse ] or [ algo : finite_solver ] .let be the number of measurements of the -th area , and let .notice that , with respect to a centralized computation of the minimum variance estimate of the state vector , our estimation procedure obtains the same estimation accuracy while requiring a smaller computation burden and memory requirement .indeed , the -th monitor uses measurements instead of .let be the maximum number of measurements that , due to hardware or numerical contraints , a control center can efficiently handle for the state estimation problem . in fig .[ fig : increase_measurements ] , we increase the number of measurements taken by a control center , so that , and we show how the accuracy of the state estimate increases with respect to a single control center with measurements . to conclude this section , we consider a security application , in which the control centers aim at detecting the presence of false data among the network measurements via distributed computation . for this example, we assume that each control center mesures the real power injection as well the current magnitude at some of the buses of its area . by doing so , a sufficient redundancy in the measurementsis obtained for the detection to be feasible .suppose that the measurements of the power injection at the first bus of the first area is corrupted by a malignant agent . to be more precise , let the measurements vector of the first area be , where is the first canonical vector , and is a random variable . for the simulation we choose to be uniformly distributed in the interval ] . 
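the ieee 118-bus data of the state-estimation example above is not reproduced here, but the structure of the problem is easy to replay on a toy network: injections are a linear (laplacian-like) function of the voltage angles, one bus serves as angle reference, and the minimum variance estimate is the weighted least squares solution. topology, susceptances and noise level below are invented for illustration and have nothing to do with the actual test case.

....
import numpy as np

rng = np.random.default_rng(2)
edges = [(0, 1, 10.0), (1, 2, 5.0), (2, 3, 8.0), (3, 0, 4.0), (1, 3, 6.0)]  # (i, j, susceptance)
n_bus = 4
L = np.zeros((n_bus, n_bus))            # weighted laplacian: injections = L @ angles
for i, j, b in edges:
    L[i, i] += b; L[j, j] += b
    L[i, j] -= b; L[j, i] -= b

H = L[:, 1:]                            # bus 0 is the angle reference
theta_true = rng.normal(scale=0.1, size=n_bus - 1)
noise_std = 0.01
z = H @ theta_true + noise_std * rng.normal(size=n_bus)   # measured injections

W = np.eye(n_bus) / noise_std**2
theta_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)     # minimum variance estimate
print(np.abs(theta_hat - theta_true).max())
....

in the paper the same estimate is obtained distributively, by running algorithm [ algo : pseudoinverse ] or [ algo : finite_solver ] on the augmented [ h  epsilon*b ] construction of the previous section.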
] the residual functions are reported in fig .[ fig : detection ] . observe that , since the first residual is greater than the threshold , the control centers successfully detect the false data . regarding the identification of the corrupted measurements, we remark that a regional identification may be possible by simply analyzing the residual functions . in this example , for instance , since the residuals are below the threshold value , the corrupted data is likely to be among the measurements of the first area .this important aspect is left as the subject of future research .consider an electrical network with buses , where .let the buses interconnection structure be a two dimensional lattice , and let be the graph whose vertices are the buses , and whose edges are the network branches .let be partitioned into identical blocks containing vertices each , and assume the presence of control centers , each one responsible for a different network part .we assume the control centers to be interconnected through an undirected graph .in particular , being the set of buses assigned to the control center , we let the control centers and be connected if there exists a network branch linking a bus in to a bus in .an example with and is in fig . [fig : grid ] . in order to show the effectiveness of our approximation procedure ,suppose that each control center aims at estimating the vector of the voltage angles at the buses in its region .we assume also that the control centers cooperate , and that each of them receives the measurements of the real power injected at only the buses in its region .algorithm [ algo : finite_solver ] is implemented by the control centers to solve the estimation problem . in fig .[ fig : simulation_finite ] we report the estimation error during the iterations of the algorithm .notice that , as predicted by theorem [ thm : local_estimation ] , each leader possess a good estimate of the state of its region before the termination of the algorithm .two distributed algorithms for network control centers to compute the minimum variance estimate of the network state given noisy measurements have been proposed .the two methods differ in the mode of cooperation of the control centers : the first method implements an incremental mode of cooperation , while the second uses a diffusive interaction .both methods converge in finite time , which we characterize , and they require only local measurements and model knowledge to be implemented . additionally , an asynchronous and scalable implementation of our diffusive estimation method has been described , and its efficiency has been shown through a rigorous analysis and through a practical example .based on these estimation methods , an algorithm to detect cyber - attacks against the network measurements has also been developed , and its detection performance has been characterized .let ^{\mathsf{t}} ] .we show by induction that , , and .note that the statements are trivially verified for .suppose that they are verified up to , then we need to show that , , and .we now show that , which is equivalent to note that by the induction hypothesis we have , and hence .therefore , we need to show that let , and notice that due to the properties of the pseudoinverse operation .suppose that .since , the vector can be written as , where and , .then , it holds , and hence , which contradicts the hypothesis .finally .we now show that . 
because of the consistency of the system of linear equations , and because by the induction hypothesis , there exists a vector such that , and hence that .we conclude that , and finally that .we first show that .recall from that .let be such that , then , so that .we now show that .recall that .let be such that , then , so that , which concludes the proof .the first property follows directly from ( cfr . page 427 ) . to show the second property ,observe that , so that for the theorem to hold , we need to verify that or , equivalently , that and consider equation .after simple manipulation , we have so that we need to show only that recall that for a matrix it holds .then the term equals because .we conclude that equation holds .consider now equation .observe that . because has full row rank , and , simple manipulation yields ^\dag ( i - hh^\dag ) b= h^{\mathsf{t}}(bb^{\mathsf{t}})^{-1 } ( i - hh^\dag ) b,\end{aligned}\ ] ] and hence ^\dag \right\ } ( i - hh^\dag ) b=0.\end{aligned}\ ] ] since , we obtain ^\dag ( i - hh^\dag ) b = 0.\end{aligned}\ ] ] a sufficient condition for the above equation to be true is ^\dag\right)^{\mathsf{t}}b^{\mathsf{t}}(bb^{\mathsf{t}})^{-1 } h=0.\end{aligned}\ ] ] from lemma [ lemma_ker ] we have .^\dag\right)^{\mathsf{t}}\right)={\operatorname{ker}}((i - aa^\dag)b).\end{aligned}\ ] ] since we have that ^\dag ( i - hh^\dag ) b = 0,\end{aligned}\ ] ] and that equation holds .this concludes the proof .y. liu , m. k. reiter , and p. ning .false data injection attacks against state estimation in electric power grids . in _acm conference on computer and communications security _ , pages 2132 , chicago , il , usa , november 2009 .f. pasqualetti , r. carli , a. bicchi , and f. bullo .distributed estimation and detection under local information . in _ifac workshop on distributed estimation and control in networked systems _ ,pages 263268 , annecy , france , september 2010 .
this work presents a distributed method for control centers to monitor the operating condition of a power network , i.e. , to estimate the network state , and to ultimately determine the occurrence of threatening situations . state estimation has been recognized to be a fundamental task for network control centers to ensure correct and safe functionalities of power grids . we consider ( static ) state estimation problems , in which the state vector consists of the voltage magnitude and angle at all network buses . we consider the state to be linearly related to network measurements , which include power flows , current injections , and voltages phasors at some buses . we admit the presence of several cooperating control centers , and we design two distributed methods for them to compute the minimum variance estimate of the state given the network measurements . the two distributed methods rely on different modes of cooperation among control centers : in the first method an _ incremental _ mode of cooperation is used , whereas , in the second method , a _ diffusive _ interaction is implemented . our procedures , which require each control center to know only the measurements and structure of a subpart of the whole network , are computationally efficient and scalable with respect to the network dimension , provided that the number of control centers also increases with the network cardinality . additionally , a _ finite - memory _ approximation of our diffusive algorithm is proposed , and its accuracy is characterized . finally , our estimation methods are exploited to develop a distributed algorithm to detect corrupted data among the network measurements . , ,
transmission control protocol ( tcp ) is commonly used by most of internet applications and becomes one of the two original components of the internet protocol suite , complementing the internet protocol ( ip ) , thus the entire suite is known as tcp / ip .tcp provides stable and reliable delivery of data packets without relying on any explicit feedback from the underlying network .however , it relies only on the two ends of the connection which are sender and receiver . that is why tcp is known as end - to - end or host - to - host protocol . in the last couple of years , tcpis profusely used by major internet applications such as file transfer , email , world - wide - web and remote administration .the first idea of tcp had been presented by .thereafter , tcp has been implemented in several operating systems and examined in real environment . with the advancement in network technology ,tcp faced many new scenarios and problems , such as network congestion , under utilization of bandwidth , unfair share , unnecessary retransmission , out of order delivery , non - congestion loss .all of these problems encouraged researchers to review the behavior of tcp . in order to solve these problems ,many tcp variants have been developed .each tcp variant has been designed to solve certain problems , some try to survive over a very slow and congested connections , and some try to achieve higher throughput to fully utilize the high - speed bandwidths , while some try to be more fair. in fact , they are mostly different from each other so that categorizes them into high - speed , wireless , satellite and low priority .indeed , a particular tcp variant which is proper for wireless networks , may not fit for high - bdp wired networks and vice versa .therefore , it is necessary to conduct a comparison between tcp variants that are designed for high - speed networks to show the advantages and disadvantages of each tcp variant . in this paper , scalable tcp , hs - tcp , bic ,h - tcp , cubic , tcp africa , tcp compound , tcp fusion , newreno , tcp illinois and yeah have been evaluated using ns2 network simulator .this performance evaluation presents the advantages and disadvantages of the compared tcp variants and shows the differences between them in terms of throughput , loss - ratio and fairness over high - bdp networks . as well as, it presents and explains the behaviors of the compared tcp variants , shows the impacts of the used approaches , and arranges the thoughts .thus , this paper may help the researchers to improve the performance of the existing tcp variants by cutting down the effort of comparing the existing protocols in order to improve it to fit the new generation of the networks .the rest of this paper is organized as follows : section [ mot ] presents the motivations behind this work , challenges and previous works . while , section [ pe ] presents the performance evaluation of high - speed tcp variants and explains the experiments setup , network topology , performance metrics , results and discussion .finally , section [ conc ] concludes the paper with some final comments .the rapid growth of network technologies reduces the ability of tcp to fully utilize the resources of these networks . 
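since throughput, loss ratio and fairness are the metrics used throughout the evaluation, a short sketch of how the last two are typically computed from per-flow counters may help; the fairness measure is the standard index of jain et al. (1984) cited in the references. variable names and the example numbers are illustrative.

....
def jain_fairness(throughputs):
    """jain's fairness index: 1/n <= J <= 1, equal to 1 for a perfectly fair share."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

def loss_ratio(packets_lost, packets_sent):
    """fraction of packets dropped over the whole transfer."""
    return packets_lost / packets_sent

print(jain_fairness([9.0, 9.5, 1.0, 0.5]))   # unfair split of a bottleneck -> about 0.58
print(jain_fairness([5.0, 5.0, 5.0, 5.0]))   # perfectly fair split -> 1.0
....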
due to this problem of under - utilization of network resources , many high - speed tcp variants that aim to increase the utilization of these resourceshave been exist .these increase of tcp aggressiveness , in order to fully utilize the high - speed bandwidths , arises the severe problem of burst loss .in addition to that , the variety of these tcp protocols leads to some questions that need to be addressed : which tcp variant seems to be the best for high - speed networks ?are the current tcp variants sufficient to fully utilize the high - speed bandwidths ? in order to answer these questions , a comparative study of high - speed tcp variants is required . such comparison or performance evaluation addresses the points of tcp weaknesses and consequently supports the process of enhancing tcp performance .nowadays , tcp is struggling to deal with different network environments such as wireless or lossy networks , high - speed networks and highly congested networks .each type of these networks has its own problems and limitations that are different from one to another networks .consequently , there are many tcp variants designed for each certain type of networks .as shown in figure [ taxonomy ] , provided an excellent evolutionary graph of most tcp variants based on the problem of which they are trying to solve and how they are behaving . in this paper , high - speed linux tcp variants that are available for research is presented and explained , as shown in table [ history ] , along the following subsections . [ cols="<,<,<,<,<,<",options="header " , ] [ comparison ]this work was supported by the ministry of higher education of malaysia under the fundamental research grant frgs/02/01/12/1143/fr for financial support .+ beheshti , n. , ganjali , y. , rajaduray , r. , blumenthal , d. , mckeown , n. , 2006 .buffer sizing in all - optical packet switches . in : optical fiber communication conference .optical society of america , pp .cavendish , d. , kumazoe , k. , ishizaki , h. , ikenaga , t. , tsuru , m. , oie , y. , 2012 . on tuning tcp for superior performance on high speed path scenarios .in : internet 2012 , the fourth international conference on evolving internet .. 1116 .jain , r. , chiu , d .-m . , hawe , w. r. , 1984 .a quantitative measure of fairness and discrimination for resource allocation in shared computer system .eastern research laboratory , digital equipment corporation .kaneko , k. , fujikawa , t. , su , z. , katto , j. , 2007 .tcp - fusion : a hybrid congestion control algorithm for high - speed networks . in : proc .pfldnet , isi , marina del rey ( los angeles ) , california .. 3136 .king , r. , baraniuk , r. , riedi , r. , 2005 .tcp - africa : an adaptive and fair rapid increase rule for scalable tcp . in : infocom 2005 .24th annual joint conference of the ieee computer and communications societies .proceedings ieee .. 111 .legrange , j. d. , simsarian , j. e. , bernasconi , p. , neilson , d. t. , buhl , l. , gripp , j. , 2009 .demonstration of an integrated buffer for an all - optical packet router . in : optical fibercommunication - incudes post deadline papers , 2009 .conference on .ieee , pp .13 . , nov .2009 . test - bed based comparison of single and parallel tcp and the impact of parallelism on throughput and fairness in heterogenous networks . in : icctd 09 ,ieee proceedings of the 2009 international conference on computer technology and development .vol . 1 .kota kinabalu , malaysia , pp .332335 . , nov . 
2013 .performance evaluation of parallel tcp , and its impact on bandwidth utilization and fairness in high - bdp networks based on test - bed . in : micc13 ,2013 ieee 11th malaysia international conference on communications .kuala lumpur , malaysia .tan , k. , song , j. , 2006 .compound tcp : a scalable and tcp - friendly congestion control for high - speed networks . in : in 4th international workshop on protocols for fast long - distance networks ( pfldnet ) , 2006 .vishwanath , a. , sivaraman , v. , 2008 .routers with very small buffers : anomalous loss performance for mixed real - time and tcp traffic . in : quality of service , 2008 .iwqos 2008 .16th international workshop on .ieee , pp .8089 .vishwanath , a. , sivaraman , v. , rouskas , g. n. , 2011 .anomalous loss performance for mixed real - time and tcp traffic in routers with very small buffers .ieee / acm transactions on networking 19 ( 4 ) , 933946 .xu , l. , harfoush , k. , rhee , i. , 2004 .binary increase congestion control ( bic ) for fast long - distance networks . in : infocom 2004 .twenty - third annual joint conference of the ieee computer and communications societies .vol . 4 .
transmission control protocol ( tcp ) has been profusely used by most of internet applications . since 1970s , several tcp variants have been developed in order to cope with the fast increasing of network capacities especially in high bandwidth delay product ( high - bdp ) networks . in these tcp variants , several approaches have been used , some of these approaches have the ability to estimate available bandwidths and some react based on network loss and/or delay changes . this variety of the used approaches arises many consequent problems with different levels of dependability and accuracy . indeed , a particular tcp variant which is proper for wireless networks , may not fit for high - bdp wired networks and vice versa . therefore , it is necessary to conduct a comparison between the high - speed tcp variants that have a high level of importance especially after the fast growth of networks bandwidths . in this paper , high - speed tcp variants , that are implemented in linux and available for research , have been evaluated using ns2 network simulator . this performance evaluation presents the advantages and disadvantages of these tcp variants in terms of throughput , loss - ratio and fairness over high - bdp networks . the results reveal that , cubic and yeah overcome the other high - speed tcp variants in different cases of buffer size . however , they still require more improvement to extend their ability to fully utilize the high - speed bandwidths , especially when the applied buffer is or less than the bdp of the link . linux tcp , high - bdp , congestion control , throughput , loss ratio , fairness index .
by constraining the amount of classical information which can be reliably encoded into a collection of quantum states , the holevo bound sets a limit on the rates that can be achieved when transferring classical messages in a quantum communication channel . even though , for finite number of channel uses , the bound in general is not achievable , it is saturated in the asymptotic limit of infinitely many channel uses .consequently , via proper optimization and regularization , it provides the quantum analog of the shannon capacity formula , i.e. the _ classical capacity _ of the quantum channel ( e.g. see refs . ) . starting from the seminal works of ref . several alternative versions of the asymptotic attainability of the holevo bound have been presented so far ( e.g. see refs . and references therein ) .the original proof was obtained extending to the quantum regime the typical subspace encoding argument of shannon communication theory . in this context an explicit detection scheme ( sometime presented as the _ pretty good measurement _ ( pgm ) scheme ) was introduced that allows for exact message recovery in the asymptotic limit infinitely long codewords .more recently , ogawa and nagaoka , and hayashi and nagaoka proved the asymptotic achievability of the bound by establishing a formal connection with quantum hypothesis testing problem , and by generalizing a technique ( the information - spectrum method ) which was introduced by verd and han in the context of classical communication channel . in this paperwe analyze a new decoding procedure for classical communication in a quantum channel . herewe give a formal proof using conventional methods , whereas in we give a more intuitive take on the argument .our decoding procedure allows for a new proof of the asymptotic attainability of the holevo bound . as in refs . it is based on the notion of typical subspace but it replaces the pgm scheme with a sequential decoding strategy in which , similarly to the quantum hypothesis testing approach of ref . , the received quantum codeword undergoes to a sequence of simple yes / no projective measurements which try to determine which among all possible inputs my have originated it . to prove that this strategy attains the bound we compute its associated _ average _ error probability and show that it converges to zero in the asymptotic limit of long codewords ( the average being performed over the codewords of a given code _ and _ over all the possible codes ) .the main advantage of our scheme resides on the fact that , differently from pgm and its variants , it allows for a simple intuitive description , it clarifies the role of entanglement in the decoding procedure , its analysis avoids some technicalities , and it appears to be more suited for practical implementations . the paper is organized as follows : in sec .[ int ] we set the problem and present the scheme in an informal , non technical way . the formal derivation of the procedure begins in the next section .specifically , the notation and some basic definitions are presented in sec .next the new sequential detection strategy is formalized in sec .[ sec2 ] , and finally the main result is derived in sec .[ sec3 ] . 
conclusions and perspectives are given in sec .[ seccon ] .the paper includes also some technical appendixes .the transmission of classical messages through a quantum channel can be decomposed in three logically distinct stages : the _ encoding _ stage in which the sender of the message ( say , alice ) maps the classical information she wish to communicate into the states of some quantum objects ( the quantum information carriers of the system ) ; the _ transmission _ stage in which the carriers propagate along the communication line reaching the receiver ( say , bob ) ; and the _ decoding _ stage in which bob performs some quantum measurement on the carriers in order to retrieve alice s messages . for explanatory purposeswe will restrict the analysis to the simplest scenario where alice is bound to use only unentangled signals and where the noise in the channel is memorylessa similar formulation of the problem holds also when entangled signals are allowed : in this case however the defined in the text represents ( possibly entangled ) states of -longs _ blocks _ of carriers : for each possible choice of , and for each possible coding / decoding strategy one define the error probability as in eq .( [ ff1prova ] ) . the optimal transmission rate ( i.e. the capacity of the channel ) is also expressible as in the rhs term of eq . ( [ capa ] ) via proper regularization over ( this is a consequence of the super - additivity of the holevo information ) .finally the same construction can be applied also in the case of quantum communication channels with memory , e.g. see ref . . ] . under this hypothesisthe coding stage can be described as a process in which alice encodes classical messages into factorized states of quantum carriers , producing a collection of quantum codewords of the form where are symbols extracted from a classical alphabet and where we use different vectors . due to the communication noise, these strings will be received as the factorized states ( the output codewords of the system ) , where for each we have being the completely positive , trace preserving channel that defines the noise acting on each carrier .finally , the decoding stage of the process can be characterized by assigning a specific positive valued operator measurement ( povm ) which bob applies to to get a ( hopefully faithful ) estimation of the value . indicating with the elements which compose the selected povm , the average error probability that bob will mistake a given sent by alice for a different message ,can now be expressed as , e.g. see ref . , )\;.\end{aligned}\ ] ] in the limit infinitely long sequences , it is known that can be sent to zero under the condition that scales as with being bounded by the optimized version of the holevo information , i.e. 
where the maximization is performed over all possible choices of the inputs and over all possible probabilities , and where for a given quantum output ensemble we have with ] is the von neumann entropy of ( as in the classical case , the states defined above can be thought as those which , in average , contain the symbol almost times ) .identifying with the set of those vectors which satisfies eq .( [ sat ] ) , the projector on can then be expressed as while the average state is clearly given by by construction , the two operators satisfy the inequalities furthermore , it is known that the probability that will emit a message which is not in is exponentially depressed .more precisely , for all it is possible to identify a sufficiently large such for all we have < \epsilon \;.\label{bbhd1 } \end{aligned}\ ] ] typical subsets can be defined also for each of the product states of eq .( [ eq1 ] ) , associated to each codeword at the output of the channel . in this casethe definition is as follows : first for each we define the spectral decomposition of the element , i.e. where are the eigenvectors of and the corresponding eigenvalues ( notice that while for all and , in general the quantities are a - priori undefined ) .now the spectral decomposition of the codeword is provided by , where for one has notice that for fixed the vectors are an orthonormal set of ; notice also that in general such vectors have nothing to do with the vectors of eq .( [ ffd ] ) .now the typical subspace of is defined as the linear subspace of spanned by the whose associated satisfy the inequality , with being the holevo information of the source .the projector on can then be written as where identify the set of the labels which satisfy eq .( [ def1 ] ) .we notice that the bounds for the probabilities do not depend on the value of which defines the selected codeword : they are only function of the source only ( this of course does not imply that the subspace will not depend on ) .it is also worth stressing that since the vectors in general are not orthogonal with respect to the label , there will be a certain overlap between the subspaces .the reason why they are defined as detailed above stems from the fact that the probability that will not be found in ( averaged over all possible realization of ) , can be made arbitrarily small by increasing , e.g. see ref .more precisely , for fixed , one can show that for all there exists such that for all integer one has , < \epsilon\;,\end{aligned}\ ] ] where is the probability ( [ rpob1 ] ) that the source has emitted the codeword .the goal in the design of a decoding stage is to identify a povm attached to the code * c * that yields a vanishing error probability as increases in identifying the codewords .how can one prove that such a povm exists ?first of all let us remind that a povm is a collection of positive operators .the probability of getting a certain outcome when measuring the codeword is computed as the expectation value ] corresponds to the case in which the povm is not able to identify any of the possible codewords ) .then , the error probability ( averaged over all possible codewords of * c * ) is given by the quantity )\;.\end{aligned}\ ] ] proving that this quantity is asymptotically null will be in general quite complicated .however , the situation simplifies if one averages with all codewords * c * that the source can generate , i.e. 
being the probability defined in eq .( [ ll ] ) .proving that nullifies for implies that at least one of the codes * c * generated by allows for asymptotic null error probability with the selected povm ( indeed the result is even stronger as _ almost all _ those which are randomly generated by will do the job ) . in refs . the achievability of the holevo bound was proven adopting the pretty good measurement detection scheme , i.e. the povm of elements ^{-\tfrac{1}{2 } } \ ; p p_{\vec{j } } p\ ; \big [ \sum_{\vec{h}\in{\cal c } } p p_{\vec{h } } p \big]^{-\tfrac{1}{2 } } , \\x_0 & = & \openone - \sum_{\vec{j}\in{\cal c } } x_{\vec{j}}\;,\end{aligned}\ ] ] where is the projector ( [ proj ] ) on the typical subspace of the average state of the source , for the are the projectors ( [ gh ] ) associated with the codeword . with this choiceone can verify that , for given there exist sufficiently large such that eq .( [ ff12 ] ) yields the inequality this implies that as long as is smaller than one can bound the ( average ) error probability close to zero .in this section we formalize our detection scheme .as anticipated in the introduction , the idea is to determine the value of the label associated with the received codeword , by checking whether or not such state pertains to the typical subspace of the element of the selected code .specifically we proceed as follows * first we fix an ordering of the codewords of yielding the sequence with for all ( this is not really relevant but it is useful to formalize the protocol ) ; * then bob performs a yes / no measurement that determines whether or not the received state is the typical subspace of the first codeword ; * if the answer is yes the protocol stops and bob declares to have identified the received message as the first of the list ( i.e. ) ; * if the answer is no bob , performs a yes / no measurement to check whether or not the state is in the typical sub of ; * the protocol goes on , testing similarly for all possibilities . in the end we will either determine an estimate of the transmitted or we will get a null result ( the messages has not been identified , corresponding to an error in the communication ) .we now better specify the yes / no measurements .indeed , as mentioned earlier , we have to `` smooth '' them to account for the disturbance they might introduce in the process . for this purpose ,each of such measurements will consist in two steps in which first we check ( via a von neumann projective measurement ) whether or not the incoming state is in the typical subspace of the average message .then we apply a von neumann projective measurement on the typical subspace of the -th codeword of bob s list ( see fig .[ fig0 ] ) .hence , the povm elements are defined as follows. the first element tests if the transmitted state is in , so it is described by the ( positive ) operator where for any operator the symbol stands for being the projector of eq .( [ proj ] ) .similarly the remaining elements can be expressed as follows ( see appendix [ apovm ] for an explicit derivation ) .a compact expression can be derived by writing where with being the orthogonal complement of , i.e. with such definitions the associated average error probability ( [ ff12 ] ) can then be expressed as , )\nonumber \\ & = & 1 - \frac{1}{n } \ ; \sum_{\vec{j}}\ ; p_{\vec{j}}\ ; \sum_{\ell=0}^{n-1}\ ; \mbox{tr } [ p_{\vec{j } } \;\ ; \phi^\ell(\rho_{\vec{j}})]\;,\end{aligned}\ ] ] where we used the fact that the summations over the various are independent . 
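to make the measurement cascade above concrete, here is a toy single-copy simulation for pure-state codewords: bob projects the received state onto each candidate codeword in turn and, after every "no", keeps the renormalized component orthogonal to that candidate. it deliberately omits the typical-subspace smoothing (the projectors on the typical subspaces of the average state and of the codewords) that the actual proof requires, so it only illustrates the sequential yes/no structure, not the asymptotic rate claim.

....
import numpy as np

def sequential_decode(codewords, sent_index, rng):
    """one run of the yes/no cascade on the pure state codewords[sent_index]."""
    psi = codewords[sent_index].copy()
    for k, c in enumerate(codewords):
        p_yes = abs(np.vdot(c, psi)) ** 2       # born rule for the projector |c><c|
        if rng.random() < p_yes:
            return k                            # decoder declares codeword k
        psi = psi - np.vdot(c, psi) * c         # "no": project onto the complement
        norm = np.linalg.norm(psi)
        if norm < 1e-12:
            return None
        psi = psi / norm
    return None                                 # no codeword identified (error)

rng = np.random.default_rng(3)
dim, n_codewords, trials = 32, 4, 2000
codewords = [v / np.linalg.norm(v) for v in rng.normal(size=(n_codewords, dim))]
hits = sum(sequential_decode(codewords, t % n_codewords, rng) == t % n_codewords
           for t in range(trials))
print(hits / trials)                            # close to 1 for nearly orthogonal codewords
....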
in writing the above expression we introduced the following super - operator which is completely positive and trace decreasing , and we use the notation to indicate the -fold concatenation of super - operators , e.g. .it is worth noticing that the possibility of expressing in term of a single super - operator follows directly from the average we have performed over all possible codes * c*. for future reference we find it useful to cast eq .( [ ff10 ] ) in a slightly different form by exploiting the the definitions of eqs .( [ eq2 ] ) and ( [ gh ] ) .more precisely , we write \nonumber \\ & & = \sum_{\ell=0}^{n-1 } \sum_{\vec{j},\vec{j}_1,\cdots , \vec{j}_\ell}\ ; \sum_{\vec{k } } \sum_{\vec{k}'\in{\cal k}_{\vec{j } } } \ ; \lambda_{\vec{k}}^{(\vec{j } ) } \ ; \frac{p_{\vec{j } } p_{\vec{j}_1 } \cdots p_{\vec{j}_\ell } } { n } \ ; \nonumber\\&&\qquad\qquad\times \left| \langle e_{\vec{k } ' } ^{(\vec{j } ) } | \bar{q}_{\vec{j}_1 } \cdots \bar{q}_{\vec{j}_\ell } |e_{\vec{k } } ^{(\vec{j})}\rangle\right|^2\;. \label{ff122}\end{aligned}\ ] ]in this section we derive an upper limit for the error probability ( [ ff10 ] ) which will lead us to the prove the achievability of the holevo bound .specifically , we notice that \big|^2\;,\end{aligned}\ ] ] where the first inequality follows by dropping some positive terms ( those with ) , the first identity simply exploits the fact that the are normalized probabilities when summing over all , and the second inequality follows by applying the cauchy - schwarz inequality . replacing this into eq .( [ ff122 ] ) we can write \right|^2\;.\end{aligned}\ ] ] this can be further simplified by invoking again the cauchy - schwarz inequality this time with respect to the summation over the , i.e. \right|^2 \ ; \nonumber \\ & & \geqslant \big| \sum_{\vec{j},\vec{j}_1,\cdots , \vec{j}_\ell}\ ; p_{\vec{j } } p_{\vec{j}_1 } \cdots p_{\vec{j}_\ell } \ ; \mbox{tr } [ p_{\vec{j } } \rho_{\vec{j } } p_{\vec{j } } \ ; \ ; \bar{q}_{\vec{j}_1 } \cdots \bar{q}_{\vec{j}_\ell } ] \big|^2 \nonumber \\ & & \qquad \qquad \qquad = \left ( \mbox{tr } [ w_1 \ ; { \cal q}^\ell ] \right)^2 \ ; , \end{aligned}\ ] ] where for integer we defined ( notice that _ is not _ , e.g. see eq .( [ prima ] ) ) .therefore one gets \right|^2 \;.\end{aligned}\ ] ] to proceed it is important to notice that the quantity is always positive and smaller than , i.e. both properties simply follow from the identity p \;,\end{aligned}\ ] ] and from the fact that .we also notice that where the last inequality is obtained by observing that the typical eigenvalues are lower bounded as in eq .( [ def1 ] ) . from the above expressionswe can conclude that the quantity in the summation that appears on the lhs of eq .( [ ff144rffdd ] ) is always smaller than one and that it is decreasing with .an explicit proof of this fact is as follows = \mbox{tr } [ \sqrt{w_1 } \ ; { \cal q}^{\frac{\ell-1}{2 } } \ ; { \cal q } \ ; { \cal q}^{\frac{\ell-1}{2 } } \ ; \sqrt{w_1 } ] \nonumber \\ & \leqslant & \mbox{tr } [ \sqrt{w_1 } \ ; { \cal q}^{\frac{\ell-1}{2 } } \ ; \openone \ ; { \cal q}^{\frac{\ell-1}{2 } } \ ; \sqrt{w_1 } ] = \mbox{tr } [ w_1 \ ; { \cal q}^{\ell-1 } ] \ ; , \nonumber \end{aligned}\ ] ] where we used the fact that the square root of a non negative operator can be taken to be non negative too ( for a more detailed characterization of see appendix [ appb ] ). 
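the rate threshold appearing in these bounds is the holevo information of the output ensemble, chi = s(sum_j p_j rho_j) - sum_j p_j s(rho_j). for small ensembles it can be evaluated directly; a minimal numpy sketch follows (base-2 logarithms, so chi is expressed in bits per channel use):

....
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo_information(probs, states):
    """chi = S(sum_j p_j rho_j) - sum_j p_j S(rho_j) for the ensemble {p_j, rho_j}."""
    rho_bar = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho_bar) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# two equiprobable pure qubit states with overlap cos(theta)
theta = np.pi / 6
kets = [np.array([1.0, 0.0]), np.array([np.cos(theta), np.sin(theta)])]
states = [np.outer(k, k.conj()) for k in kets]
print(holevo_information([0.5, 0.5], states))   # strictly below 1 bit, since the states overlap
....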
a further simplification of the bound can be obtained by replacing the terms in the summation of eq .( [ ff144rffdd ] ) with the smallest addendum .this yields where , using the fact that , we defined = \sum_{z=0}^{n-1 } \tiny { \left ( \begin{array}{c } n-1 \\ z \end{array } \right ) } \ ; ( -1)^z\ ; f_z ,\\ f_z & : = & \mbox{tr } [ w_1 \ ; p\ ; \bar{w}_0^z ] \;. \end{aligned}\ ] ] it turns out that the quantities defined above are positive , smaller than one , and decreasing in .indeed as shown in the appendix [ appa ] they satisfy the inequalities and , for each given , there exists a sufficiently large such that for using these expressions , we can derive the following bound on , i.e. \label{jj}\;,\end{aligned}\ ] ] where in the first inequality we get a bound by taking all the terms of with the negative sign , the second from ( [ ffh ] ) .now , on one hand if is too large the quantity on the rhs side will become negative as we are taking the power of a quantity which is larger than . on the other hand ,if is small then for large the quantity in the square parenthesis will approach .this implies that there must be an optimal choice for in order to have ] is an indeterminate form .its behavior can be studied for instance using the de lhpital formula , yielding = \frac{\log x}{\log y } \ ; \lim_{n\rightarrow \infty } \ ; \left(\frac{y}{x}\right)^n \;. \end{aligned}\ ] ] this shows that if the limit exists and it is zero , i.e. .vice - versa for the limit diverges , and thus .therefore , assuming , we can conclude that as long as the quantity on the rhs of eq .( [ jj22 ] ) approaches as increases ( this corresponds to having in the function ) . reminding then eq .( [ hh ] ) we get and thus on the contrary , if , the lower bound on becomes infinitely negative and hence useless to set a proper upper bound on . to summarize, we have shown that adopting the sequential detection strategy defined in sec .[ sec2 ] we can conclude that it is possible to send messages with asymptotically vanishing error probability , for all rates which satisfy the condition ( [ condition2 ] ) . summarize : the above analysis provides an explicit upper bound for the averaged error probability of the new detection scheme ( the average being performed over all codewords of a given code , and over all possible codes ) .specifically , it shows that the error probability can be bound close to zero for codes generated by sources which have strictly less than elements . in other words , our new detection scheme provides an alternative demonstration of the achievability of the holevo bound .an interesting open question is to extend the technique presented here to a decoding procedure that can achieve the quantum capacity of a channel .vg is grateful to p. hayden , a. s. holevo , k. matsumoto , j. tyson and a. 
winter for comments and discussions .vg acknowledges support from the firb - ideas project under the contract rbid08b3fm and support of institut mittag - leffler ( stockholm ) , where he was visiting while part of this work was done .sl was supported by the wm keck foundation , darpa , nsf , and nec .lm was supported by the wm keck foundation .here we provide an explicit derivation of the povm ( [ ffd3 ] ) associated with our iterative measurement procedure .it is useful to describe the whole process as a global unitary transformation that coherently transfers the information from the codewords to some external memory register .consider , for instance , the first step of the detection scheme where bob tries to determine whether or not a given state corresponds to the first codeword of his list .the corresponding measurement can be described as the following ( two - step ) unitary transformation where represents a two - qubit memory register which stores the information extracted from the system .specifically , the first qubit records with a `` 1 '' if the state belongs to the typical subspace of the average state of the source ( instead it will keep the value `` 0 '' if this is not the case ) .similarly , the second qubit of records with a `` 1 '' if the projected component is in the typical subspace of . accordingly the joint probability of success of finding in and _ then _ in is given by in agreement with the definition of given in eq .( [ e1 ] ) .vice - versa the joint probability of finding the state in in and _ then not _ in is given by and finally the joint probability of _ not _ finding in in is .let us now consider the second step of the protocol where bob checks wether or not the message is in the typical subspace of .it can be described as a unitary gate along the same lines of eq .( [ twostep ] ) with replaced by , and with a new two - qubit register .notice however that this gate only acts on that part of the global system which emerges from the first measurement with in .this implies the following global unitary transformation , \nonumber \\ & & \qquad\qquad + ( \openone -p ) |\psi\rangle |0 0 \rangle_{b_1 } |00\rangle_{b_2 } , \label{twostepb}\end{aligned}\ ] ] which shows that the joint probability of finding in ( after having found it in , not in , and again in ) is in agreement with the definition of given in eq .( [ gg ] ) . reiterating this procedure for all the remaining steps onecan then verify the validity of eq .( [ ffd3 ] ) for all .moreover , it is clear ( e.g. from eq . and )that it is a quite different povm from the conventionally used pretty good measurement .in this section we derive a couple of inequalities which are not used in the main derivation but which allows us to better characterize the various operators which enter into our analysis .first of all we observe that which follows by the following chain of inequalities , where we used eq .( [ def1 ] ) .we can also prove the following identity which follows by using eq .( [ use ] ) . notice that due to eq .( [ prima ] ) this also gives start deriving the inequalities of eq .( [ hh ] ) first .to do we observe that for all positive we can write \leqslant \sum_{\vec{j } } \ ; p_{\vec{j } } \ ; \mbox{tr } [ \rho_{\vec{j } } ( \openone - p_{\vec{j } } ) ] < \epsilon ' \ ; , \nonumber \end{aligned}\ ] ] where the first inequality follows by simply noticing that is positive semidefinite ( the two operators commute ) , while the last is just eq .( [ impo ] ) which holds for sufficiently large . 
reorganizing the terms and using eq .( [ bbhd1 ] ) this finally yields & > & \sum_{\vec{j } } \ ; p_{\vec{j } } \ ; \mbox{tr } [ \rho_{\vec{j } } p ] - \epsilon ' \nonumber \\ & = & \mbox{tr } [ \rho^{\otimes n } p ] -\epsilon ' > 1 -2 \epsilon ' \;,\end{aligned}\ ] ] which corresponds to the lefttmost inequality of eq .( [ hh ] ) by setting .the rightmost inequality instead follows simply by observing that \leqslant \mbox{tr } [ w_1 ] = \sum_{\vec{j } } p_{\vec{j } } \mbox{tr } [ p_{\vec{j } } \rho_{\vec{j } } ] \leqslant 1\;. \end{aligned}\ ] ] to prove the inequality ( [ ffh ] ) we finally notice that for we can write = \mbox{tr } [ w_1 \ ; \bar{w}_0^z ] \nonumber \\ & = & \mbox{tr } [ \sqrt{w_1 } \ ; \bar{w}_0^{\frac{z-1}{2 } } \bar{w_0 } \bar{w}_0^{\frac{z-1}{2 } } \ ; \sqrt{w_1 } ] \nonumber \\ & \leqslant & \mbox{tr } [ \sqrt{w_1 } \ ; \bar{w}_0^{\frac{z-1}{2 } } p\ ; \bar{w}_0^{\frac{z-1}{2 } } \sqrt{w_1 } ] 2^{-n(\chi({\cal e } ) - 2\delta ) } \nonumber \\ & \leqslant & \mbox{tr } [ \sqrt{w_1 } \ ; \bar{w}_0^{\frac{z-1}{2 } } \bar{w}_0^{\frac{z-1}{2 } } \sqrt{w_1 } ] 2^{-n(\chi({\cal e } ) - 2\delta ) } \nonumber \\ & = & \mbox{tr } [ w_1\bar{w}_0^{z-1 } ] \ ; 2^{-n(\chi({\cal e } ) - 2\delta ) } = f_{z-1 } \;2^{-n(\chi({\cal e } ) - 2\delta ) } \nonumber \;,\end{aligned}\ ] ] where we used the fact that the operators operators , are non negative .the expression ( [ ffh ] ) then follows by simply reiterating the above inequality times .a. s. holevo , `` the capacity of the quantum channel with general signal states , '' _ ieee trans .inform . theory _ * 44 * , 269 - 273 , ( 1998 ) .schumacher and m. westmoreland , `` sending classical information via noisy quantum channels , '' _ phys .a _ * 56 * , 131 - 138 ( 1997 ) ; p. hausladen , r. jozsa , b.w .schumacher , m. westmoreland , and w.k .wootters , `` classical information capacity of a quantum channel , '' _ phys .a _ * 54 * , 1869 - 1876 ( 1996 ) .t. ogawa , _ a study on the asymptotic property of the hypothesis testing and the channel coding in quantum mechanical systems _ , ph.d .dissertation ( in japanese ) , univ .electro - communications , tokyo , japan , 2000 ; t. ogawa and h. nagaoka , `` a new proof of the channel coding theorem via hypothesis testing in quantum information theory '' , in _ proc .2002 ieee int .information theory , lausanne , switzerland _ , june / july 2002 , p. 73 ; `` strong converse to the quantum channel coding theorem , '' _ieee trans.info.theor._ * 45 * 2486 - 2489 , ( 1999 ) .m. hayashi and h. nagaoka , `` general formulas for capacity of classical - quantum channels , '' _ ieee trans .inform . theory _* 49 * , 1753 - 1768 ( 2003 ) .m. hayashi , `` error exponent in asymmetric quantum hypothesis testing and its application to classical - quantum channel coding '' , _ phys .a _ * 76 * , 062301 ( 2007 ) ; m. hayashi , `` universal coding for classical - quantum channel '' , _ commun .phys . _ * 289 * , 1087 - 1098 ( 2009 ) .f. hiai and d. petz , `` the proper formula for relative entropy and its asymptotics in quantum probability , '' _ commun .* 143 * , 99 - 114 ( 1991 ) ; t. ogawa and h. nagaoka , `` strong converse and stein s lemma in quantum hypothesis testing , '' _ ieee trans .inf . theory _ * 46 * , 2428 - 2433 ( 2000 ) .s. verd and t. s. han , `` a general formula for channel capacity '' , _ ieee trans .theory , _ * 40 * , 1147 - 1157 ( 1994 ) ; t. s. han , _ information - spectrum methods in information theory _( springer , berlin , 2002 ) . j. 
tyson , `` two - sided estimates of minimum - error distinguishability of mixed quantum states via generalized holevo - curlander bounds '' , _ j. math .phys . _ * 50 * , 032106 ( 2009 ) ; `` error rates of belavkin weighted quantum measurements and a converse to holevo s asymptotic optimality theorem '' , _ phys .rev . a _ * 79 * , 032343 ( 2009 ) .v. p. belavkin , `` optimal multiple quantum statistical hypothesis testing , '' _ stochastics _ * 1 * , 315 - 345 ( 1975 ) ; p. belavkin , radio eng .phys . * 20 * , 39 ( 1975 ) ; v. p. belavkin and v. maslov , _ in mathematical aspects of computer engineering _ , edited by v. maslov , mir , moscow , ( 1987 ) .m. jzek , j. ehek , and j. fiurek , `` finding optimal strategies for minimum - error quantum - state discrimination , '' _ phys .rev . a _ * 65 * , 060301(r ) ( 2002 ) ; z. hradil , j. ehek , j. fiurek , and m. jzek , `` maximum - likelihood methods in quantum mechanics , '' _ lect. notes phys . _* 649 * , 163 - 172 ( 2004 ) .p. hayden , d. leung , and g. smith , `` multiparty data hiding of quantum information , '' _ phys .rev . a _ * 71 * , 062339 ( 2005 ) . p. w. shor , `` the quantum channel capacity and coherent information '' , unpublished lecture notes .online at http://www.msri.org/publications/ln/msri/2002/ quantumcrypto / shor/1/ ; msri workshop on quantum information , berkeley , 2002 .
we present a new decoding procedure to transmit classical information in a quantum channel which , saturating asymptotically the holevo bound , achieves the optimal rate of the communication line . differently from previous proposals , it is based on performing a sequence of ( projective ) yes / no measurements which in steps determines which codeword was sent by the sender ( being the number of the codewords ) . our analysis shows that as long as is below the limit imposed by the holevo bound the error probability can be sent to zero asymptotically in the length of the codewords .
multiple - choice tests are a common way of testing someone s knowledge , and in italy they are usually employed , among the many possible fields of application , during the admission tests for _ numerus clausus _university courses like medicine .this led to the foundation of many small enterprises that have the sole business of training students for getting a good rank by compiling those tests in the best possible way . in order to cope with the necessity ofautomate the correction process for large numbers of hand - compiled multiple - choice tests , the junior enterprise of catania ( ject ) - the non profit students association to whom the two authors belong to - developed ject - omr , the system described in this article . optical mark recognition ( omr )is a process for the detection and recognition of marks written on paper , that is often employed for the recognition of the answers checked in multiple - choice tests .the ject - omr system takes as input a digital image containing the scan of a paper test compiled by a student , and returns as output a vector containing , for each question , the answer that the student selected by tracing a cross in the chosen square , the blank squares and ( if any ) symbols with special meanings ( like the ones described in section [ subsec : cancel ] ) .moreover , the system also returns the values of the barcodes present in the test , so that the it can be contextualized without manual intervention simply by interacting with the existing databases .the programming language used for the implementation of ject - omr is python , and the heart of the system is the gamera framework for documents analysis and recognition , that proved to be a valuable tool in various image analysis contexts and gave us a nice set of building blocks for our system .the article is structured as following .the structure of the tests processed by the system is explained in section [ sec : test - structure ] .section [ sec : diagram ] describes the system by analyzing its components and section [ sec : performance ] presents the results of an accuracy test .there are two kinds of tests that ject - omr can analyze , that from now on will be called _kind a _ and _ kind b_. the two models share the number of columns of multiple - choice questions ( 4 ) and the presence of two barcodes . hereare the main differences between the two kinds of document : * * number of questions per column : * the system expects a document composed by four columns of questions and a given number of questions per column : the number of questions for each column changes from kind a to kind b ; * * size of the elements in the paper test : * ject - omr works with fixed - resolution images , and as described in section [ subsec : segmentation ] it uses a fixed set of dimensions for each kind of document for the segmentation of the image into its different components . * * ways of canceling answers : * the system lets the user cancel already given answers , and , as described in section [ subsec : cancel ] , the answers can be canceled in two different ways , one for each kind of test .figure [ fig : doc - structure ] shows an annotated example document .the figure shows in light gray an example document with the highlighted regions found by the system , and in black there are some annotations that explains the structure of the document .of course the annotations point out only some elements of a given class ( e.g. 
: only few recognized marked answers are pointed out ) .annotated example document ]in this section the system will be described , and its basic blocks will be analyzed in detail .figure [ fig : block - diagram ] shows the block diagram of the system .[ sec : diagram ] block diagram of the recognition system ] the pre - processing phase consists in a set of operations that make the scanned image more suitable for the further phases .the first operation performed to the image is the conversion to gray scale ; then the image is converted into black and white format using the global otsu thresholding method , which is gamera s default binarization method .next the system does a compensation of rotation effects induced by the scanning operation .the rotation is estimated and then it is corrected by rotating the image in the opposite direction by the same angle .the rotation correction is performed using a projection profile based algorithm built into gamera . in this phaseit is clear that using the gamera framework is a good choice , because it offers abstractions that allow the programmer to concentrate on the problem domain .for example , the operations described above are done by as little code as shown in listing [ lst : pre - processing ] . ....image = image.to_greyscale ( ) image = image.otsu_threshold ( ) angle = image.rotation_angle_projections(-15 , 15)[0 ] image = image.rotate(angle , 0 ) .... finally , the system finds the upper left black square , marked as _ origin _ in figure [ fig : doc - structure ] , by performing a connected component analysis and finding the glyph whose coordinates ( ` offset_x ` and ` offset_y ` ) are the nearest to the point .those coordinates are called the _ origin _ of the test , and are used as the reference point for any subsequent elaboration .after the pre - processing phase , ject - omr extracts from the image the regions containing the barcodes and the compiled questions , using empirically determined dimensions relative to the origin of the test .the dimensions are expressed in pixels as the scanning resolution of the input images is fixed .figure [ fig : doc - structure ] shows some of the regions identified in this phase .the four regions that contain the questions are the four columns that will be processed by the algorithm described in section [ subsec : answer - recognition ] . 
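as an illustration of the origin search and of the origin-relative coordinates described above, a possible implementation is sketched below. it is only a sketch of what the ject-omr code might look like: the `cc_analysis()` call and the squared-distance criterion are our assumptions, and only the `offset_x` / `offset_y` attributes and the idea of "nearest glyph to a target point" come from the description above.

....
# sketch, not the actual ject-omr code: locate the upper-left black square
# ("origin") as the connected component closest to a given target point.
# `image` is assumed to be the binarized, rotation-corrected gamera image
# produced by listing [lst:pre-processing].
def find_origin(image, target_x, target_y):
    # cc_analysis() labels the connected components of a one-bit image and
    # returns them as glyphs carrying offset_x / offset_y coordinates.
    glyphs = image.cc_analysis()

    def sq_dist(g):
        return (g.offset_x - target_x) ** 2 + (g.offset_y - target_y) ** 2

    origin = min(glyphs, key=sq_dist)
    return origin.offset_x, origin.offset_y

# every later measurement is then expressed in pixels relative to this
# reference point, e.g. region = (origin_x + dx, origin_y + dy, width, height).
....

in practice one would also discard very small glyphs (noise) before taking the minimum, but the idea is the same.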
as explained in section [ sec : test - structure ] , the two kinds of test are structurally slightly different. this means that the program uses two different sets of dimensions, one for each kind of test. each column of questions is split into equal-height rows, which are analyzed one by one. each row is an image region that represents a question, from which the chosen answer (if any) is extracted. figure [ fig : question-42 ] shows an example row. here is how the answer classification algorithm works. first of all, the image is split into connected components, which are sorted by horizontal position, from left to right. if there are no errors, the system finds 5 components corresponding to the 5 squares available for choosing the answer, of which one might be checked. if the analysis returns a number of components different from 5, the analysis is revised and repeated: using an adaptive algorithm that changes the thresholds until it finds an acceptable number of components, the system classifies each glyph using two features: the number of black pixels (`black_area`, natively available in gamera) and the squareness (computed as `abs(num_rows - num_columns)`). once the glyphs corresponding to squares are found, they are analyzed in order to classify them as _chosen_ squares or _empty_ squares. gamera provides a statistical knn classifier, but for this specific problem a simple heuristic approach based on threshold values was sufficient. the features used in this classification are the number of black pixels and the number of holes inside the image. if a square was marked by the student, and thus must be classified as _chosen_, the number of holes (white regions surrounded by black regions) will be larger than if the square were not marked. this can be clearly seen in figure [ fig : question-42 ], where the fourth square would be labeled by the system as _chosen_ and the other four as _empty_. the condition used to classify each square was found empirically, and it is composed of 3 sub-conditions:

1. threshold on holeness
2. threshold on black_area
3. combined threshold over the weighted sum of holeness and black_area

using this algorithm, the system analyzes each row and extracts a 5-element vector whose -th position is 1 if the corresponding square is _chosen_, and 0 if it is _empty_. the output of this phase is a matrix, where is the number of questions in the test.
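the per-row classification just described lends itself to a short sketch. the feature names follow the text above (`black_area` and the number of holes), but the threshold values, the weights of the combined condition, and the helper names are placeholders chosen by us, not the empirically tuned ject-omr values.

....
# sketch of the per-row answer extraction; thresholds and weights are
# illustrative placeholders, not the values used by ject-omr.
HOLES_THR = 2          # assumed threshold on the number of holes
BLACK_THR = 120        # assumed threshold on the number of black pixels
COMBINED_THR = 1.0     # assumed threshold on the weighted sum

def classify_square(black_area, holes, w_holes=0.4, w_black=0.005):
    """return 1 if the square looks 'chosen', 0 if it looks 'empty'."""
    if holes >= HOLES_THR:                                       # sub-condition 1
        return 1
    if black_area >= BLACK_THR:                                  # sub-condition 2
        return 1
    if w_holes * holes + w_black * black_area >= COMBINED_THR:   # sub-condition 3
        return 1
    return 0

def classify_row(squares):
    """squares: list of 5 (black_area, holes) pairs, sorted left to right."""
    if len(squares) != 5:
        raise ValueError("unexpected number of squares, row needs revision")
    return [classify_square(a, h) for (a, h) in squares]

# stacking the 5-element vectors of all rows yields the n x 5 answer matrix
# returned by this phase.
....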
when the student is compiling the test , he can undo his actions by canceling his answers in the following ways : if the test is of kind a , the student can cancel an answer by making the selected square completely black ; if the test is of kind b , the student can mark the circle that is drawn to the left of the question , and the system will not consider the answer that the student selected .the circle used in the kind b is simply the leftmost glyph in the row , and it is recognized as full or empty using the ` black_area ` feature .figure [ fig : canceled ] shows an example of each of the two cancellation methods .examples of canceled questions ( top : kind a ; bottom : kind b ) ] the barcodes recognized by the system are discrete two - width barcodes that encode binary strings .the information is encoded only in the bars and not in the spaces , differently from many higher - density barcodes like interleaved 2 of 5 and code 128 .the binary string is 26 bits wide , and contains 2 start bits ( the first two , both holding the value 1 ) and two end bits ( holding respectively the values 1 and 0 ) .the bits from 3 to 22 contain the binary representation of the number encoded in the barcode .the bits 23 and 24 contain the parity of the number of ones and zeros used for the representation of the number .this format was used because the software that prints the paper tests recognized by ject - omr uses it , so the system must be able to understand it . the rest of this section describes how the barcode recognition algorithm works .each region containing a barcode is first processed by the rotation correction algorithm described in section [ subsec : preprocessing ] , because the recognition algorithm requires that the barcode lines are vertical .the algorithm is executed for each barcode because some of them are not printed directly into the test , but are applied by the student using a sticker .this means that usually they will not be parallel to the rest of the test , and that they could need another adjustment in order to be suitable for the recognition .next , the region is vertically split in 5 sub - regions , each of them is analyzed by the same recognition algorithm .this algorithm does as its first step a connected component analysis , and sorts the resulting glyphs by horizontal position .next the algorithm measures the -extension of each glyph . if its value is higher than a given threshold , the glyph represents the value 1 , otherwise it represents a 0 . 
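as a side illustration of the 26-bit format just described, the following sketch validates the bit string read from one sub-region and extracts the encoded number. the bit layout follows the text (start bits, 20-bit number, parity bits, end bits), but the even-parity convention and the msb-first ordering of the number bits are our assumptions, since they are not stated explicitly above.

....
# sketch, with assumptions noted: bits are 1-indexed in the text, so here
# bits[0:2] are the start bits, bits[2:22] the 20-bit number, bits[22:24]
# the parity bits and bits[24:26] the end bits.
def decode_barcode_bits(bits):
    if len(bits) != 26:
        raise ValueError("expected 26 bits")
    if bits[0:2] != [1, 1]:
        raise ValueError("bad start bits")
    if bits[24:26] != [1, 0]:
        raise ValueError("bad end bits")

    number_bits = bits[2:22]
    ones = sum(number_bits)
    zeros = len(number_bits) - ones

    # assumption: bit 23 carries the parity of the ones, bit 24 the parity of
    # the zeros, both as even parity; the real convention may differ.
    if bits[22] != ones % 2 or bits[23] != zeros % 2:
        raise ValueError("parity check failed")

    value = 0
    for b in number_bits:          # most significant bit first (assumed)
        value = (value << 1) | b
    return value
....

in ject-omr the same decoding is applied to each of the five sub-regions and the values are then compared, as explained next.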
from each sub - regionthe algorithm computes a number in binary form , that is then converted to decimal and compared to the values obtained from the other sub - regions .the algorithm returns the number that is obtained from the highest number of sub - regions .this implies that if the barcode is slightly damaged it can be read regardless of the damage .sometimes it happens that the system can not perform its tasks because of some unknown error , maybe because of low - quality scans or unexpected pen strokes made by the students .when error conditions , like more or less than five squares in a row , are detected by the system , it adds the number of the question being analyzed to a list of rows that could not be processed , and shows to the user a simple graphical user interface ( gui ) for error correction .the gui was written using wxpython , the python bindings of the cross - platform user interface toolkit wxwidgets , and shows to the user an image of the scanned test where the results of the elaboration are marked using different colors for the different regions of the image , and lets him correct manually the recognition errors detected earlier by the system .in order to evaluate the performance of the system , we measured its accuracy in two crucial tasks : the recognition of squares related to answers in multiple - choice questions and the recognition of barcodes .the results of the accuracy tests are shown in table [ tab : results ] ..results of the performance evaluation [ cols="<,^,^,^",options="header " , ]this work presented ject - omr , a recognition system for multiple - choice tests based on the gamera framework .the results of the tests show that the application is quite mature , and it has been used for more than two years by a small enterprise whose mission is to prepare students for multiple - choice tests used in the admission test of numerus clausus university courses .although the recognition approach based on fixed thresholds proved to work fairly well , in future we might evaluate a recognition algorithm based on the knn classifier built into gamera , as it would probably be easier to use and more robust to unforeseen usage conditions or marks .any other future work will probably be done for improving the usability of the error correction gui and for the implementation of new features as the italian education ministry changes the mechanism of the test for numerus clausus courses .h. kubo , h , ohashi , m. tamamura , t. kowata , i. kaneko : _ shared questionnaire system for school community management ._ international symposium on applications and the internet workshops , pp . 439 - 445 ( 2004 )
this article describes ject - omr , a system that analyzes digital images representing scans of multiple - choice tests compiled by students . the system performs a structural analysis of the document in order to get the chosen answer for each question , and it also contains a bar - code decoder , used for the identification of additional information encoded in the document . ject - omr was implemented using the python programming language , and leverages the power of the gamera framework in order to accomplish its task . the system exhibits an accuracy of over 99% in the recognition of marked and non - marked squares representing answers , thus making it suitable for real world applications .
distributed compression of spatially correlated signals , e.g. , the observations of neighboring sensors in high density sensor networks , can drastically reduce the amount of data to be transmitted .the efficiency of compression , however , largely depends on the accuracy of the estimation of the correlation between the sources .the correlation is required at the encoder to determine the encoding rate ; it is also required to initialize the decoding algorithm in the slepian - wolf coding schemes that use channel codes with iterative decoding , e.g. , ldpc codes .the correlation is unknown at the encoder and is modeled by a virtual " channel .the estimation of the _ virtual correlation channel _ involves modeling it and estimating the model parameter .therefore , if this virtual correlation channel is not modeled accurately , even perfect estimation of the model parameter can not guarantee an efficient compression .the correlation between the two binary sequences and is commonly modeled by using a binary symmetric channel ( bsc ) with a crossover probability the parameter is either assumed to be known at the encoder or needs to be estimated .this model is also widely used in the compression of continuous - valued sources where slepian - wolf coding is employed to compress the sources after quantization .nevertheless , it is known that the correlation between continuous - valued sources can be modeled more accurately in the continuous domain .specifically , the gaussian distribution and its variations such as the gaussian bernoulli - gaussian ( gbg ) and the gaussian - erasure ( ge ) distributions are used for this purpose , particularly when evaluating theoretical bounds . in this paper , we first show that a single " bsc can not accurately model the correlation between continuous - valued sources , and we propose a new correlation model that exploits multiple " bscs for this purpose .the number of these channels is equal to the number of bits used in the binary representation of one sample .each channel models the bits with the same significance , i.e. , from the most significant bit ( msb ) to the least significant bit ( lsb ) , which is denoted as a bit - plane .we next focus on the implementation of the new model in the ldpc - based compression of continuous - valued sources .we modify the existing decoding algorithm for this specific model extracted from continuous - valued input sources and investigate its impact on the coding efficiency .further , by using an interleaver before feeding data into the slepian - wolf encoder , the successive bits belonging to one sample are shuffled to introduce randomness to the errors in the binary domain .numerical results , both in the binary and continuous domains , demonstrate the efficiency of the proposed scheme . the rest of the paper is organized as follows .the existing correlation models are discussed in section [ sec : oldcor ] . in section [ sec : newcor ] we introduce a new correlation model for continuous - valued sources . section [ sec : dec ] is devoted to integration of the new model to the ldpc - based slepian - wolf coding .simulation results are presented in section [ sec : sim ] .this is followed by conclusions in section [ sec : sum ] .lossless compression of correlated sources ( slepian - wolf coding ) is performed through the use of channel codes where one source is considered as a noisy version of the other one .this requires knowing the correlation between the sources at the decoder . 
the correlation and virtual communication channel between the binary sequences and are the same and are usually modeled by a bsc with crossover probability .the parameter of this channel is defined by .equivalently , one can obtain by averaging the hamming weight of for a long run of input data and side information , i.e. , then , using binary channel coding , near - lossless compression with a vanishing probability of error can be achieved provided that the length of the channel code goes to infinity . in general ,the correlation between the two analog sources and can be defined by where is a real - valued random variable .specifically , for the gaussian sources we usually have in which and .this model contains several well - known models which are suited for video coding and sensor networks . for example , for or the gaussian correlation is obtained , which is broadly used in the literature when and are gaussian .further , for the gbg and for , the ge models are realized .the latter two models are more suitable for video applications .these models are also used for evaluating theoretical bounds and performance limits .although the correlation between continuous - valued sources can be modeled more accurately in the continuous domain , practically it is usually modeled in the binary domain .this is due to the fact that , even for continuous - valued sources , compression is mostly done through the use of binary channel codes .to do so , the two sources are quantized and their correlation is modeled by a virtual bsc in the binary domain , as shown in fig .[ fig : model_subfig1 ] . in the next section , however , we show that this assumption is not very accurate , and we propose an alternative , more accurate model .let and be two continuous - valued sources . when using binary channel codes for compression , and need to be quantized before compression.then , as shown in fig .[ fig : model_subfig1 ] , the correlation between and ( the binary representation of and ) is defined in the binary domain by means of a bsc .we observe that this model is not very accurate .this is because the bits resulting from quantization of a sample and its corresponding side information are not independent .for example , if ( a sample of ) and its counterpart are the same , then all bits resulted from those samples will be identical .that is , the correlation between these bits can not be modeled independently .a more quantitative example is obtained by considering the model in and with .hence , and .now if , where is the quantization step size , we will have .this means that in ( the binary representation of ) , most probably only the first two lower significant bits will be affected . in other words ,higher significant bits of and are similar with high probability .numerical results in fig .[ fig : bscs ] verifies this observation .[ fig : model ] ) . and defined by , where and .quantization is done using a -bit scalar uniform quantizer . ]the above discussion indicates that at low channel - error - to - quantization noise ratios ( ) the higher significant bits of ( error in the binary domain ) are 0 , with high probability .therefore , correlation parameters differ depending on the bit position ( bit - plane ) ; i.e. , an independent error in the sample ( continuous ) domain can not be translated to an i.i.d .error in the binary domain .conversely , a bitwise correlation with a same parameter for all bit positions is not suited for continuous - valued sources . 
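the bit-plane effect discussed above is easy to reproduce numerically. the sketch below generates correlated gaussian samples, quantizes them, and estimates one crossover probability per bit-plane together with the single-bsc parameter. the quantizer range and the noise level are arbitrary choices made for the illustration, not the exact settings behind fig. [ fig : bscs ].

....
import numpy as np

def bitplanes(samples, b, lo, hi):
    """uniform b-bit scalar quantization followed by binary expansion.
    returns an array of shape (len(samples), b), msb first."""
    step = (hi - lo) / (2 ** b)
    idx = np.clip(np.floor((samples - lo) / step), 0, 2 ** b - 1).astype(int)
    return (idx[:, None] >> np.arange(b - 1, -1, -1)) & 1

rng = np.random.default_rng(0)
n, b = 200_000, 6                          # number of samples, quantizer bits
x = rng.standard_normal(n)                 # source
y = x + 0.05 * rng.standard_normal(n)      # side information (assumed noise level)

xb = bitplanes(x, b, -4.0, 4.0)            # assumed quantizer range [-4, 4)
yb = bitplanes(y, b, -4.0, 4.0)

diff = xb ^ yb
p_per_plane = diff.mean(axis=0)            # ordered msb -> lsb here (p_B ... p_1)
p_single = diff.mean()                     # parameter of the single-bsc model

print("per-bit-plane crossover probabilities (msb -> lsb):", p_per_plane)
print("single-bsc parameter:", p_single)
....

as expected from the discussion above, the higher significant bit-planes show a much smaller crossover probability than the lower ones, while the single-bsc parameter averages over all of them.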
in the remainder of this paper, a novel approach is proposed to deal with this problem. the key is to find a way to effectively model and implement the aforementioned dependency.

(figure: block diagram showing how the bits of consecutive quantized samples are distributed over successive ldpc blocks, with each bit position associated with the crossover probability of its bit-plane.)

it is clear that the bits generated from different samples of a source (say and ) are independent as long as these samples are generated independently. also, considering the correlation in the continuous domain, it can be seen that the same argument is valid for the binary representation of and. that is, and are independent if they are generated from different samples. this is because is related to (through ) but it is independent from for any. this indicates that, using a -bit quantizer, bscs are enough to efficiently model the correlation between the two correlated continuous-valued sources; each of these channels is used to model the correlation between the bits corresponding to one bit-plane. for instance, is used to model the correlation between the msbs of and in the binary domain. this is shown in fig. [ fig : model_subfig2 ]. numerical results, presented in fig. [ fig : bscs ], confirm that these channels have different parameters. moreover, with high probability, at low and moderate channel noises we have where the indices to, respectively, represent the channels corresponding to the lsb to the msb. this is intuitively appealing because even a small error in the continuous domain ( ) can invert the lsb, while the msb is affected only by large errors. note that the parameter of the conventional single-bsc model is obtained by we next discuss the incorporation of this new model into the dsc framework that uses ldpc codes for compression.

in this section, we present three different implementations of the introduced correlation model in slepian-wolf coding based on ldpc codes. these are named parallel, sequential, and hybrid decoding. a first idea is to divide the input sequence into sub-streams, each of which contains only the bits with the same significance. now each channel can be modeled by one bsc with its own parameter. hence, we can implement _parallel_ ldpc decoders, each corresponding to one correlation channel. this implies ldpc decoders at the decoding center, which increases the complexity. in particular, effective compression requires codes with different rates, as the parameter of the bsc channel differs between bit-planes. then, the code corresponding to the msb, for example, will have the highest rate, as it has the smallest. on the other hand, given the same code for all channels, the msb will be decoded with the lowest ber. given the same ldpc code for all channels, the complexity increases times in the new approach; the delay is the same, assuming that the inputs of all decoders are available at the receiver.
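the sub-stream splitting that both the parallel and the sequential variants rely on is just a de-multiplexing by bit significance. a minimal sketch, assuming the bits of each quantized sample are stored consecutively (msb first):

....
# collect the k-th bit of every sample into sub-stream k; b is the number of
# quantizer bits per sample (the ordering msb/lsb is an assumption).
def split_bitplanes(bits, b):
    return [bits[k::b] for k in range(b)]
....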
by using _sequential_ decoding, the number of decoders can be reduced to one at the cost of increased delay. to do so, we let the decoder decode the different sub-streams sequentially. note that each time the ldpc decoder is initialized with the corresponding. it can be seen that, compared to parallel decoding, the complexity is reduced times while the delay increases times. the latter is due to the fact that, in order for the decoder to reconstruct one sample of, it must wait for the output of ldpc blocks. a yet more efficient integration of the new correlation model into the ldpc-based dsc can be achieved by using just a single ldpc encoder/decoder. this is done in two steps, as explained in the following. the parameters of the multiple-bsc correlation model can be incorporated into the ldpc-based dsc by judiciously setting the llrs sent from (to) the variable nodes. the idea is to take into account the bit-plane to which each bit belongs. this requires a slight change in the standard ldpc decoding algorithm. specifically, using the notation in, we just need to adjust the llr sent from (to) the variable nodes. that is, equation (1) in will be modified as
\[
\log \frac{\mathrm{pr}[x_i=0\,|\,y_i]}{\mathrm{pr}[x_i=1\,|\,y_i]}=(1 - 2y_i)\log \frac{1-p_{k[i]}}{p_{k[i]}},
\]
in which $p_{k[i]} \in \{p_1 , \dotsc , p_B \}$, and $k[i]$ represents the bit-plane to which $x_i$ (or $y_i$) belongs. this is illustrated in fig. [ fig : ldpc ]. for example, if is the lsb in its corresponding sample, then. note that if, where is the code length, then. since the initial llrs become more accurate in this method, the number of iterations required to achieve the same performance is reduced. however, the performance gap is still noticeable. to bridge this gap, we propose to interleave the input data (and side information) in the binary domain. as we discussed in section [ sec : newcor ], the bits corresponding to each error sample, which are located in a row, are correlated. by interleaving and before feeding them into the slepian-wolf encoder and decoder, these successive bits can be shuffled to introduce randomness into the errors. then, it makes better sense to encode the data belonging to all bit-planes altogether, as in the conventional approach. the longer the permutation block, the more accurate the model and the better the performance. interleaving, however, can increase the delay at the receiver side, since we need deinterleaving after the slepian-wolf decoder.
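a minimal sketch of the llr initialization just described is given below. it only shows how the bit-plane dependent crossover probabilities (and, optionally, a shared interleaver) enter the initial llrs; the ldpc decoder itself is not reproduced, the crossover values in the example are illustrative, and the helper names are ours.

....
import numpy as np

def initial_llrs(y_bits, p_planes, plane_index, perm=None):
    """per-variable-node llrs for the hybrid scheme.

    y_bits      : side-information bits (0/1) for one codeword
    p_planes    : array [p_1, ..., p_B] of bit-plane crossover probabilities
    plane_index : k[i] for every bit position i (which bit-plane bit i is in)
    perm        : optional interleaver applied to the bits and to the
                  probabilities (the same permutation must be applied to x
                  at the encoder side as well)
    """
    y = np.asarray(y_bits)
    p = np.asarray(p_planes)[np.asarray(plane_index)]
    if perm is not None:
        y, p = y[perm], p[perm]
    return (1 - 2 * y) * np.log((1 - p) / p)

# example: 4 samples of 6 bits each; plane 0 is taken as the lsb here
# (the indexing convention is our choice), with illustrative p values.
p_planes = np.array([0.28, 0.11, 0.03, 0.008, 0.002, 0.0005])
plane_index = np.tile(np.arange(6), 4)
y_bits = np.zeros(24, dtype=int)
llr = initial_llrs(y_bits, p_planes, plane_index)
....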
to avoid excessive delay , we set the length of interleaving block equal to the length of the ldpc code .the improvement in the ber and mse , only due to interleaving , is remarkably high .obviously , we can use interleaving and llr s manipulation simultaneously ; this requires applying interleaving to the crossover probabilities , depicted in fig .[ fig : ldpc ] , as well .another important advantage of this approach is that it can be used to combat the bursty correlation channels , as a perfect interleaver transforms a bursty channel into an independently distributed channel .the bursty correlation channel model is capable of addressing the bursty nature of the correlation between sources in applications such as sensor networks and video coding , since it takes the memory of the correlation into account .in this section , we numerically compare the new decoding algorithm with the conventional approach which considers just one bsc for the correlation model .we use irregular ldpc code of rate with the degree distribution the frame length is and the bit error rate ( ber ) and corresponding mean - squared error ( mse ) are measured after 50 itinerations in both schemes . the source is a zero mean , unit variance gaussian . also the correlation between and is defined by ge channel with , in , and channel - error - to - quantization - noise ratio ( ) varies as shown in fig .[ fig : subfig2 ] .both sources are quantized with a 6-bit scalar uniform quantizer .simulation results are presented in fig .[ fig : subfig1]-fig .[ fig : subfig3 ] . in these figures ,the actual data " refers to the case where binary sequences and are obtained from quantizing and .we also compute the ber for the case that side information is generated by passing through a virtual bsc with parameter , which is conventional in practical slepian - wolf coding .this is labeled as artificial data . "the fact that actual " and artificial " side information result in very different bers , by itself , indicates that a single bsc is not an appropriate model for correlation between continuous - valued sources . on the contrary , the ber resulted from hybrid decoding with actual side information is significantly better than that of the conventional approach which shows the suitability of the new model .figure [ fig : subfig2 ] represents the corresponding mse . from these figures, it can be seen that the new scheme ( hybrid decoding ) greatly outperforms the existing method , for actual data .furthermore , as shown is fig .[ fig : subfig3 ] , the number of iterations required to achieve such a performance is much smaller than the existing method , owing to more accurate initial llrs . [fig : simulations ] the performance of parallel and sequential decoding , for a same code , are the same .these schemes benefit from the advantage of working over data belonging to separate bit - planes . 
hence , one bsc can effectively approximate the corresponding correlation for each bit - plane .simulation results verify that separate compression of data belonging to different bit - planes that uses actual data is as effective as the case that uses artificial side information .moreover , there is no need for interleaving .however , an efficient compression , in parallel and sequential decoding , requires codes with different rates for each bit - plane .alternatively , this can be implemented through the use of rate - adaptive ldpc codes .we have introduced an improved model for the virtual correlation between the continuous - valued sources in the binary domain .this model exploits multiple bscs rather than the conventional single - bsc model so that it can deal with the dependency among the bits resulting from quantization of each error sample by converting the error sequence into multiple i.i.d. sequences .an efficient implementation of the new model is realized just by using a single ldpc decoder but judiciously setting the llr sent from ( to ) the variable nodes .the number of iterations required to achieve the same performance reduces noticeably as a result of this prudent setting of initial llrs . besides ,by interleaving the data and side information the bits belonging to one error sample are shuffled which increases the performance of the decoding to a great extent .this significant improvement in the ber and mse is achieved without any increase in the complexity or delay .the new scheme can also be used to combat the bursty nature of the correlation channel in practical applications .n. cheung , h. wang , and a. ortega , `` sampling - based correlation estimation for distributed source coding under rate and complexity constraints , '' _ ieee transactions on image processing _ ,17 , pp .21222137 , nov .2008 .f. bassi , m. kieffer , and c. weidmann , `` source coding with intermittent and degraded side information at the decoder , '' in _ proc .ieee international conference on acoustics , speech and signal processing ( icassp ) _ , pp . 29412944 , 2008 .h. shu , h. huang , and s. rahardja , `` analysis of bit - plane probability for generalized gaussian distribution and its application in audio coding , '' _ ieee transactions on audio , speech , and language processing _29 , pp . 11671176 , may 2012 .v. stankovic , a. liveris , z. xiong , and c. georghiades , `` on code design for the slepian - wolf problem and lossless multiterminal networks , '' _ ieee transactions on information theory _ , vol .52 , pp . 14951507 , april 2006 .e. dupraz , f. bassi , t. rodet , and m. kieffer , `` distributed coding of sources with bursty correlation , '' in _ proc .ieee international conference on acoustics , speech and signal processing ( icassp ) _ , pp .29732976 , 2012 .
accurate modeling of the correlation between the sources plays a crucial role in the efficiency of distributed source coding (dsc) systems. this correlation is commonly modeled in the binary domain by using a single binary symmetric channel (bsc), both for binary and continuous-valued sources. we show that a single bsc cannot accurately capture the correlation between continuous-valued sources; a more accurate model requires multiple bscs, as many as the number of bits used to represent each sample. we incorporate this new model into the dsc system that uses low-density parity-check (ldpc) codes for compression. the standard slepian-wolf ldpc decoder requires a slight modification so that the parameters of all bscs are integrated into the log-likelihood ratios (llrs). further, using an interleaver, the data belonging to different bit-planes are shuffled to introduce randomness in the binary domain. the new system has the same complexity and delay as the standard one. simulation results demonstrate the effectiveness of the proposed model and system.
entanglement has been identified as a resource central to much of quantum information processing . to date , progress in the quantification of entanglement for mixed states has resided primarily in the domain of bipartite systems . for multipartite systems in pure and mixed statesthe characterziation and quantification of entanglement presents even greater challenges . even for multipartite pure statesit is not clear whether there exists a finite minimal reversible entanglement generating set ( mregs ) and , if it exists , what the set is .this complicates the task of extending measures such as entanglement of distillation and formation to multipartite systems .moreover , the characterization of multipartite entanglement remains incomplete . on the other hand ,quantifying multipartite entanglement via other measures , such as relative entropy of entanglement , is still a challenging task , even for _ pure _ states .one reason for the difficulty is the absence , in general , of schmidt decompositions for multipartite pure states .this implies that for multipartite pure states the entropies of the reduced density matrices can differ , in contrast to bipartite pure states , as the following example shows .consider a three - qubit pure state , where .the reduced density matrices for parties a , b , and c are , respectively , which , in general , have different entropies .thus , for a multipartite pure state the entropy of the reduced density matrix does not give a consistent entanglement measure .however , even in the case in which all parties have the identical entropy , e.g. , , it is in general nontrivial to obtain the relative entropy of entanglement for the state . more generally , for pure multipartite states , it is not yet known how to obtain their relative entropy of entanglement analytically .the situation is even worse for _ mixed _ multipartite states .recently , a multipartite entanglement measure based on the geometry of hilbert space has been proposed . for pure states ,this geometric measure of entanglement depends on the maximal overlap between the entangled state and unentangled states , and is easy to compute numerically .the measure has been applied to several bipartite and multipartite pure and mixed states , including two distinct multipartite bound entangled states . in the present paper, we explore connections between this measure and the relative entropy of entanglement . for certain pure states , some bipartite and some multipartite , this lower bound is saturated , and thus their relative entropy of entanglement can be found analytically , in terms of their known geometric measure of entanglement . for certain mixed states , upper bounds on the relative entropy of entanglement are also established .numerical evidence strongly suggests that these upper bounds are tight , i.e. , they are actually the relative entropy of entanglement . these results , although not general enough to solve the problem of calculating the relative entropy of entanglement for arbitrary multipartite states , may offer some insight into , and serve as a testbed for , future analytic progress related to the relative entropy of entanglement .the structure of the present paper is as follows . in sec .[ sec : measures ] we review the two entanglement measures considered in the paper : the relative entropy of entanglement and the geometric measure of entanglement . 
in sec .[ sec : connection ] we explore connections between the two , in both pure- and mixed - state settings .examples are provided in which bounds and exact values of the relative entropy of entanglement are obtained . in sec .[ sec : summary ] we give some concluding remarks .[ sec : measures]in this section we briefly review the two measures considered in the present paper : the relative entropy of entanglement and the geometric measure of entanglement . the relative entropy between two states and is defined via which is evidently not symmetric under exchange of and , and is non - negative , i.e. , .the relative entropy of entanglement ( re ) for a mixed state is defined to be the minimal relative entropy of over the set of separable mixed states : where denotes the set of all separable states .in general , the task of finding the re for arbitrary states involves a minimization over all separable states , and this renders the computation of the re very difficult . for bipartite pure states , the re is equal to entanglements of formation and of distillation .but , despite recent progress , for mixed states even in the simplest setting of two qubits no analog of wootters formula for the entanglement of formation has been found .things are even worse in multipartite settings . even for pure states , there has not been a systematic method for computing relative entropies of entanglement .it is thus worthwhile seeking cases in which one can explicitly obtain an expression for the re .a trivial case arises when there exists a schmidt decomposition for a multipartite pure state : in this case , the re is the usual expression where the s are schmidt coefficients ( with ) .we shall see that there exist cases in which the re can be determined analytically , even though there is no schmidt decomposition .we remark that an alternative definition of re is to replace the set of separable states by the set of postive partial transpose ( ppt ) states .the re thus defined , as well as its regularized version , gives a tighter bound on distillable entanglement .there has been important progress in calculating the re ( and its regularized version ) with respect to ppt states for certain bipartite mixed states ; see refs . for more detailed discussions . formultipartite settings one could also use this definition , and define the set of states to optimize over to be the set of states that are ppt with respect to all bipartite partitionings .however , we shall use the first definition , i.e. , optimization over the set of completely separable states , throughout the discussion of the present paper .we continue by briefly reviewing the formulation of this measure in both pure - state and mixed - state settings .let us start with a multipartite system comprising parts , each of which can have a distinct hilbert space .consider a general -partite pure state ( expanded in the local bases ) : satisfies the stationarity conditions [ eqn : eigen ] in which the eigenvalues are associated with the lagrange multiplier enforcing the constraint , and lie in ] and that has the same reduction as for every party . furthermore ,suppose ( diagonal in the basis ) represents the separable state that gives the conjectured value of re : where the s can be obtained by finding the convex hull of the function in eq .( [ eqn : f ] ) . 
now consider any separable state in the hilbert space _orthogonal _ to the subspace spanned by .we need to show that the separable state , for any ] , we immediately obtain from conjecture 1 that , for , + \frac{k}{n}\log_2\frac{k}{n}+\frac{n\!-\!k}{n}\log_2\frac{n\!-\!k}{n}\\ & = & e_{\rm r}\left(\ket{s(n , k)}\right)- s\left(\rho_{n\!-\!1;k\!-\!1,k}(k / n)\right).\end{aligned}\ ] ] therefore , the bound in eq .( [ eqn : cre ] ) is saturated for .a major challenge is to extend the ideas contained in the present paper from the relative entropy of entanglement to its regularized version , the latter in fact being of wider interest than the former .the alternative way of defining the relative entropy via the optimization over ppt states may also been used , in view of the recent progress on the bipartite regularized relative entropy of entanglement .we now explore the possibility that the geometric measures can provide lower bounds on yet another entanglement measure the entanglement of formation . if the relationship between the two measures of entanglement the relative entropy of entanglement and the entanglement of formation continue to hold for _ multipartite _ states ( at least for pure states ) , and if should remain a convex hull construction for mixed states , then we would be able to construct a lower bound on the entanglement of formation : where and are such that .thus , is a lower bound on . by using the inequality ( for ), one further has has that .we remark that has been shown to be an entanglement monotone , i.e. , it is not increasing under local operations and classical communication ( locc ) .however , is _ not _ a monotone , as the following example shows . consider the bipartite pure state with , for which .suppose that one party makes the following measurement : with probability the output state becomes ; with probability the output state becomes , for which .for to be a monotone it would be necessary that putting in the corresponding values for the s and s , we find that this inequality is equivalent to as this is violated for certain values of with , as exemplified in fig .[ fig : violate ] for the plot of , we arrive at the conclusion that is , in general , not a monotone . _ note added ._ certain results reported in the present paper have recently been applied by vedral to the macroscopic entanglement of -paired superconductivity .we thank vlatko vedral and pawel horodecki for many useful discussions . this work was supported by nsf eia01 - 21568 and doe defg02 - 91er45439 , as well as a harry g. drickamer graduate fellowship . m.e .acknowledges support from wenner - gren foundations .000 see , e.g. , m. nielsen and i. chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , 2000 ) . for a review , see m. horodecki , quantum inf .* 1 * , 3 ( 2001 ) , and references therein . c. h. bennett , s. popescu , d. rohrlich , j. a. smolin , and a. v. thapliyal , phys .a * 63 * , 012307 ( 2001 ) . c. h. bennett , h. j. berstein , s. popescu , and b. schumacher , phys .rev . a * 53 * , 2046 ( 1996 ) . c. h. bennett , d. divincenzo , j. smolin , and w. k. wootters , phys .a * 53 * , 3824 ( 1996 ) .w. k. wootters , phys .80 * , 2245 ( 1998 ) .m. b. plenio and v. vedral , j. phys .a * 34 * , 6997 ( 2001 ) .v. vedral , m. b. plenio , m. a. rippin , and p. l. knight , phys .lett . * 78 * , 2275 ( 1997 ) .a. peres , phys .a * 202 * , 16 ( 1995 ) . as we shall show later, this state has , exactly the value cited in ref . 
from a numerical result .a. shimony , ann .sci . * 755 * , 675 ( 1995 ) . h.barnum and n. linden , j. phys .a * 34 * , 6787 ( 2001 ) .wei and p. m. goldbart , phys .a * 68 * , 042307 ( 2003 ) .wei , j. b. altepeter , p. m. goldbart , and w. j. munro , to appear in phys .a * 70 * ( 2004 ) ; quant - ph/0308031 .s. ishizaka , phys .a * 67 * , 060301 ( 2003 ) .k. audenaert , j. eisert , e. jan , m. b. plenio , s. virmani , and b. de moor , phys .lett . * 87 * , 217902 ( 2001 ) ; k. audenaert , m. b. plenio , and j. eisert , phys .* 90 * , 027901 ( 2003 ) .we thank an anonymous referee for pointing out the alternative in defining the relative entropy of entanglement , as well as the above references .s. bravyi , phys .rev . a * 67 * , 012313 ( 2003 ) .a. acn , g. vidal , and j. i. cirac , quantum inf . and comput . * 3 * , 55 ( 2003 ) .g. vidal and j. i. cirac , phys .* 86 * , 5803 ( 2001 ) .we thank an anonymous referee for pointing out these references .j. k. stockton , j. m. geremia , a. c. doherty , and h. mabuchi , phys .rev . a * 67 * , 022112 ( 2003 ) .w. j. munro , d. f. v. james , a. g. white , and p. g. kwiat , phys .a * 64 * , 030302 ( 2001 ) .wei , k. nemoto , p. m. goldbart , p. g. kwiat , w. j. munro , and f. verstraete , phys .rev . a * 67 * , 022110 ( 2003 ) . v. vedral and m. b. plenio , physa * 57 * , 1619 ( 1998 ) .k. g. h. vollbrecht and r. f. werner , phys .a * 64 * , 062307 ( 2001 ) .s. ishizaka , j. phys ., * 35 * , 8075 ( 2002 ) .s. wu and y. zhang , phys .a * 63 * , 012308 ( 2000 ) .e. f. galvao , m. b. plenio , and s. virmani , j. phys .a * 33 * , 8809 ( 2000 ) .we thank an anonymous referee for raising this question . v. vedral , quant - ph/0405102 .
as two of the most important entanglement measures the entanglement of formation and the entanglement of distillation have so far been limited to bipartite settings , the study of other entanglement measures for multipartite systems appears necessary . here , connections between two other entanglement measures the relative entropy of entanglement and the geometric measure of entanglement are investigated . it is found that for arbitrary pure states the latter gives rise to a lower bound on the former . for certain pure states , some bipartite and some multipartite , this lower bound is saturated , and thus their relative entropy of entanglement can be found analytically in terms of their known geometric measure of entanglement . for certain mixed states , upper bounds on the relative entropy of entanglement are also established . numerical evidence strongly suggests that these upper bounds are tight , i.e. , they are actually the relative entropy of entanglement .
in recent years , many methods have been used to reduce the degree of bzier curves with constraints ( see , e.g. , ) .most of these papers give methods of multi - degree reduction of a _ single _ bzier curve with constrains of endpoints ( parametric or geometric ) continuity of arbitrary order with respect to -norm .observe , however , that degree reduction schemes often need to be combined with the subdivision algorithm , i.e. , a high degree curve is replaced by a number of lower degree curve segments , or a _ composite _ bzier curve , and continuity between adjacent lower degree curve segments should be maintained .intuitively , a possible approach in such a case is applying the multi - degree reduction procedure to one segment of the curve after another with properly chosen endpoints continuity constraints .however , in general , the obtained solution does not minimize the distance between two composite curves . in this paper , we give the optimal least - squares solution of multi - degree reduction of a composite bzier curve with the parametric continuity constraints at the endpoints of the segments .more specifically , we consider the following approximation problem .[ -constrained multi - degree reduction of the composite bzier curve ] [ p : main ] let be a partition of the interval ] ) in that in the interval ] ) of degree , i.e. , where , and are the bernstein basis polynomials of degree .find a composite bzier curve ( ] ( ) is exactly represented as a bzier curve ( {\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}t\displaystyle\binom{m}{j}^{\!\!-1}\!\! ] and {\rm\,d}{\rm\,d}u{\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}u{\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}u{\rm\,d}{\rm\,d}t{\rm\,d}{\rm\,d}u\bm{\lambda} ] .in particular , we have q_{i+1,0}=\kappa_i,\quad q_{i+1,1}=\kappa_i+a_{i+1,1}{\mbox{}}_{i,1 } , \end{array}\\[1ex ] & \begin{array}{l } q_{i , m_i-2}=\kappa_i-2a_{i,1}{\mbox{}}_{i,1}+a_{i,2}{\mbox{}}_{i,2 } , \\[2ex ] q_{i+1,2}=\kappa_i+2a_{i+1,1}{\mbox{}}_{i,1}+a_{i+1,2}{\mbox{}}_{i,2 } , \end{array}\\[1ex ] & \begin{array}{l } q_{i , m_i-3}=\kappa_i-3a_{i,1}{\mbox{}}_{i,1}+3a_{i,2}{\mbox{}}_{i,2}-a_{i,3}{\mbox{}}_{i,3 } , \\[2ex ] q_{i+1,3}=\kappa_i+3a_{i+1,1}{\mbox{}}_{i,1}+3a_{i+1,2}{\mbox{}}_{i,2}+a_{i+1,3}{\mbox{}}_{i,3}. \end{array } \end{aligned}\ ] ] coming back to the problem of constrained multi - degree reduction of a composite bzier curve ( see problem [ p : main ] ) , let us observe that for any the formulas with _ fixed parameters _ and constitute constraints of the form demanded in lemma [ l : modred ] .now , by applying this lemma for to , with , , and , we obtain the set of the segments of the composite bzier curve with control points depending on the parameters . in order to solve problem [ p : main ] , we have to determine the _ optimum values _ of the parameters ( cf . remark [ r : oldnew ] ) .let us denote ^t= [ \kappa^1_{i},\ldots,\kappa^d_i;\lambda^1_{i,1},\ldots,\lambda^d_{i,1 } ; \ldots;\lambda^1_{i , r_{i}},\ldots,\lambda^d_{i , r_{i}}]^t \in{\mathbb r}^{\rho_i},\ ] ] where . the _ optimum values _ of the parameters can be obtained by minimizing the error function , & = e_1({\mbox{}}_1 ) + \sum_{i=2}^{s-1}e_i({\mbox{}}_{i-1};{\mbox{}}_i ) + e_s({\mbox{}}_{s-1}),\end{aligned}\ ] ] where ,\ ] ] , , , with , being the vectors of coordinates of points and , respectively . 
for a minimum of ,it is necessary that the derivatives of with respect to the parameters and are zero , which yields a system of _ linear _ equations with unknowns , where .hence , for , we have \label{e : sys2 } \frac{\partial e}{\partial \lambda_{i , j}^h}&=\frac{\partial } { \partial \lambda_{i ,j}^h}(e_i+e_{i+1})=0 \qquad ( j=1,2,\ldots , r_i;\ ; h=1,2,\ldots , d).\end{aligned}\ ] ] now , we summarise the whole idea in algorithm [ a : alg ] .note that for , the use of lemma [ l : modred ] requires computation of -table ( see algorithm [ a : ctab ] ) .the entries of -table are denoted by .[ a : alg ] + ` input ` : , , ( ) , + , + ` output ` : solution of problem [ p : main ] ` step 1 ` .: : for , compute using algorithm [ a : ctab ] , assuming that , , . `: : compute by . `: : solve the system of linear equations , . `: : compute by . `: : compute by . `: : compute and by . `: : for , set , , , , , , and compute the control points using lemma [ l : modred ] . `: : return .[ r : copalg ] in case , where interpolation conditions are imposed at the endpoints of the original segments ( see remark [ r : probvar ] ) , algorithm [ a : alg ] must be slightly modified .note that the parameter is the meeting point of the consecutive segments and , i.e. , .therefore , by setting ( cf . ) , and by removing the subsystem from the system , the goal is easily achieved .this section provides of the application of our algorithm . we give the squared -errors ( ) and ( see , ) as well as the maximum errors } \|p(t )- q(t)\|,\\ & e^{\infty } : = \max_{1 \le i \le s } e_i^{\infty},\end{aligned}\ ] ] where with . results of the experiments have been obtained in maple13 , using -digit arithmetic .the system of linear equations , is solved using maple ` fsolve ` procedure .[ ex : s ] assuming that and , the composite curve `` squirrel '' is formed by four bzier segments of degrees , , , and , respectively .control points are given at http://www.ii.uni.wroc.pl/~pgo/squirrel.txt .for the results of degree reduction , see table [ tab : sqr ] as well as the corresponding figures [ fig:0a ] and [ fig:0b ] .this example shows that algorithm [ a : alg ] may result in a lower error than multiple application of ( * ? ? ?* algorithm 1 ) .furthermore , the larger are s , the bigger are differences in errors because we have more degrees of freedom , i.e. , parameters ( see remark [ r : oldnew ] ) .c [ ex : l ] for and , we consider the composite curve `` l '' which consists of two bzier segments of degrees and having the control points , and , respectively .the comparison of the results of algorithm [ a : alg ] , algorithm described in remark [ r : copalg ] , and ( * ? ? ?* algorithm 1 ) is given in table [ tab : l ] ( see also figure [ fig:2 ] ) .once again , we observe that the new approach may lead to better results than the older one . moreover , we note that in some cases it is useful to interpolate the endpoints of the original segments ( see remarks [ r : probvar ] and [ r : copalg ] ) .( blue solid line ) and degree reduced composite bzier curves computed using algorithm [ a : alg ] ( red dashed line ) , algorithm described in remark [ r : copalg ] ( green dash - dotted line ) , and ( * ? ? ? * algorithm 1 ) ( black dotted line ) .parameters are specified in table [ tab : l].,width=238 ] [ ex : g ] let there be given three bzier curves of degrees , , and , defined by the control points , , and , respectively .note that these bzier curves are not joined ( see figure [ fig:3a ] ) . 
in spite of that , we set , and apply algorithm [ a : alg ] with and . as a result, we obtain the -continuous composite bzier curve `` g '' illustrated in figures [ fig:3b ] and [ fig:3c ] ( errors : , , , , , , , ) .in addition , the degrees of the segments are reduced .this example shows that our algorithm can serve as a tool for merging of several unconnected bzier curves into a smooth composite bzier curve .furthermore , in case of -continuous input curves , the algorithm can eliminate possible _rough edges _ and _corners_. cwe propose a novel approach to the problem of multi - degree reduction of composite bzier curves .in contrast to other methods , ours minimizes the -error for the whole composite curve instead of minimizing the -errors for each segment separately .the main idea is to connect consecutive segments of the searched composite curve using the conditions . as a result ,an additional optimization is possible .the new problem is solved efficiently using the properties of constrained dual bernstein polynomial basis .examples [ ex : s ] and [ ex : l ] show that the new method gives much better results than multiple application of the degree reduction of a single bzier curve . furthermore , we observe that a slight modification of the method allows for interpolation of endpoints of the original segments ( see remarks [ r : probvar ] and [ r : copalg ] ) , which may be useful in some cases ( see example [ ex : l ] ) .moreover , merging of several unconnected bzier curves into a smooth composite bzier curve is also possible ( see example [ ex : g ] ) .let us mention that we have studied also the extended version of problem [ p : main ] where the parametric continuity conditions are replaced by geometric continuity constraints .we have observed that in this case the error function becomes a high degree polynomial function of many variables , even for modest values of s .consequently , we have to deal with constrained nonlinear programming problem in order to find optimal values for the parameters .experiments show that calculations may be painfully long .so far , we have not been able to give an efficient algorithm of solving this task .lee , y. park , j. yoo , constrained polynomial degree reduction in the -norm equals best weighted euclidean approximation of bzier coefficients , comput .aided geom . des . 21 ( 2004 ) , 181191 .
this paper deals with the problem of multi - degree reduction of a composite bzier curve with the parametric continuity constraints at the endpoints of the segments . we present a novel method which is based on the idea of using constrained dual bernstein polynomials to compute the control points of the reduced composite curve . in contrast to other methods , ours minimizes the -error for the whole composite curve instead of minimizing the -errors for each segment separately . as a result , an additional optimization is possible . examples show that the new method gives much better results than multiple application of the degree reduction of a single bzier curve . composite bzier curve , multi - degree reduction , parametric continuity constraints , least - squares approximation , constrained dual bernstein basis .
optical wireless communication , where information is conveyed through optical radiations in free space in outdoor and indoor environments , is emerging as a promising complementary technology to rf wireless communication . while communication using infrared wavelengths has been in existence for quite some time , , more recent interest centers around indoor communication using visible light wavelengths , . a major attraction in indoor visible light communication ( vlc )is the potential to simultaneously provide both energy - efficient lighting as well as high - speed short - range communication using inexpensive high - luminance light - emitting diodes ( led ) .several other advantages including no rf radiation hazard , abundant vlc spectrum at no cost , and very high data rates make vlc increasingly popular .orthogonal frequency division multiplexing ( ofdm ) which is popular in both wired and wireless rf communications is attractive in vlc as well .when ofdm is used in rf wireless communications , baseband ofdm signals in the complex domain are used to modulate the rf carrier .ofdm can be applied to vlc in context of intensity modulation and direct detection ( im / dd ) , where im / dd is non - coherent and the transmit signal must be real and positive .this can be achieved by imposing hermitian symmetry on the information symbols before the inverse fast fourier transform ( ifft ) operation .several papers have investigated ofdm in vlc - , which have shown that ofdm is attractive in vlc systems .a 3 gbps single - led vlc link based on ofdm has been reported in .several techniques that generate vlc compatible ofdm signals in the positive real domain have been proposed in the literature - .these techniques include dc - biased optical ( dco ) ofdm , asymmetrically clipped optical ( aco ) ofdm - , flip ofdm , , and non - dc biased ( ndc ) ofdm . in the above works , dco ofdm , aco ofdm , and flip ofdm are studied for single - led systems .the ndc ofdm in uses two leds . in , it has been that ndc ofdm performs better compared with dco ofdm and aco ofdm that use two leds .use of multiple leds is a natural and attractive means to achieve increased spectral efficiencies in vlc .our study in this paper focuses on multiple led ofdm techniques to vlc .our new contribution is the proposal of a scheme which brings in the advantage of ` spatial indexing ' to ofdm schemes for vlc .in particular , we propose a _ ` indexed ndc ( i - ndc ) ofdm ' _ scheme , where information bits are not only conveyed through the modulation symbols sent on the active led , but also through the index of the active led .this brings in the benefit of higher rate and better performance .our simulation results show that , for the same spectral efficiency , the proposed i - ndc ofdm outperforms ndc ofdm in the low - to - moderate snr regime .this is because , to achieve the same spectral efficiency , i - ndc ofdm can use a smaller - sized qam .however , in the high - snr regime , ndc ofdm performs better .we find that this is because of the high error rates witnessed by the index bits in i - ndc ofdm due to high channel correlation in multiple led settings . 
in order to alleviate this problem and improve the reliability of the index bits at the receiver, we propose to use coding on the index bits alone .this proposed scheme is called _ ` coded i - ndc ofdm ' ( ci - ndc ofdm ) _ scheme .our simulation results show that , for the same spectral efficiency , the proposed ci - ndc ofdm with ldpc coding on the index bits performs better than ndc ofdm in vlc systems . the remainder of this paper is organized as follows .[ sec2 ] gives an overview of dco ofdm , aco ofdm , flip ofdm , and ndc ofdm schemes .the proposed ci - ndc ofdm and performance results and discussions are presented in section [ sec3 ] .conclusions are presented in section [ sec5 ] .here , we present an overview of the existing ofdm schemes for vlc reported in the literature .figure [ ofdmsys ] shows the block diagram of a general single - led ofdm system with subcarriers for vlc . in this system ,a real ofdm signal is generated by constraining the input vector to the transmit -point ifft to have hermitian symmetry , so that the output of the ifft will be real .the output of the ifft , though real , can be positive or negative .it can be made positive by several methods , namely , 1 ) adding dc bias ofdm , 2 ) clipping at zero and transmitting only positive part ofdm - , and 3 ) transmitting both positive and negative parts after flipping the negative part ofdm . while the block diagram in fig .[ ofdmsys ] is for ofdm for vlc in general , the transmit and receive processing and achieved rates in bits per channel use ( bpcu ) can differ in the ofdm schemes listed above .these are highlighted below . in dco ofdm , data bits are mapped to qam symbols , where is the qam constellation size .the dc subcarrier ( i.e. , ) is set to zero .the qam symbols are mapped to subcarriers 1 to , i.e., .hermitian symmetry is applied to the remaining subcarriers , i.e. , complex conjugates of the symbols on the first subcarriers are mapped on the second half subcarriers in the reverse order , where the subcarrier is set to zero .that is , the input to the -point ifft is given by ^t.\ ] ] this hermitian symmetry ensures that the ifft output will be real and bipolar .these bipolar ofdm symbols , at the ifft output are converted into unipolar by adding a dc bias , .let be the bipolar ofdm signal without dc bias .then the unipolar ofdm signal that drives the transmit led is given by where .we define this as a bias of db .note that corresponds to the case of no dc bias .for the dc bias to be not excessive , the negative going signal peaks must be clipped at zero .the performance of dco ofdm depends on the amount of dc bias , which depends upon the size of the signal constellation .for example , large qam constellations require high snrs for acceptable bers , and therefore the clipping noise must be kept low , which , in turn , requires the dc bias to be large . as the dc bias increases , the required transmit power also increases .this makes the system power inefficient .due to hermitian symmetry , the number of independent qam symbols transmitted per ofdm symbol is reduced from to .thus , the achieved rate in dco ofdm is at the receiver side , the output of the photo detector ( pd ) , , is digitized using an analog - to - digital converter ( adc ) and the resulting sequence , , is processed further .the dc bias is first removed and the sequence after dc bias removal is fed as input to the -point fft . 
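the transmitter construction just described ( hermitian symmetry on the ifft input , a dc bias specified in db , and clipping of the residual negative peaks at zero ) is compact enough to sketch in code . the snippet below builds one dco ofdm symbol ; the 4-qam alphabet , the 64-point fft size , the 7 db bias and the bias convention b = k sqrt(e[x^2]) with 10 log10(k^2+1) equal to the bias in db are illustrative assumptions for this sketch , not parameters taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def dco_ofdm_symbol(N=64, bias_db=7.0):
    """One real, unipolar DCO-OFDM symbol of length N (illustrative sketch)."""
    # unit-energy 4-QAM data on subcarriers 1..N/2-1; the constellation size and
    # the FFT size are arbitrary choices here, not the paper's configuration
    alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    data = rng.choice(alphabet, size=N // 2 - 1)
    X = np.zeros(N, dtype=complex)
    X[1:N // 2] = data                      # lower half carries the data
    X[N // 2 + 1:] = np.conj(data[::-1])    # Hermitian mirror; X[0] = X[N/2] = 0
    x = np.fft.ifft(X).real * np.sqrt(N)    # real but still bipolar time-domain signal
    # DC bias convention (assumed): b = k*sqrt(E[x^2]) with 10*log10(k^2+1) = bias_db,
    # after which any residual negative peaks are clipped at zero
    k = np.sqrt(10 ** (bias_db / 10.0) - 1.0)
    return np.maximum(x + k * np.std(x), 0.0)

tx = dco_ofdm_symbol()
print(tx.min(), tx.mean())                  # a non-negative LED drive signal
```

the same hermitian-symmetric ifft front end is reused by the aco , flip and ndc variants discussed in this section ; only the way the bipolar output is made non-negative changes .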
the fft output sequence is ^t ] , is fed as input to the sm detector .the sm detector , for example , can be zero forcing ( zf ) detector . that is , the sm detector output , denoted by , is where z_2(n ) \end{bmatrix } & = & \begin{bmatrix } \big({{\bf h}}_1^{t}{{\bf h}}_1\big)^{-1}{{\bf h}}_1^{t}\ { { \bfy}}\\[0.25em ] \big({{\bf h}}_2^{t}{{\bf h}}_2\big)^{-1}{{\bf h}}_2^{t}\ { { \bf y}}\end{bmatrix},\end{aligned}\ ] ] and is the column of channel matrix . the sm detector output is then fed to the -point fft . from the -point fft output ,the subcarriers 1 to are demodulated to get back the transmit data . here, we illustrate a ber performance comparison between dco ofdm , aco ofdm , flip ofdm , and ndc ofdm . the indoor vlc system set upis shown in fig .the system parameters of the indoor vlc system considered in the simulation are given in table [ tab1 ] .all systems use leds , pds .the pds are kept symmetrical on top of a table with respect to the center of the floor with a of 0.1 m .the leds are kept symmetrical with respect to the center of the room at 1 m apart and at 3 m height(i.e . , m and m ) .the channel gain between led and pd is calculated as where is the angle of emergence with respect to the source ( led ) and the normal at the source , is the mode number of the radiating lobe given by , is the half - power semiangle of the led , is the angle of incidence at the photo detector , is the area of the detector , is the distance between the source and the detector , fov is the field of view of the detector , and if and if ..[tab1 ] system parameters in the considered indoor vlc system .[ cols= " < , < , < " , ] figure [ fig4 ] shows the ber performance achieved by dco ofdm , aco ofdm , flip ofdm , and ndc ofdm for bpcu , and .the parameters considered these systems are : 1 ) dco ofdm : , , 7 db bias , 2 ) aco ofdm : , , 3 ) flip ofdm : , , and 4 ) ndc ofdm : , . in acoofdm , flip ofdm , and dco ofdm , there are two parallel transmitting ofdm blocks , each drives one led simultaneously .zf detection is used for dco ofdm , aco ofdm , and flip ofdm .the hypothesis testing based detection method presented in sec .[ sec2d ] is used for ndc ofdm . from fig .[ fig4 ] , it can be seen that dco ofdm has poor performance compared to other systems , and this is due to the dc over - biasing .also , aco ofdm and flip ofdm have the same performance . among the ofdm schemes discussed above, ndc ofdm achieves better performance compared to other ofdm schemes .this is because of the spatial interference experienced by the other ofdm schemes , i.e. , while two leds are active simultaneously in dco ofdm , aco ofdm , and flip ofdm , only one led will be active at a time in ndc ofdm .bpcu , .,width=321,height=240 ]motivated by the advantages of multiple leds and spatial indexing to achieve increased spectral efficiency , here we first propose a multiple led ofdm scheme called ` indexed ndc ofdm ( i - ndc ofdm ) ' . in this scheme ,additional bits are conveyed through the index of the active led .then , realizing the need to protect the index bits better in this scheme , we propose to use coding on the index bits .this scheme is called coded index ndc ofdm ( ci - ndc ofdm ) .the block diagram of the proposed i - ndc ofdm transmitter is illustrated in fig .[ indctx ] .i - ndc ofdm is an -subcarrier ofdm system with pairs of leds and photo detectors , where the total number of leds .we consider , i.e. , there are 2 pairs of leds . in fig .[ indctx ] , the pair forms block 1 and the pair forms block 2 . 
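the line-of-sight gain used in the simulation setup above is the standard lambertian model , with the mode number obtained from the half-power semiangle and the gain set to zero outside the detector field of view . a small sketch of that computation is given below ; the semiangle , detector area , field of view and the toy led / pd geometry are placeholder values , not the entries of table [ tab1 ] .

```python
import numpy as np

def los_gain(d, phi, theta, half_angle_deg=60.0, area=1e-4, fov_deg=85.0):
    """Lambertian LOS channel gain between an LED and a photodiode.

    d     : LED-PD distance (m)
    phi   : angle of emergence at the LED (rad)
    theta : angle of incidence at the PD (rad)
    The half-power semiangle, detector area and FOV are placeholder values.
    """
    if theta > np.radians(fov_deg):
        return 0.0
    m = -np.log(2.0) / np.log(np.cos(np.radians(half_angle_deg)))  # Lambertian order
    return (m + 1) * area / (2 * np.pi * d ** 2) * np.cos(phi) ** m * np.cos(theta)

# gain matrix for LEDs at height h above downward-facing PDs (hypothetical geometry)
h = 2.0
leds = np.array([[-0.5, 0.0, h], [0.5, 0.0, h]])
pds = np.array([[-0.05, 0.0, 0.0], [0.05, 0.0, 0.0]])
H = np.zeros((len(pds), len(leds)))
for i, p in enumerate(pds):
    for j, led in enumerate(leds):
        v = p - led
        d = np.linalg.norm(v)
        ang = np.arccos(h / d)       # LEDs point straight down, PDs straight up
        H[i, j] = los_gain(d, ang, ang)
print(H)
```

stacking such gains over all led - pd pairs gives the channel matrix whose column correlation drives the detection issues discussed later for the indexed schemes .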
in each channel use , only one led in either block 1 or block 2 will be activated .the choice of which block has to be activated in a given channel use is made based on indexing . in a generalsetting , index bits can select one block among blocks . in the considered system , and .therefore , the block selection is done using one index bit per channel use .the led pair in the selected block will be driven as per the standard ndc ofdm scheme described in sec .[ sec2d ] .the i - ndc ofdm transmitter operation is described below ._ transmitter : _ as in ndc ofdm , in i - ndc ofdm also , incoming data bits are first mapped to qam symbols , and the input to the -point ifft is given by ^t.\ ] ] this ensures real and bipolar ifft output .let , be the ifft output . for large , ( e.g. , ), can be approximated as i.i.d .real gaussian with zero mean and variance .therefore , has an approximately half - normal distribution with mean and variance . the ifft output sequence is input to a block selector switch . for each ,the switch decides the block to which has to be sent .this block selection in a given channel use is done using index bits .let denote the index bit for the channel use , .the block selector switch performs the following operation : in the selected block , the polarity separator separates positive and negative parts of ; can be written as if the selected block is block 1 , drives led1 and drives led2. similarly , if block 2 is selected , drives led3 and drives led4 .so , the light intensity emitted by each led is either or 0 . since ,the intensity is such that __ a__chieved data rate : in i - ndc ofdm , qam symbols are sent per ofdm symbol .in addition , number of bits are used to select the active block per channel use .therefore , the achieved data rate in i - ndc ofdm is _ receiver : _ the block diagram of i - ndc receiver is illustrated in fig .[ indcrx ] .we assume perfect channel state information at the receiver . assuming perfect synchronization , the received signal vector at the receiver is given by where is the transmit vector, is the responsivity of the detector , and is the noise vector of dimension .each element in the noise vector can be modeled as i.i.d .real awgn with zero mean and variance .note that the transmit vector has only one non - zero element , and the remaining elements are zeros .the non - zero element in represents the light intensity emitted by the active led , where .the average received signal - to - noise ratio ( snr ) is given by , where \ = \ \frac{\sigma^2_x}{2n_r}\sum_{i=1}^{n_r}\sum_{j=1}^{n_t}h_{ij}^2,\end{aligned}\ ] ] and is the row of .the received optical signals are converted to electrical signals by the pds .the output of these pds are then fed to the adcs .the output of the adcs is given by the vector ^t$ ] , which is fed to the sm detector .the bipolar output of the sm detector is fed to the -point fft .the sm detector can be a zf detector . 
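the i-ndc transmitter mapping described above ( one index bit selects a block , and the polarity separator routes the positive and flipped negative parts of the real ofdm sample to the two leds of that block ) can be summarised by the short sketch below . the convention that index bit 0 activates block 1 and bit 1 activates block 2 is an assumption made for the illustration , since the explicit switch equation is not reproduced here .

```python
import numpy as np

def indc_map(x, index_bits):
    """Map a real bipolar OFDM sequence x and per-sample index bits onto 4 LEDs.

    Returns s of shape (4, len(x)): the non-negative drive intensity of
    LED1..LED4 at each channel use.  The bit-to-block convention below
    (bit 0 -> block 1, bit 1 -> block 2) is an assumption.
    """
    n = len(x)
    s = np.zeros((4, n))
    pos, neg = np.maximum(x, 0.0), np.maximum(-x, 0.0)
    for t in range(n):
        base = 0 if index_bits[t] == 0 else 2   # block 1 -> LEDs 1,2; block 2 -> LEDs 3,4
        s[base, t] = pos[t]                     # positive part on the first LED of the block
        s[base + 1, t] = neg[t]                 # flipped negative part on the second LED
    return s

rng = np.random.default_rng(1)
x = rng.standard_normal(8)                      # stand-in for the real IFFT output
bits = rng.integers(0, 2, size=8)
s = indc_map(x, bits)
print((s > 0).sum(axis=0))                      # at most one LED active per channel use
```

by construction at most one of the four leds is driven in any channel use , which is the property used when writing the received signal with a single non-zero entry in the transmit vector .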
that is , the sm detector output , denoted by , is where z_2(n)\\[0.1em ] z_3(n)\\[0.1em ] z_4(n ) \end{bmatrix } & = & \begin{bmatrix } \big({{\bf h}}_1^{t}{{\bf h}}_1\big)^{-1}{{\bf h}}_1^{t}\ { { \bfy}}\\[0.1em ] \big({{\bf h}}_2^{t}{{\bf h}}_2\big)^{-1}{{\bf h}}_2^{t}\ { { \bf y}}\\[0.1em ] \big({{\bf h}}_3^{t}{{\bf h}}_3\big)^{-1}{{\bf h}}_3^{t}\ { { \bf y}}\\[0.1em ] \big({{\bf h}}_4^{t}{{\bf h}}_4\big)^{-1}{{\bf h}}_4^{t}\ { { \bf y}}\end{bmatrix},\end{aligned}\ ] ] and is the column of channel matrix .the sm detector output is fed to the -point fft .the subcarriers 1 to at the fft output and demodulated to get back the transmit data .the index bits are detected as if or 2 . if or 4 .leds in a grid with m . ] here , we present the ber performance of the proposed i - ndc ofdm scheme for various system parameters .we fix the number of leds in i - ndc ofdm to be ( see fig . [ indctx ] ) , and the number of pds to be .the placement of leds in a square grid is shown in fig .[ indcplac ] .we also compare the performance of the proposed i - ndc ofdm with that of ndc ofdm .led2 and led3 are used for ndc ofdm .bpcu , .,width=321,height=240 ] figure [ fig5 ] presents the ber performance comparison of i - ndc ofdm and ndc ofdm for bpcu .we fix the number of pds to be for both i - ndc ofdm and ndc ofdm .the parameters considered in these systems for bpcu are : ) ndc ofdm : , , , and ) i - ndc ofdm : , .similarly , the system parameters considered for bpcu are : ) ndc ofdm : , , , and ) i - ndc ofdm : , . from fig .[ fig5 ] , it can be seen that the i - ndc ofdm outperforms ndc ofdm at low snrs .this is because , to achieve the same spectral efficiency , i - ndc ofdm uses a smaller - sized qam compared to that in ndc ofdm .but , as the snr increases , the ndc ofdm outperforms i - ndc ofdm .this is because , as the number of leds is increased , the channel correlation increases which affects the detection performance .note that , though only one led will be active at a time in both ndc ofdm as well as i - ndc ofdm , ndc ofdm has 2 leds whereas i - ndc ofdm has 4 leds . in fig .[ fig6dtx ] , we present the ber performance of i - ndc ofdm as a function of the spacing between the leds ( ) by fixing other system parameters .the parameters considered are : , and bpcu , and snrs = 25 , 35 , 45 db .it is observed from fig .[ fig6dtx ] that there is an optimum which achieves the best ber performance .the optimum spacing is found to be 3.4 m in fig .[ fig6dtx ] .the ber performance get worse at values those are above and below the optimum spacing .this happens due to opposing effects of the channel gains and the channel correlations .that is , as increases , the channel correlation reduces and which improves the ber performance . on the other hand ,the the channel gains get weaker as the increases and this degrades the ber . , bpcu , and .,width=321,height=240 ] bpcu , .,width=321,height=240 ] _ motivation for ci - ndc ofdm : _ while investigating the poor performance of i - ndc ofdm at high snrs , we observed from the simulation results that reliability of the index bits is far inferior compared to the reliability of the modulation bits .this is illustrated in fig .[ figx1 ] . 
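the per-led zero-forcing outputs and the index-bit decision just described can be sketched as follows . the rule of declaring the led with the largest-magnitude zf output as the active one , the led-to-bit labelling , and the toy diagonally dominant channel matrix are assumptions for this illustration ; the paper specifies the per-column zf combining but the remaining details are not reproduced here .

```python
import numpy as np

def zf_sm_detect(y, H):
    """Per-LED zero-forcing outputs and a hard decision on the active LED.

    y : received vector (n_r,);  H : channel matrix (n_r, n_t) with columns h_l.
    Choosing the branch with the largest-magnitude ZF output is an assumed
    decision rule for this sketch.
    """
    z = (H.T @ y) / np.sum(H ** 2, axis=0)   # scalar ZF per column: (h_l'h_l)^{-1} h_l'y
    led_hat = int(np.argmax(np.abs(z)))      # 0..3
    index_bit = 0 if led_hat < 2 else 1      # LEDs 1,2 -> block 1; LEDs 3,4 -> block 2 (assumed)
    return z, led_hat, index_bit

rng = np.random.default_rng(2)
H = 1e-4 * np.eye(4) + np.abs(rng.normal(0.0, 1e-5, size=(4, 4)))  # toy, well-conditioned gains
x = np.zeros(4)
x[2] = 1.3                                   # LED3 active with intensity 1.3
y = H @ x + 1e-5 * rng.standard_normal(4)
z, led_hat, bit = zf_sm_detect(y, H)
print(led_hat, bit)                          # expected: 2 and 1 for this toy channel
```

with realistic vlc channel matrices the columns are strongly correlated , which is exactly why the index decision becomes unreliable and motivates the coding of the index bits introduced next .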
as can be seen, the reliability of the index bits is so poor relative to the that of the modulation bits , the overall performance is dominated by the performance of the index bits .this is because while the modulation bits have the benefit of ofdm signaling to achieve good performance , the index bits did not have any special physical layer care .this has motivated the need to provide some physical layer protection in the form of coding , diversity , etc . indeed , as can be seen from fig .[ figx1 ] , in the ideal case of error - free reception of index bits , the i - ndc ofdm has the potential of outperforming ndc - ofdm even at high snrs ; see the plots of i - ndc ofdm ( error - free index bits ) and ndc ofdm .motivated by this observation , we propose to use coding to improve the reliability of index bits ._ ldpc coding for index bits : _ we propose to use a rate- ldpc code to encode uncoded index bits and obtain coded index bits , . at the transmitter , uncoded index bits are accumulated to obtain ldpc coded index bits .now , the coded index bits are used to select the index of the active led block .thus , one ldpc codeword of size is transmitted in channel uses . therefore , the overall spectral efficiency achieved by the ci - ndc scheme is where is the size of the qam alphabet used in ci - ndc ofdm .the proposed ci - ndc ofdm transmitter and receiver are shown in figs .[ cindctx ] and [ cindcrx ] , respectively . in fig .[ figx2 ] , we compare the performance of the proposed c - indc ofdm with that of ndc ofdm .we match the spectral efficiencies of both the schemes by using the following configurations : ) ndc ofdm : , , , , bpcu , and ) c - indc ofdm : , , , , , , , bpcu . from fig .[ figx2 ] , we observe that , for the same spectral efficiency of about 3.8 bpcu , the proposed ci - ndc ofdm performs better than ndc ofdm .for example , to achieve a ber of , ci - ndc ofdm requires about 1.3 db less snr compared to ndc ofdm .this is because of the improved reliability of the index bits achieved through coding of index bits .we proposed an efficient multiple led ofdm scheme , termed as coded index non - dc - biased ofdm , for vlc .the proposed scheme was motivated by the high spectral efficiency and performance benefits of using multiple leds and spatial indexing . in the proposed scheme ,additional information bits were conveyed through indexing in addition to qam bits . the channel correlation in multiple led settings was found to significantly degrade the reliability of index bits recovery . to overcome this , we proposed coding of index bits .this was found to serve the intended purpose of achieving better performance compared to other ofdm schemes for vlc .investigation of the proposed signaling architecture for higher - order index modulation using multiple pairs of leds can be a topic of further study .j. barry , j. kahn , w. krause , e. lee , and d. messerschmitt , `` simulation of multipath impulse response for indoor wireless optical channels , '' _ ieee j. sel .areas in commun .367 - 379 , apr . 1993 .a. h. azhar , t. a. tran , and d. obrien , `` a gigabit / s indoor wireless transmission using mimo - ofdm visible - light communications,''_ieee photonics tech . letters _ , vol .171 - 174 , dec .2013 .d. tsonev , h. chun , s. rajbhandari , j. j. d. mckendry , d. videv , e. gu , m. haji , s. watson , a. e. kelly , g. faulkner , m. d. dawson , h. haas , and d. obrien , `` a 3-gb / s single - led ofdm - based wireless vlc link using a gallium nitride , '' _ ieee photonics tech . 
lett._, pp. 637-640, jan. 2014. j. armstrong, b. j. schmidt, d. kalra, h. suraweera, and a. j. lowery, ``performance of asymmetrically clipped optical ofdm in awgn for an intensity modulated direct detection system,'' _proc. ieee globecom 2006_, pp. 1-5, nov. 2006.
use of multiple light emitting diodes (leds) is an attractive way to increase spectral efficiency in visible light communications (vlc). a non-dc-biased ofdm (ndc ofdm) scheme that uses two leds has been proposed in the literature recently. ndc ofdm has been shown to perform better than other ofdm schemes for vlc, such as dc-biased ofdm (dco ofdm) and asymmetrically clipped ofdm (aco ofdm), in multiple-led settings. in this paper, we propose an efficient multiple-led ofdm scheme for vlc which uses _coded index modulation_. the proposed scheme uses two transmitter blocks, each having a pair of leds. within each block, ndc ofdm signaling is used. the selection of which block is activated in a signaling interval is decided by information bits (i.e., index bits). in order to improve the reliability of the index bits at the receiver (which is critical because of the high channel correlation in multiple-led settings), we propose to use coding on the index bits alone. we call the proposed scheme ci-ndc ofdm (coded index ndc ofdm). simulation results show that, for the same spectral efficiency, ci-ndc ofdm with ldpc coding on the index bits performs better than ndc ofdm.
suppose the linear regression model is used to relate to the predictors , where is an unknown intercept parameter , is an vector of ones , is an design matrix , and is a vector of unknown regression coefficients .in the error term , is an unknown scalar and has a spherically symmetric distribution , where is the probability density , ={\bm{0}}_n ] .we assume that the columns of have been centered so that for .we also assume that and are linearly independent , which implies that the class of error distributions we study includes the class of ( spherical ) multivariate- distributions , probably the most important of the possible alternative error distributions .it is often felt in practice that the error distribution has heavier tails than the normal and the class of multivariate- distributions is a flexible class that allows for this possibility .they are also contained in the class of scale mixture of normal distributions and thus , by de finetti s theorem , represent exchangeable distributions regardless of the sample size . in this paperwe consider estimation of ] .the best equivariant estimator is the unbiased estimator given by where rss is residual sum of squares given by in the gaussian case , the stein effect in the variance estimation problem has been studied in many papers including . showed that dominates . for smooth ( generalized bayes ) estimators, gave the improved estimator where is a smooth increasing function given by and is the coefficient of determination given by proposed another class of improved generalized bayes estimators .the proofs in all of these papers seem to depend strongly on the normality assumption .so it seems then , that it may be difficult or impossible to extend the dominance results to the non - normal case .also many statisticians have thought that estimation of variance is more sensitive to the assumption of error distribution compared to estimation of the mean vector , where some robustness results have been derived by .note that we use the term `` robustness '' in this sense of distributional robustness over the class of spherically symmetric error distributions .we specifically are not using the term to indicate a high breakdown point .the use of the term `` robustness '' in our sense is however common ( if somewhat misleading ) in the context of insensitivity to the error distribution in the context of shrinkage literature . 
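since the error model above ( spherically symmetric errors containing the multivariate-t as a scale mixture of normals ) and the quantities rss and r^2 recur throughout the paper , a minimal simulation sketch may help fix ideas . it draws multivariate-t errors via the scale-mixture representation , fits least squares with an intercept and centred predictors , and evaluates rss , r^2 and the estimator rss/(n-p-1) ; the dimensions , the degrees of freedom and the use of n-p-1 as the residual degrees of freedom are standard choices stated here as assumptions rather than read off the elided displays .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fit(n=50, p=5, sigma=2.0, t_dof=4):
    """OLS fit with multivariate-t errors built as a scale mixture of normals."""
    X = rng.standard_normal((n, p))
    X -= X.mean(axis=0)                       # centred columns, as assumed in the text
    beta, alpha = rng.standard_normal(p), 1.0
    # eps = sigma * tau * Z with tau^2 = dof/chi2_dof gives multivariate-t errors;
    # no attempt is made here to normalise E[tau^2] = 1 as in the mixing representation
    tau = np.sqrt(t_dof / rng.chisquare(t_dof))
    eps = sigma * tau * rng.standard_normal(n)
    y = alpha + X @ beta + eps
    Z = np.column_stack([np.ones(n), X])
    coef, rss, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = float(rss[0])
    r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)
    delta_unbiased = rss / (n - p - 1)        # unbiased estimator (assumed denominator n-p-1)
    return rss, r2, delta_unbiased

print(simulate_fit())
```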
in this paper, we derive a class of generalized bayes estimators relative to a class of separable priors of the form and show that the resulting generalized bayes estimator is independent of the form of the ( spherically symmetric ) sampling distribution .additionally , we show , for a particular subclass of these separable priors , , that the resulting robust generalized bayes estimator has the additional robustness property of being minimax and dominating the unbiased estimator simultaneously , for the entire class of scale mixture of gaussians .a similar ( but somewhat stronger ) robustness property has been studied in the context of estimation of the vector of regression parameters by .they gave separable priors of a form similar to priors in this paper for which the generalized bayes estimators are minimax for the entire class of spherically symmetric distributions ( and not just scale mixture of normals ) .we suspect that the distributional robustness property of the present paper also extends well beyond the class of scale mixture of normal distributions but have not been able to demonstrate just how much further it does extend .our class of improved estimators utilizes the coefficient of determination in making a ( smooth ) choice between ( when is large ) and ( when is small ) and reflects the relatively common knowledge among statisticians , that overestimates when is small .see remark [ rem : rsq ] for details .the organization of this paper is as follows . in section [ sec : gb ]we derive generalized bayes estimators under separable priors and demonstrate that the resulting estimator is independent of the ( spherically symmetric ) sampling density . in section [ sec : minimax ] we show that a certain subclass of estimators which are minimax under normality remains minimax for the entire class of scale mixture of normals .further , we show that certain generalized bayes estimators studied in section [ sec : gb ] have this ( double ) robustness property .some comments are given in section [ sec : cr ] and an appendix gives proofs of certain of the results .in this section , we show that the generalized bayes estimator of the variance with respect to a certain class of priors is independent of the particular sampling model under stein s loss . also we will give an exact form of this estimator for a particular subclass of `` ( super)harmonic '' priors that , we will later show , is minimax for a large subclass of spherically symmetric error distributions .[ thm : indep ] the generalized bayes estimator with respect to under stein s loss is independent of the particular spherically symmetric sampling model and hence is given by the generalized bayes estimator under the gaussian distribution .see appendix .now let and .this is related to a family of ( super)harmonic functions as follows .if , in the above joint prior for , we make the change of variables , , the joint prior of becomes the laplacian of is given by which is negative ( i.e. super - harmonic ) for and is zero ( i.e. 
harmonic ) for .[ thm : harmonic ] under the model with spherically symmetric error distribution and stein s loss , the generalized bayes estimator with respect to for is given by where see appendix .in this section , we demonstrate robustness of minimaxity under scale mixture of normals for a class of estimators which are minimax under normality .[ thm : main ] assume where is monotone nondecreasing , improves on the unbiased estimator , , under normality and stein s loss .then also improves on the unbiased estimator , , under scale mixture of normals and stein s loss .let be a scale mixture of normals where the scalar satisfies =1 ] with is decreasing in if is increasing .by lemma [ lem : non - chi ] and the covariance inequality , \tau^2 |v \right ] \\ & \qquad \qquad \qquad \geq e[\tau^2 ] e\left [ 1-\phi(\{1+v / u\}^{-1})| v \right ] .\end{split}\ ] ] hence we get \\ & = \int_0^\infty \left\{r_g\left(\{\alpha,{\bm{\beta}},\tau^2\ } , \delta_u\right ) -r_g\left(\{\alpha,{\bm{\beta}},\tau^2\ } , \delta_\phi \right)\right\ } g(\tau^2)d\tau^2 \\ & \geq 0 , \end{split}\ ] ] where is the risk function under the gaussian assumption . under the normality assumption , showed that the estimator with nondecreasing dominates the unbiased estimator if , where is given by . demonstrated that the generalized bayes estimator of theorem [ thm : harmonic ] with satisfies this condition .hence our main result shows that the generalized bayes estimator of theorem [ thm : harmonic ] with , is minimax for the entire class of variance mixture of normal distributions .[ thm : main2 ] let . under steins loss , the estimator given by where is minimax and generalized bayes with respect to the harmonic prior for the entire class of scale mixture of normals .[ rem : rsq ] the coefficient of determination is given by and = \sigma^2\{\xi+p\ } , \\ & e\left[\|{\bm{y}}-\bar{y}{\bm{1}}_n\|^2\right]=\sigma^2\{\xi+n-1\ } , \end{split}\ ] ] where .hence the smaller corresponds to the smaller .our class of improved estimators utilizes the coefficient of determination in making a ( smooth ) choice between ( when and are large ) and ( when and are small ) and reflects the relatively common knowledge among statisticians , that overestimates when is small .the estimator is not the only minimax generalized bayes estimator under scale mixture of normals . in theorem [ thm : harmonic ] , we also provided the generalized bayes estimator with respect to superharmonic prior given by . in , we show that for with is minimax in the normal case with a monotone . hence for in this range is also minimax and generalized bayes for the entire class of scale mixture of normals .the bound has a somewhat complicated form and we omit the details ( however , see for details ) .since and correspond to , respectively , we conjecture that with is minimax . 
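theorem [ thm : main ] invites a numerical sanity check : estimate the stein's-loss risk of the unbiased estimator and of a candidate shrinkage rule under gaussian errors and under a scale mixture of normals , and compare . the sketch below does this with the estimator passed in as a function of ( rss , r^2 , n , p ) so that any rule of the paper's form can be plugged in ; the toy rule shown , which mildly shrinks the unbiased estimator when r^2 is small , is a placeholder only and is not the generalized bayes estimator of theorem [ thm : harmonic ] . all numerical settings are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)

def stein_loss(delta, sigma2):
    return delta / sigma2 - np.log(delta / sigma2) - 1.0

def risk(estimator, n=30, p=4, sigma=1.5, t_dof=None, reps=5000):
    """Monte Carlo Stein's-loss risk of estimator(rss, r2, n, p).

    t_dof=None gives Gaussian errors; a finite t_dof (> 2) gives multivariate-t
    errors built as a scale mixture of normals, with the mixing variable
    normalised so that E[tau^2] = 1.  Design and coefficients are arbitrary.
    """
    X = rng.standard_normal((n, p))
    X -= X.mean(axis=0)
    Z = np.column_stack([np.ones(n), X])
    mean = Z @ np.concatenate(([1.0], 0.5 * np.ones(p)))
    losses = np.empty(reps)
    for r in range(reps):
        tau = 1.0 if t_dof is None else np.sqrt((t_dof - 2) / rng.chisquare(t_dof))
        y = mean + sigma * tau * rng.standard_normal(n)
        _, rss, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(rss[0])
        r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)
        losses[r] = stein_loss(estimator(rss, r2, n, p), sigma ** 2)
    return losses.mean()

unbiased = lambda rss, r2, n, p: rss / (n - p - 1)
# placeholder rule that shrinks the unbiased estimator when r^2 is small
toy = lambda rss, r2, n, p: rss / (n - p - 1) * (1.0 - 0.5 * (1.0 - r2) / (n - p))
for dof in (None, 5):
    print(dof, risk(unbiased, t_dof=dof), risk(toy, t_dof=dof))
```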
under the normality assumption, gave a subclass of minimax generalized bayes estimators with the particularly simple form for where has a slightly complicated form , which we omit ( see for details ) .under spherical symmetry , this estimator is not necessarily derived as generalized bayes ( see the following remark ) , but is still minimax under scale mixture of normals .interestingly , when , the generalized bayes estimator with respect to is given by for the entire class of spherically symmetric distributions ( see for the technical details ) .hence when is minimax and generalized bayes for the entire class of scale mixture of normals .unfortunately , numerical calculations indicate that , for in the range , the inequality is only satisfied for for odd and and for even . for theorems [ thm : main ] and[ thm : main2 ] , the choice of the loss function is the key .many of the results introduced in section [ sec : intro ] are given under the quadratic loss function . under the gaussian assumption , the corresponding results can be obtained by replacing by . on the other hand ,the generalized bayes estimator with respect to depends on the particular sampling model and hence robustness results do not hold under non - gaussian assumption .in this paper , we have studied estimation of the error variance in a general linear model with a spherically symmetric error distribution . we have shown , under stein s loss , that separable priors of the form have associated generalized bayes estimators which are independent of the form of the ( spherically symmetric ) sampling distribution .we have further exhibited a subclass of `` superharmonic '' priors for which these generalized bayes estimators dominate the usual unbiased and best equivariant estimator , , for the entire class of scale mixture of normal error distributions .we have previously studied a very similar class of prior distributions in the problem of estimating the regression coefficients under quadratic loss ( see ) . in that studywe demonstrated a similar double robustness property , to wit , that the generalized bayes estimators are independent of the form of the sampling distribution and that they are minimax over the entire class of spherically symmetric distributions .the main difference between the classes of priors in the two settings are a ) in the present study , the prior on is proportional to while it is proportional to in the earlier study ; and b ) in this paper , the prior on is also separable with being uniform on the real line and having the `` superharmonic '' form , while in the earlier paper jointly had the superharmonic form .the difference a ) is essential since a prior on proportional to gives the best equivariant and minimax estimator , while such a restriction is not necessary when estimating the regression parameters .the difference in b ) is inessential , and either form of priors on the regression parameters will give estimators with the double robustness properties in each of the problems studied .the form of the estimators , of course , will be somewhat different . 
in the case of the present paper, the main difference would be to replace by and to replace by as a consequence , the results in these papers suggest that separable priors , and in particular the `` harmonic '' prior given , are very worthy candidates as objective priors in regression problems .they produce generalized bayes minimax procedures dominating the classical unbiased , best equivariant estimators of both regression parameters and scale parameters simultaneously and uniformly over a broad class of spherically symmetric error distributions .the ( generalized ) bayes estimator with stein s loss is given by \}^{-1} ] and =\bm{i}_n$ ] , as well as satisfies and hence we have and hence the generalized bayes estimator is given by which is independent of . by the simple relation where mean the mean of and , we have the pythagorean relation , since has been already centered. then we have next we consider the integration with respect to .note the relation of completing squares with respect to where and is the coefficient of determination .hence we have next we consider integration with respect to . by , we have finally we consider integration with respect to . by wehave the second equality follows from the change of variables . by using the relation , is written as .
we consider the problem of estimating the error variance in a general linear model when the error distribution is assumed to be spherically symmetric, but not necessarily gaussian. in particular, we study the case of a scale mixture of gaussians, including the particularly important case of the multivariate-t distribution. under stein's loss, we construct a class of estimators that improve on the usual best unbiased (and best equivariant) estimator. our class has the interesting double robustness property of being simultaneously generalized bayes (for the same generalized prior) and minimax over the entire class of scale mixtures of gaussian distributions.
a real option is the right , but not the obligation , to undertake certain business initiatives , such as deferring , abandoning , expanding , staging or contracting a capital investment project .real options have three characteristics : the decision may be postponed ; there is uncertainty about future rewards ; the investment cost is irreversible .there are several types of real options and different kinds of questions , but in all of them there is the issue regarding the optimal timing to undertake the decision ( for instance , investment , abandon or switching ) .consult for a survey on real options analysis . in this paperwe propose a two - fold contribution to the state of the art of real options analysis .on one hand we assume that the uncertainty involving the decision depends on two stochastic processes : the demand and the investment cost . on the other handwe allow the dynamics of these processes to be a combination of a diffusion part ( with continuous sample path ) with a jump part ( introducing discontinuities in the sample paths ) .so , the model presented here expands the usual setting of options and adds new features , including the ability to model jumps in the demand process , and including stochastic investment costs . in the early works in finance , regarding both finance options and investment decisions , the underlying stochastic process modeling the uncertaintywas frequently assumed to be a continuous process , as a geometric brownian motion ( gbm , for short ) .there are many reasons for this , but this is largely due to the fact that the normal distribution , as well as the continuous - time process it generates have nice analytic properties .observations from real situations show that this assumption does not hold .the bar chart in figure [ fig : apple ] shows the global sales numbers of iphones from the quarter of 2007 to the end of 2014 , with data aggregated by quarters ( source : statista 2014 ) .quarter 2007 to quarter 2014 ( in million units ) ] it is clear that in this case upward jumps occur with a certain frequency , that may be related with the launching of new versions of the iphone . nowadays , with the world global market , there are exogenous events that may lead to a sudden increase or decrease in the demand for a certain product .clearly , these shocks will considerably affect the decisions of the investors .also the investment costs depend highly on other factors , such as the oil , raw materials or labour prices .therefore , when we assume the investment to be constant and known , we are simplifying the model .this problem is even more important when one considers large investments that take several years to be completed ( e.g. 
construction of dams , nuclear power plants , or high speed railways ) .the example regarding the number of sales of iphones shows the existence of jumps in the demand process .in fact , empirical analysis shows there is a need to contemplate models that can take into account discontinuous behavior for a quite wide range of situations .for example , regarding investment decisions , in particular in the case of high - technology companies - where the fast advances in technology continue to challenge managerial decision - makers in terms of the timing of investments - there is a strong need to consider models that take into account sudden decreases or increases in the demand level .morever , examples regarding jumps in the investment are also easy to find .the jumps may represent uncertainties about the arrival and impact ( on the underlying investment ) of new information concerning technological innovation , competition , political risk , regulatory effects and other sources .it is particularly important to consider jumps process in investment problems related with r&d investments , as suggest .furthermore , when one is deciding about investments in energy , which are long - term investments , one should accommodate the impact of climate change policies effect on carbon prices , as points out in their report .for instance , the horizon 2020 energy challenge is designed to support the transition to a secure , clean and efficient energy system for europe .so , if a company decides to undertake any kind of such investment during this time period , probably will apply to special funds and thus the investment cost will likely be smaller .after 2020 , there are not yet rules about the european funding of such projects .therefore , not only the investment cost is random , but also will suffer a jump due to this new regulatory regime ( see http://ec.europa.eu/easme/en/energy for more details ) . following this reasoning , in this paper we assume that the decision regarding the investment depends on two stochastic factors : one related with the demand process , that we will denote by , and another one related with the investment cost , .we allow that both processes may have discontinuous paths , due to the possibility of jumps driven by external poisson processes .later we will introduce in more detail the mathematical model and assumptions .we also claim that we extend the approach provided in , in chapter 6.5 . in this book, the authors consider that there are two sources of uncertainty ( the revenue flow and the investment cost ) , both following a gbm , with ( possible ) correlated brownian motions .then the authors propose a change of variable that will turn this problem with two variables into a problem with one variable , analytically solvable . herewe follow the same approach , but with two different features : first the dynamics of the processes involved are no longer a gbm and , secondly , we prove analytically that the solution we get is in fact exactly the solution of the original problem with two dimensions . in order to prove this result we derive explicitly the quasi - variational inequalities emerging from the principle of dynamic programming , obtaining the corresponding hamilton - jacobi - bellman ( hjb ) equationmoreover , using a verification theorem , we prove that the proposed solution is indeed the solution of the original problem .the remainder of the paper is organized as follows . 
in section 2we present the framework rationale and the valuation model , and in section 3 we formalize the problem as an optimal stopping one , we present our approach and provide the optimal solution . in section 4 we derive results regarding the behavior of the investment threshold as a function of the most relevant parameters and , finally , section 5 concludes the work , presenting also some recommendations for future extensions .in this section we introduce the mathematical model along with the assumptions that we use in order to derive the investment policy . for that , we start by presenting the dynamics of the stochastic processes involved , namely , the demand and the investment expenditures processes .we also introduce the valuation model we use to derive the optimal investment problem that we address in this paper .we assume that there are two sources of uncertainty , that we will denote by and , representing the demand and the investment , respectively . moreover , we consider that both have continuous sample paths almost everywhere but occasionally jumps may occur , causing a discontinuity in the path. generally speaking , let denote a jump - diffusion process , which has two types of changes : the `` normal '' vibration , which is represented by a gbm , with continuous sample paths , and the `` abnormal '' , modeled by a jump process , which introduce discontinuities .the jumps occur in random times and are driven by an homogeneous poisson process , hereby denoted by , with rate . moreover , the percentages of the jumps are themselves random variables , that we denote by , i.e. , although they may be of different magnitude , they all obey the same probability law and are independent from each others .we let , allowing for positive and for negative jumps .thus we may express as follows : } } \prod_{i=1}^{n_t}{\left(1+u_i\right)},\ ] ] where is the initial value of the process , i.e. , , and represent , respectively , the drift and the volatility of the continuous part of the process and is a brownian motion process .furthermore , the processes and are independent and are also both independent of the sequence of random variables .we use the convention that if for some , then the product in equation is equal to 1 . note that considering the parameters associated with the jumps equal to zero , we obtain the usual gbm . 
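a short simulation of the process defined above makes its two components visible : the continuous geometric-brownian part with the compensated drift mu - sigma^2/2 - lambda m , and the multiplicative jumps ( 1+u_i ) arriving at poisson times . the uniform jump-size law and all numerical parameter values below are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(0)

def jump_gbm_path(y0=1.0, mu=0.05, sigma=0.2, lam=0.5, a=-0.3, b=0.6, T=5.0, n=1000):
    """One path of y_t = y0 exp((mu - sigma^2/2 - lam*m) t + sigma W_t) prod_i (1 + U_i),

    with U_i ~ uniform(a, b) and m = E[U] = (a + b)/2.  Parameter values and the
    uniform jump-size law are illustrative assumptions.
    """
    m = 0.5 * (a + b)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    w = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
    cont = y0 * np.exp((mu - 0.5 * sigma ** 2 - lam * m) * t + sigma * w)
    n_jumps = rng.poisson(lam * T)                        # number of jumps on [0, T]
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    sizes = rng.uniform(a, b, size=n_jumps)
    factor = np.array([np.prod(1.0 + sizes[jump_times <= s]) for s in t])
    return t, cont * factor

t, y = jump_gbm_path()
print(y[0], y[-1])
```

setting lam to zero recovers an ordinary gbm path , consistent with the remark closing the paragraph above .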
as both the brownian motion and the poisson processesare markovian , it follows that is also markovian .it is also stationary and its order moments are provided in the next lemma ( see appendix ( [ moments ] ) for the proof ) .[ propey ] for , with defined as in , and , } = y_t^k \ ; \exp { \left\ { { { \left[{\left(\mu+(k-1)\frac{\sigma^2}{2}-\lambda m\right ) } k + \lambda { \left({\mathrm{i\!e}}{\left[{\left(1+u\right)}^k\right]}-1\right ) } \right]}s}\right\}},\ ] ] where } ] , which happens if and only if where }-1\right)}\ ] ] since } = \kappa_{0 } x_0^{\theta } \int_{0}^{+\infty } e^{-{h}t } dt.\ ] ] from now on , we will be assuming such restriction in the parameters , to avoid trivialities .next we state and prove a result related with the strong markov property of the involved processes , that we will use subsequently in the optimization problem .the performance criterion can be re - written as follows : },\ ] ] with and applying a change of variable in the performance criterion , we obtain } .\ ] ] then , using conditional expectation and the strong markov property , we get } - i_{\tau } \right\ } } { \chi _ { { \left\{\tau<+\infty\right\}}}}\right]}.\ ] ] by fubini s theorem and the independence between and , it follows that : } = \int_{0}^{+\infty } { \mathrm{i\!e}}{\left[{\left .d(x_{\tau + t + n})\right | } { x_\tau}\right ] } e^{-\rho ( t + n ) } dt.\ ] ] from lemma [ propey ] and from simple calculations , we have : } = { \left(\kappa_1-\kappa_0\right ) } x_\tau^{\theta } \ ; e^{(\rho - { h})(t+n)}.\ ] ] plugging on results in : } = { \left(\kappa_1-\kappa_0\right ) } x_\tau^{\theta } \frac { e^{-{h}n } } { { h}}.\ ] ] therefore , in view of the definition of and , we get the pretended result .the goal of the firm is to find the optimal time to invest in this new project , i.e. , for every and , we want to find and such that : where is the set of all stopping times adapted to the filtration generated by the bidimensional stochastic process .the function is called the _ value function _ , which represents the optimal value of the firm , and the stopping time is called an _ optimal stopping time_. 
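lemma [ propey ] is easy to check by monte carlo , which also guards against sign slips in the compensated drift . the sketch below uses a deterministic jump size u , so that e[(1+u)^k] is simply (1+u)^k , and compares the simulated k-th conditional moment with the closed-form expression of the lemma ; all parameter values are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative parameters, with a deterministic jump size u so that E[(1+U)^k] = (1+u)^k
y0, mu, sigma, lam, u, s, k = 1.0, 0.04, 0.25, 0.8, 0.2, 1.5, 2
m = u                                                    # m = E[U]

def terminal_value(reps=200_000):
    # only N_s and the jump sizes matter for the distribution at time s
    w = np.sqrt(s) * rng.standard_normal(reps)
    n_jumps = rng.poisson(lam * s, size=reps)
    return y0 * np.exp((mu - 0.5 * sigma ** 2 - lam * m) * s + sigma * w) * (1 + u) ** n_jumps

mc = np.mean(terminal_value() ** k)
exact = y0 ** k * np.exp(((mu + (k - 1) * sigma ** 2 / 2 - lam * m) * k
                          + lam * ((1 + u) ** k - 1)) * s)
print(mc, exact)   # the two numbers should agree to Monte Carlo accuracy
```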
using standard calculations from optimal stopping theory ( see ) , we derive the following hjb equation for this problem : where denotes the infinitesimal generator of the process , which in view of , it is equal to } + \lambda_i { \left[{\mathrm{i\!e}}_{u^i}{\left(v(x , i(1+u^i))\right ) } - v(x , i)\right]}.\end{aligned}\ ] ] note that in this equation we used the notation and to emphasize the meaning of such expected values ; for instance means that we are computing the expected value with respect to the random variable .whenever the meaning is clear from the context , we will simplify the notation and will use just instead .the hjb equation represents the two possible decisions that the firm has .the first term of the equation corresponds to the decision to continue with the actual project and therefore postpone the investment decision .the second term corresponds to the decision to invest and hence the end of the decision problem .thus is the set of values of demand and investment cost such it is optimal to continue with the actual project and , for that reason , is usually known as _ continuation region _ , whereas in the _ stopping region _ , , the value of the firm is exactly the gain if one decides to exercise the option .the sets and are complementary on .we define which is the first time the state variables corresponding to the demand and the investment cost are outside the continuation region .note that means that moreover , whereas the solution of the hjb equation , , must satisfy the following initial condition , which reflects the fact that the value of the firm will be zero if the demand is also zero .furthermore , the following value - matching and smooth - pasting conditions should also hold for , where denotes the boundary of , which we call _ critical boundary _ , and is the gradient function . in this particular case , with two sources of uncertainty ,the critical boundary is a threshold curve separating the two regions .so in order to solve the investment problem , we need to find the solution to the following partial differential equation : in the continuation region and , simultaneously , to identify the continuation and stopping regions .both questions are challenging : on one hand the solution to can not be found explicitly , and on the other hand we need to guess the form of the continuation region and prove that the solution proposed for the hjb equation satisfies the verification theorem .hereafter , we propose an alternative way to solve this problem , that circumvent the difficulties posed by the fact that we have two sources of uncertainty . in this sectionwe start by guessing the shape of the continuation region and latter we will prove indeed that our guess is the correct one , using the verification theorem .our guess about the continuation region comes from intuitive arguments : high demand and low investment cost should encourage the firm to invest , whereas in case of low demand and high investment cost it should be optimal to continue , i.e. , postpone the investment decision . 
see figure [ fig : initialspace ] for the general plot of both regions .( 5,0 ) node[anchor = north west ] ; ( 0,0 ) ( 0,4 ) node[left ] ; at ( 4,1 ) ; at ( 1,3.5 ) ; but we need to define precisely the boundary of and for that we use the conditions derived from the hjb equation .we consider the set which , taking into account the expression of ( the infinitesimal generator ) and the function , can be equivalently expressed ( after some simple calculations ) as by propositions 3.3 and 3.4 of , we know that .further , if then and .this case would mean that it is optimal to invest right away .so we need to check under which conditions this trivial situation does not hold .we have if and only if , as the denominator is positive ; as , this would imply that and .therefore , if the firm should invest right away , which is coherent with a financial interpretation : if the drift of the investment cost is higher than the discount rate , the rational decision is to invest as soon as possible . to avoid such case, we assume that this does not hold , i.e. , next we guess that the boundaries of and have the same shape , and therefore may be written as follows : where is a trigger value that satisfies in figure [ fig : u ] the thick line represents the boundary of the set and the other dashed lines are some possible boundaries of the continuation region , ( as , which would be the case ) .( 5,0 ) node[anchor = north west ] ; ( 0,0 ) ( 0,4 ) node[left ] ; at ( 1,3.5 ) ; plot ( , ( 1/1.5 ) * ) ; plot ( , ( 1/3 ) * ) ; plot ( , ( 1/5 ) * ) ; it follows from the last section that the decision between continuing and stopping depends on the demand level and investment cost only through a function of the two of them : then the function present in the performance criteria can also be written in terms of this new variable , as , with and , in view of , we propose that where is an appropriate function , that we need to derive still . in chapter 6.5 of discussed a problem with two sources of uncertainty following a gbm . using economical arguments, they propose a transformation of the two state variables into a one state variable , and thus reducing the problem to one dimension , that can be solved using a standard approach . in the problem that we address we do nt use any kind of economical arguments but we rely on the definition of the set .furthermore , the state variables that we consider in this paper follow a jump - diffusion process , and thus the dynamics is more general than the one proposed by .therefore , we may see our result as a generalization of the results provided by .following the approach proposed in the previous section , we write the original hjb equation now for the case of just one variable : where the infinitesimal generator of the unidimensional process is } q f^\prime ( q)\\ \nonumber & & + { \left[{\left(\mu_i-\lambda_i m_i\right)}-{\left(\lambda_x + \lambda_i \right)}\right ] } f(q ) \\\label{lq } & & + \lambda_x { \mathrm{i\!e}}{\left [ f{\left(q { \left(1+u^x\right)}^{\theta}\right)}\right ] } + \lambda_i { \mathrm{i\!e}}{\left[(1+u^i ) f { \left(q { \left(1+u^i\right)}^{-1}\right)}\right]},\end{aligned}\ ] ] where this equation comes from definition , the relationship between and , and simple but tedious computations . 
the corresponding continuation and stopping regions ( hereby denoted by and , respectively )can now be written as depending only on : and thus is the boundary between the continuation and the stopping regions , as it is usually the case in a problem with one state variable .we note that now we are in a standard case problem of investment with just one state variable ( see , for instance , chapter 5 of ) but with a different dynamics than the usual one ( the process is not a gbm ) .moreover , the following conditions should hold next , using derivations familiar to the standard case , we are able to derive the analytical solution to equation . [ prop1source ] the solution of the hjb equation verifying the conditions , hereby denoted by , is given by where and is the positive root of the function : } r^2 + { \left[{\left(\mu_x -\frac{\sigma_x^2}{2}-\lambda_x m_x\right ) } \theta-{\left(\mu_i - \frac{\sigma_i^2}{2 } -\lambda_i m_i\right)}\right ] } r \nonumber \\ & & + { \left(\mu_i-\lambda_i m_i\right)}-{\left(\lambda_x + \lambda_i \right)}-\rho + \lambda_x { \mathrm{i\!e}}{\left[(1+u^x)^{r \theta}\right]}+\lambda_i { \mathrm{i\!e}}{\left[(1+u^i)^{1-r}\right]}. \label{j}\end{aligned}\ ] ] for the case ( corresponding to the stopping region ) , the value function is equal to , the investment cost , defined on , and thus the result is trivially proved .we then must prove the result for the continuation region , for which . in this case, we propose that the solution of the hjb in the continuation region is of the form , where and are to be derived . as ( from the condition ), it follows that needs to be positive .furthermore , in view of the definition of , if we apply it to the proposed function , it follows that : where is defined on .in the continuation region meaning that is a positive root of .next we show that has one and only one positive root and thus is unique .the second order derivative of is equal to }^2 ( 1+u^x)^{r \theta } \right ] } + \lambda_i { \mathrm{i\!e}}{\left[{\left[\ln(1+u^i)\right]}^2 ( 1+u^i)^{1-r}\right]}.\ ] ] given that the volatilities and the elasticity are positive , a.s . and a.s ., it follows that .thus , is a strictly convex function .further , and ( by assumption ) , which means that . then , since is continuous , it has an unique positive root .it remains to derive .for that we use conditions presented in , and straightforward calculations lead to : where must be given by . in order to ensure that this leads to an admissible solution for ( as must be positive ) , one must have . the remainder of the proof is to check that is the solution of the hjb equation .for that we need * to prove that in the continuation region . in the continuation region .so , we define the function with derivatives since , is a strictly convex function .as , then is the unique minimum of the function with .+ summarizing , the function is continuous and strictly convex , has an unique minimum at and , then .therefore , , for . * to prove that in the stopping region .+ in the stopping region , then given that and using the inequality , we conclude that therefore , we conclude that the function is indeed the solution of the hjb equation .we have already seen that in order to avoid the trivial case of investment right away , one must have . moreover , from the previous proof of the proposition , one must also assume that , which implies that , condition equivalent to , where is defined in . 
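in practice the positive root r_0 of j , and hence the trigger , have to be computed numerically . the sketch below does this for deterministic jump sizes ; the coefficient of r^2 is taken as (theta^2 sigma_x^2 + sigma_i^2)/2 , which presumes independent brownian motions driving demand and investment and should be read as an assumption , and all parameter values are illustrative . convexity of j together with j(0) < 0 , j(1) < 0 and j(r) -> +infinity guarantees a bracketing interval to the right of 1 .

```python
import numpy as np
from scipy.optimize import brentq

def make_j(theta=1.5, mu_x=0.03, sig_x=0.2, lam_x=0.4, m_x=0.1,
           mu_i=0.02, sig_i=0.15, lam_i=0.3, m_i=-0.1, rho=0.1):
    """j(r) for deterministic jump sizes m_x, m_i (illustrative parameter values).

    The r^2 coefficient assumes independent Brownian motions:
    c2 = (theta^2*sig_x^2 + sig_i^2)/2 -- an assumption, since the displayed
    coefficient is not reproduced here.
    """
    c2 = 0.5 * (theta ** 2 * sig_x ** 2 + sig_i ** 2)
    c1 = (mu_x - 0.5 * sig_x ** 2 - lam_x * m_x) * theta - (mu_i - 0.5 * sig_i ** 2 - lam_i * m_i)
    c0 = (mu_i - lam_i * m_i) - (lam_x + lam_i) - rho
    return lambda r: (c2 * r ** 2 + c1 * r + c0
                      + lam_x * (1 + m_x) ** (theta * r) + lam_i * (1 + m_i) ** (1 - r))

j = make_j()
assert j(0) < 0 and j(1) < 0          # parameter restrictions assumed in the paper
hi = 2.0
while j(hi) < 0:                      # j is convex and -> +inf, so a bracket exists
    hi *= 2.0
r0 = brentq(j, 1.0, hi)
print("positive root r0 =", r0)
```

once r_0 is available , the trigger q* and the value function of the proposition follow from the value-matching and smooth-pasting conditions ; those closed forms are not reproduced here .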
[ ass ]we assume that the following conditions on the parameters hold : in this section we prove that the solution of the original problem can be obtained from the solution of the modified one .the solution of the hjb equation verifying the initial condition and the boundary conditions , hereby denoted by , is given by }^{r_0 } { \left(\frac{r_0 - 1}{i}\right)}^{r_0 - 1 } & 0 < \frac{x^{\theta}}{i } < q^\star \\ { \left(\kappa_1 -\kappa_0\right ) } a x^{\theta } - i & \frac{x^{\theta}}{i } \geq q^\star \end{cases}\ ] ] for , where is defined on and is the positive root of the function , defined on . considering the relations of the functions and with the function , it follows that the function proposed in equation ( [ v ] ) is the obvious candidate to be the solution of the hjb equation .so next we prove that indeed this is the case . in the following we use some results already presented in proof of proposition [ prop1source ] .let and .using some basic calculations one proves that and } ] , is positive and we previously proved that . finally , from the initial and boundary conditions of the transformed problem , and for , i.e. , , we obtain the boundary conditions of the original problem .in this section we study the behavior of the investment threshold as a function of the most relevant parameters , in particular the volatilities and , the intensity rates of the jumps , and , and the magnitude of the jumps , and . in order to have analytical results , we assume that the magnitude of the jumps ( both in the demand as in the investment ) are deterministic and equal to and , respectively . before we proceed with the mathematical derivations , we comment on the meaning of having monotonic with respect to a specific parameter , as we are dealing with a boundary in a two dimensions space .for example , assuming that increases with some parameter , this would mean that , for ( leaving out the other parameters ) .consider then the corresponding continuation regions : and with and . in figure[ fig : twotriggers ] we can observe the graphic representation of the boundaries which divide the stopping and the continuation regions , for each case , and , as expected , we would have .it follows from the splitting of the space into continuation and stopping , that you in this illustration , we would stop earlier for smaller values of the parameter .( 5,0 ) node[anchor = north west ] ; ( 0,0 ) ( 0,4 ) node[left ] ; at ( 2.7,3.5 ) ; at ( 4.6,3.5 ) ; plot ( , ( 1/1.5 ) * ) ; plot ( , ( 1/5 ) * ) ; in the previous sections we consider only as a function of . as in this sectionwe study the behavior of with respect to the parameters of both processes involved , we change slightly the notation , in order to emphasize the dependency of in such parameters .therefore , we define the vectors , , and finally , that we use in the following definition : } r^2 + { \left[{\left(\mu_x -\frac{\sigma_x^2}{2}-\lambda_x m_x\right ) } \theta-{\left(\mu_i - \frac{\sigma_i^2}{2 } -\lambda_i m_i\right)}\right ] } r \\ & & + { \left(\mu_i-\lambda_i m_i\right)}-{\left(\lambda_x + \lambda_i \right)}-\rho + \lambda_x ( 1+m_x)^{\theta r}+\lambda_i ( 1+m_i)^{1-r}.\end{aligned}\ ] ] furthermore , we let denote the positive root of and we change accordingly the definition of as follows : and finally with defined on . we start by proving the following lemma : [ lemam ] the investment threshold changes with the parameters according to the following relation : with . 
furthermore , as ,then the sign of the derivative of depends only on the derivatives of and with respect to the study parameter . as the result follows from the implicit derivative relation .it remains to prove that . as a function of only , is continuous and strictly convex , with , and , as we have already stated .since is its positive root , in view of the properties of , it follows that it must be increasing in a neighborhood of , and thus . in the following sections , we study the sign of each derivative , whenever possible . our goal is to inspect if the trigger value , , increases or decreases with each parameter in and . in this sectionwe state and prove the results concerning the behavior of as a function of the demand parameters . before the main proposition of this subsection , we present and prove a lemma that provide us the admissible values for the parameters .[ lema : restrictions ] in view of the assumption [ ass ] , the following restrictions on the demand parameters hold .the drift is upper bounded , i.e.,}.\ ] ] the domain of the volatility parameter depends on . if , is lower bounded , , otherwise is upper bounded , , where }\right\}}}.\ ] ] relatively to the rate of the jumps , the domain also depends on .if , is lower bounded , , otherwise is upper bounded , , where regarding the jump sizes , we have the following : let and be the negative and positive solutions of the equation respectively , when there are solutions. moreover , let for , * if then ; * if then ; * if then . for ,* if then ; * if then , is equivalent to , and therefore assumption [ ass ] does not hold .] and if and , then . the sets of admissible values for the parameters and come from straightforward manipulation of the second restriction of assumption [ ass ] .the results regarding the jumps parameters ( and ) rely on the properties of the function with and with and .the graph of the function is a tangent line of the graph of the function on the point . also , as , the function is convex if and it is concave otherwise .then , for , if and if . then it follows that if and if . with respect to , in view of the restriction , andtherefore the restriction on this parameter follows . proceeding with the analysis with respect to the jump size , , we start by noting that , and . from these properties we know the shape of the function for each .the restriction is equivalent to have and therefore standard study of functions lead us to the results presented in the lemma .now we are in position to state the main properties of as a function of the demand parameters . from lemma [ lemam ] , we can only get results about the behavior of as a function of the parameters if and have the same sign .therefore we are not able to present a comprehensive study of for all values of . 
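lemma [ lemam ] reduces the comparative statics below to signs of partial derivatives of j at r_0 , via the standard implicit-function computation dr_0/dparameter = -(partial j / partial parameter)/(partial j / partial r) evaluated at r_0 . the sketch below checks this relation numerically for the demand volatility sigma_x , reusing the illustrative j of the previous sketch ( independent brownian motions , deterministic jump sizes , hypothetical parameter values ) .

```python
import numpy as np
from scipy.optimize import brentq

# same illustrative j(r) as before, written as a function of (r, sig_x) so that the
# sensitivity of the root r0 to the demand volatility can be checked numerically;
# the r^2 coefficient again assumes independent Brownian motions (an assumption)
theta, mu_x, lam_x, m_x = 1.5, 0.03, 0.4, 0.1
mu_i, sig_i, lam_i, m_i, rho = 0.02, 0.15, 0.3, -0.1, 0.1

def j(r, sig_x):
    c2 = 0.5 * (theta ** 2 * sig_x ** 2 + sig_i ** 2)
    c1 = (mu_x - 0.5 * sig_x ** 2 - lam_x * m_x) * theta - (mu_i - 0.5 * sig_i ** 2 - lam_i * m_i)
    c0 = (mu_i - lam_i * m_i) - (lam_x + lam_i) - rho
    return (c2 * r ** 2 + c1 * r + c0
            + lam_x * (1 + m_x) ** (theta * r) + lam_i * (1 + m_i) ** (1 - r))

def root(sig_x, hi=8.0):
    return brentq(lambda r: j(r, sig_x), 1.0, hi)

sig0, h = 0.2, 1e-5
r0 = root(sig0)
# implicit-function formula: dr0/dsig_x = -(dj/dsig_x)/(dj/dr) at (r0, sig0)
dj_dsig = (j(r0, sig0 + h) - j(r0, sig0 - h)) / (2 * h)
dj_dr = (j(r0 + h, sig0) - j(r0 - h, sig0)) / (2 * h)
implicit = -dj_dsig / dj_dr
direct = (root(sig0 + h) - root(sig0 - h)) / (2 * h)
print(r0, implicit, direct)   # the two derivative estimates should agree
```

the finite-difference recomputation of the root and the implicit-function formula agree up to discretisation error , which is the mechanism behind the monotonicity statements proved in the propositions that follow .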
for , the investment threshold : * increases with the demand volatility , and jump intensity , ; * is not monotonic with respect to the magnitude of the jumps in the demand , .in fact , * * if , then increases for ; * * if , then decreases for and increases for ; * * if , then decreases for and increases for , + where is defined on and and are , respectively , the negative and positive solutions of the equation , when there are solutions .the proof for is straightforward knowing the derivatives continuing with , we calculate the derivatives with given by .taking into account the properties of such function , we conclude the result for .finally , for we also calculate the derivatives}=\theta \lambda_x \phi_\theta(m_x)\\ { \left .{ \frac{\partial j(r,\boldsymbol{\varrho})}{\partial m_x}}\right |}_{r = r_0 } & = - \theta r_0 ( \boldsymbol{\varrho } ) \lambda_x { \left[1 -(1+m_x)^{\theta r_0 ( \boldsymbol{\varrho})-1}\right]}=-\theta r_0 ( \boldsymbol{\varrho } ) \lambda_x \phi_{\theta r_0 ( \boldsymbol{\varrho})}(m_x)\end{aligned}\ ] ] where with and . as , is decreasing if , constant and equal to zero if , and increasing if .moreover , as , then : * if , is positive when and is negative for ; * if , is negative when and positive for .given these results , the proof regarding is concluded .numerical examples show us that the investment threshold is not monotonic with respect to the demand s drift , .we can not have analytical results for the behavior of with respect to as the signs of the involved derivatives are opposite : and .now we state and prove the results concerning the behavior of as a function of the investment parameters . in view of lemma [ lemam ] , we just need to assess the sign of the derivative of with respect to each one of the parameters . for values of investment parameterssuch that assumption [ ass ] holds , the investment threshold * decreases with the investment drift ; * increases with the investment volatility , and jumps intensity , ; * is not monotonic with respect to the magnitude of the jumps in the intensity ; it decreases when and increases when .the proof for and is trivial . in order to pove the result for , we study the behavior of the function : we compute the second derivative , }^2 ( 1+m_i)^{1-r } > 0,\ ] ] whence is a strictly convex function .furthermore , , which lead us to conclude that , for .therefore , , and the result follows . finally , in order to study the behavior of with respect to , we need to assess the sign of the following derivative : }.\end{aligned}\ ] ] taking into account the function defined on , we notice that . as and in view of the properties of function , the result follows .in this paper we proposed a new way to derive analytically the solution to a decision problem with two uncertainties , both following jump - diffusion processes .this method relies on a change of variable that reduces the problem from a two dimensions problem to a one dimension , which we can solve analytically .moreover , we also presented an extensive comparative statics . from such analysis ,we have concluded that the behavior of the investment threshold with respect to demand and investment parameters is similar , i.e. , the impact of changing one parameter in the demand process is qualitatively the same as changing the same parameter in the investment process .this result holds for all except for the drift : the investment threshold is monotonic with respect to the investment drift but this does not hold for the demand drift . 
for some parameters ,the result is as expected , and similar to the one derived in earlier works ( like the behavior of the investment threshold with the volatility of both the demand and the investment ) , whereas in other cases the results are not so straightforward .for instance , the impact of the jump sizes of the demand depend on some analytical condition whose economical interpretation is far from being obvious .our method may be extended for more general profit functions . in our research agendawe plan to study properties of the functions such that the approach proposed in this paper will still be valid and useful .applying expectation and re - writting , we have } = y_t^k \exp { \left[{\left(\mu-\frac{\sigma^2}{2}-\lambda m\right ) } k s\right ] } \ ; { \mathrm{i\!e}}{\left[{\left . \exp{ \left[\sigma k { \left(w_{t+s } - w_t\right)}\right ] } \prod_{i = n_t+1}^{n_{t+s}}{\left(1+u_i\right)}^k\right | } { y_t}\right]}.\ ] ] using the fact that the brownian motion and the poisson process have independent and stationary increments , and that the processes are independent , we obtain } = y_t^k \exp { \left[{\left(\mu-\frac{\sigma^2}{2}-\lambda m\right ) } k s\right ] } { \mathrm{i\!e}}{\left[\exp { \left(\sigma k w_s\right)}\right ] } { \mathrm{i\!e}}{\left[\prod_{i=1}^{n_s}{\left(1+u_i\right)}^k\right]}.\ ] ] as , then using the probability generating function of a poisson distribution and the tower property of the conditional expectation , we conclude that } = \exp { \left[\lambda s { \left({\mathrm{i\!e}}{\left[{\left(1+u\right)}^k\right]}-1\right)}\right]}.\ ] ] the result follows using the moment generating function of the normal distribution , and replacing in .
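The moment identity derived in this appendix is easy to verify by simulation, since the jump-diffusion can be sampled exactly from its closed-form solution. The sketch below assumes deterministic jump sizes U_i = m (so that E[(1+U)^k] = (1+m)^k); all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (deterministic jump size U_i = m).
mu, sigma, lam, m = 0.05, 0.20, 0.5, -0.10
y0, s, k = 1.0, 2.0, 3
n_samples = 2_000_000

# Closed-form solution used in the appendix:
# Y_s = Y_0 * exp[(mu - sigma^2/2 - lam*m) s + sigma W_s] * (1 + m)^{N_s}
W = rng.normal(0.0, np.sqrt(s), n_samples)
N = rng.poisson(lam * s, n_samples)
Ys = y0 * np.exp((mu - 0.5*sigma**2 - lam*m) * s + sigma * W) * (1.0 + m)**N

mc = np.mean(Ys**k)

# Moment formula obtained above:
# E[Y_s^k] = Y_0^k exp{ [ (mu - sigma^2/2 - lam*m) k + sigma^2 k^2 / 2
#                         + lam ((1+m)^k - 1) ] s }
theory = y0**k * np.exp(((mu - 0.5*sigma**2 - lam*m) * k
                         + 0.5 * sigma**2 * k**2
                         + lam * ((1.0 + m)**k - 1.0)) * s)

print(f"Monte Carlo: {mc:.5f}   formula: {theory:.5f}")
```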
we derive the optimal investment decision in a project where both the demand and the investment cost are stochastic processes , possibly subject to shocks . we extend the approach used in , chapter 6.5 , to deal with two sources of uncertainty , assuming that the underlying processes are no longer geometric brownian motions but rather jump - diffusion processes . for the class of isoelastic functions addressed in this paper , it is still possible to derive a closed - form expression for the value of the firm . we prove formally that this expression is indeed the solution of the optimization problem . finally , we derive comparative statics for the investment threshold with respect to the relevant parameters .
ising - type models have been reviewed and used by physicists in many different areas , such as sociology , politics , marketing , and finance . in 2000 , an agent - based model proposed by sznajd - weron _ et al . _ was successfully applied to the dynamics of a social system . in particular , the model reproduced certain properties observed in a real community . at the focus of the sznajd model ( sm )is the emergence of social collective ( macroscopic ) behavior due to the interactions among individuals , which constitute the microscopic level of a social system .this model has been extensively studied since the introduction of the original one - dimensional model in 2000 .modifications were proposed in many works , like the consideration of different types of lattices such as square , triangular and cubic , the increase of the range of the interaction and the number of variable s states and the possibility of diffusion of agents .the original sm consists of a chain of sites with periodic boundary conditions where each site ( individual opinion ) could have two possible states ( opinions ) represented in the model by ising spins ( `` yes '' or `` no '' ) .a pair of parallel spins on sites and forces its two neighbors , and , to have the same orientation ( opinion ) , while for an antiparallel pair , the left neighbor ( ) takes the opinion of the spin and the right neighbor ( ) takes the opinion of the spin . in this first formulation of the sm two types of steady states are always reached : complete consensus ( ferromagnetic state ) or stalemate ( anti - ferromagnetic state ) , in which every site has an opinion that is different from the opinion of its neighbors . however , the transient displays a interesting behavior , as pointed by stauffer et al . . defining the model on a square lattice ,the authors in considered not a pair of neighbors , but a plaquette with four neighbors .considering that each plaquette with all spins parallel can convince all their eight neighbors ( we will call this stauffer s rule ) , a phase transition was found for an initial density of up spins .it is more realistic to associate a probability of persuasion to each site .the sm is robust with respect to this choice : if one convinces the neighbors only with some probability , and leaves them unchanged with probability , still a consensus is reached after a long time .models that consider many different opinions ( using potts spins , for example ) or defined on small - world networks were studied in order to represent better approximations of real communities behavior ( see and references therein ) . in another work , in order to avoid full consensus in the system and makes the model more realistic , schneider introduced opportunists and persons in opposition , that are unconvinced by their neighbors . in the real world , however , the dynamics of social relationships is more complex .even when such more structured topologies as small - world networks are adopted to bring the sm closer to reality , a large number of details is often neglected . in order to advance toward realism ,we recently considered a reputation mechanism .we believe that the reputation of agents who hold the same opinion is an important factor in opinion propagation across the community . 
in other words, it is realistic to believe that the individuals will change their opinions under the influence of highly respected persons .the reputation limits the agents power of persuasion , and we can expect the model in to be more realistic than the standard one .in fact , we showed that simple microscopic rules are sufficient to generate a democracy - like state , ferromagnetically ordered with only partial polarization . in this work , we revise and extend our previous results by allowing reputations to increase and decrease , depending on whether the agents are or are not persuaded .this generalization is based on the behavior of real social networks : certain persons tend to be skeptical if the persuaders have low reputation , in which case their best strategy is to keep their opinions . in this sense, including reputation makes the sm more realistic . to be thorough, we will consider two different protocols .in the first case , the agents reputations increase for each persuaded neighbor , whereas in the second case , the agents reputations rise in case of persuasion and decrease whenever the agents fail to convince their neighbors .this paper is organized as follows : in section [ model ] we present the model and define their microscopic rules .the numerical results as well as the finite - size scaling analysis are discussed in section [ results ] .finally , in section [ conclusions ] we summarize our conclusions .we have considered our model defined on a square lattice with agents and periodic boundary conditions .similar to stauffer s rule ( rule ia of ) , we choose at random a plaquette of four neighbors and if all central spins are parallel , the neighbors may change their opinions .the difference in our model is that the neighbor s spins will be flipped depending on the plaquette reputation .an integer number ( ) labels each player and represents its reputation across the community , in analogy to the naming game model considered by brigatti .the reputation is introduced as a score for each player and is time dependent .the agents start with a random distribution of values , and during the time evolution , the reputation of each agent changes according to its capacity of persuasion .we will consider in this work that the initial values of the agents reputation follow a gaussian distribution centered at with standard deviation .we have considered two different situations in this work : in the first case , the reputations increase following the model s rules , whereas in the second case the reputations may increase and decrease .one time step in our model is defined by the following microscopic rules : + _ case 1 _ 1 .we randomly choose a 2 2 plaquette of four neighbors ; 2 .if not all four center spins are parallel , leaves its eight neighbors unchanged .3 . on the other hand ,if the four center spins are fully polarized , we calculate the average reputation of the plaquette , where each term represents the reputation of one of plaquettes agent .we compare the reputations of each of the eight neighbors of the plaquette with the average reputation .if the reputation of a neighbor is less than the average , this neighbor follow the plaquette orientation . on the other hand , if the neighbor s reputation exceeds , no action is taken .5 . 
for each persuasion, the reputation of the plaquette agents is incremented by 1 , so that the average plaquette reputation is increased by 1 ._ case 2 _ in this case , steps 1 - 4 are as described above .step 5 , by contrast , is changed to the following rule : * for each persuasion , the reputation of the plaquette agents is incremented by 1 . on the other hand , for each failure , the reputations within the plaquette are decremented by 1 .thus , even in the case of fully polarized plaquettes , different numbers of agents may be convinced , namely 8 , 7 , 6 , . . ., 1 , or 0 .as pointed by stauffer in , we can imagine that each agent in the sznajd model carries an opinion that can either be up ( e.g. , republican ) or down ( e.g. , democrat ) , which represents one of two possible opinions on any question .the objective of the agents in the game is to convince their neighbors .one can expect that , if a certain group of agents convince many others , their persuasive power grows .on the other hand , the persuasive powers may drop if the agents fail to convince other individuals .the inclusion of reputation in our model captures this feature of the real world ., initial densities of up spins and and different samples ( a ) .we can see that the steady states show situations where the total consensus is not obtained , in opposition of the standard sznajd model defined on the square lattice . in figure ( b ) we show the results for and . in these cases the system reaches consensus in all samples.,title="fig:",scaledwidth=30.0% ] + ,initial densities of up spins and and different samples ( a ) .we can see that the steady states show situations where the total consensus is not obtained , in opposition of the standard sznajd model defined on the square lattice . in figure( b ) we show the results for and . in these cases the system reaches consensus in all samples.,title="fig:",scaledwidth=30.0% ] in the simulations , we considered .following the previous works on the sm , we can start studying the time evolution of the magnetization per site , where is the total number of agents and . in the standard sm defined on the square lattice , the application of the stauffer s rule , where a plaquette with all spins parellelconvince its eigth neighbors , with initial density of up spins leads the system to the fixed points with all up or all down spins with equal probability . for ( ) the system goes to a ferromagnetic state with all spins down ( up ) in all samples , which characterizes a phase transition at in the limit of large .as pointed by the authors in , fixed points with all spins parallel describe the published opinion in a dictatorship , which is not a commom situation nowadays. however , ferromagnetism with not all spins parallel corresponds to a democracy , which is very commom in our world .we show in fig .[ fig1 ] the behavior of the magnetization as a function of the simulation time in our model , for case 1 . in fig .[ fig1 ] ( a ) , we show a value of ( ) , and one can see that the total consensus with all spins up ( down ) will not be achieved in any sample . 
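A minimal sketch of one microscopic update with the rules above: a random 2×2 plaquette is chosen; if its four spins are parallel, each of its eight perimeter neighbors with reputation below the plaquette average adopts the plaquette opinion; reputations are then updated according to case 1 or case 2. Two details are my own reading and may differ from the original implementation: the plaquette average is kept fixed during a single update (it is not recomputed after each persuasion), and the center and width of the initial Gaussian reputation distribution are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 53                                   # linear lattice size (illustrative)
p_up = 0.72                              # initial density of up spins (illustrative)
spins = np.where(rng.random((L, L)) < p_up, 1, -1)
rep = rng.normal(5.0, 1.0, (L, L))       # initial reputations; center/sigma are placeholders

def step(spins, rep, case):
    """One microscopic update (rules 1-5 above); case = 1 or 2."""
    i, j = rng.integers(L, size=2)
    plaq = [(i, j), ((i + 1) % L, j), (i, (j + 1) % L), ((i + 1) % L, (j + 1) % L)]
    s = [spins[a, b] for a, b in plaq]
    if len(set(s)) != 1:                 # rule 2: plaquette not fully polarized
        return
    r_bar = np.mean([rep[a, b] for a, b in plaq])   # rule 3: average reputation
    # the eight perimeter sites of the 2x2 plaquette (periodic boundaries)
    nbrs = [((i - 1) % L, j), ((i - 1) % L, (j + 1) % L),
            ((i + 2) % L, j), ((i + 2) % L, (j + 1) % L),
            (i, (j - 1) % L), ((i + 1) % L, (j - 1) % L),
            (i, (j + 2) % L), ((i + 1) % L, (j + 2) % L)]
    for a, b in nbrs:
        if rep[a, b] < r_bar:            # rule 4: lower reputation -> convinced
            spins[a, b] = s[0]
            for c, d in plaq:            # rule 5, case 1: reward each persuasion
                rep[c, d] += 1
        elif case == 2:                  # case 2: penalise each failure
            for c, d in plaq:
                rep[c, d] -= 1

# one Monte Carlo sweep = L*L microscopic updates; magnetisation per site m:
for _ in range(L * L):
    step(spins, rep, case=1)
print("m =", spins.mean())
```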
on the other hand , in fig .[ fig1 ] ( b ) we show situations where the consensus is obtained with all up ( for ) and all down spins ( for ) .these results indicate that ( i ) a democracy - like situation is possible in the model without the consideration of a mixing of different rules , or some kind of special agents , like contrarians and opportunists , and ( ii ) if a phase transition also occurs in our case , the transition point will be located somewhere at . and , obtained from samples , with agents initial reputations following a gaussian distribution with different standard deviations ( a ) .the distribution is compatible with a log - normal one for all values of , which corresponds to the observed parabola in the log - log plot .it is also shown the average relaxation time , over samples , versus latice size in the log - log scale ( b ) .the straight line has slope 5/2 .the result is robust with respect to the choice of different values.,title="fig:",scaledwidth=30.0% ] + and , obtained from samples , with agents initial reputations following a gaussian distribution with different standard deviations ( a ) .the distribution is compatible with a log - normal one for all values of , which corresponds to the observed parabola in the log - log plot .it is also shown the average relaxation time , over samples , versus latice size in the log - log scale ( b ) .the straight line has slope 5/2 .the result is robust with respect to the choice of different values.,title="fig:",scaledwidth=30.0% ] we have also studied the relaxation times of the model , i.e. , the time needed to find all the agents at the end having the same opinion .the distribution of the number of sweeps through the lattice , averaged over samples , needed to reach the fixed point is shown in fig .[ fig2 ] ( a ) .we can see that the relaxation time distribution is compatible with a log - normal one for all values of the standard deviation , which corresponds to a parabola in the log - log plot of fig .[ fig2 ] ( a ) .the same behavior was observed in other studies of the sm . in fig .[ fig2 ] ( b ) we show the average relaxation time [ also over samples , considering the relaxation times of fig . [ fig2 ] ( a ) ] versus latice size in the log - log scale .we can verify a power - law relation between these quantities in the form , for large and all values of the standard deviation , which indicates that this result is robust with respect to the choice of different values .power - law relations between and were also found in a previous work on the sm . of samples ( case 1 ) which show all spins up when the initial density of up spins is varied in the range , for some lattice sizes ( a ) .the total number of samples are ( for and ) , ( for and ) and ( for ) .it is also shown the corresponding scaling plot of ( b ) .the best collapse of data was obtained for , and .,title="fig:",scaledwidth=30.0% ] + of samples ( case 1 ) which show all spins up when the initial density of up spins is varied in the range , for some lattice sizes ( a ) .the total number of samples are ( for and ) , ( for and ) and ( for ) .it is also shown the corresponding scaling plot of ( b ) .the best collapse of data was obtained for , and .,title="fig:",scaledwidth=30.0% ] we can now analyze the phase transition of the model . for this purpose , we have simulated the system for different lattice sizes and we have measured the fraction of samples which show all spins up when the initial density of up spins is varied in the range . 
in other words ,this quantity give us the probability that the population reaches consensus , for a given value of .we have considered samples for and , samples for and and samples for .the results are shown in fig .[ fig3 ] ( a ) .one can see that the transition point is located somewhere in the region , as above discussed . in order to locate the critical point, we performed a finite - size scaling ( fss ) analysis , based on the standard fss equations , where is a constant and is a scaling function . the result is shown in fig .[ fig3 ] ( b ) , and we have found that of samples ( case 1 ) which show all spins up when the initial density is varied in the range , for , samples and some different values of . this result show that the increase of do not change the behavior of .,scaledwidth=30.0% ] in the limit of large l. in addition , we have obtained and .the critical point occurs at , different of the sm without reputation defined on the square lattice .this fact may be easily understood : at each time step , the randomly choosen 2 plaquette may convince 8 , 7 , 6 , ... , 1 or 0 neighbors , even if the plaquettes spins are parallel . in the standard model , if the plaquettes spins orientations are the same , 8 neighbors are convinced immediately , thus it is necessary a smaller initial density of up spins to the system reaches the fixed point with all spins up .thus , the usual phase transition of the sm also occurs in our model , in case 1 , and this transition is robust with respect to the choice of different values of ( see fig .[ fig4 ] ) . , initial densities of up spins and and different samples ( a ) .we can see differences between these steady states and those of case 1 , but we also have democracy - like situations . in figure ( b ) we show the results for and .observe in the inset that even for large values of densities like the system reaches consensus only in some samples ( analogously for ) .the dotted line in the inset is ( full consensus).,title="fig:",scaledwidth=30.0% ] + , initial densities of up spins and and different samples ( a ) .we can see differences between these steady states and those of case 1 , but we also have democracy - like situations . in figure ( b ) we show the results for and .observe in the inset that even for large values of densities like the system reaches consensus only in some samples ( analogously for ) .the dotted line in the inset is ( full consensus).,title="fig:",scaledwidth=30.0% ] as discussed in section [ model ] , in this second case the agent s reputations may increase and decrease , which defines a competition of reputations in the game .the evolution of the magnetization per site is shown in fig .[ fig5 ] . in the case of intermediary densities the system reaches steady states with , i.e. 
, we have democracy - like situations .however , due to the competition of reputations , that increase and decrease depending on the average reputation of the plaquettes during the time evolution , the system reaches steady states with different magnetizations .another consequence of the competition appears in the case of large and very small initial densities : even for the cases and the system reaches consensus only in some realizations of the dynamics .this fact can be observed in the inset of fig .[ fig5 ] ( b ) : the dotted line is , and we observe that just one of the three realizations reaches consensus .thus , for case 2 , the consensus is very hard to be obtained .nonetheless , the emergence of democratic steady states is favoured in this second case , in comparison with case 1 , which makes the model more realistic in this sense . and , obtained from samples , with agents initial reputations following a gaussian distribution with different standard deviations ( a ) .the distribution is compatible with a log - normal one for all values of , which corresponds to the observed parabola in the log - log plot .it is also shown the average relaxation time , over samples , versus latice size in the log - log scale ( b ) .the power - law behavior for large is , for all values of .,title="fig:",scaledwidth=30.0% ] + and , obtained from samples , with agents initial reputations following a gaussian distribution with different standard deviations ( a ) .the distribution is compatible with a log - normal one for all values of , which corresponds to the observed parabola in the log - log plot .it is also shown the average relaxation time , over samples , versus latice size in the log - log scale ( b ) .the power - law behavior for large is , for all values of .,title="fig:",scaledwidth=30.0% ] of samples ( case 2 ) which show all spins up when the initial density of up spins is varied in the range , for some lattice sizes ( a ) .the total number of samples are ( for and ) , ( for and ) and ( for ) .it is also shown the corresponding scaling plot of ( b ) .the best collapse of data was obtained for , and . to minimize the finite - size effects ,we have excluded the smaller size of the collapse ( see the inset).,title="fig:",scaledwidth=30.0% ] + of samples ( case 2 ) which show all spins up when the initial density of up spins is varied in the range , for some lattice sizes ( a ) .the total number of samples are ( for and ) , ( for and ) and ( for ) .it is also shown the corresponding scaling plot of ( b ) .the best collapse of data was obtained for , and . to minimize the finite - size effects, we have excluded the smaller size of the collapse ( see the inset).,title="fig:",scaledwidth=30.0% ] we have also studied the relaxation times for case 2 . the distribution of the number of sweeps through the lattice , averaged over samples , needed to reach the fixed point is shown in fig . [ fig6 ] ( a ) for some values of the standard deviation .we can see that , as in case 1 , the relaxation time distribution is compatible with a log - normal one for all values of .however , due to competition of reputations , the relaxation times of case 2 are greater than the corresponding relaxation times of case 1 . in fig .[ fig6 ] ( b ) we show the average relaxation time [ also over samples , considering the relaxation times of fig . [ fig6 ] ( a ) ] versus latice size in the log - log scale . in this case , we verify the power - law behavior for large and all values of the standard deviation . 
in other words ,the competition of reputations increases the relaxation times of the system , as above discussed , and this effect becomes stronger when we increase the number of agents of the system ( or the lattice size ) , which implies in a power - law exponent for greater than the exponent for the case 1 .following the approach of the last subsection ( case 1 ) , we have simulated the system for different lattice sizes and we have measured the fraction of samples which show all spins up when the initial density of up spins is varied in the range . we have considered the same number of samples of the last subsection , and the results are shown in fig .[ fig7 ] ( a ) .one can see that the transition point is located somewhere in the region , i.e. , the critical density in case 2 is greater than in case 1 , as expected due to the competition of reputations .we determined the critical point for this case using the above eqs .( [ fss1 ] ) and ( [ fss2 ] ) .the best collapse of data is shown in fig .[ fig7 ] ( b ) , obtained with , and .in other words , the case 2 presents a different critical density and different critical exponents , in comparison with case 1 .however , the usual phase transition of the sm also occurs in case 2 , and this transition is robust with respect to the choice of different values of ( see fig .[ fig8 ] ) . observe that , in order to minimize the finite - size effects , we excluded the smaller size for the fss process [ see the inset of fig .[ fig7 ] ( b ) ] .in fact , we can observe in fig .[ fig7 ] ( a ) that , for , the curve of the quantity presents a inflection point , which not appears in the other sizes .thus , this inflection point in the curve for is a pronounced finite - size effect of the model considered in case 2 . of samples ( case 2 ) which show all spins up when the initial density is varied in the range , for , samples and some different values of .this result show that the increase of do not change the behavior of .,scaledwidth=30.0% ]in this work we have studied a modified version of the sznajd sociophysics model . in particularwe have considered reputation , a mechanism that limits the capacity of persuasion of the agents .the reputation is introduced as a score for each player and is time dependent , varying according to the model s rules .the agents start with a random distribution of reputation values , and during the time evolution , the reputation of each agent changes according to its capacity of persuasion .we have considered in this work that the initial values of the agents reputation follow a gaussian distribution centered at with standard deviation .in addition , we have studied separately two different situations : ( i ) a case where the reputations increase due to each persuaded individual ( case 1 ) , and ( ii ) a case where the reputations increase for persuasion and decrease if a group of agents fail to convince one of its neighbors ( case 2 ) . in the first case, we observed a log - normal - like distribution of the relaxation times , i.e. 
, the time needed to find all the agents at the end having the same opinion .in addition , the average relaxation times grow with the linear dimension of the lattice in the form .the system undergoes the usual phase transition , that was identified by measurements of the fraction of samples which show all spins up when the initial density of up spins is varied .in other words , this quantity give us the probability that the population reaches consensus , for a given value of .we localized the transition point by means of a finite - size scaling analysis , and we found .this critical density is greater than , the value found by stauffer _ in the standard formulation of the sznajd model .this fact may be easily understood : at each time step , the randomly choosen 2 plaquette may convince 8 , 7 , 6 , ... , 1 or 0 neighbors , even if the plaquettes spins are parallel . in the standard case ,if the plaquettes spins orientations are the same , 8 neighbors are convinced immediately , thus it is necessary a smaller initial density of up spins to the system reaches the fixed point with all spins up .the simulations indicate that the observed phase transition is robust with respect to the choice of different values of .in the second case , the steady states with are favoured due to the competition of reputations , and even for large densities the system reaches consensus only in some samples .we also found that the relaxation times are log - normally distributed , but they are greater than the relaxation times of case 1 .the average relaxation times are greater than the corresponding values found in the first case , and they also depend on the linear lattice size in a power - law form , , but with a greater exponent in comparison with case 1 .the usual phase transition also occurs in case 2 , but the critical density was found to be .in addition , the second situation presents strong finite - size effects .the observed differences between the two cases are due to the competition of reputations that occurs in case 2 .n. crokidakis would like to thank the brazilian funding agency cnpq for the financial support .financial support from the brazilian agency capes at universidade de aveiro at portugal is also acknowledge .f. l. forgerini would like to thank the isb - universidade federal do amazonas for the support .
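The finite-size scaling analysis summarized above can be reproduced with a simple data-collapse search: rescale the consensus fraction F(p, L) according to a standard FSS ansatz and minimise the spread between the curves for different L. The scaling form and the trial parameter ranges below are generic placeholders (the fitted values are reported in the figures, not here); the input curves would come from simulations such as the sketch given earlier.

```python
import numpy as np

def collapse_quality(curves, p_c, a, b):
    """Spread of the rescaled data around a common master curve.

    curves : dict {L: (p_values, F_values)}, F(p, L) = fraction of samples
             reaching the all-up consensus.
    Assumed standard FSS ansatz:  F(p, L) = L**(-b) * f((p - p_c) * L**a)
    """
    x, y = [], []
    for L, (p, F) in curves.items():
        x.append((p - p_c) * L**a)
        y.append(F * L**b)
    x, y = np.concatenate(x), np.concatenate(y)
    order = np.argsort(x)
    x, y = x[order], y[order]
    # crude quality measure: deviation of each point from the mean of its
    # two neighbours along the rescaled axis
    y_mid = 0.5 * (y[:-2] + y[2:])
    return np.mean((y[1:-1] - y_mid)**2)

def best_collapse(curves):
    """Grid search for (p_c, a, b); ranges are illustrative placeholders."""
    grid = [(pc, a, b)
            for pc in np.linspace(0.60, 0.85, 26)
            for a in np.linspace(0.2, 1.2, 21)
            for b in np.linspace(0.0, 0.5, 11)]
    return min(grid, key=lambda g: collapse_quality(curves, *g))
```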
we propose a modification of the sznajd sociophysics model defined on the square lattice . for this purpose , we consider reputation , a mechanism limiting the agents ' persuasive power . the reputation is introduced as a time - dependent score , which can be positive or negative . this mechanism avoids dictatorship ( full consensus , all spins parallel ) for a wide range of model parameters . we consider two different situations : case 1 , in which the agents ' reputations increase for each persuaded neighbor , and case 2 , in which the agents ' reputations increase for each persuasion and decrease when a neighbor keeps his opinion . our results show that the introduction of reputation prevents full consensus even for initial densities of up spins greater than . the relaxation times follow a log - normal - like distribution in both cases , but they are larger in case 2 due to the competition among reputations . in addition , we show that the usual phase transition occurs and depends on the initial concentration of individuals with the same opinion , but the critical points in the two cases are different . _ keywords : dynamics of social systems , phase transitions , cellular automata . _
a variety of systems of interacting elements can be represented as networks .a network is a collection of nodes and links ; a link connects a pair of nodes . generally speaking, some nodes play central functions , such as binding different parts of the network together and controlling dynamics in the network . to identify important nodes in a network , various centrality measures based on different criteriahave been proposed .links of many real networks such as the world wide web ( www ) , food webs , neural networks , protein interaction networks , and many social networks are directed or asymmetrically weighted .in contrast to the case of undirected networks , a link in directed networks indicates an asymmetrical relationship between two nodes , for example , the control of the source node of a link over the target node .the direction of a link indicates the relative importance of the two nodes .central nodes in a network in this sense would be , for example , executive personnels in an organizational network and top predators in a food web .generally , more ( less ) central nodes are located at an upper level ( a lower level ) in the hierarchy of the network , where hierarchy refers to the distinction between upper and lower levels in terms of the centrality value as relevant in , for example , biological and social systems .this type of centrality measure is necessarily specialized for directed networks and includes the popularity or prestige measures for social networks , ranking systems for webpages such as the pagerank and hits , adaptations of the pagerank to citation networks of academic papers and journals , and ranking systems of sports teams .we call them ranking - type centrality measures . under practical restrictions such as overwhelming network size or incomplete information about the network , it is often difficult to exactly obtain ranking - type centrality values of nodes . in such situations ,the simplest approximators are perhaps those based on the degree of nodes ( _ i.e. _ , the number of links owned by a node ) .for example , the indegree of a node can be an accurate approximator of the pagerank of websites and ranks of academic journals .however , such local approximations often fail , implying a significant effect of the global structure of networks . a ubiquitous global structure of networks that adversely affects local approximations is the modular structure . both in undirected and directed networks , nodes are often classified into modules ( also called communities ) such that the nodes are densely connected within a module and sparsely connected across different modules . in modular networks, some modules may be central in a coarse - grained network , where each module is regarded as a supernode .however , relationships between the centrality of individual nodes and that of modules are not well understood . using these relationships, we will be able to assess centralities of individual nodes only on the basis of coarse - grained information about the organization of modules or under limited computational resources . 
in this study, we analyze the ranking - type centrality measures for directed modular networks .we are concerned with the modular structure of the network in the meaning of partitioning of the network into parts , and not the overlapping community structure .we determine the centrality of modules , which reflects the hierarchical structure of the networks in the sense of subordination , not nestedness .then , we show that module membership is a chief determinant of the centrality of individual nodes .a node tends to be central when it belongs to a high - rank module and it is locally central by , for example , having a large degree . to clarify these points ,we analytically evaluate centrality in modular networks . on the basis of the matrix tree theorem ,the centrality value of a node is derived from the number of spanning trees rooted at the node .we use this relationship to develop an approximation scheme for the ranking - type centrality values of nodes in modular networks .the approximated value turns out to be a combination of local and global effects , _i.e. _ , the degree of nodes and the centrality of modules . for analytical tractability, we formulate our theory using the ranking - type centrality measure called the influence , but the results are also applicable to the pagerank . we corroborate the effectiveness of the proposed scheme using the _ caenorhabditis elegans _ neural network , an email social network , and the www .we consider a directed and weighted network of nodes denoted by . a set of nodesis denoted by , and is a set of directed links , _i.e. _ , node sends a directed link to node with weight if and only if . the weight represents the amplitude of the direct influence of node on node .we set when .depending on applications , different centrality measures can be used to rank the nodes in a network .we analyze the effect of the modular structure on ranking of nodes using a centrality measure called _ influence _ because it facilitates theoretical analysis . the existence of a one - to - one mapping from the influence to the pagerank and to variations of the pagerank used for ranking academic journals and articles , which we will explain in this section , enables us to adapt our results to the case of such ranking - type centrality measures . to show that our results are not specific to the proposed measure , we study the influence and the pagerank simultaneously .we define the influence of node , denoted by , by the solution of the following set of linear equations : where is the indegree of node , and provides the normalization . is large if ( i ) node directly affects many nodes ( _ i.e. _ , many terms probably with a large on the rhs of eq . ) , ( ii ) the nodes that receive directed links from node are influential ( _ i.e. _ , large on the rhs ) , and ( iii ) node has a small indegree .equation is the definition for strongly connected networks ; is defined to be strongly connected if there is a path , _i.e. 
_ , a sequence of directed links , from any node to any node .if is not strongly connected , there is no path from a certain node to a certain node .then , node can not influence node even indirectly , and the problem of determining the influence of nodes is decomposed into that for each strongly connected component .therefore , we assume that is strongly connected .the influence represents the importance of nodes in different types of dynamics on networks ( see appendix a for details ) .firstly , is equal to the fixation probability of a new opinion introduced at node in a voter - type interacting particle system .secondly , if all links are reversed such that a random walker visits influential nodes with high probabilities , is the stationary density of the continuous - time simple random walk .thirdly , is the so - called reproductive value used in population ecology .fourthly , is the contribution of an opinion at node to the opinion of the entire population in the consensus in the continuous - time version of the degroot model .fifthly , is equal to the amplitude of the collective response in the synchronized dynamics when an input is given to node .the influence can be mapped to the pagerank .the pagerank , denoted by for node , is defined self - consistently by where is the outdegree of node , if , and if . the second term on the rhs of eq .is present only when .note that the direction of the link in the pagerank has the meaning opposite to that in the influence ; of a webpage is incremented by an incoming link ( hyperlink ) , whereas is incremented by an outgoing link .the introduction of homogenizes and is necessary for the pagerank to be defined for directed networks that are not strongly connected , such as real web graphs .the normalization is given by . is regarded as the stationary density of the discrete - time simple random walk on the network , where is the probability of a jump to a randomly selected node .an essential difference between the two measures lies in normalization . in the influence , the total credit that node gives its neighbors is equal to , while that in the pagerank is equal to . in the pagerank , the multiplicative factor of the total credit that node gives other nodesis set to to prevent nodes with many outgoing links from biasing ranks of nodes . in the ranking of webpages ,creation of a webpage with many hyperlinks does not indicate that node gives a large amount of credit to recipients of a link .each neighbor of node receives the credit from node .we should refer to the pagerank when nodes can select the number of recipients of credit ( _ e.g. _ , the www and citation - based ranking of academic papers and journals ) .we should use the influence when the importance of all links is proportional to their weights ( _ e.g. _ , opinion formation and synchronization mentioned above ) .the pagerank is equal to the influence in a network modified from the original network ( see appendix b for derivation ) . in particular ,the pagerank in for is given by where is the influence of node for the network , which is obtained by reversing all links of .we use this relation to extend our results derived for the influence to the case of the pagerank .the influence has a nontrivial sense only in directed networks because in eq. leads to .furthermore , any network with ( ) results in .therefore , from eq . , for such a network , where is the mean degree .in this case , and are not affected by the global structure of the network . 
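The PageRank side of the mapping is straightforward to compute by power iteration, and, per the relation discussed above, the influence can then be obtained from the analogous computation on the network with all links reversed (with the normalisation appropriate to that measure). The sketch below is a generic weighted PageRank iteration, not the authors' code: the teleportation probability q, the dangling-node convention, and the toy weights are illustrative choices.

```python
import numpy as np

def pagerank(W, q=0.15, tol=1e-12, max_iter=10_000):
    """Power iteration for the PageRank of a weighted, directed network.

    W[i, j] = weight of the link i -> j.  q is the probability of a jump to
    a uniformly chosen node.  Dangling nodes (no out-links) redistribute
    their weight uniformly, a common convention.
    """
    n = W.shape[0]
    out = W.sum(axis=1)
    P = np.divide(W, out[:, None], out=np.zeros_like(W, dtype=float),
                  where=out[:, None] > 0)          # row-stochastic where possible
    dangling = (out == 0)
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = q / n + (1 - q) * (P.T @ p + p[dangling].sum() / n)
        if np.abs(new - p).sum() < tol:
            return new
        p = new
    return p

# Toy weighted network (illustrative).  Per the mapping in the text, the
# influence would be obtained from the analogous computation on W.T
# (all links reversed), with the appropriate normalisation.
W = np.array([[0, 2, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 3],
              [0, 1, 0, 0]], dtype=float)
print(pagerank(W).round(4))
```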
in directed or asymmetrically weighted networks , and are heterogeneous in general .the mean - field approximation ( ma ) is the simplest ansatz based on the local property of a node . by using , where , we obtain .combination of this and eqs .yields the ma for the pagerank : .we can calculate by enumerating spanning trees . to show this , note that eq .implies that is the left eigenvector with eigenvalue zero of the laplacian matrix defined by and ( ) , _ i.e. _ , the cofactor of is defined by where is an matrix obtained by deleting the -th row and the -th column of . because , ( ) , does not depend on . using eq . and the fact that is degenerate , we obtain therefore , is the left eigenvector of with eigenvalue zero , which yields from the matrix tree theorem , is equal to the sum of the weight of all possible directed spanning trees rooted at node .the weight of a spanning tree is equal to the product of the weight of links forming the spanning tree .most directed networks in the real world are more structured than those captured by the ma .a ubiquitous global structure of networks is modular structure .modular networks consist of several densely connected subgraphs called modules ( also called communities ) , and modules are connected to each other by relatively few links . as an example , a subnetwork of the _ c. elegans _ neural network containing 4 modules is shown in fig .[ fig : schem modular](a ) .modular structure is common in both undirected and directed networks. modular structure of directed networks often leads to hierarchical structure . by hierarchy, we refer to the situation in which modules are located at different levels in terms of the value of the ranking - type centrality .it is relatively easy to traverse from a node in an upper level to one in a lower level along directed links , but not vice versa .the hierarchical structure leads to the deviation of from the value obtained from the ma . as an example , consider the directed -partite network shown in fig .[ fig : example hie ] . layer ( ) contains nodes , where is divided by .the nodes in the same layer are connected bidirectionally with weight .each node in layer ( ) sends directed links to all nodes in layer with weight unity , and each node in layer ( ) sends directed links to all the nodes in layer with weight .the following results do not change if two adjacent layers are connected via just an asymmetrically weighted bridge , as shown in fig .[ fig : schem modular](b ) . because of the symmetry , all nodes in layer have the same influence . from eq . , we obtain when , a node in a layer with small is more influential than a node in layer with large . 
the ma yields the actual decreases exponentially throughout the hierarchy , whereas does not .we observe a similar discrepancy in the case of the pagerank .we develop an improved approximation for the influence in modular networks by combining the ma and the correction factor obtained from the global modular structure of networks .consider a network of modules ( ) .for mathematical tractability , we assume that each module communicates with the other modules via a single portal node , as illustrated in fig .[ fig : schem modular](b ) ; the network shown in fig .[ fig : schem modular](b ) is an approximation of that shown in fig .[ fig : schem modular](a ) .we denote the weight of the link by ( ) .we obtain in this modular network by enumerating spanning trees rooted at node .denote such a spanning tree by .the intersection of and is a spanning tree restricted to and rooted at node .this restricted spanning tree reaches all nodes in . enters ( ) via a directed path from node to node .this path is provided by a spanning tree in the network of modules , where each module is represented by a single node .the other nodes in are spanned by the intersection of and , which forms a spanning tree restricted to and rooted at node .therefore , is a concatenation of ( i ) an intramodular spanning tree in and rooted at node , ( ii ) intramodular spanning trees in and rooted at node ( ) , and ( iii ) a spanning tree in the network of modules rooted at node .let ( for local ) denote the number of spanning trees in with an arbitrary root , and ( for global ) denote the number of spanning trees in a network of modules with an arbitrary root .then , the number of spanning trees in rooted at node is equal to { \cal n}_g v_{m_i}^g , \label{eq : enumeration}\ ] ] where is the influence of node within and is the influence of in the network of modules .the first , second , and third factors in eq. corresponds to the numbers of spanning trees of types ( i ) , ( ii ) , and ( iii ) , respectively .therefore , we obtain for nodes , eq . yields ; the relative influence of nodes in the same module is equal to their relative influence within the module . for nodes in different modules ,i.e. _ , node in module and node in module ( ) , eq . leads to if each module is homogeneous , we approximate , and obtain ; the global structure of the network laid out by links across modules determines the influence of each node .if each module is heterogeneous in degree , we use the ma , _i.e. _ , and . by assuming that ( ) is a typical node in ( ) , we set and .then , eq . is transformed into therefore , we define an approximation scheme , called the ma - mod , for node in module as equation can be used for general modular networks in which different modules can be connected by more than one links .two crucial assumptions underlie eq . .firstly , a module is assumed to be an uncorrelated and possibly heterogeneous random network so that the ma is effective within the module .note that the degree of nodes can be heterogeneously distributed .secondly , most links are assumed to be intramodular so that the local ma is simply given by .to obtain for general networks , we define and approximate ] over the 10 neurons is equal to 3.456 ( see tab . 
[tab : cele best si ] in appendix c for the values for individual neurons ) .these neurons are located at upper levels of the neural network in the global sense .the conclusion remains qualitatively the same if we use .recall that the pagerank is calculated for because the meaning of the direction of the link in the influence is opposite to that in the pagerank .the average values of ( ) for sensory neurons , interneurons , and motor neurons are equal to 0.009235 ( 0.006621 ) , 0.003614 ( 0.005415 ) , and 0.001032 ( 0.001323 ) , respectively .the cumulative distributions of for different classes of neurons are shown in fig .[ fig : cele histogram ] . even though many synapses from motor neurons to interneurons and sensory neurons , and synapses from interneurons to sensory neurons exist , these numerical results indicate that the neural network is principally hierarchical . generally speaking , sensory neurons , which directly receive external stimuli , are located at upper levels of the hierarchy , motor neurons are located at lower levels , and interneurons are located in between .sensory neurons serve as a source of signals flowing to interneurons and motor neurons down the hierarchy .the relation between and the ma is shown in fig .[ fig : data](a ) by the squares .they appear strongly correlated . however , the pearson correlation coefficient ( pcc ; see appendix e for definition ) between and the ma is not large ( ) , as shown in tab . [tab : modular pcc ] , because tends to be larger than the ma for nodes with a large .note that the data are plotted in the log - log scale in fig .[ fig : data ] .the neural network has modular structure . to use the ma - mod scheme ( eq . ) , we apply a community detection algorithm to the neural network .we have selected this algorithm because a directed link in the present context indicates the flow rather than the connectedness on which a recent algorithm is based . as a result, we obtain modules , calculate , , from the network of the modules , and use eq . . is plotted against the ma - mod in fig .[ fig : data](a ) , indicated by circles .the data fitting has improved compared to the case of the ma , in particular for small values of .the pcc between and the ma - mod is larger than that between and the ma ( tab . [tab : modular pcc ] ) . in this example, this holds true for the raw data and the logarithmic values of the raw data . as a benchmark, we assess the performance of the global estimator $ ] ( node ) , which we call the mod .the mod ignores the variability of within the module and is exact for networks with completely homogeneous modules , such as the network shown in fig .[ fig : example hie ] .the performance of the mod is poor in the neural network , as indicated by the triangles in fig .[ fig : data](a ) and the pcc listed in tab . [ tab : modular pcc ] .the values of the pcc between the actual and approximated are also listed in tab .[ tab : modular pcc ] .the results for the pagerank are qualitatively the same as those for the influence . with both measures ,the module membership is a crucial determinant of centralities of individual nodes .note that , on the basis of the mod for the influence given by the mod for the pagerank is given by _i.e. _ , we approximate by because the information about local degree is unavailable for the mod .wasserman s and faust k 1994 _ social network analysis _( new york : cambridge university press ) newman m. e. j. 2003 _ siam rev . 
_ * 45 * 167 boccaletti s , latora v , moreno y , chavez m and hwang d - u 2006 _ phys ._ * 424 * 175 garlaschelli d , caldarelli g and pietronero l 2003 _ nature _ * 423 * 165 lagomarsino m c , jona p , bassetti b and isambert h 2007 _ proc . natl .usa _ * 104 * 5516 castellano c , fortunato s and loreto v 2009 _ rev .* 81 * 591 brin s and page l 1998 proc .world wide web conf .( brisbane , australia , 1418 april ) 107117 .berkhin p 2005 _ internet math . _* 2 * 73 kleinberg j m 1999 _ j. acm _ * 46 * 604 kleinberg j and lawrence s 2001 _ science _ * 294 * 1849 palacios - huerta i and volij o 2004 _ econometrica _ * 72 * 963 chen p , xie h , maslov s and redner s 2007 _ j. informetrics _ * 1 * 8 pinski g and narin f 1976 _ info .proc . man ._ * 12 * 297 davis p m 2008 _ j. amertech . _ * 59 * 2186 fersht a 2009 _ proc .usa _ * 106 * 6883 park j and newman m e j 2005 _ j. stat ._ p10014 fortunato s , flammini a , menczer f and vespignani a 2006 _ proc .usa _ * 103 *12684 donato d , laura l , leonardi s and millozzi s 2004 _ eur .j. b _ * 38 * 239 restrepo j g , ott e and hunt b r 2006 _ phys .lett . _ * 97 * 094102 newman m e j 2004 _ eur .j. b _ * 38 * 321 palla g , dernyi i , farkas i and vicsek t 2005 _ nature _ * 435 * , 814 palla g , barabsi l - a and vicsek t 2007 _ nature _ * 446 * 664 fortunato s 2009 _ phys . rep . _ in press .palla g , farkas i j , pollner p , dernyi i and vicsek t 2007 _ new j phys _ * 9 * 186 leicht e a and newman m e j 2008 _ phys .lett . _ * 100 * 118703 rosvall m and bergstrom c t 2008 _ proc .usa _ * 105 * 1118 everett m g and borgatti s p 1999 _ j. math ._ * 23 * 181 taylor p d 1990 _ amer ._ * 135 * 95 ravasz e , somera a l , mongru d a , oltvai z n and barabsi a - l 2002 _ science _ * 297 * 1551 guimer r and amarall a n 2005 _ nature _ * 433 * 895 sales - pardo m , guimer r , moreira a a and amaral l a n 2007 _ proc .usa _ * 104 * 15224 clauset a , moore c , newman m e j 2008 _ nature _ * 453 * 98 taylor p d 1996 _ j. math .biol . _ * 34 * 654 degroot m h 1974 _ j. am .* 69 * 118 jackson m o 2008 _ social and economic networks _( princeton : princeton university press ) olfati - saber r , fax j a and murray r m 2007 _ proc .* 95 * 215 kori h , kawamura y , nakao h , arai k and kuramoto y 2009 _ phys .e _ * 80 * 036207 biggs n 1997 _ bull .* 29 * 641 agaev r p and chebotarev p y 2000 _ autom .cont . _ * 61 * 1424 chen b l , hall d h and chklovskii d b 2006 _ proc .usa _ * 103 * 4723 http://www.wormatlas.org for neural network , ( b ) for email social network , and ( c ) for www with .the quantities placed on the horizontal axis are the ma ( _ i.e. _ , the normalized for and the normalized for ) ( red squares ) , mod ( green triangles ) , and ma - mod ( blue circles).,width=302 ]
many systems , ranging from biological and engineering systems to social systems , can be modeled as directed networks , with links representing directed interactions between two nodes . to assess the importance of a node in a directed network , various centrality measures based on different criteria have been proposed . however , calculating the centrality of a node is often difficult because of the overwhelming size of the network or incomplete information about it . thus , an approximation method for estimating centrality measures is needed . in this study , we focus on modular networks ; many real - world networks are composed of modules , where connections are dense within a module and sparse across different modules . we show that ranking - type centrality measures , including the pagerank , can be efficiently estimated once the modular structure of a network is extracted . we develop an analytical method to evaluate the centrality of nodes by combining the local property ( _ i.e. _ , the indegree and outdegree of nodes ) and the global property ( _ i.e. _ , the centrality of modules ) . the proposed method is corroborated with real data . our results provide a link between the ranking - type centrality values of modules and those of individual nodes . they also reveal the hierarchical structure of networks in the sense of subordination ( not nestedness ) laid out by the connectivity among modules of different relative importance . the present study thus provides a novel motivation for identifying modules in networks .
the concept of temporal persistence , which is closely related to first - passage statistics , has been used recently to study various non - markovian stochastic processes both theoretically and experimentally .another quantity of interest in the study of the statistics of spatially extended systems is its natural analog , the _ spatial persistence probability_. this idea has been investigated theoretically in the context of gaussian interfaces with dynamics described by linear langevin equations , where the variable undergoing stochastic evolution is the height of the interfacial sites ( is the lateral position along the interface and is the time ) .the spatial persistence probability of fluctuating interfaces , denoted by , is simply the probability that the height of a steady - state interface configuration , measured at a fixed time , _ does not _ return to its `` original '' value at the initial point within a distance measured from along the interface . in the long - time ,steady - state limit , the spatial persistence probability , which depends only on for a translationally invariant interface , has been shown to exhibit a power - law decay , .one of the interesting results reported in ref. is that the spatial persistence exponent can take two values determined by the initial conditions or selection rules imposed on the starting point : 1 ) , the `` steady state '' ( ss ) persistence exponent if is sampled uniformly from _ all _ the sites of a steady - state configuration ; and 2 ) , the so - called _ finite - initial - conditions _ ( fic ) persistence exponent if the sampling of is performed from a _ subset _ of steady - state sites where the height variable and its spatial derivatives are _ finite_. the spatial persistence probabilities obtained for these two different ways of sampling the initial point are denoted by and , respectively .the values of the exponents and for interfaces with dynamics described by a class of linear langevin equations have been determined in ref . using a mapping between the spatial statistical properties of the interface in the steady state and the temporal properties of stochastic processes described by a generalized random - walk equation .it turns out that for these systems , is equal to either for or for , where , is the spatial dimension , and is the standard dynamical exponent of the underlying langevin equation .the fic spatial persistence exponent is found to have the value , where is a temporal persistence exponent for the generalized random walk problem to which the spatial statistics of the interface is mapped .two exact results for are available in the literature : , corresponding to the classical brownian motion and , corresponding to the random acceleration problem .very recently , experimental measurements of the spatial persistence probability have been performed for a system ( combustion fronts in paper ) that is believed to belong to the kardar parisi zhang ( kpz ) universality class .however , the fic spatial persistence probability is not investigated at all in this work . instead, the authors analyze a `` transient '' spatial persistence ( i.e. , the probability is measured by sampling over all the sites of a transient interfacial profile obtained before the steady state is reached ) .this transient spatial persistence is completely different from the fic spatial persistence which is measured in the steady - state regime by sampling a special class of initial sites . 
as a consequence, additional study is required in order to understand the experimental and numerical possibilities for measuring and its associated nontrivial exponent . in this paper , we present the results of a detailed numerical study of spatial persistence in a class of one - dimensional models of fluctuating interfaces .our interest in analyzing the spatial persistence of fluctuating interfaces is motivated to a large extent by their important ( and far from completely understood ) role in the rapidly developing field of nanotechnology where the desired stability of nanodevices requires understanding and controlling thermal interfacial fluctuations . in this context , the study of first - passage statistics in general , or of the persistence probability ( both spatial and temporal ) in particular , turns out to be a very useful approach . to address this problem we consider stochastic interfaces with dynamics governed by the edwards wilkinson ( ew ) and kpz equations . for the ew equation , we consider both white noise ( uncorrelated in both space and time ) and`` colored''noise that is correlated in space but uncorrelated in time .the effect of noise in spatially distributed systems is an interesting problem by itself and has been widely studied . in this paper, we investigate the effects of noise statistics on the spatial structure of fluctuating interfaces using the conceptual tool of spatial persistence probability . using the isomorphic mapping procedure of ref . , we derive exact analytical results for the spatial persistence exponents of ew interfaces driven by power - law correlated noise .we then compare our analytical results with those obtained from numerical integrations of the corresponding stochastic equations .the use of power - law correlated noise in the ew equation allows us to explore the situation where the two spatial persistence exponents and are different . our numerical study also provide a characterization of the scaling behavior of spatial persistence probabilities as functions of the system size .information about the system - size dependence of persistence probabilities is necessary for extracting the persistence exponents from experimental and numerical data . in studies of the scaling behavior of spatial persistence probabilities, one has to consider another important length scale that always appears in practical measurements : this is the _ sampling distance _ which represents the `` nearest - neighbor''spacing of the uniform grid of spatial points where the height variable is measured at a fixed time .the sampling distance is the spatial analog of the `` sampling time '' that represents the time - interval between two successive measurements of the height at a fixed position in experimental and computational studies of temporal persistence .once the effect of a finite on the measured spatial persistence is understood , one can relate correctly the experimental and numerical results to the theoretical predictions .our study shows that the spatial persistence probabilities ( both ss and fic ) exhibit simple scaling behavior as functions of the system size and the sampling distance .in addition to the temporal persistence probability , the temporal survival probability has been shown recently to represent an alternative valuable statistical tool for investigations of first - passage properties of spatially extended systems with stochastic evolution . 
in the context of interface dynamics , the temporal survival probability is defined as the probability that the height of the interface at a fixed position does not cross its _ time - averaged _ value within a given time interval . in contrast to the power - law behavior of the temporal persistence probability ( which , we recall , measures the probability of not returning to the initial value ) , the temporal survival probability exhibits an exponential decay at long times , providing information about the underlying physical mechanisms and their associated time scales . in this study , we make the first attempt to analyze the behavior of the _ spatial survival probability _ , defined as the probability that the interface height , between an arbitrarily chosen initial position and a point a given distance away along the interface , does not reach the average height level ( rather than the original value ) . we present numerical results showing that its spatial behavior in the ss regime is neither power - law nor exponential , while in the fic regime it becomes very similar to the spatial persistence probability .

the paper is organized as follows . in sec . [ models ] , we define the models studied in this paper , review existing analytical results about their spatial persistence properties , and present new analytical expressions for the spatial persistence exponents for ew interfaces with colored noise in arbitrary spatial dimension . in sec . [ methods ] , we describe the numerical methods used in our study and discuss how the spatial persistence and survival probabilities are measured in our numerical simulations . the results obtained in our ( 1 + 1)-dimensional numerical investigations are described in detail and discussed in sec . [ sim ] , for both discrete stochastic solid - on - solid models ( sec . [ sim]a ) and the spatially discretized ew equation with colored noise ( sec . [ sim]b ) . sec . [ concl ] contains a summary of the main results and a few concluding remarks .

we have performed a detailed numerical study of the spatial persistence of ( 1 + 1)-dimensional fluctuating interfaces where the dynamics is described by the well - known ew equation or , alternatively , by the kpz equation , where the derivatives are taken with respect to the lateral coordinate and the driving term is the usual random gaussian noise , uncorrelated in space and time . the dynamical exponent for eq . ( [ ew ] ) is , and since in our study , the variable defined in sec . [ intro ] is equal to 1 . so , we expect both and for this system to be equal to . although the kpz equation is nonlinear , characterized by a different dynamical exponent , it is well known that in the long - time limit the probability distribution of the stochastic height variable in this equation is the same as that in the ew equation ( i.e. the steady - state height distribution is gaussian ) , so the same spatial persistence exponents are expected . the ss spatial persistence probability is defined as $ p_{ss}(x ) = \mathrm{prob}\lbrace \mathrm{sgn}[ h(x_{0}+x^{\prime } , t ) - h(x_{0 } , t ) ] = \,\,\hbox{constant}\,\ , ~\forall~ 0 < x^{\prime } \leq x ~ , ~\forall ~ x_{0 } \in { \cal s}_{ss}~ \rbrace , \label{probss}$ where $ \mathrm{sgn}[ \cdots ] $ represents the sign of the fluctuating quantity , and $ { \cal s}_{ss } $ is the ensemble containing all the lattice sites in a steady - state configuration . the fic spatial persistence probability is obtained in a similar manner , except that the average is performed over a particular subensemble of the steady - state configuration sites , $ { \cal s}_{fic } $ , characterized by _ finite _ values of the height variable and its spatial derivatives : $ p_{fic}(x ) = \mathrm{prob}\lbrace \mathrm{sgn}[ h(x_{0}+x^{\prime } , t ) - h(x_{0 } , t ) ] = \,\,\hbox{constant}\,\ , ~\forall~ 0 < x^{\prime } \leq x ~ , ~\forall ~ x_{0 } \in { \cal s}_{fic}~ \rbrace . \label{probfic}$
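before turning to how these probabilities are measured in practice , we note that a steady - state configuration can be generated by direct integration of the spatially discretized ew equation ; the python sketch below uses a simple forward - euler scheme with white noise ( the grid size , time step and run length are arbitrary , and this is not necessarily the integration scheme used for the production runs reported later ) .

```python
import numpy as np

def evolve_ew(L=256, dt=0.05, nu=1.0, D=1.0, n_steps=200_000, seed=1):
    """Forward-Euler integration of the (1+1)-dimensional EW equation
    dh/dt = nu * d2h/dx2 + eta, with periodic boundaries and lattice spacing 1."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L)
    amp = np.sqrt(2.0 * D * dt)        # white-noise amplitude per site and step
    for _ in range(n_steps):
        lap = np.roll(h, -1) - 2.0 * h + np.roll(h, 1)
        h += dt * nu * lap + amp * rng.standard_normal(L)
        h -= h.mean()                  # remove the zero mode (center-of-mass drift)
    return h

h_ss = evolve_ew()
print("saturated interface width:", np.sqrt(np.mean(h_ss**2)))
```

the kpz case differs only by an additional nonlinear term proportional to the square of the local slope in the update ; for the discrete models used below , the steady state is instead reached through the stochastic deposition rules of the family and kim - kosterlitz models .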
since the persistence probabilities are averaged over the choice of the initial point , we omit writing the initial point explicitly in the arguments of the persistence probabilities from now on , while stressing the important fact that the ensemble of initial sites used in the averaging process determines which one of the two persistence probabilities is obtained . we consider two different methods for measuring the fic probability , depending on the type of model ( atomistic solid - on - solid model or spatially discretized langevin equation ) being studied . in the former case , where the height variables are integers , the fic spatial persistence probability measurement involves a sampling procedure from the subset of sites characterized by a fixed integer value of the height ( measured from the average , $ \bar{h}(t ) $ , of the heights of all the sites at time $ t $ ) which is substantially smaller than the typical value of the height fluctuations measured by the saturation width of the interface profile . in calculations using the direct numerical integration technique , the height variable can take any real value , so the probability of finding a fixed value of the stochastic height variable is infinitesimally small . for this reason , fixing a reference level and sampling only over the sites whose height equals it exactly is useless . we , therefore , consider in this case a continuous interval of height values ( symmetric with respect to the average height ) whose width is considerably smaller than the amplitude of the height fluctuations . the positions characterized by a height variable within this interval represent the subensemble of lattice positions involved in the sampling procedure necessary for measuring the fic probabilities . the spatial survival probabilities corresponding to the ss and fic conditions are calculated similarly to the corresponding persistence probabilities , except that the stochastic variable under consideration becomes the height measured relative to the spatial average , $ h(x , t ) - \bar{h}(t ) $ . thus , $ s_{ss}(x ) = \mathrm{prob}\lbrace \mathrm{sgn}[ h(x_{0}+x^{\prime } , t ) - \bar{h}(t ) ] = \,\,\hbox{constant}\,\ , ~\forall~ 0 \le x^{\prime } \leq x ~ , ~\forall ~ x_{0 } \in { \cal s}_{ss}~\rbrace , \label{surv_ss}$ and $ s_{fic}(x ) = \mathrm{prob}\lbrace \mathrm{sgn}[ h(x_{0}+x^{\prime } , t ) - \bar{h}(t ) ] = \,\,\hbox{constant}\,\ , ~\forall~ 0 \le x^{\prime } \leq x ~ , ~\forall ~ x_{0 } \in { \cal s}_{fic}~\rbrace . \label{surv_fic}$

in the solid - on - solid family and kim - kosterlitz models , the interface configuration is characterized by a set of integer height variables corresponding to the lattice sites , with periodic boundary conditions . since all the measurements of the spatial persistence and survival probabilities are done in the steady - state regime ( i.e. in the regime where the interfacial roughness has reached a time - independent saturation value ) , we used relatively small systems with in order to be able to achieve the steady state within reasonable simulation times . the resulting steady - state interfacial profile , corresponding to a final time , is used to compute the spatial persistence and survival probabilities . the calculation of the ss probabilities is relatively simple : it involves measuring the fraction of initial lattice positions ( all possible choices of the initial point are allowed ) for which the interface height has not returned to the height of the initial point ( for the persistence probability ) or to the average height level ( for the survival probability ) over a distance , averaged over many independent realizations ( ) of the steady - state configuration . measurements of the fic persistence or survival probabilities involve , in addition to these steps , a preliminary selection of a subensemble of lattice sites which are characterized by a fixed and small value of the height measured relative to the spatial average .
only the sites that belong to this subensemble ( i.e. only the sites with ) are used as initial points in the fic measurements .the steady state spatial persistence probability , , for ( 1 + 1)dimensional ew interfaces with white noise , obtained using the discrete family model .panel ( a ) : double - log plots of vs for a fixed sampling distance , using three different values of , as indicated in the legend .panel ( b ) : double - log plots of vs for a fixed system size , , and three different values of , as indicated in the legend.,width=604,height=226 ] two distinct length scales have to be taken into consideration in the interpretation of the numerical results for the spatial persistence probability : the size of the sample used in the simulation , and the sampling distance which denotes the spacing between two successive points where the height variables are measured in the calculation of the persistence probability .the minimum value of is obviously one lattice spacing , but one can use a larger integral value of in the calculation of persistence and survival probabilities .for example , a calculation of the persistence probability with would correspond to checking the heights of only the sites with index , where is the index of the initial site and . while the importance of in the measurement of is obvious ( it sets the maximum distance for which can be meaningfully measured ) ,the effect of is rather intricate and has to be carefully investigated . in fig .[ fig1]a we start to analyze these effects by looking at for ew - type interfaces . we note that when is measured in systems with different sizes , using the smallest possible value for ( i.e. ) , the exponent associated with the power - law decay of the persistence probability does not change , but there is an abrupt downward departure from a power - law behavior near .it is not difficult to understand this behavior qualitatively : as discussed earlier , measurements of spatial correlations and persistence probabilities in a finite system of size with periodic boundary conditions are meaningful only for distances smaller than . in fig .[ fig1]b , we have shown the results for when remains fixed and is varied . since the the persistence probability is , by definition , equal to unity for ( see eq .( [ probss ] ) ) , we have plotted as a function of in this figure to ensure that the plots for different values of coincide for small values of the -coordinate . the plots for different are found to splay away from each other at large values of , with the plots for larger exhibiting more pronounced downward bending .again , the reason for this behavior is qualitatively clear : since a double - log plot of vs begins to deviate substantially from linearity as approaches ( see fig .[ fig1]a ) , the downward bending of the plots in fig .[ fig1]b ( which are all for a fixed value of ) occurs at a smaller value of for larger .a more detailed scaling analysis of the dependence of the persistence probabilities on and is described below .the spatial persistence probabilities , and , and the spatial survival probability , , obtained from simulations of the family model in ( 1 + 1 ) dimensions . 
in panels( a ) and ( b ) we show the data for and , while in panels ( c ) and ( d ) we display the data for .panel ( a ) : and for , .the dashed line represents the best fit of the data to a power - law form .panel ( b ) : finite - size scaling of .three probability curves are obtained for three different sample sizes with the same value for the ratio .panel ( c ) : scaling of for the same values of and as in panel ( b ) . is calculated by sampling over lattice sites with .panel ( d ) : scaling of for three different sample sizes with the same value for the ratio , sampling over two subsets of lattice sites with the same value of ( ) : ( upper plot ) and ( lower plot ) . ,width=604,height=415 ] the spatial persistence probabilities , and , for the ( 1 + 1)-dimensional kim - kosterlitz model which is in the kpz universality class . as in fig .[ fig2 ] , in panels ( a ) and ( b ) we show the data for .panels ( c ) and ( d ) display the data for .panel ( a ) : for , .panel ( b ) : finite - size scaling of .three probability curves are obtained for three different sample sizes with the same value for the ratio .panel ( c ) : scaling of , obtained by sampling over the lattice sites with , for three different values ( same as those in panel ( b ) ) of and .panel ( d ) : scaling of for three different sample sizes with the same value for the ratio , sampling over two subsets of lattice sites with the same value of ( ) : ( upper plot ) and ( lower plot ) ., width=604,height=415 ] in fig .[ fig2 ] we show the results for spatial persistence and survival probabilities for the discrete family model .it is obvious from the plots that the spatial persistence probabilities ( panel ( a ) ) and ( panel ( c ) ) exhibit power - law decays over an extended range of values .the abrupt decay to zero near is due , as discussed above , to finite size effects .the spatial persistence exponents are extracted from the power - law fits shown in the log - log plots as dashed straight lines .we find that , in good agreement with the expected value .however , it is clear that the steady state survival probability , shown in fig .[ fig2]a , does not exhibit a power - law behavior .this is similar to the qualitative behavior of the _ temporal _ survival probability in the steady state of the family model .we now return to the dependence of the persistence probabilities on the sample size and the sampling distance . since and are the only two length scales in the problem ( the lattice parameter serves as the unit of length ) , it is reasonable to expect that the persistence probabilities would be functions of the ( dimensionless ) scaling variables and .if this is true , then plots of vs. for different sample sizes should show a scaling collapse if the ratio is kept constant . a similar scaling behavior of the temporal survival probability as functions of and the sampling time ( in that case, the scaling variables are and ) was found in ref .as indicated in panels(b - d ) of fig .[ fig2 ] , we have used various values for the sampling distance in the measurement of and .we observe that when the sampling distance is increased in proportion to the system size ( so that is held fixed ) , all the curves collapse when plotted vs. ( see panel ( b ) ) .this confirms that the scaling form of the steady state persistence probability is : where the function shows a power - law decay with exponent as a function of for small values of and .let us turn our attention to . 
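( the collapse implied by eq . ( [ f1 ] ) amounts to a simple rescaling of the measured curves ; a minimal python sketch is given below , with hypothetical variable names , and it applies equally to the fic data discussed next . )

```python
import numpy as np

def rescale_for_collapse(prob_curves, sampling_ratio):
    """Rescale measured persistence curves for a finite-size scaling collapse.

    prob_curves: dict mapping system size L -> array of P(x) measured at
                 x = a, 2a, ..., with sampling distance a = sampling_ratio * L.
    Returns a dict L -> (x / L, P); according to eq. (f1) the rescaled curves
    obtained at fixed a / L should fall on a single master curve.
    """
    collapsed = {}
    for L, P in prob_curves.items():
        a = sampling_ratio * L
        x = a * np.arange(1, len(P) + 1)
        collapsed[L] = (x / L, np.asarray(P, dtype=float))
    return collapsed
```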
in the data shown in panel ( c ) of fig .[ fig2 ] , we have chosen the subensemble of sampling positions to contain only the lattice sites whose height is equal to the average value ( i.e. ) . obviously , in this case the definitions for persistence and survival probabilities become identical , since the probability that the height variable does not return to the original value ( i.e. ) is precisely the probability that the height variable does not reach the average level .we find that using a system with and and considering the subensemble of sites with .we note that a remarkable collapse of vs. curves for different values of is again obtained when is adjusted to be proportional to the system size , as shown in panel ( c ) .more interestingly , we observe that fixing the level to a nonzero value introduces a `` height '' scale in the problem that is related to the steady - state value of the interface width .since this width is proportional to , where is the roughness exponent , we expect the dependence of on for nonzero values of to be described by the scaling variable .we observe that if the level is chosen to be proportional to , then the calculated values of for different sample sizes , obtained using values of such that the ratio is also held constant , exhibit a perfect scaling collapse , as shown in panel ( d ) of fig .this observation leads us to the conclusion that the scaling form of the fic persistence probability with nonzero values of the level is : where exhibits a power - law behavior with exponent as a function of for small if and . as the value of increased , the range of values over which the power - law behavior is obtained decreases and a more rapid decay of the probability is noticed .the predictions concerning the scaling behavior of the spatial persistence probabilities are confirmed by the results for the atomistic kim kosterlitz model . the same discussion for fig .[ fig2 ] applies to fig .[ fig3 ] where we have shown the results for the kim - kosterlitz model .we find that ( see fig . [ fig3]a ) , in good agreement with the expected value of 1/2 , and also that , using a rather small simulation with and and sampling over the subensemble of sites with height at the average level ( see fig . [ fig3]c ) . as shown in fig .[ fig3]b , the ss persistence probability obeys the scaling form of eq .( [ f1 ] ) . in fig .[ fig3]d , we display the results for the measured for systems with different sizes and sampling distances such that remains constant and considering two different subsets of sampling sites , each subset being characterized by a fixed value of .these results are in perfect agreement with the scaling form of eq .( [ f2 ] ) .equations ( [ f1 ] ) and ( [ f2 ] ) provide a complete scaling description of the ss and fic persistence probabilities for ( 1 + 1)dimensional fluctuating interfaces belonging to two different universality classes ( i.e. ew and kpz ) , modeled using discrete solid - on - solid models .the associated spatial persistence exponents and are in good agreement with the theoretical values .however , these studies do not illustrate the interesting possibility of a dependence of the persistence exponent on the sampling procedure used in the selection of the initial sites used in the calculation of the persistence probability : the two persistence exponents and have the same value for ( 1 + 1)dimensional ew and kpz interfaces .we present and discuss below the results for a model where these two exponents have different values . 
in order to measure the spatial persistence and survival probabilities in this system ,we have applied the steps described above on systems of sizes , using 100400 independent realizations for averages . while the calculation of and involves the same method as the one used in the case of the solid - on - solid models , for measuring and we have selected the subensemble of lattice sites whose heights at time satisfy the condition , where is the spatial average of the height at time .the width of the sampling window has to be chosen to be much smaller than the amplitude of the interface fluctuations , but large enough to include a relatively large fraction of the total number of sites in order to ensure adequate statistics . under these circumstanceswe have computed the fraction of these selected sites which do not reach the `` original '' height ( in the case of persistence probability ) or the average height level ( in the case of survival probability ) up to a distance from the point .the numerical results for these probabilities , along with a finite - size scaling analysis of their behavior , are shown in figs .[ fig4 ] and [ fig5 ] . spatial persistence and survival probabilities for the ew equation with spatially correlated noise .panel a ) : and using a fixed system size , two values of the noise correlation parameter ( and ) and sampling distance .panel b ) : and ( inset ) , using the same parameters as in panel a ) , and sampling initial sites from a band of width centered at the average height .the straight lines drawn through the data points in these double - log plots represent power - law fits ., width=623,height=226 ] we find that both ss and fic spatial persistence probabilities for ( 1 + 1)dimensional interfaces described by the ew equation with colored noise exhibit the expected power - law behavior as a function of , as shown in fig .[ fig4 ] , while the ss survival probability shows a more complex -dependence ( see fig .[ fig4]a ) .further work is needed in order to understand the behavior of .when a relatively small system with size is used , the numerical results for the spatial persistence exponents extracted from the power - law fits shown in fig .[ fig4 ] ( for , we obtain and , while for , the exponent values are found to be and ) appear to be affected by finite - size effects . specifically , the values of extracted from fits to the numerical data are systematically larger than the theoretically expected values , for and 0.3 for ( see eq .( [ expn1 ] ) ) .similar deviations from the analytical results are also found for the usual dynamical scaling exponents and .we have checked that simulations of larger samples bring the measured values of the exponents closer to the expected values , but the convergence is rather slow .these finite - size effects become more pronounced as the noise correlation parameter is increased . 
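( for completeness , we sketch one standard way of producing spatially power - law correlated gaussian noise for such integrations , namely fourier filtering of white noise ; the exponent rho below is a stand - in for the noise correlation parameter , and this generic recipe is not necessarily identical to the generation scheme used in our runs . )

```python
import numpy as np

def correlated_noise(L, rho, rng):
    """Gaussian noise with spatial power-law correlations, obtained by shaping
    the Fourier spectrum of white noise as S(k) ~ |k|**(-2*rho)."""
    white_k = np.fft.rfft(rng.standard_normal(L))
    k = 2.0 * np.pi * np.fft.rfftfreq(L)
    shape = np.ones_like(k)
    shape[1:] = k[1:] ** (-rho)        # leave the k = 0 mode untouched
    eta = np.fft.irfft(white_k * shape, n=L)
    return eta / eta.std()             # normalize the variance to unity

rng = np.random.default_rng(2)
eta = correlated_noise(1024, rho=0.2, rng=rng)
print(eta[:5])
```

in the ew integration sketched earlier , such spatially correlated ( but temporally uncorrelated ) noise simply replaces the white - noise term at every time step .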
in fig .[ fig4 ] we show the results for and , but we have verified from simulations with larger values of that the difference between the expected and measured values of increases as is increased .this is expected because the spatial correlation of the noise falls off more slowly with distance as is increased , thereby making finite - size effects more pronounced .another possible source of the discrepancy between the numerical and exact results for the exponent is the spatial discretization used in the numerical work .the effects of using a finite discretization scale on the observed scaling behavior of continuum growth equations in the steady state have been studied in ref . where it was found that the effective value of the roughness exponent obtained from calculations of the local width using a finite is smaller than its actual value .since , the values of obtained from our calculations with are expected to be larger than their exact values .our results are consistent with this expectation . as shown in the inset of fig .[ fig4]b , the fic survival probability behaves similarly to for both and 0.2 , exhibiting a power - law decay with an exponent ( of and for and 0.2 , respectively ) that is very close to .this is consistent with the expectation that the fic persistence and survival probabilities should become identical as the width parameter used in the selection of initial sites approaches zero ( in this limit , both persistence and survival probabilities measure the probability of not crossing the average height ) .finally , we point out that both ss and fic exponents obtained from the numerical study exhibit the correct trend , increasing in magnitude as decreases . also , the measured fic spatial persistence exponents satisfy the constraint .our numerical results also confirm the interesting theoretical prediction that the ss and fic spatial persistence exponents are different for the ew equation with spatially correlated noise .finite - size scaling of the persistence probabilities , and , and the fic survival probability for the ew equation with spatially correlated noise .the noise correlation parameter is and the sampling interval takes three different values .panel a ) : the ss persistence probability for three different sample sizes with a constant ratio .panel b ) : the fic persistence probability with fixed values of the quantities ( ) and ( ) , where inset : same as in the main figure , but for the fic survival probability .,width=623,height=226 ] we have found that the scaling forms of eqs .( [ f1 ] ) and ( [ f2 ] ) also provide a correct description of the numerically obtained persistence and survival probabilities for the ew equation with spatially correlated noise .this is illustrated in fig .[ fig5 ] . in fig .[ fig5]a , we show that the results for obtained for different values of and fall on the same scaling curve when plotted against if the ratio is held fixed .this is precisely the behavior predicted by eq .( [ f1 ] ) . 
as shown in fig .[ fig5]b , the data for also exhibit good finite - size scaling collapse if is varied in proportion to and the width of the sampling band is increased in proportion to .this is in perfect analogy with the scaling behavior of the fic persistence probability for the discrete stochastic models discussed in sec .[ a ] , with the variable playing the role of in eq .( [ f2 ] ) .this suggests that the scaling behavior of the fic persistence probability in the continuum ew equation is of the form where the function has the same characteristics as in eq .( [ f2 ] ) .a similar scaling description also applies to , as shown in the inset of fig .. this scaling description should be useful in the analysis of experimental data on equilibrium step fluctuations because the images obtained in experiments provide the values of a real `` height '' variable ( position of a step - edge ) at discrete intervals of a finite sampling distance .in this study , we have analyzed the spatial first - passage statistics of fluctuating interfaces using the concepts of spatial persistence and survival probabilities .specifically , we have presented the results of detailed numerical measurements of the ss and fic spatial persistence probabilities for several models of interface fluctuations .results for the spatial survival probabilities are also reported .these results confirm that the concepts of persistence and survival are useful in analyzing the spatial structure of fluctuating interfaces .the exponents associated with the power - law decay of the spatial persistence probabilities as a function of distance are valuable indicators of the universality class of the stochastic processes that describe the dynamics of surface fluctuations .our results for these exponents for ( 1 + 1)-dimensional interfaces in the ew and kpz universality classes are in good agreement with the corresponding analytic predictions .we have also obtained analytic results for the spatial persistence exponents in the ( 1 + 1)-dimensional ew equation with spatially correlated noise , and reported the results of a numerical calculation of the persistence and survival probabilities in this system .while the numerical results show strong finite - size effects , the qualitative trends predicted by the analytic treatment are confirmed in the numerical work .in particular , the numerical results show evidence for an interesting theoretically predicted difference between the persistence exponents obtained for two different ways of sampling the initial points used in the measurement of the spatial persistence probability .we also find that the steady - state survival probability has a complex spatial behavior that requires further investigations . in the past, there has been some confusion in the literature about the distinction between the persistence and survival probabilities .our study shows that these two quantities are very different in the ss situation , whereas the distinction between them essentially disappears in the fic situation .the numerical results reported here are for models that exhibit `` normal '' scaling behavior with the same local and global scaling properties of interface fluctuations .there are other models of interface growth and fluctuations that exhibit `` anomalous '' scaling , for which the global and local scaling properties are different . 
in such models ,the `` global '' roughness exponent that describes the dependence of the interface width in the steady state on the sample size ( for ) is different from the `` local '' exponent that describes the -dependence of the height - difference correlation function ^ 2 \rangle ^{1/2}$ ] in the steady state ( ) for small ( for ) . the exponent is greater than unity ( the steady - state interface is `` super - rough '' ) in such cases , whereas the local exponent is always less than or equal to unity .it is interesting to enquire about the behavior of the spatial persistence probabilities in such models .the numerical results reported in the preceding sections show that the steady - state persistence probability exhibits a power - law decay in only for values of that are much smaller than the sample size .since the roughness of the steady - state interface of super - rough models at length scales much smaller than is described by the local exponent , we expect the steady - state spatial persistence probability in such models to exhibit a power - law decay with exponent for .for example , the one - dimensional mullins - herring model is super - rough with and . for this model, the above argument suggests that the steady - state spatial persistence exponent is equal to 0 , which agrees with the exact result reported in ref .an important feature of our investigation is the development of a scaling description of the effects of a finite system size and a finite sampling distance on the measured persistence probabilities .we have also shown that the dependence of the fic persistence and survival probabilities on the reference level ( in atomistic models ) or the width of the band ( in continuum models ) used in the selection of the subset of sampling sites is described by a scaling form .these scaling descriptions would be useful in the analysis of experimental and numerical data on fluctuations in spatially extended stochastic systems .some of the numerical results reported here ( such as the behavior of the ss survival probability and the forms of the scaling functions that describe the dependence of the persistence probabilities on the parameters , and or ) should be amenable to analytic treatment , especially for the ew equation with white noise , whose spatial properties can be mapped to the temporal properties of the well - known random walk problem .further work along these lines would be very interesting . the spatial persistence and survival probabilities considered here should be measurable in imaging experiments on step fluctuations . such experimental investigations would be most welcome .* acknowledgments * this work is partially supported by the us - onr , the lps , and the nsf - dmr - mrsec at the university of maryland .the authors would like to thank satya n. majumdar for several useful discussions .m.c . acknowledges useful discussions with e.d .williams and d.b .dougherty .99 for a review on temporal persistence , see s. n. majumdar , curr .sci . * 77 * , 370 ( 1999 ) .s. n. majumdar , c. sire , a. j. bray , and s. j. cornell , phys .lett . * 77 * , 2867 ( 1996 ) ; b. derrida , v. hakim , and r. zeitak , _ ibid ._ * 77 * , 2871 ( 1996 ) .j. krug , h. kallabis , s. n. majumdar , s. j. cornell , a. j. bray , and c. sire , phys . rev .e * 56 * , 2702 ( 1997 ) ; h. kallabis and j. krug , europhys .* 45 * , 20 ( 1999 ) .m. marcos - martin , d. beysens , j. p. bouchaud , c. godreche , and i. yekutieli , physica a * 214 * , 396 ( 1995 ) ; w. y. tam , r. zeitak , k. y. 
szeto , and j. stavans , phys .* 78 * , 1588 ( 1997 ) ; b. yurke , a. n. pargellis , s. n. majumdar , and c. sire , phys . rev .e * 56 * , r40 ( 1997 ) ; g. p. wong , r. w. mair , r. l. walsworth , and d. g. cory , phys .rev . lett .* 86 * , 4156 ( 2001 ) .d. b. dougherty , i. lyubinetsky , e. d. williams , m. constantin , c. dasgupta , and s. das sarma , phys .lett * 89 * , 136102 ( 2002 ) .d. b. dougherty , o. bondarchuk , m. degawa , and e. d. williams , surf .sci . * 527 * , l213 ( 2003 ) .j. merikoski , j. maunuksela , m. myllys , and j. timonen , phys .* 90 * , 024501 ( 2003 ) .s. n. majumdar and a. j. bray , phys .lett . * 86 * , 3700 ( 2001 ) .w. feller , _ introduction to probability theory and its applications _ , 3rd ed .vol-1 ( wiley , new york , 1968 ) .t. w. burkhardt , j. phys .a * 26 * , l1157 ( 1993 ) ; y.g .sinai , theor .90 * , 219 ( 1992 ) .m. kardar , g. parisi , and y .- c .zhang , phys .* 56 * , 889 ( 1986 ). s. f. edwards and d. r. wilkinson , proc .london , ser.a * 381 * , 17 ( 1982 ) .j. garcia - ojalvo and j. m. sancho , _ noise in spatially extended systems _ springer , berlin , 1999 .s. n. majumdar , a. j. bray , and g. c. m. a. ehrhardt , phys . rev.e* 64 * , 015101 ( r ) ( 2001 ) . c. dasgupta , m. constantin , s. das sarma , and s. n. majumdar , phys .e * 69 * , 022101 ( 2004 ) .n . pang , y .- k .yu , and t. halpin - healy , phys .e. * 52 * , 889 ( 1995 ) .f. family , j. phys .a : gen . * 19 * , l441 ( 1986 ) .kim and j.m .kosterlitz , phys .* 62 * , 2289 ( 1989 ) w. h. press et al . , _ numerical recipes _( cambridge university press , cambridge , 1989 ) . j. buceta , j. pastor , m. a. rubio , and f. j. de la rubia , phys .e * 61 * , 6015 ( 2000 ) .s. das sarma , s. v. ghaisas , and j. m. kim , phys .e * 49 * , 122 ( 1994 ) .w. w. mullins , j. appl .* 28 * , 333 ( 1957 ) ; c. herring , _ ibid ._ * 21 * , 301 ( 1950 ) .
we report the results of numerical investigations of the steady - state ( ss ) and finite - initial - conditions ( fic ) spatial persistence and survival probabilities for ( 1 + 1)dimensional interfaces with dynamics governed by the nonlinear kardar parisi zhang ( kpz ) equation and the linear edwards wilkinson ( ew ) equation with both white ( uncorrelated ) and colored ( spatially correlated ) noise . we study the effects of a finite sampling distance on the measured spatial persistence probability and show that both ss and fic persistence probabilities exhibit simple scaling behavior as a function of the system size and the sampling distance . analytical expressions for the exponents associated with the power - law decay of ss and fic spatial persistence probabilities of the ew equation with power - law correlated noise are established and numerically verified .
when calculating the various ground state properties of fermionic systems , it is important to have fast and accurate ways of evaluating the density matrix . for non - interacting fermions ,this amounts to calculating the fermi function associated with the system s hamiltonian . in many applications, is either of empirical nature , or the result of a self - consistent calculation .the standard method for computing the density matrix requires diagonalizing , an operation whose computational complexity scales cubically with the number of electronic degrees of freedom . having a linear scaling scheme to obtain this quantity is a key step for modeling larger systems , thus making possible the computational study of a vast class of problems ,whose behavior can not be described by smaller models .areas in which such a technique would have a major impact include nanotechnology and biochemistry , to name but a couple .several methods have been proposed to circumvent diagonalization .these methods are based on the nearsightedness principle , which guarantees that in the limit the matrices needed to compute the fermi operator will become sparse . among the different approaches that have been proposed, we might cite divide - and - conquer schemes , density - matrix minimization , green s function , maximally localized orbitals and penalty functions methods .the use of sparse matrix algebra eventually leads to linear scaling .a second class of methods , on which we shall focus here , uses the finite - temperature fermi operator . due to the finite temperature ,the singularity at the chemical potential is smoothed , thus allowing for an expansion in simpler functions of . since orbital localization is not explicitly exploited, this class of methods can also be applied to metals .the earliest attempts in this direction were based on an expansion in chebyshev polynomials .the computational cost of this method has been analyzed by baer and head - gordon , who found that the order of the polynomial needed to achieve a accuracy depends linearly on the width of the hamiltonian spectrum and the electronic temperature , i.e. .this obviously raises some problems when considering hamiltonians with large , such as those arising from dft calculations using plane wave basis sets , or when low temperatures are required .recently it has been suggested that fast polynomial summation methods , requiring a number of multiplications , can be applied to fermi operator expansion , leading to the more favorable scaling . in this paperwe revisit a particular form for the expansion of the fermi operator , which is based on the grand - canonical formalism and developed in a series of recent papers .the grand - canonical potential for independent fermions is split into a sum of terms , containing . as a consequence of this decomposition, the fermi operator can be written exactly as a sum of terms .the larger the number of terms , the easier the evaluation of the exponential : this implies a tradeoff between the size of and the accuracy of the results . in this paper, we investigate the analytical properties of this decomposition , finding that a large number of terms are almost ideally conditioned , and that their contribution to the fermi operator can be easily and effectively computed in a single shot with a polynomial expansion .the remaining few are tackled via a newton - like iterative inversion scheme , which needs to be applied to each term individually but is very efficient in dealing with large . 
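( for reference , the plain chebyshev fermi - operator expansion mentioned above can be sketched in a few lines ; the toy hamiltonian , inverse temperature and expansion order below are arbitrary , and this is only the generic polynomial scheme , not the decomposition developed in this paper . )

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fermi_chebyshev(H, beta, mu, order):
    """Approximate f(H) = [1 + exp(beta*(H - mu))]^(-1) by a Chebyshev
    expansion of degree `order` over the spectral range of H."""
    evals = np.linalg.eigvalsh(H)            # in a large-scale code the bounds
    emin, emax = evals[0], evals[-1]         # would be estimated (e.g. Gershgorin)
    a = 2.0 / (emax - emin)
    b = -(emax + emin) / (emax - emin)
    X = a * H + b * np.eye(len(H))           # spectrum mapped onto [-1, 1]
    x = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    f = 0.5 * (1.0 - np.tanh(0.5 * beta * ((x - b) / a - mu)))   # Fermi function
    coef = C.chebfit(x, f, order)
    # direct evaluation through the recurrence T_{k+1} = 2 X T_k - T_{k-1}
    T_prev, T_curr = np.eye(len(H)), X.copy()
    F = coef[0] * T_prev + coef[1] * T_curr
    for c in coef[2:]:
        T_prev, T_curr = T_curr, 2.0 * X @ T_curr - T_prev
        F += c * T_curr
    return F

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 50))
H = 0.5 * (A + A.T)                          # toy symmetric "hamiltonian"
rho = fermi_chebyshev(H, beta=4.0, mu=0.0, order=300)
w, v = np.linalg.eigh(H)
rho_exact = (v * (0.5 * (1.0 - np.tanh(0.5 * 4.0 * w)))) @ v.T
print("max deviation from exact Fermi operator:", np.abs(rho - rho_exact).max())
```

the number of matrix - matrix multiplications equals the expansion order , which is what produces the unfavourable growth with the spectral width and the inverse temperature noted above .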
with this hybrid approach, large values of can be reached at a cost that is modest and independent of the system size .this result can improve significantly the prefactor of other methods using similar decompositions .moreover , using this approach , we achieve a scaling of the operations count with that is sublinear , and competitive with the result of ref. if their fast summation technique is used . in this way , accurate , low - temperature calculations can be performed .we use an expansion of the fermi operator based on grand - canonical formalism , which has been developed and employed in several recent works .we summarize the derivation and the resulting expression here , introducing a slightly different notation . to simplify the expressions, we will set the zero of energy at , and measure energies in units of .this amounts to replacing in the standard expression for the fermi operator with . using this notation , the grand - canonical potential for a system of non - interacting fermions becomes introducing the matrices , we can perform the decomposition these expressions are analogous to those introduced in ref. , apart from a change of indices ( , ) . using factorization ( [ eq : mq - factor ] ) ,the grand - canonical potential can be written in compact form as .the observables of interest for the system can be obtained as derivatives of the grand - canonical potential . in particular , the grand - canonical density matrix reads : the decomposition ( [ eq : density - matrix ] ) is exact for any value of . as increases ,the exponential is easier to approximate .however , the number of which have to be inverted increases .previous works using this approach had to find the best compromise between the length of the expansion and the errors introduced by an approximate evaluation of the matrix exponential , therefore losing the advantage of an exact expansion . in order to find a solution to this problem , it is useful to analyze the properties of the in the large limit .it turns out that matrices with small are much more difficult to handle than those having a higher index .we therefore suggest applying different strategies in the two cases .let us define the spectral radius of a matrix as the maximum modulus of its eigenvalues , , and its condition number .we then introduce the shorthands , which is a measure of the width of the hamiltonian s spectrum , and , which is of the order of the band gap in insulators , and tends to zero for metals . with this notation ,the condition number of the hamiltonian is . in this section, we will obtain the corresponding quantities for the .in particular , we will show that does not depend on in the large limit , and demonstrate that the are always better conditioned than the hamiltonian .( color online ) plot of , which is equal to ( equation ( [ eq : mq - function ] ) ) within .the dashed line corresponds to the locus of local minima . ]we must consider how the spectrum of the hamiltonian is mapped by the function it is readily found that , for any and , is a monotonically decreasing function of . for fixed , andthe minimum value is , which is reached for . from the plot of ( figure [ fig : mq - function ] ) , it is apparent that the region which can lead to ill - conditioned matrices is the one with and , where the spectrum of can contain eigenvalues close to zero . 
in this region , an upper bound to the maximum eigenvalue is given by , and an estimate of the minimum eigenvalue within is .the following set of results can easily be proved by series expansion in powers of , assuming and it can be seen from eq .( [ eq : cn - emmeq ] ) that the condition number tends rapidly to one as is increased , and is always smaller than ( see also figure [ fig : mq - condition ] ) .note that the last inequality in eq .( [ eq : cn - emmeq ] ) , valid for , shows that is bounded also in the metallic case .( color online ) condition number of , in the limit , for a typical value of .dark ( blue ) and light ( red ) series correspond to the behavior for a metal and for an insulator ( ) . even for a metallic systemthe condition number remains finite , and for the insulator it saturates at . in both cases, drops rapidly to one as increases . ]the analysis performed above suggests dealing separately with the few , worst - conditioned matrices having , and with those which have , for .the latter will form the `` tail '' contribution to the density matrix , and will be discussed first . in order to obtain a convergent power series for , it is convenient to perform an expansion around the diagonal matrix , where is an arbitrary complex number whose value will be chosen so as to accelerate convergence . defining the shorthand , one has ^{-1}\nonumber\\ & = & \left(1-z'\right)^{-1}\sum_{j=0}^{\infty } \left(\frac{{\ensuremath{{\bf z}}}_{{l}}-z'{{\ensuremath{{\bf 1}}}}}{1-z'}\right)^j.\label{eq : series - yqk}\end{aligned}\ ] ] the condition for convergence of ( [ eq : series - yqk ] ) is that the whole spectrum of lies within the unit circle in the complex plane .moreover , the convergence speed of the expansion will be determined by the eigenvalue which lies farthest from the origin ( see figure [ fig : y - spectrum ] ) .we refer to appendix [ sec : opt - k ] for a detailed analysis of the convergence ratio where we have set , defining , and introducing the and complex - valued parameter .there we show that , in the large limit , one obtains an upper bound to the convergence ratio , i.e. , provided one chooses for the optimal the analytical estimate having ensured that the series ( [ eq : series - yqk ] ) converges , we can estimate the error made by truncating the power series after terms , in order to achieve a _ relative _ accuracy on , it is necessary to retain at least \ ] ] terms .if we use eq .( [ eq : k - series - guess ] ) and eq . 
( [ eq : sr - emmeq - inv ] ) , setting , and taking the large limit , this estimate takes the simpler form while the scaling with is not optimal , the dependence on limits its effects to the small- terms .these terms can be dealt with effectively with a different approach , as we will show below .the influence of the scaling on the overall operations count will therefore be limited .thanks to the chosen parametrization the matrix powers entering eq .( [ eq : series - yqk ] ) depend on only by a scalar factor , therefore , we can compute the expensive powers just once , and obtain any by combining them with the appropriate scalar coefficients .furthermore , we often need just the overall contribution to the density matrix arising from the tail , which reads if either or is very large , computing the scalar coefficients in ( [ eq : series - tail ] ) implies a sizable overhead , which is however independent of the system size , and becomes negligible for large systems .( color online ) the number of terms required to achieve relative accuracy in the polynomial expansion of is plotted for a hamiltonian with minimum eigenvalue , maximum eigenvalue , and for .a full line corresponds to results computed keeping fixed to the value , while dots correspond to the results computed by optimizing separately for each value of .dark ( blue ) and light ( red ) series correspond respectively to the results based on the analytical estimate ( [ eq : k - series - guess ] ) for , and to the ones obtained by iteratively minimizing ( [ eq : d - series ] ) .iterative refinement leads to a significant boost in performance . in any case, the number of terms computed for largely exceeds the terms needed to compute the contributions for larger values , even if is not optimized on a case - by - case basis . ] in order to assess the accuracy of eq .( [ eq : series - tail ] ) , further analysis is needed .if we want to reuse the powers , we must keep fixed to the value optimized at .expression ( [ eq : series - count ] ) gives the number of terms required to compute with accuracy , _ provided that is optimized for each . however , the dependence of on offsets the effect of using a non - optimal .it is easy to show , given the estimate ( [ eq : k - series - guess ] ) , that the number of terms computed for largely exceeds the number of terms required to compute for any , even if is kept fixed to the valued optimized for .figure [ fig : series - k ] shows that this is the case also when is iteratively optimized starting from the analytical estimate . to address the inversion of the worst - conditioned terms with , which are too expensive to obtain by polynomial expansion, one could resort to one of the techniques described in our previous work .in fact , the analysis performed so far can be seen as an improvement to those methods , since we can evaluate in one shot the contribution from the tail , lowering the number of terms which must be treated individually , and therefore improving the efficiency . in this sectionwe will discuss an alternative approach for computing the small- , based on a well - established newton method for matrix inversion .we give a brief outline of the algorithm and some of its known analytical properties , and will use them to estimate the number of operations necessary for our purposes . given a non - singular , matrix , the iterative procedure converges to . 
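a standard realization of such an iteration is the newton - schulz update , in which the current approximate inverse is multiplied by a simple polynomial of the residual ; whether or not this is precisely the variant adopted here , the python sketch below illustrates the quadratic convergence and the role played by the initial guess ( the test matrix and the generic starting guess are arbitrary illustrative choices ) .

```python
import numpy as np

def newton_inverse(A, X0, tol=1e-12, max_iter=100):
    """Newton-Schulz iteration X <- X (2 I - A X); it converges quadratically
    to A^{-1} whenever the spectral radius of the residual I - A X0 is < 1."""
    I = np.eye(A.shape[0], dtype=A.dtype)
    X = X0.copy()
    for it in range(max_iter):
        R = I - A @ X                              # residual
        if np.linalg.norm(R, ord=2) < tol:
            return X, it
        X = X @ (I + R)                            # same as X (2 I - A X)
    return X, max_iter

rng = np.random.default_rng(4)
A = np.eye(100) + 0.05 * rng.standard_normal((100, 100))
# generic starting guess X0 = A^T / (||A||_1 ||A||_inf), which guarantees
# convergence for any nonsingular A but may need many iterations
X0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X, iters = newton_inverse(A, X0)
print(iters, "iterations, residual:", np.abs(A @ X - np.eye(100)).max())
```

in the application discussed here , a far better starting point than this generic guess is available , because the inverse of the neighbouring term in the expansion is already known .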
defining , the condition for convergence is that , and the error after iterations is which corresponds to a number of multiplies ( two per iteration ) }{\ln \chi } \label{eq : newt - niter}\end{aligned}\ ] ] needed to achieve a relative accuracy .one must then face the problem of finding the approximate inverse needed to start the iterations ( [ eq : nwt - iter ] ) .the authors of ref. suggested the simple form where and .if one uses eq .( [ eq : nwt - gen - guess ] ) , convergence is guaranteed .taking as usual the large and limit for a metallic system , one obtains as an estimate of the operations count to invert .even if a feeble -dependence has been introduced in the operation count , the efficiency is greatly improved if one needs high accuracy or if is large , thanks to the exponential convergence rate .it is however more effective to exploit the simple analytic form for to construct better initial guesses .for instance , one can use the following relation between and , ^{-1 } \nonumber\\ & = & e^{{\mathrm{i}}\pi\delta { l}/p } \sum_{j=0}^{\infty}\left(e^{{\mathrm{i}}\pi\delta { l}/p}-1\right)^j { { \ensuremath{{\bf \bf m}}}_{{l}}}^{-(j+1 ) } \label{eq : newt - extra}\end{aligned}\ ] ] to estimate a guess for starting from an already - computed inverse .the series ( [ eq : newt - extra ] ) converges provided that . in the limitthis amounts to the condition . in theory, all the terms up to could be computed inserting any into eq .( [ eq : newt - extra ] ) . in practice ,computing powers of is not advisable if we aim at linear scaling , since the and their powers tend to be much fuller than the hamiltonian , and the asymptotic convergence rate of eq .( [ eq : newt - extra ] ) is worse than the one for the iterative inversion . in any case , the lowest - order approximation is already much more effective than the universal guess described in ref .one finds that the convergence ratio for the computation of , using the low - order extrapolation , is , leading to an estimate of the number of the operations count this estimate is independent of because we considered the worst - case scenario where the system is metallic .it is also independent of and - most importantly - of . in practice ,one starts from obtained from the polynomial expansion , then computes , using as the initial guess , and continues stepwise , obtaining the initial estimate for iterative inversion of from the previously computed , and so on .alternatively , the first inverse matrix can be computed starting from the simple guess ( [ eq : nwt - gen - guess ] ) .efficient higher - order extrapolations will be discussed in appendix [ sec : ho - extra ] .( color online ) total number of matrix - matrix multiplications required to obtain the density matrix , combining series expansion and newton inversion methods , on a - plot .light ( red ) and dark ( blue ) lines correspond to and target accuracy respectively .full ( a ) , dashed ( b ) and dotted ( c ) lines correspond respectively to the number of operations estimated using the general - purpose initial estimate , using a zero-order extrapolation guess and using extrapolation together with fast polynomial evaluation in the tail region .grid lines mark the slope expected for a linear dependence between in units of and the overall operations count . 
] in the previous section we obtained ( equations ( [ eq : series - count ] ) and ( [ eq : newt - niter ] ) ) an upper bound estimate of the number of matrix - matrix multiplications needed in order to obtain the tail contribution up to , and to invert a single using an iterative newton method .the optimal value for is obtained when the incremental cost of including an extra term in the tail contribution ( cfr .[ eq : series - tail ] ) becomes larger than the cost of a single iterative inversion , i.e. when the overall number of multiplications is then in figure [ fig : tot - mult ] we plot the overall operations count obtained by using our theoretical estimates for and .a dramatic improvement is obtained when we use as the initial guess for the inversion of .we can think of the extrapolated guess as an almost optimal preconditioner and are considering how this could be exploited in different inversion schemes as well .it is worth noting that - despite the fact that the tail contribution requires a number of multiplies scaling quadratically with - the overall scaling is significantly sublinear .fast polynomial summation methods can be used to compute both and .this reduces the number of multiplies from to , however at the cost of storing an extra matrices . combining these fast summation techniques with iterative inversionfurther lowers the operations count , leading to a scaling slightly better than ( figure [ fig : tot - mult]c ) .so far we have estimated the accuracy of the computation of each term using as a measure of the error affecting the estimate . however , the quantity we are more interested in is the band structure energy $ ] .a theoretical estimation of the error on requires several assumptions on the distribution of errors over the different eigenvalues of the hamiltonian , and the different terms , and we have not attempted it here .we have instead tested our method against a real system , selecting the self - consistent dft hamiltonian matrix of a 128-atom sample of the metallic fcc phase of , as computed by the cp2k package basis with one additional set of polarization functions , for a total of 1728 basis functions . in the fcc phase is a metal .since we are computing the hamiltonian at the point only , the spectrum has six half - occupied degenerate states at the zero - temperature fermi energy . in the low - temperature limit , which makes this system particularly challenging . ] .the orthogonal hamiltonian matrix is obtained by multiplying the non - orthogonal one with the inverse square root of the overlap matrix .we the computed with standard diagonalization techniques the chemical potential and the exact band - structure energy for different electronic temperatures .we also obtained the bounds of the spectrum of ( ev and ev ) , which are needed in eq .( [ eq : d - series ] ) and could in principle be computed in linear scaling with the lanczos method , or easily estimated by gershgorin s circle theorem or any matrix norm .( color online ) number of matrix - matrix multiplications used to achieve a given error on the band structure energy , for different electronic temperatures .details of the system are given in the text .the data points for every temperature , from left to right , correspond to , , , and target accuracy . 
]we then applied our algorithm to the orthogonalized hamiltonian , using fast polynomial summation to compute the tail and using first - order extrapolation in the newton region , with a history vector containing the last two matrices ( cfr .( [ eq : extra - first - order ] ) ) .slight improvements in the operations count could be obtained by hand - tuning , but we just used the automatic procedure based on our theoretical estimates , as described in the previous section . in figure[ fig : test - oc - accu ] we plot the number of multiplications performed versus the resulting error on the energy .since we can use a large value of , can be computed with only a few matrix - matrix multiplies , which have not been included in the operations count .( color online ) the number of matrix - matrix multiplies performed to compute the density matrix for the test case , for different electronic temperatures and target accuracies , plotted on a scale , together with guidelines corresponding to a scaling . ] for a given target accuracy , the operations count scales better than ( figure [ fig : test - opcount ] ) .we also observe that the accuracy of the energy is much better than the relative accuracy guaranteed by the theoretical estimates .consider for example that , by requiring a relative `` spectral radius accuracy '' better than ( first data points in figure [ fig : test - oc - accu ] ) we obtain a relative error on the energy of the order of ( the total energy is kev ) .this is mainly due to the fact that the error in the energy is second order with respect to the error in the density matrix .however , we observe that also the error in the full density matrix , computed as the spectral radius of the difference with the result obtained with diagonalization , is in general almost one order of magnitude smaller than the required accuracy .this result is probably due to a combination of effects : firstly , we use worst - case estimates , so that the accuracy of the individual terms is necessarily higher than the assumed one .moreover , the errors affecting different terms might partially cancel each other out , and many of the contributions in the newton region are computed with an accuracy much higher than requested , due to the exponential convergence .the accuracy improves very quickly as the number of operations increases until , for errors around mev / atom , numerical issues come into play and prevent further refinement , which is anyway hardly necessary for most applications .we have performed a detailed study of a recently - proposed form for fermi operator expansion .the properties of this expansion allow features of the expansion in polynomial and rational functions to be combined , and by optimizing the mixture we can have the best of both worlds . in this way, we circumvent the tradeoff between the number of terms and the accuracy of the expansion , which was needed by prior implementations of this expansion of the fermi operator .moreover , sub - linear scaling of the matrix - matrix multiplications count with respect to the hamiltonian range is achieved , making the method particularly attractive for low - temperature and high - accuracy applications .however , there is still room for improvement . in particular, work is in progress in the direction of a better polynomial expansion in the tail region .we are also considering applying the method to molecular dynamics . in this caseone could use the stored from the previous step as a starting point for iterative minimization . 
in this way , the computation of the different -channels can be made independent , adding a layer of parallelism on top of the parallel matrix - matrix multiply .formal analogies between our expansion and trotter factorization entering path integral techniques suggest that some of the ideas presented here might be useful to tackle that problem as well . in order to achieve linear scaling ,attention should be paid to the issue of matrix truncation , since here we have dealt only with matrix - matrix operations counts .preliminary results show that in this respect there are no significant differences from standard expansion methods , as the minimum sparsity of the terms taken into account is basically the same as the sparsity of the whole density matrix , which is dictated by the physics of the system .the detailed analysis we have performed in this work has allowed us to obtain significant improvements over the previous applications of this decomposition of the fermi operator , and lays solid foundations for further progress .the generous allocation of computer time by the swiss national supercomputing center ( cscs ) and technical assistance from neil stringfellow is kindly acknowledged .we would also like to thank giovanni bussi and paolo elvati for fruitful discussion .we show how the value of in eq .( [ eq : chi - series ] ) can be optimized to obtain faster convergence of the polynomial expansion .expressions involved are quite lengthy , so we introduce several shorthands .let be the bounds of the hamiltonian spectrum .we parametrize as , define and .the square modulus of the extrema of the transformed hamiltonian spectrum ( see figure [ fig : y - spectrum ] ) is and the convergence ratio is .one can obtain an analytical estimate for , and an upper bound for , by taking the limit , and making the simplifying assumption .this implies and leads to the estimate ( [ eq : k - series - guess ] ) , which can be further improved by minimizing numerically ( [ eq : d - series ] ) with respect to and .one can derive expressions for high - order extrapolation of inverse matrices from equation ( [ eq : newt - extra ] ) , writing them as a linear combination of already - computed inverses .we will sketch the procedure by deriving the expression for the first - order extrapolation of , using only and , which is then easily extended to higher orders .let .one can write the first - order extrapolations for the new inverse and for the already - computed one , as a function of powers of : this linear system can be solved for and , obtaining for higher orders one simply inserts into the system more constraints , corresponding to `` older '' inverse matrices , and writes the extrapolation including higher powers of .the system is then solved in terms of these powers , eventually finding the coefficients for the estimate of the new inverse as a linear combination of the older ones .
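The Newton-region inversions and the extrapolated initial guesses discussed above and in the appendix can be pictured with a short sketch. The recursion below is the standard Newton–Schulz form of the Newton iteration for a matrix inverse; the simple 2·Z_{l-1} − Z_{l-2} guess is only a stand-in for the optimized linear combination derived from eq. ([eq:newt-extra]), whose actual coefficients depend on the spacing of the terms and are not reproduced here.

```python
import numpy as np

def newton_schulz_inverse(M, X0, tol=1e-12, max_iter=100):
    """Newton iteration for M^{-1}: X_{k+1} = X_k (2I - M X_k).

    Converges quadratically provided the spectral radius of (I - M X0) is
    below one, which a good extrapolated guess is meant to guarantee."""
    I = np.eye(M.shape[0])
    X = X0
    for k in range(max_iter):
        R = I - M @ X                      # residual of the current inverse estimate
        if np.linalg.norm(R, ord=2) < tol:
            return X, k
        X = X @ (I + R)                    # same as X (2I - M X)
    return X, max_iter

def extrapolated_guess(history):
    # First-order extrapolation from the two most recent inverses; a stand-in
    # for the appendix's coefficients, valid when the matrices vary smoothly.
    return 2.0 * history[-1] - history[-2]

# Usage on a symmetric positive definite example, with a generic fallback guess.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
X0 = np.eye(2) / np.linalg.norm(M, ord=2)  # safe when no history is available (SPD case)
Minv, iters = newton_schulz_inverse(M, X0)
print(iters, np.allclose(Minv @ M, np.eye(2)))
```

A good seed leaves only the final quadratically convergent steps, which is consistent with the dramatic reduction in iteration count reported above.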
We present a method to compute the Fermi function of the Hamiltonian for a system of independent fermions, based on an exact decomposition of the grand-canonical potential. This scheme does not rely on the localization of the orbitals and is insensitive to ill-conditioned Hamiltonians. It lends itself naturally to linear scaling, as soon as the sparsity of the system's density matrix is exploited. By using a combination of polynomial expansion and Newton-like iterative techniques, an arbitrarily large number of terms can be employed in the expansion, overcoming some of the difficulties encountered in previous papers. Moreover, this hybrid approach allows us to obtain a very favorable scaling of the computational cost with increasing inverse temperature, which makes the method competitive with other Fermi operator expansion techniques. After performing an in-depth theoretical analysis of computational cost and accuracy, we test our approach on the DFT Hamiltonian for the metallic phase of the alloy.
the problem of computing roots of univariate polynomials has a long mathematical history . recently , some new investigations focused on subdivision methods , where root localization is based on simple tests such as _ descartes rule of signs _ and its variant in the bernstein basis .complexity analysis was developed for univariate integer polynomial taking into account the bitsize of the coefficients , and providing a good understanding of their behavior from a theoretical and practical point of view .approximation and bounding techniques have been developed to improve the local speed of convergence to the roots .even more recently a new attention has been given to continued fraction algorithms ( cf ) , see e.g. and references therein .they differ from previous subdivision - based algorithms in that instead of bisecting a given initial interval and thus producing a binary expansion of the real roots , they compute continued fraction expansions of these roots .the algorithm relies heavily on computations of lower bounds of the positive real roots , and different ways of computing such bounds lead to different variants of the algorithm .the best known worst - case complexity of cf is , while its average complexity is , thus being the only complexity result that matches , even in the average the complexity bounds of numerical algorithms .moreover , the algorithm seems to be the most efficient in practice .subdivision methods for the approximation of isolated roots of multivariate systems are also investigated but their analysis is much less advanced . in ,the authors used tensor product representation in bernstein basis and domain reduction techniques based on the convex hull property to speed up the convergence and reduce the number of subdivisions . in , the emphasis is put on the subdivision process , and stopping criterion based on the normal cone to the surface patch . in , this approach has been improved by introducing pre - conditioning and univariate - solver steps . the complexity of the method is also analyzed in terms of intrinsic differential invariants .this work is in the spirit of .the novelty of our approach is the presentation of a tensor - monomial basis algorithm that generalizes the univariate continued fraction algorithm and does not assume generic position .we apply a subdivision approach also exploiting certain properties of the bernstein polynomial representation , even though no basis conversion takes place .our contributions are as follows .we propose a new adaptive algorithm for polynomial system real solving that acts in monomial basis , and exploits the continued fraction expansion of ( the coordinates of ) the real roots .this yields the best rational approximation of the real roots .all computations are performed with integers , thus this is a division - free algorithm .we propose a first step towards the generalization of vincent s theorem to the multivariate case ( th .[ vincentxyz ] ) we perform a ( bit ) complexity analysis of the algorithm , when oracles for lower bounds and counting the real roots are available ( prop . [ prop : mcf - complexity ] ) and we propose non - trivial improvements for reducing the total complexity even more ( sec .[ sec : complexity - improvements ] ) . in all casesthe bounds that we derive for the multivariate case , match the best known ones for the univariate case , if we restrict ourselves to . 
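Since the algorithm delivers (the coordinates of) the real roots through their continued fraction expansions, a concrete reminder of that expansion and its convergents may help. The sketch below extracts partial quotients with the Gauss map and assembles the convergents p_k/q_k from the usual three-term recurrences; floating point is used only for brevity, whereas the solver itself works with exact integer arithmetic.

```python
from fractions import Fraction
from math import floor

def partial_quotients(x, n):
    """First n partial quotients of a positive real x (Gauss map, float version)."""
    cs = []
    for _ in range(n):
        c = floor(x)
        cs.append(c)
        frac = x - c
        if frac == 0:
            break
        x = 1.0 / frac
    return cs

def convergents(cs):
    """Convergents p_k/q_k from p_k = c_k p_{k-1} + p_{k-2}, q_k = c_k q_{k-1} + q_{k-2}."""
    p_prev, p = 1, cs[0]
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    for c in cs[1:]:
        p, p_prev = c * p + p_prev, p
        q, q_prev = c * q + q_prev, q
        out.append(Fraction(p, q))
    return out

cs = partial_quotients(2 ** 0.5, 8)     # sqrt(2) = [1; 2, 2, 2, ...]
print(cs, convergents(cs)[-1])          # 577/408, already within ~2e-6 of sqrt(2)
```

In the multivariate algorithm the successive lower bounds play the role of the partial quotients, and the resulting convergents are what give the reported boxes their optimized rational coordinates.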
for a polynomial ] with , .if not specified , we denote .we are interested in isolating the real roots of a system of polynomials ] , .we denote by the solution set in of the equation , where is or . in what follows , resp . , means bit , resp .arithmetic , complexity and the , resp . , notation means that we are ignoring logarithmic factors . for , is the maximum bit size of the numerator and the denominator . for a polynomial ] . in this section, we describe the family of algorithms that we consider .the main ingredients are * a suitable representation of the equations in a given ( usually rectangular ) domain , for instance a representation in the bernstein basis or in the monomial basis ; * an algorithm to split the representation into smaller sub - domains ; * a reduction procedure to shrink the domain .different choices for each of these ingredients lead to algorithms with different practical behaviors .the general process is summarized in alg .[ algo : subdivision ] .[ algo : subdivision ] initialize a stack and add on top of it while is not empty do 1 .pop a system and : 2 .perform a precondition process and/or a reduction process to refine the domain .3 . apply an exclusion test to identify if the domain contains no roots .4 . apply an inclusion test to identify if the domain contains a single root . in this case output .if both tests fail split the representation into a number of sub - domains and push them to .the instance of this general scheme that we obtain generalizes the continued fraction method for univariate polynomials ; the realization of the main steps ( b - e ) can be summarized as follows : 1 .perform a precondition process and compute a lower bound on the roots of the system , in order to reduce the domain .2 . apply interval analysis or sign inspection to identify if some has constant sign in the domain , i.e. if the domain contains no roots .3 . apply miranda test to identify if the domain contains a single root . in this case output .4 . if both tests fail , split the representation at and continue . in the following sections, we are going to describe more precisely the specific steps and analyze their complexity . in sec .[ homography ] , we describe the representation of domains via homographies and the connection with the bernstein basis representation . subdivision , based on shifts of univariate polynomials , reduction and preconditionning are analyzed in sec .[ subdivision - reduction ] .exclusion and inclusion tests as well as a generalization of vincent s theorem to multivariate polynomials , are presented in sec .[ criteria ] . in sec .[ sec : complexity ] , we recall the main properties of continued fraction expansion of real numbers and use them to analyze the complexity of a subdivision algorithm following this generic scheme . we conclude with examples produced by our c++ implementation in sect .[ sec : impl ] .0 the algorithm in dimension two : + input : and defined over a box .+ output : approximations of the real roots of . 1 .initialize a stack and add on top of it .2 . while is not empty repeat 3 - 4 : 3 .check for common solutions : 1 .pop a pair and compute lower bounds on the common solutions in .2 . translate the polynomials by .3 . repeat ( a)-(b ) until for some pair it is .4 . if no roots and for some we have or else subdivide . 
4 .subdivision step : + subdivide into ,1[^2 ] , ,1[ ] to ,+\infty[ ] to ,+\infty[ ] .for a tensor - bernstein polynomial we compute as needed .[ cor : bernsteincoefs ] the bernstein expansion of in is that is , the coefficients of coincide with the bernstein coefficients up to contraction and binomial factors .thus tensor - bernstein coefficients and tensor - monomial coefficients in a sub - domain of differ only by multiplication by positive constant .in particular they are of the same sign .hence this corollary allows us to take advantage of sign properties ( eg .the variation diminishing property ) of the bernstein basis without computing it .the resulting representation of the system consists of the transformed polynomials , represented as tensors of coefficients as well as integers , for from which we can recover the endpoints of the domain , using ( [ hbox ] ) .we describe the subdivision step using the homography representation .this is done at a point .it consists in computing up to new sub - domains ( depending on the number of nonzero s ) , each one having as a vertex . given that represent the initial system at some domain, we consider the partition of defined by the hyperplanes , .these intersect at hence we call this _ partition at . subdividing at is equivalent to subdividing the initial domain into boxes that share the common vertex and have faces either parallel or perpendicular to those of the initial domain . we need to compute a homography representation for every domain in this partition .the computation is done coordinate wise ; observe that for any domain in this partition we have , for all , either ] .it suffices to apply a transformation that takes these domains to . in the former case, we apply to the current polynomials and in the latter case we shift them by , i.e. we apply . the integers that keep track of the current domaincan be easily updated to correspond to the new subdomain .we can make this process explicit in general dimension : every computed subdomain corresponds to a binary number of length , where the bit is if is applied or if is applied . in our continued fraction algorithmthe subdivision is performed at . * illustration .* let us illustrate this process in dimension two .the system is defined over .we subdivide this domain into ^ 2 ] , ] of [x_1,{{ .. }},\widehat{x_k } , { { .. } } , x_n ] ] .this gives a total cost for computing of the latter sum implies that it is faster to apply the shifts with increasing order , starting with the smallest number . since for all , and we must shift a system of polynomials we obtain the stated result .let us present an alternative way to compute a sub - domain using contraction , preferable when the bitsize of is big .the idea behind this is the fact that and compute the same sub - domain , in two different ways .[ contractcomplexity ] if , , then the coefficients of , , can be computed in .the operation , i.e. computing the new coefficients can be done with multiplications : since , if these powers are computed successively then every coefficient is computed using two multiplications .moreover , it suffices to keep in memory the powers , in order to compute any .geometrically this can be understood as a stencil of points that sweeps the coefficient tensor and updates every element using one neighbor at each time .the bitsize of the multiplied numbers is hence the result follows . now if we consider a contraction followed by a shift by w.r.t . 
for polynomials we obtain operations for the computation of the domain .the disadvantage is that the resulting coefficients are of bitsize instead of with the use of shifts .also note that this operation would compute a expansion of the real root which differs from continued fraction expansion .in this section we define univariate polynomials whose graph in bounds the graph of . for every direction , we provide two polynomials bounding the values of in from below and above respectively .define [ u_bounds ] for any , and any , we have for , we can directly write the product of power sums is greater than 1 ; divide both sides by it .analogously for .[ cbounds ] given , if with ,\mu_k] ] , it is , i.e. all pos .roots are in . combining both boundswe deduce that \times\cdots\times [ \mu_n,\mathcal m_n ] ] .if then also and it follows .similarly .the same arguments hold for ] , namely , with coefficients .suppose that for .we compute thus .[ fig : env ] shows how these univariate quadratics bound the graph of in . to improve the reduction step, we use preconditioning .the aim of a preconditioner is to tune the system so that it can be tackled more efficiently ; in our case we aim at improving the bounds of cor .[ cbounds ] .a preconditioning matrix is an invertible matrix that transforms a system into the equivalent one .this transformation does not alter the roots of the system , since the computed equations generate the same ideal .the bounds obtained on the resulting system can be used directly to reduce the domain of the equations before preconditioning .preconditioning can be performed to a subset of the equations .since we use a reduction process using cor .[ cbounds ] we want to have among our equations of them whose zero locus is orthogonal to the direction , for all . assuming a square system , we precondition to obtain a locally orthogonal to the axis system; an ideal preconditioner would be the jacobian of the system evaluated at a common root ; instead , we evaluate at the image of the center of the initial domain , .thus we must compute the inverse of the jacobian matrix {1\leq i , j,\leq n} ] . if there is no toot of such that then all the coefficients are of the same signthat is , if , where is the center of , then is excluded by sign conditions . the interval ] and the disk is transformed into the half complex plane .we deduce that has no root with , . by thm .[ vincentxyz ] , the coefficients of are of the same sign .we deduce that if a domain is far enough from the zero locus of some then it will be excluded , hence redundant empty domains concentrate only in a neighborhood of .the tubular neighborhood of size of is the set we bound the number of boxes that are not excluded at each level of the subdivision tree .assume that for , is bounded .then the number of boxes of size kept by the algorithm is less than , where is such that st . , consider a subdivision of a domain into boxes of size .we will bound the number of boxes in this subdivision that are not rejected by the algorithm . by cor .[ corxyz ] if a box is not rejected , then we have for all , where is the center of the box . thus all the points of this box are at distance to that is in . to bound , it suffices to estimate the volume , since we have : when tends to , this volume becomes equivalent to a constant times . 
for a square system with single roots in , it becomes equivalent to the sum for all real roots in of the volumes of parallelotopes in dimensions of height and unitary edges proportional to the gradients of the polynomials evaluated at the common root ; it is thus bounded by .we deduce that there exists a constant such that . for overdetermined systems ,the volume is bounded by a similar expression .since has a limit when tends to , we deduce the existence of the finite constant and the bound of the lemma on the number of kept boxes of size . * inclusion test .* we present a test that discovers common solutions , in a box , or equivalently in , through homography . to simplify the statements we assume that the system is square , i.e. . the _ lower face _ polynomial of w.r.t .direction is .the _ upper face _ polynomial of w.r.t . is . if for some permutation , and are constant and opposite for all , then the equations have at least one root in . the implementation of the miranda test can be done efficiently if we compute a matrix with entry iff and are opposite . then , miranda test is satisfied iff there is no zero row and no zero column . to see thisobserve that the matrix is the sum of a permutation matrix and a matrix iff this permutation satisfies miranda s test .combined with the following simple fact , we have a test that identifies boxes with a single root .if has constant sign in a box , then there is at most one root of in .suppose are two distinct roots ; by the mean value theorem there is a point on the line segment , and thus in , s.t . hence . 0 in order to identify boxes that contain solutions we use the topological degree and the jacobian .another criterion that can be used is the newton test .if the newton iteration is a contraction over a box then there exists a unique root of the system inside .fixed point theorem for existence , mean value theorem for unicity .consider the system which has a solution iff is a simple root of , so then .the jacobian of this system * complexity of the inclusion criteria . *miranda test can be decided with evaluations on interval ( cf . ) as well as one evaluation of , overall operations .the cost of the inclusion test is dominated by the cost of evaluating polynomials of size on an interval , i.e. operations suffice .if the real roots of the square system in the initial domain are simple , then alg . [ algo : subdivision ] stops with boxes isolating the real roots in .if the real roots of in are simple , in a small neighborhood of them the jacobian of has a constant sign . by the inclusion test, any box included in this neighborhood will be output if and only if it contains a single root and has no real roots of the jacobian .otherwise , it will be further subdivided or rejected .suppose that the subdivision algorithm does not terminate .then the size of the boxes kept at each step tends to zero . by cor .[ corxyz ] , these boxes are in the intersection of the tubular neighborhoods for the maximal size of the kept boxes . if is small enough , these boxes are in a neighborhood of a root in which the jacobian has a constant size , hence the inclusion test will succeed . 
by the exclusion criteria ,a box domain is not subdivided indefinitely , but is eventually rejected when the coefficients become positive .thus the algorithm either outputs isolating boxes that contains a real root of the system or rejects empty boxes .this shows , by contradiction , the termination of the subdivision algorithm .in this section we compute a bound on the complexity of the algorithm that exploits the continued fraction expansion of the real roots of the system .hereafter , we call this algorithm mcf ( multivariate continued fractions ) . since the analysis of the reduction steps of sec .[ subdivision - reduction ] and the exclusion - inclusion test of sec .[ criteria ] would require much more developments , we simplify the situation and analyze a variant of this algorithm .we assume that two oracles are available .one that computes , exactly , the partial quotients of the positive real roots of the system , and one that counts exactly the number of real roots of the system inside a hypercube in the open positive orthant , namely . inwhat follows , we will assume the cost of the first oracle is bounded by , while the cost of the second is bounded by , and we derive the total complexity of the algorithm with respect to these parameters . in any casethe number of reduction or subdivision steps that we derive is a lower bound on the number of steps that every variant of the algorithm will perform .the next section presents some preliminaries on continued fractions , and then we detail the complexity analysis .our presentation follows closely . for additional detailswe refer the reader to , e.g. , . in general a _ simple ( regular ) continued fraction _ is a ( possibly infinite ) expression of the form ,\ ] ] where the numbers are called _ partial quotients _ , and for .notice that may have any sign , however , in our real root isolation algorithm , without loss of generality . by considering the recurrent relations it can be shown by induction that ] then and since this is a series of decreasing alternating terms it converges to some real number .a finite section ] are known as its _complete quotients_. that is ] be the continued fraction expansion of a real number .the gauss - kuzmin distribution states that for almost all real numbers ( meaning that the set of exceptions has lebesgue measure zero ) the probability for a positive integer to appear as an element in the continued fraction expansion of is \backsimeq \lg{\frac{(\delta+1)^2}{\delta(\delta+2 ) } } , \quad \text{for any fixed } i > 0 .\label{eq : gauss - kuzmin}\ ] ] the gauss - kuzmin law induces that we can not bound the mean value of the partial quotients or in other words that the expected value ( arithmetic mean ) of the partial quotients is diverging , i.e. = \sum_{\delta=1}^{\infty } { \delta\ , prob [ c_i = \delta ] } = \infty , \text { for } i > 0.\ ] ] surprisingly enough the geometric ( and the harmonic ) mean is not only asymptotically bounded , but is bounded by a constant , for almost all . for the geometric meanthis is the famous khintchine s constant , i.e. { \prod_{i=1}^{n}{c_i } } } = \mathcal{k } = 2.685452001 ... 
\ ] ] it is not known if is a transcendental number .the expected value of the bit size of the partial quotients is a constant for almost all real numbers , when or sufficiently big .notice that in ( [ eq : gauss - kuzmin ] ) , , thus is uniformly distributed in .let , then = { \ensuremath{\mathcal{o}}\xspace } ( \lg{\mathcal{k } } ) = { \ensuremath{\mathcal{o}}\xspace}(1 ) .\label{eq : exp_b}\ ] ] let be an upper bound on the bitsize of the partial quotient that appear during the execution of the algorithm .[ lem : mcf - steps ] the number of reduction and subdivision steps that the algorithm performs is .let be a real root of the system .it suffices to consider the number of steps needed to isolate the coordinate of .recall , that we assume , working in the positive orthant , we can compute exactly the next partial quotient in each coordinate ; in other words a vector , where each , , is the partial quotient of a coordinate of a positive real is the partial quotient of the positive imaginary part of a coordinate of a solution of the system .] solution of the system .let be the number of steps needed to isolate the coordinate of the real root .the analysis is similar to the univariate case .the successive approximations of by the lower bound , yield the -th approximant , of , which using ( [ eq : cf - approx ] ) satisfies in order to isolate , it suffices to have where is the local separation bound of , that is the smallest distance between and all the other -coordinates of the positive real solutions of the system . combining the last two equations, we deduce that to achieve the desired approximation , we should have , or .that is to isolate the coordinate it suffices to perform steps . to compute the total number of steps , we need to sum over all positive real roots and multiply by , which is the number of coordinates , that is where is the number of positive real roots . to bound the logarithm of the product , we use , i.e. aggregate separation bounds for multivariate , zero - dimensional polynomial systems .it holds taking into account that we conclude that the number of steps is .[ prop : mcf - complexity ] the total complexity of the algorithm is . at each -th step of algorithm , if there are more than one roots of the corresponding system in the positive orthant ( the cost of estimating this is , we compute the corresponding partial quotients , where ( the cost of estimating this is then , for each polynomial of the system , , we perform the shift operation , and then we split to subdomains .let us estimate the cost of the last two operations .a shift operation on a polynomial of degree , by a number of bitsize , increases the bitsize of the polynomial by an additive factor . at the step of the algorithm , the polynomials of the corresponding system are of bitsize , and we need to perform a shift operation to all the variables , with number of bitsize .the cost of this operation is , and since we have polynomials the costs becomes , the resulting polynomial has bitsize .to compute the cost of splitting the domain , we proceed as follows .the cost is bounded by the cost of performing operations , which in turn is .so the total cost becomes .it remains to bound . if is a bound on the bitsize of all the partial quotients that appear during and execution of the algorithm , then .moreover , ( lem .[ lem : mcf - steps ] ) , and so the cost of each step is . finally , multiplying by the number of steps ( lem .[ lem : mcf - steps ] ) we get a bound of . 
to derive the total complexity we have to take into account that at each step we compute some partial quotients and and we count the number of real root of the system in the positive orthant .hence the total complexity of the algorithm is . in the univariate case ( ) , if we assume that ( [ eq : exp_b ] ) holds for real algebraic numbers , then the cost of and is dominated by that of the other steps , that is the splitting operations , and the ( average ) complexity becomes and matches the one derived in ( without scaling ) .we can reduce the number of steps that the algorithm performs , and thus improve the total complexity bound of the algorithm , using the same trick as in .the main idea is that the continued fraction expansion of a real root of a polynomial does not depend on the initial computed interval that contains all the roots .thus , we spread away the roots by scaling the variables of the polynomials of the system by a carefully chosen value .if we apply the map , to the initial polynomials of the system , then the real roots are multiply by , and thus their distance increase .the key observation is that the continued fraction expansion of the real roots does not depend on their integer part .let be the roots of the system , and , be the roots after the scaling .it holds . from holds that and thus if we choose and assume that which is the worst case , then .thus , following the proof of lem .[ lem : mcf - steps ] , the number of steps that the algorithm is .the bitsize of the scaled polynomials becomes .the total complexity of algorithm is now where the maximum bitsize of the partial quotient appear during the execution of the algorithm .if we assume that ( [ eq : exp_b ] ) holds for real algebraic numbers , then . notice that in this case , when , the bound becomes , which agrees with the one proved in . the discussion above combined with prop .[ prop : mcf - complexity ] lead us to : [ th : improved - mcf - complexity ] the total complexity of the algorithm is .we have implemented the algorithm in the c++ library ` realroot ` of , which is an open source effort that provides fundamental algebraic operations such as algebraic number manipulation tools , different types of univariate and multivariate polynomial root solvers , resultant and gcd computations , etc .the polynomials are internally represented as a vector of coefficients along with some additional data , such as a variable dictionary and the degree of the polynomial in every variable .this allows us to map the tensor of coefficients to the one - dimensional memory .the univariate solver that is used is the continued fraction solver ; this is essentially the same algorithm with a different inclusion criterion , the descartes rule .the same data structures is used to store the univariate polynomials , and the same shift / contraction routines .the univariate solver outputs the roots in increasing order , as a result of a breadth - first traverse of the subdivision tree .in fact , we only compute an isolation box for the smallest positive root of univariate polynomials and stop the solver as soon as the first root is found .our code is templated and is efficiently used with gmp arithmetic , since long integers appear as the box size decreases .first , we consider the system ( ) , where , and .we are looking for the real solutions in the domain \times[-2,2]$ ] , which is mapped to , by an initial transformation .the isolating boxes of the real roots can be seen in fig .[ fig : sys-1 ] . 
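To make the control flow of the generic subdivision scheme concrete alongside these implementation notes, here is a deliberately simplified Python rendering for a 2×2 system. Bisection replaces the continued-fraction shifts, and sign inspection on a sampled grid stands in for the exact Bernstein-coefficient and interval tests, so both the exclusion step and the Miranda-style inclusion step below are heuristic; the toy circle/parabola system is ours and is not one of the paper's benchmark examples.

```python
import numpy as np

def samples(f, box, n=9):
    xlo, xhi, ylo, yhi = box
    X, Y = np.meshgrid(np.linspace(xlo, xhi, n), np.linspace(ylo, yhi, n))
    return f(X, Y)

def constant_sign(f, box):
    v = samples(f, box)                     # heuristic stand-in for coefficient signs
    return v.min() > 0 or v.max() < 0

def miranda(f1, f2, box, n=9):
    # Some pairing of equations with coordinate directions has opposite
    # constant signs on the two opposite faces (sampled, hence heuristic).
    xlo, xhi, ylo, yhi = box
    xs, ys = np.linspace(xlo, xhi, n), np.linspace(ylo, yhi, n)
    def opposite(lo_vals, hi_vals):
        return (lo_vals.max() < 0 < hi_vals.min()) or (hi_vals.max() < 0 < lo_vals.min())
    return (opposite(f1(xlo, ys), f1(xhi, ys)) and opposite(f2(xs, ylo), f2(xs, yhi))) or \
           (opposite(f2(xlo, ys), f2(xhi, ys)) and opposite(f1(xs, ylo), f1(xs, yhi)))

def solve(f1, f2, box, min_size=1e-3):
    stack, boxes = [box], []
    while stack:
        b = stack.pop()
        if constant_sign(f1, b) or constant_sign(f2, b):
            continue                                        # exclusion test
        xlo, xhi, ylo, yhi = b
        if miranda(f1, f2, b) or max(xhi - xlo, yhi - ylo) < min_size:
            boxes.append(b)                                 # inclusion (or tiny box)
            continue
        xm, ym = 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)       # split into four sub-boxes
        stack += [(xlo, xm, ylo, ym), (xm, xhi, ylo, ym),
                  (xlo, xm, ym, yhi), (xm, xhi, ym, yhi)]
    return boxes

f1 = lambda x, y: x**2 + y**2 - 1.0     # toy system: unit circle ...
f2 = lambda x, y: y - x**2              # ... intersected with a parabola
print(solve(f1, f2, (-2.0, 2.0, -2.0, 2.0)))
```

Adjacent output boxes around the same root are not merged here; the actual implementation relies on the exact criteria of the preceding sections and on the continued-fraction shifts, which is what produces the optimized rational box coordinates shown in the examples.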
in systems , we multiply and by quadratic components , hence we obtain and the isolating boxes of this system could be seen in fig .[ fig : sys-2 ] .notice , that size of the isolation boxes that are returned in this case is considerably smaller .consider the system , which consists of and , which is a polynomial of bidegree .the output of the algorithm , that is the isolating boxes of the real roots can be seen in fig .[ fig : sys-4 ] . one important observation is the fact the isolating boxes _ are not _ squares , which verifies the adaptive nature of the proposed algorithm .we provide execution details on these experiments in table [ tab : exec ] .several optimizations can be applied to our code , but the results already indicate that our approach competes well with the bernstein case . *acknowledgements * + the first and second author were supported by marie - curie initial training network saga , [ fp7/2007 - 2013 ] , grant [ pitn - ga-2008 - 214584 ] .the third author was supported by contract [ anr-06-blan-0074 ] `` decotes '' .i. z. emiris , b. mourrain , and e. p. tsigaridas . .in p. hertling , c. hoffmann , w. luther , and n. revol , ed . ,_ reliable implementa- tions of real number algorithms : theory and practi- ce _ , _ lncs _ vol . 5045 , pp .springer verlag , 2008 .
We present a new algorithm for isolating the real roots of a system of multivariate polynomials, given in the monomial basis. It is inspired by existing subdivision methods in the Bernstein basis; it can be seen as a generalization of the univariate continued fraction algorithm, or alternatively as a full analog of Bernstein subdivision in the monomial basis. The representation of the subdivided domains is done through homographies, which allows us to use only integer arithmetic and to treat unbounded regions efficiently. We use univariate bounding functions, projection and preconditioning techniques to reduce the domain of search. The resulting boxes have optimized rational coordinates, corresponding to the first terms of the continued fraction expansion of the real roots. An extension of Vincent's theorem to multivariate polynomials is proved and used for the termination of the algorithm. New complexity bounds are provided for a simplified version of the algorithm. Examples computed with a preliminary C++ implementation illustrate the approach.
approximate bayesian computation ( abc ) ( or likelihood - free inference ) has become increasingly prevalent in areas of the natural sciences in which likelihood functions requiring integration over a large number of complex latent states are essentially intractable .( see cornuet _et al . , _ 2008 ,beaumont , 2010 , marin _ et al ._ , 2011 and sisson and fan , 2011 for recent reviews . )the technique circumvents direct evaluation of the likelihood function by matching summary statistics calculated from the observed data with corresponding statistics computed from data simulated from the assumed data generating process . if such statistics are sufficient , the method yields an approximation to the exact posterior distribution of interest that is accurate , given an adequate number of simulations ; otherwise , _ partial _ posterior inference , reflecting the information content of the set of summary statistics only , is the outcome .the choice of statistics for use within the abc method , in addition to techniques for determining the matching criterion , are clearly of paramount importance , with much recent research having been devoted to devising ways of ensuring that the information content of the chosen set is maximized , in some sense ; e.g. * * * * joyce and marjoram ( 2008 ) , wegmann _ et al . _( 2009 ) , blum ( 2010a ) and fearnhead and prangle ( 2012 ) .recent contributions here include those of drovandi _ et al . _( 2011 ) , drovandi and pettitt ( 2013 ) , gleim and pigorsch ( 2013 ) and creel and kristensen ( 2014 ) , in which the statistics are produced by estimating an approximating _ auxiliary _ model using both simulated and observed data .this approach mimics , in a bayesian framework , the principle underlying the frequentist methods of indirect inference ( ii ) ( gouriroux _ et al ._ 1993 , smith , 1993 , heggland and frigessi , 2004 ) and efficient method of moments ( emm ) ( gallant and tauchen , 1996 ) , using , as it does , the approximating model to produce feasible , but sub - optimal , inference about an intractable true model . 
whilst the price paid for the approximation in the frequentist setting is a possible reduction in efficiency ,the price paid in the bayesian case is posterior inference that is conditioned on statistics that are not sufficient for the parameters of the true model , and which amounts to only partial inference as a consequence .our paper continues in this spirit , but with particular focus given to the application of auxiliary model - based abc methods in the state space model ( ssm ) framework .we begin by demonstrating that reduction to a set of sufficient statistics of fixed dimension relative to the sample size is _ infeasible _ in finite samples in ssms .this key observation then motivates our decision to seek asymptotic sufficiency in the state space setting by using the mle of the parameters of the auxiliary model as the ( vector ) summary statistic in the abc matching criterion .we focus on two qualitatively different cases : 1 ) one in which the auxiliary model _ coincides _ with the true model , in which case asymptotic sufficiency for the true parameters is achievable via the proposed abc technique ; and 2 ) the more typical case in which the exact likelihood function is inaccessible , and the auxiliary model represents an approximation only .the first case mimics that sometimes referenced in the ii ( or emm ) literature , in which the auxiliary model ` nests ' , or is equivalent to in some well - defined sense , the true model , and full asymptotic efficiency is achieved by the frequentist methods as a consequence .investigation of this case allows us to document the maximum accuracy gains that are possible via the auxiliary model route , compared with abc techniques based on alternative summaries , without the confounding effect of the error in the approximating model .the second case gives some insight into what can be achieved in a general non - linear state space setting when the investigator is forced to adopt an inexact approximating model in the implementation of auxiliary model - based abc .we give emphasis here to non - linear models in which the state ( and possibly the observed ) is driven by a continuous time model , as this is the canonical example in which simulation from the true model is feasible ( at least via an arbitrarily fine discretization ) , whilst the likelihood function is ( typically ) unavailable and exact posterior analysis thus not achievable .we begin by considering the very concept of finite sample sufficiency in the state space context , and the usefulness of applying a typical abc approach - based on _ ad hoc _ summary statistics - in this setting . using the linear gaussian model for illustration, we demonstrate the lack of reduction to a set of sufficient statistics of fixed dimension , this result providing motivation , as noted above , for the pursuit of asymptotic sufficiency via the auxiliary model method .we then proceed to demonstrate the bayesian consistency of the auxiliary model approach , subject to the typical quasi - mle form of conditions being satisfied .we also illustrate that to the order of accuracy that is relevant in establishing the theoretical properties of an abc technique ( i.e. allowing the tolerance used in the matching of the statistics to approach zero ) , a selection criterion based on the score of the auxiliary model - evaluated at the mle computed from the observed data - yields equivalent results to a criterion based directly on the mle itself . 
this equivalence is shown to hold in both the exactly and over - identified cases , and independently of any ( positive definite ) weighting matrix used to define the two alternative distance measures , andimplies that the proximity to asymptotic sufficiency yielded by the use of the mle in an abc algorithm will be replicated by the use of the score .given the enormous gain in speed achieved by avoiding optimization of the approximate likelihood at each replication of abc , this is an critical result from a computational perspective .the application of the proposed method in multiple parameter settings is addressed , with separate treatment of scalar ( or lower - dimensional blocks of ) parameters , via marginal , or integrated likelihood principles advocated , as a possible way of avoiding the inaccuracy that plagues abc techniques in high dimensions .( see blum , 2010b and nott _ et al ._ , 2014 ) .the results outlined in the previous paragraph are applicable to an auxiliary model - based abc method applied in any context ( subject to regularity ) and , hence , are of interest in their own right .however , our particular interest , as already noted , is in applying the auxiliary model method - and thereby exploiting these properties - in the state space setting . for case 1 ) we choose to illustrate the approach using the linear gaussian model , whereby the exact likelihood ( and hence score ) is accessible via the kalman filter ( kf ) , and asymptotic sufficiency thus achievable . for case 2 ) we illustrate the approach via a particular choice of ( approximating ) auxiliary model for the continuous time ssm .specifically , the approximating model is formed as a discretization of the true continuous time model , with the augmented unscented kalman filter ( aukf ) ( julier _ et al . _ , 1995 , julier and uhlmann , 2004 ) used to evaluate the likelihood of that model . the general applicability ,speed and simplicity of the aukf calculations render the abc scheme computationally feasible and relatively simple to implement .this particular approach to the definition of an auxiliary model also leads to a set of summary statistics of relatively small dimension .this is in contrast , for example , with an approach based on a highly parameterized ( ` nesting ' ) approximating model ( see , for example , gleim and pigorsch , 2013 ) , in which the large number of auxiliary parameters - in principle sufficient for the parameters of the true latent diffusion model - is likely to yield a very inaccurate ( non - parametric ) estimate of the true posterior , due to the large dimension of the conditioning statistics .the equality between the number of parameters in our exact and approximating models also means that marginalization of the approximating model to produce a scalar matching criterion for each parameter of the true model is meaningful .the paper proceeds as follows . in section [ abc ]we briefly summarize the basic principles of abc as they would apply in a state space setting , including the role played by summary statistics and sufficiency .we demonstrate the lack of finite sample sufficiency reduction in an ssm , using the linear gaussian model for illustration . 
* * * * in section [ aux ] , we then proceed to demonstrate the properties of abc based on the mle , score and marginal score , respectively , of a generic approximating model followed , in section [ model ] , by an outline of a computationally feasible approximation - based on the aukf - for use in the non - linear state space setting . using ( repeated samples of ) artificially generated data , the accuracy with which the proposed technique reproduces the exact posterior distributionis assessed in section [ assess ] .the abc methods are based respectively on : i ) the joint score ; ii ) the marginal score ; iii ) a ( weighted ) euclidean metric based on statistics that are sufficient for an observed autoregressive model of order one ; and iv ) the dimension - reduction technique of fearnhead and prangle ( 2012 ) , applied to this latter set of summary statistics .we conduct the assessment firstly within the context of the linear gaussian model , with the issues of sufficiency and matching that are key to accurately reproducing the true posterior distribution able to be illustrated precisely in this setting .the overall superiority of both the joint and marginal score techniques over the abc methods based on summary statistics is demonstrated numerically , as is the remarkable accuracy yielded by the marginal score technique in particular .this exercise thus forms a resounding proof - of - concept for the score - based abc method , albeit in a case where the exact score is available .we then proceed to assess performance in a particular non - linear latent diffusion model , in which the degree of accuracy of the aukf - based approximating model plays a role .a stochastic volatility for financial returns , in which the latent volatility is driven by a square root diffusion model , is adopted as the non - linear example , as the existence of known ( non - central chi - squared ) transition densities means that the exact likelihood function / posterior distribution is available , for the purpose of comparison .we apply the deterministic grid - based filtering method of ng _ et al . 
_( 2013 ) - suitable for this particular setting - to produce the exact comparators for our abc - based estimates of the relevant marginal posteriors , as well as the marginal posteriors associated with an euler approximation to the true model .the score methods out - perform the summary statistic methods in the great majority of cases documented .some gain in accuracy is still produced via the marginalization technique , although that gain is certainly less marked than in the linear gaussian case , in which the exact score is accessible .notably , all abc - based approximations , which exploit simulation from the exact latent diffusion model , serve as more accurate estimates ( overall ) of the exact posteriors than do the aukf and euler approximations themselves .section [ end ] concludes .the aim of abc is to produce draws from an approximation to the posterior distribution of a vector of unknowns , , given the -dimensional vector of observed data , in the case where both the prior , , and the likelihood , , can be simulated .these draws are used , in turn , to approximate posterior quantities of interest , including marginal posterior moments , marginal posterior distributions and predictive distributions .the simplest ( accept / reject ) form of the algorithm ( tavar _ _ et al ._ _ 1997 , pritchard , 1999 ) proceeds as follows : 1 .simulate , , from 2 .simulate , , from the likelihood , 3 .select such that: where is a ( vector ) statistic , is a distance criterion of some sort , and the tolerance level is arbitrarily small . in practice may be chosen such that , for a given value of , a certain ( small ) proportion of draws of are selected .the algorithm thus samples and from the joint _ _ _ _ posterior: where is the indicator function defined on the set and clearly , when is sufficient and arbitrarily small, approximates the true posterior , , and draws from can be used to estimate features of the true posterior . in practicehowever , the complexity of the models to which abc is applied implies , almost by definition , that sufficiency is unattainable . hence , in the limit , as , the draws can be used only to approximate features of adaptations of the basic rejection scheme have involved post - sampling corrections of the draws using kernel methods ( beaumont _ et al ._ , 2002 , blum , 2010a , blum and franois , 2010 ) , or the insertion of markov chain monte carlo ( mcmc ) and/or sequential monte carlo ( smc ) steps ( marjoram _ et al . _ , 2003 ,et al . , _ 2007 , beaumont _et al . _ , 2009 ,toni _ et al . _ , 2009 and wegmann _ et al . _ , 2009 ) , to improve the accuracy with which is estimated , for any given number of draws .focus is also given to choosing and/or so as to render a closer match to , in some sense ; see joyce and marjoram ( 2008 ) , wegmann _ et al ._ , blum ( 2010b ) and fearnhead and prangle ( 2012 ) . in the latter vein ,drovandi _ et al . 
_( 2011 ) argue , in the context of a specific biological model , that the use of comprised of the mles of the parameters of a well - chosen approximating model , may yield posterior inference that is conditioned on a large portion of the information in the data and , hence , be close to exact inference based on .( see also drovandi and pettitt , 2013 , gleim and pigorsch , 2013 , and creel and kristensen , 2014 ) .it is the spirit of this approach that informs the current paper , but with our attention given to rendering the approach feasible in a _general _ state space framework that encompasses a large number of the models that are of interest to practitioners , including continuous time models .our focus then is on the application of abc in the context of a general ssm with measurement and transition distributions, respectively , where is a -dimensional vector of static parameters , elements of which may characterize either the measurement or state relation , or both ._ _ _ _ for expositional simplicity , and without loss of generality , we consider the case where both and are scalars . in financial applicationsit is common that both the observed and latent processes _ __ _ are driven by continuous time processes , with the transition distribution in ( [ state_gen ] ) being unknown ( or , at least , computationally challenging ) as a consequence .bayesian inference would then typically proceed by invoking ( euler ) discretizations for both the measurement and state processes and applying mcmc- or smc - based techniques , with such methods being tailor - made to suit the features of the particular ( discretized ) model at hand ; see giordini _ et al . _( 2011 ) for a recent review .in some models expressed initially in discrete time , it may also be the case that the conditional distribution in ( [ meas_gen ] ) is unavailable in closed form , such as when empirically relevant distributions for financial returns are adopted ( e.g. peters _ et al . , _in such cases , smc- or mcmc - based inferential methods are typically infeasible .in contrast , in all of these cases the proposed abc method _ is _ feasible , as long as simulation from the true model ( at least via an arbitrarily fine discretization , in the continuous time case ) is possible .the full set of unknowns thus constitutes the augmented vector where , in the case where evolves in continuous time , represents the infinite - dimensional vector comprising the continuum of unobserved states over the sample period .however , to fix ideas , we define where is the -dimensional vector comprising the time states for the observation periods in the sample .implementation of the algorithm thus involves simulating from by simulating from the prior , followed by simulation of via the process for the state , conditional on the draw of , and subsequent simulation of artificial data conditional on the draws of and the state variable .crucially , the focus in this paper is on inference about only ; hence , only draws of are retained ( via the selection criterion ) and those draws used to produce an estimate of the marginal posterior , , and with sufficiency to be viewed as relating to only .hence , from this point onwards , when we reference a vector of summary statistics , , it is the information content of that vector with respect to that is of importance , and the proximity of to the marginal posterior of that is under question .we comment briefly on state inference in section [ end ] . 
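As a concrete anchor for the accept/reject scheme and the state space simulation just described, the sketch below runs plain rejection ABC on a toy linear Gaussian SSM with a single unknown autoregressive parameter. The lag-one sample autocorrelation used as the summary statistic is an arbitrary placeholder, chosen only to make the code run; the sections that follow argue precisely that the choice of this statistic governs how close the output gets to the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ssm(phi, T, sigma_v=1.0, sigma_e=1.0):
    """Toy SSM: y_t = x_t + e_t,  x_t = phi * x_{t-1} + v_t (Gaussian errors)."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma_v * rng.standard_normal()
    return x + sigma_e * rng.standard_normal(T)

def eta(y):
    """Placeholder summary statistic: lag-one sample autocorrelation."""
    y = y - y.mean()
    return np.array([np.dot(y[1:], y[:-1]) / np.dot(y, y)])

def abc_rejection(y_obs, n_draws=5000, keep=0.02):
    s_obs = eta(y_obs)
    thetas, dists = [], []
    for _ in range(n_draws):
        phi = rng.uniform(-1.0, 1.0)                  # 1. draw theta from the prior
        z = simulate_ssm(phi, len(y_obs))             # 2. simulate states, then pseudo-data
        thetas.append(phi)
        dists.append(np.linalg.norm(eta(z) - s_obs))  # 3. distance on the summaries
    thetas, dists = np.array(thetas), np.array(dists)
    eps = np.quantile(dists, keep)                    # tolerance set as a small quantile
    return thetas[dists <= eps]                       # retained draws approximate p(theta | eta(y))

y_obs = simulate_ssm(0.8, T=200)
kept = abc_rejection(y_obs)
print(kept.mean(), kept.std())
```

Only the draws of the static parameter are retained here, mirroring the point above that the simulated states are discarded once the pseudo-data have been produced.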
before outlining the proposed methodology for the model in ( [ meas_gen ] ) and ( [ state_gen ] ) in section [ aux ] we highlight the key observation that motivates our approach , namely that reduction to sufficiency in finite samplesis not possible in state space settings . * * * * we use a linear gaussian state space model to illustrate this result , as closed - form expressions are available in this case ; however , as highlighted at the end of the section , the result is , in principle , applicable to any ssm .sufficient statistics are useful for inference about a ( vector ) parameter since , being in possession of the sufficient set means that the data itself may be discarded for inference purposes . when the cardinality of the sufficient set is small relative to the sample size a significant reduction in complexity is achieved and in the case of abc , conditioning on the sufficient statistics leads to no loss of information , and the method produces a simulation - based estimate of the true posterior .the difficulty that arises is that only distributions that are members of the exponential family ( ef ) possess sufficient statistics that achieve a reduction to a fixed dimension relative to the sample size . in the context of the general ssm described by ( [ meas_gen ] ) and ( [ state_gen ] )the effective use of sufficient statistics is problematic . for any it is unlikely that the marginal distribution of will be a member of the ef , due to the vast array of non - linearities that are possible , in either the measurement or state equations , or both .moreover , even if _ were _ a member of the ef for each , to achieve a sufficiency reduction it is required that the _ joint _ distribution of also be in the ef . for example , even if were gaussian , it does not necessarily follow that the joint distribution of will achieve a sufficiency reduction .the most familiar example of this is when follows a gaussian moving average ( ma ) process and consequently only the whole sample is sufficient .even the simplest ssms generate ma - like dependence in the data .consider the linear gaussian ssm , expressed in regression form as where the disturbances are respectively independent and variables .in this case , the joint distribution of the vector of s ( which are marginally normal and members of the ef ) is where is the inverse of the signal - to - noise ( sn ) ratio and is the familiar toeplitz matrix associated with an autoregressive ( ar ) model of order 1 . to construct the sufficient statistics we need to evaluate , which appears in the quadratic form of the multivariate normal density , with the structure of determining the way in which sample information about the parameters is accumulated and , hence , the sufficiency reduction that is achievable .( see , for example , anderson , 1958 , chp 6 . )representing as it is straightforward to show that as the order of the approximation is increased by retaining more terms , the extent of the accumulation across successive observations is reduced , with the full sample of observations on ultimately being needed to attain sufficiency . given that the magnitude of determines how many terms in ( [ expansion ] ) are required for the approximation to be accurate , we see that the sn ratio determines how well the set of summary statistics _ that would be sufficient _ for an observed ar(1 ) process ( with ) , namely , approximates the information content of the true set of sufficient statistics , that is , the full sample . 
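A quick numerical check of the argument above, assuming the standard parameterization y_t = x_t + e_t, x_t = φ x_{t−1} + v_t with independent Gaussian errors. With no measurement noise the precision matrix of the observation vector is tridiagonal, which is what delivers the AR(1)-type sufficiency reduction; the share of precision mass lying outside that band, computed below, grows as the signal-to-noise ratio σ_v²/σ_e² falls.

```python
import numpy as np

def precision_of_y(phi, sigma_v, sigma_e, T):
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    cov_x = (sigma_v**2 / (1.0 - phi**2)) * phi**lags      # stationary AR(1) state covariance
    return np.linalg.inv(cov_x + sigma_e**2 * np.eye(T))   # add measurement noise, invert

def off_band_share(P):
    """Fraction of |precision| mass lying outside the tridiagonal band."""
    idx = np.arange(len(P))
    band = np.abs(np.subtract.outer(idx, idx)) <= 1
    return np.abs(P[~band]).sum() / np.abs(P).sum()

for sn in [100.0, 1.0, 0.01]:                              # signal-to-noise ratio sigma_v^2 / sigma_e^2
    P = precision_of_y(phi=0.8, sigma_v=1.0, sigma_e=np.sqrt(1.0 / sn), T=50)
    print(sn, round(off_band_share(P), 4))                 # share grows as the SN ratio falls
```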
if the sn ratio is large ( i.e. is small ) then using the set in ( [ ar1_summ_stats ] ) as summary statistics may produce a reasonable approximation to sufficiency .however , as the sn ratio declines ( and higher powers of can not be ignored as a consequence ) then this set deviates further and further from sufficiency .this same qualitative problem would also characterize any ssm nested in ( [ meas_gen ] ) and ( [ state_gen ] ) , with the only difference being that , in any particular case there would not necessarily be an analytical link between the sn ratio and the lack of sufficiency associated with any finite set of statistics calculated from the observations .the quest for an accurate abc technique in a state space setting - based on an arbitrary set of statistics - is thus not well - founded and this , in turn , motivates the search for asymptotic sufficiency via the mle .the asymptotic gaussianity of the mle for the parameters of ( [ meas_gen ] ) and ( [ state_gen ] ) ( under regularity ) implies that the mle satisfies the factorization theorem and is thereby asymptotically sufficient for the parameters of that model .( see cox and hinkley , 1974 , chp . 9 for elucidation of this matter . )denoting the log - likelihood function by * * * * , maximizing * * * * with respect to yields , which could , in principle , be used to define in an abc algorithm . for large enough algorithm would produce draws from the exact posterior .indeed , in arguments that mirror those adopted by gallant and tauchen ( 1996 ) and gouriroux _ et al . _( 1993 ) for the emm and ii estimators respectively , gleim and pigorsch ( 2013 ) demonstrate that if is chosen to be the mle of an auxiliary model that ` nests ' the true model in some well - defined way , asymptotic sufficiency will still be achieved ; see also gouriroux and monfort ( 1995 ) on this point . of course, if the ssm in question is such that the exact likelihood is accessible , the model is likely to be tractable enough to preclude the need for treatment via abc .further , as we allude to in the introduction , the quest for asymptotic sufficiency via a ( possibly large ) nesting auxiliary model conflicts with the quest for an accurate non - parametric estimate of the posterior using the abc draws ; a point that , to our knowledge , has not been noted in the literature .hence , in practice , the appropriate goal in the abc context is to define a _ parsimonious _, analytically tractable ( and computationally efficient ) model that _ approximates _ the ( generally intractable ) data generating process in ( [ meas_gen ] ) and ( [ state_gen ] ) as well as possible , and use that model as the basis for constructing a summary statistic within an abc algorithm .if the approximating model is ` accurate enough ' as a representation of the true model , such an approach will yield , via the abc algorithm , an estimate of the posterior distribution that is conditioned on a statistic that is ` close to ' being sufficient , at least for a large enough sample . 
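A minimal illustration of the parsimonious approximating-model idea, using an AR(1) auxiliary model whose conditional quasi-MLE is available in closed form (an OLS slope plus a residual variance), so that no numerical optimization is needed for each simulated data set. The AR(1) choice, the log transform of the variance, and the unweighted Euclidean distance are all simplifications made for this sketch; a variance-based weighting matrix is the more usual choice.

```python
import numpy as np

def aux_mle(y):
    """Conditional quasi-MLE beta_hat = (rho_hat, log sigma2_hat) of an AR(1) auxiliary model."""
    y = y - y.mean()
    rho = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])   # OLS = conditional MLE
    resid = y[1:] - rho * y[:-1]
    sigma2 = np.dot(resid, resid) / len(resid)
    return np.array([rho, np.log(sigma2)])                 # log keeps the two scales comparable

def aux_distance(y_obs, z_sim):
    # Unweighted version of the MLE-matching criterion: ||beta_hat(y) - beta_hat(z)||.
    return np.linalg.norm(aux_mle(y_obs) - aux_mle(z_sim))
```

Replacing the distance on the generic summary in the earlier rejection sketch with aux_distance turns that scheme into the auxiliary-model-based variant discussed here.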
despite the loss of full ( asymptotic ) sufficiency associated with the use of an approximating model to generate the matching statistics in an abc algorithm , we show here that bayesian consistency will still be achieved , subject to certain regularity conditions .define the choice criterion in ( [ distance ] ) as ^{\prime}\mathbf{\omega}\left [ \widehat{\mathbf{\beta } } ( \mathbf{y)-}\widehat{\mathbf{\beta}}(\mathbf{z}^{i}\mathbf{)}\right ] } \leq\varepsilon,\label{dist_mle}\ ] ] where is the mle of the parameter vector of the auxiliary model with log - likelihood function , , and is some positive definite matrix .the quadratic function under the square root essentially mimics the criterion used in the ii technique , in which case would assume the sandwich form of variance - covariance estimator - appropriate for when the auxiliary model does not coincide with the true model - and optimization with respect to the parameters of the latter is the goal .that is a multiple of the size of the empirical sample . ] in bayesian analyses , in which ( [ dist_mle ] ) is used to produce abc draws , may also be defined as the sandwich estimator ( drovandi and pettit , 2013 , and gleim and pigorsch , 2013 ) , or simply as the inverse of the ( estimated ) variance - covariance matrix of , evaluated at ( drovandi _ _ et al . , _ _ 2011 ) .however , in common with the frequentist proof of consistency of the ii estimator , bayesian consistency - whereby the posterior for is degenerate as at the true - is invariant to the choice of the ( positive definite ) the demonstration follows directly from the same arguments used to prove consistency of the ii estimator ( see , for e.g. gouriroux and monfort , 1996 , appendix 4a.1 ) , with the following regularity conditions required : ( a1 ) : : lim , uniformly in , where is a deterministic limit function .( a2 ) : : has a unique maximum with respect to ( a3 ) : : the equation admits a unique solution in , for all . under these conditions it follows that hence , in the abc context , in which the generic parameter represents a draw from the prior , , we see that as the choice criterion in ( [ dist_mle ] ) approaches ^{\prime}\mathbf{\omega}\left [ \mathbf{b}(\mathbf{\phi}_{0}\mathbf{)-b}(\mathbf{\phi}^{i}\mathbf{)}\right ] } \leq\varepsilon.\ ] ] as , being the relevant addition limiting condition required in the abc setting , we see that irrespective of the form of , the only values of that will be selected and , hence , be used to construct an estimate of the posterior distribution , are values such that given the assumption of the uniqueness of the solution of for ( or ) , if and only if hence , abc produces draws that produce a degenerate distribution at the true parameter , as required by the bayesian consistency property .once again , this is despite the fact that asymptotic sufficiency will not be achieved in the typical case in which the approximating model is in error , a result that is analogous to the frequentist finding of consistency for the ii estimator , without full ( cramer rao ) efficiency obtaining .we conclude this section by citing related work in hidden markov models ( e.g. yildirim _ et al . , _ 2013 ; dean _ et al ._ , 2014 ) , in which abc principles that avoid summarization have been advocated . 
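A minimal sketch of rejection ABC with the auxiliary-model MLE as the matching statistic, using the quadratic criterion in ( [ dist_mle ] ). The callables `prior_draw`, `simulate` and `aux_mle` are placeholders for model-specific code, and the tolerance is set as an empirical quantile of the simulated distances, which is one common practical choice rather than the paper's prescription.

```python
import numpy as np

def abc_rejection_mle(y_obs, prior_draw, simulate, aux_mle,
                      n_draws, quantile=0.01, weight=None):
    """Rejection ABC: accept draws whose auxiliary-model MLE (computed from the
    simulated pseudo-data) is close to the MLE computed from the observed data."""
    beta_obs = np.atleast_1d(aux_mle(y_obs))
    W = np.eye(beta_obs.size) if weight is None else weight   # choice of Omega
    draws, dists = [], []
    for _ in range(n_draws):
        phi = prior_draw()                      # draw from the prior
        beta_sim = np.atleast_1d(aux_mle(simulate(phi)))
        d = beta_obs - beta_sim
        draws.append(phi)
        dists.append(np.sqrt(d @ W @ d))        # criterion in ( [ dist_mle ] )
    dists = np.asarray(dists)
    eps = np.quantile(dists, quantile)          # tolerance as an empirical quantile
    keep = dists <= eps
    return [p for p, k in zip(draws, keep) if k], eps
```

With `weight` left at the identity this is plain Euclidean matching of the auxiliary MLEs; the sandwich or inverse-covariance choices mentioned above slot in through that argument.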
specifically , the difference between the observed data and the simulated pseudo - data is operated time step by time step , as in .this form of abc approximation also allows for the derivation of consistency properties ( in the number of observations ) of the abc estimates .in particular , using such a distance in the algorithm allows for the approximation to converge to the genuine posterior when the tolerance goes to zero .one problem with this approach , however , is that the acceptance rate decreases quickly with , unless is increasing with .( 2014 ) provide some solutions here , but within an observation - driven model context only .finally , looking at the application of this form of abc approximation in a particle mcmc ( pmcmc ) setting , jasra _ et al . _( 2013 ) and martin _ et al . _( 2014 ) establish convergence ( to the exact posterior ) , in connection with the alive particle filter ( le gland and oudjane , 2006 ) . with large computational gains , in ( [ distance ] )can be defined using the score of the auxiliary model .that is , the score vector associated with the approximating model , when evaluated at the simulated data , and with substituted for , will be closer to zero the ` closer ' is the simulated data to the true .hence , the choice criterion in ( [ distance ] ) for an abc algorithm can be based on , where yielding ^{\prime}\mathbf{\sigma}\left [ \mathbf{s}(\mathbf{z}^{i};\widehat{\mathbf{\beta}}(\mathbf{y)})\right ] } \leq\varepsilon , \label{dist_score}\ ] ] where denotes an arbitrary positive definite weighting matrix .implementation of abc via ( [ dist_score ] ) is faster ( by many orders of magnitude ) than the approach based upon , due to the fact that maximization of the approximating model is required only once , in order to produce from the observed data all other calculations involve simply the _ _ evaluation _ _ of at the simulated data , with a numerical differentiation technique invoked to specify once again in line with the proof of the consistency of the relevant frequentist ( emm ) estimator , the bayesian consistency result in section [ theory ] could be re - written in terms , upon the addition of a differentiability condition regarding and the assumption that is the unique solution to the limiting first - order condition, and the convergence is uniform in . in brief , given that , as the choice criterion in ( [ dist_score ] ) approaches ^{\prime}\mathbf{\sigma}\left [ \partial l_{\infty}(\mathbf{\phi}^{i};\mathbf{b}(\mathbf{\phi}_{0}\mathbf{)})/\partial\mathbf{\beta}\right ] } \leq\varepsilon.\ ] ] as , irrespective of the form of , the only values of that will be selected via abc are values such that , which , given assumption ( a3 ) , implies hence , bayesian consistency is maintained through the use of the score .however , a remaining pertinent question concerns the impact on sufficiency ( or , more precisely , on _ the proximity to asymptotic sufficiency _ ) of the use of the score instead of the mle . in practical termsthis question can be re - phrased as : does the selection criterion based on yield identical draws of to those yielded by the selection criterion based on ? if the answer is yes then , unambiguously , for large enough and for , the score- and mle - based abc criteria will yield equivalent estimates of the exact posterior , with the accuracy of those ( equivalent ) estimates dependent , of course , on the nature of the auxiliary model itself . 
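A sketch of the score-based selection criterion in ( [ dist_score ] ): the auxiliary MLE is computed once from the observed data, and each parameter draw then requires only the score of the auxiliary log-likelihood, evaluated at the simulated data with that MLE plugged in. The central-difference gradient below stands in for whatever numerical differentiation scheme is actually used; `aux_loglik` is a placeholder for the auxiliary model's log-likelihood.

```python
import numpy as np

def num_score(aux_loglik, data, beta, h=1e-5):
    """Numerical gradient of the auxiliary log-likelihood with respect to beta,
    evaluated at the (simulated) data with the observed-data MLE plugged in."""
    g = np.zeros_like(beta, dtype=float)
    for j in range(beta.size):
        bp, bm = beta.copy(), beta.copy()
        bp[j] += h
        bm[j] -= h
        g[j] = (aux_loglik(data, bp) - aux_loglik(data, bm)) / (2.0 * h)
    return g

def score_distance(z_sim, beta_hat_obs, aux_loglik, Sigma):
    """Quadratic-form distance in the score, as in ( [ dist_score ] );
    beta_hat_obs is computed once, outside the ABC loop."""
    s = num_score(aux_loglik, z_sim, beta_hat_obs)
    return np.sqrt(s @ Sigma @ s)
```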
for any auxiliary model ( satisfying identification and regularity conditions ) with unknown parameter vector , we expand the score function in ( [ score ] ) , evaluated at , around the point ( with scaling via having been introduced at the outset in the definition of the score in ( [ score ] ) ) , = \mathbf{d}\left [ \widehat{\mathbf{\beta}}(\mathbf{y)}-\widehat{\mathbf{\beta}}(\mathbf{z}^{i}\mathbf{)}\right ] , \ ] ] where and denotes an ( unknown ) intermediate value between and .hence , the ( scaled ) criterion in ( [ dist_score ] ) becomes ^{\prime}\mathbf{\sigma}\left [ \mathbf{s}(\mathbf{z}^{i};\widehat{\mathbf{\beta}}(\mathbf{y)})\right ] } \nonumber\\ & = \sqrt{\left [ \widehat{\mathbf{\beta}}(\mathbf{y)}-\widehat{\mathbf{\beta}}(\mathbf{z}^{i}\mathbf{)}\right ] ^{\prime}\mathbf{d}^{\prime}\mathbf{\sigma d}\left [ \widehat{\mathbf{\beta}}(\mathbf{y)}-\widehat{\mathbf{\beta}}(\mathbf{z}^{i}\mathbf{)}\right ] ^{\prime}}\leq\varepsilon.\label{new_score}\ ] ] subject to standard conditions regarding the second derivatives of the auxiliary model , the matrix in ( [ d ] ) will be of full rank and as , some positive definite matrix ( given the positive definiteness of ) that is some function of hence , whilst for any , the presence of affects selection , as it is a function of the drawn value ( through ) , as , will be selected via ( [ new_score ] ) if and only if and are equal .similarly , irrespective of the form of the ( positive definite ) weighting matrix in ( [ dist_mle ] ) , the mle criterion will produce these same selections .this result pertains no matter what the dimension of relative to , i.e. no matter whether the true parameters are exactly or over - identified by the parameters of the auxiliary model .this result thus goes beyond the comparable result regarding the ii / emm estimators ( see , for e.g. gouriroux and monfort , 1996 ) , in that the equivalence is independent of the form of weighting matrix used _ and _ the form of identification that prevails . of course in practice ,abc is implemented with at which point the two abc criteria will produce different draws .however , for the models entertained in this paper , preliminary investigation has assured us that the difference between the abc estimates of the posteriors yielded by the alternative criteria is negligible for small enough hence , we proceed to operate solely with the score - based approach as the computationally feasible method of extracting approximate asymptotic sufficiency in the state space setting .the actual selection and optimization of the tolerance level , , has been the subject of intense scrutiny in the recent years ( see , for example , marin _ et al_. 
, 2011 for a detailed survey ) .what appears to be the most fruitful path to the calibration of the tolerance is to firmly set it within the realm of non - parametric statistics ( blum and franois , 2010 ) as this provides proper convergence rates for the tolerance ( rates that differ between standard and noisy abc ; see fearnhead and prangle , 2012 ) and shows that the optimal value stays away from zero for a given sample size .in addition , the practical constraints imposed by finite computing time and the necessity to produce an abc sample of reasonable length lead us to follow the recommendations found in biau _( 2012 ) , namely to analyze the abc approximation as a k - nearest neighbour technique and to exploit this perspective to derive a practical value for the tolerance .an abc algorithm induces two forms of approximation error .firstly , and most fundamentally , the use of a vector of summary statistics to define the selection criterion in ( [ distance ] ) means that a simulation - based estimate of the posterior of interest is the outcome of the exercise .only if is sufficient for is equivalent to the exact posterior otherwise the exact posterior is necessarily estimated with error because of the analytical difference between the exact density and the _ partial _ posterior density .secondly , the partial posterior density itself , will be estimated with simulation error . critically , as highlighted by blum ( 2010b ), the accuracy of the simulation - based estimate of will be less , all other things given , the larger the dimension of .this ` curse of dimensionality ' obtains even when the parameter is a scalar , and relates solely to the dimension of as elaborated on further by nott _( 2014 ) , this problem is exacerbated as the dimension of itself increases , firstly because an increase in the dimension of brings with it a concurrent need for an increase in the dimension of and , secondly , because the need to estimate a multi - dimensional density ( for ) brings with it its own problems related to dimension . as a potential solution to the inaccuracy induced by the dimensionality of the problem , nott _( 2014 ) suggest allocating ( via certain criteria ) a subset of the full set of summary statistics to each element of , using kernel density techniques to estimate each marginal density , and then using standard techniques to retrieve a more accurate estimate of the joint posterior , if required .however , the remaining problem associated with the ( possibly still high ) dimension of each , in addition to the very problem of defining an appropriate set for each , remains unresolved .( 2013 ) for further elaboration on the dimensionality issue in abc and a review of current approaches for dealing with the problem .the principle advocated in this paper is to exploit the information content in the mle of the parameters of an auxiliary model , , to yield ` approximate ' asymptotic sufficiency for . within this framework, the dimension of determines the dimension of and the curse of dimensionality thus prevails for high - dimensional however , in this case a solution is available , at least when the dimensions of and are equivalent and there is a one - to - one match between the elements of the two parameter vectors .this is clearly so for the two cases tackled in this paper . 
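The k-nearest-neighbour reading of the tolerance mentioned at the start of this section amounts to retaining the k draws whose summaries lie closest to the observed ones, with the implied tolerance equal to the k-th smallest distance. The rule for choosing k (e.g. as a power of the number of simulations, following the analysis cited above) is not fixed here; k is simply an input in this sketch.

```python
import numpy as np

def knn_abc_select(distances, draws, k):
    """Keep the k draws with the smallest summary-statistic distance; the implied
    tolerance (returned second) stays strictly positive for any finite budget."""
    distances = np.asarray(distances)
    idx = np.argsort(distances)[:k]
    return [draws[i] for i in idx], distances[idx[-1]]
```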
in the linear gaussian model investigated as case 1 )the auxiliary model coincides exactly with the true model , in which case in the stochastic volatility ssm investigated as case 2 ) , we produce an auxiliary model by discretizing the latent diffusion ( and evaluating the resultant likelihood via the aukf ) ; hence , and there is a natural one - to - one mapping between the parameters of the ( true ) continuous time and ( approximating ) discretized models . in both of these examples then , marginalizing the auxiliary likelihood function with respect to all parameters other than and then producing the score of this function with respect to ( as evaluated at the marginal mle from the observed data , ) , yields , by construction , an obvious _ scalar _statistic for use in selecting draws of and , hence , a method for estimating if the marginal posteriors only are of interest , then all marginals can be estimated in this way , with applications of -dimensional integration required at each step within abc to produce the relevant score statistics .importantly , we do not claim here that the ` proximity ' to sufficiency ( for ) of the vector statistic , translates into an equivalent relationship between the score of the marginalized ( auxiliary ) likelihood function and the corresponding scalar parameter , nor that the associated product density is coherent with a joint probability distribution . if the joint posterior ( of the full vector ) is of particular interest , the sort of techniques advocated by nott et al .( 2014 ) , amongst others , can be used to yield joint inference from the estimated marginals . in section [ assess ]we explore the benefits of marginalization , in addition to the increase in accuracy yielded by using a score - based abc method ( either joint or marginal ) rather than an abc algorithm based on a more _ ad hoc _ choice of summary statistics . however , prior to that we provide details in the following section of the form of auxiliary model advocated for the non - linear ssm case .when the ssm defined in ( [ meas_gen ] ) and ( [ state_gen ] ) is analytically intractable , an approximating model is needed to drive the score - based abc technique . in the canonical non - linear examplebeing emphasized in the paper , in which either or ( or both ) is driven by a continuous time process , this approximation begins with the specification of a discretized version of ( [ meas_gen ] ) and ( [ state_gen ] ) , expressed generically using a regression formulation as: for , where the and are assumed to be sequences of random variables .this formulation is general enough to include , for example , independent random jump components in either the measurement or state equations ( subsumed under and respectively ) , but does exclude cases where the nature of the model is such that a regression formulation in discrete time is not feasible .we note here that in producing ( [ discrete_meas ] ) and ( [ discrete_state ] ) either the observation or the state variable , or both , may need to be transformed ; however , for notational simplicity we continue to use the same symbols , and , as are used to denote the variables in the true model in ( [ meas_gen ] ) and ( [ state_gen ] ) .the nature of the discretization affects the functional form of and in section [ hest ] we illustrate the method using a model in which the true model for is already expressed in discrete time ( i.e. 
there is no discretization error via the measurement process ) and in which a ( first - order ) euler process is initially used to approximate a square root diffusion model for the state . the log - likelihood function associated with the approximate model in ( [ discrete_meas ] ) and ( [ discrete_state ] ) is defined by where only if the approximate model is linear and gaussian or the state variable is discrete on a finite support , are the components used to define ( [ approx_like ] ) available in closed form .given the nature of the problem we are tackling here , namely one in which ( [ discrete_meas ] ) and ( [ discrete_state ] ) are produced as discrete approximations to a continuous time model ( or , indeed , one in which ( [ discrete_meas ] ) and ( [ discrete_state ] ) represent an initial discrete - time formulation that is non - linear and/or non - gaussian ) , it is reasonable to assume that ( [ approx_like ] ) can not be evaluated exactly .whilst several methods ( see , e.g. simon , 2006 ) are available to approximate including , indeed , simulation - based methods such as particle filtering and smc , the fact that this computation is to be _ embedded _ within the abc algorithm makes it essential that the technique is both fast and numerically stable . _ _ _ _ the aukf satisfies these criteria and , hence , is our method of choice for this illustration . in brief , the unscented kalman filter ( ukf ) is based on the theory of unscented transformations , which is a method for calculating the moments of a non - linear transformation of a random variable .it involves selecting a set of points on the support of the random variable , called _ sigma points _ , according to a predetermined and deterministic criterion .these sigma points yield , in turn , a ` cloud ' of transformed points through the non - linear function , which are then used to produce approximations to the moments of the state variable used in implementing the kf , via simple weighted sums of the transformed sigma points .( see haykin , 2001 , chapter 7 , for an excellent introduction . )the _ _ augmented v__ersion of the ukf - the aukf - generalizes the filter to the case where the state and measurement errors are non - additive , applying the principles of unscented transformations to the _ augmented _ state vector the computational burden of the filter is thus minimal - comprising the calculation of updated sigma points for the time varying state at each , the computation of the relevant means and variances ( for both and ) using simple weighted sums , and the use of the usual kf up - dating equations .details of the specification of the sigma points , plus the steps involved in estimating ( [ approx_like ] ) are provided in the appendix . in implementing the score - based abc method in a non - linear continuous time setting , there are two aspects of the approximation used therein to consider : 1 ) the accuracy of the discretization ; and 2 ) the accuracy with which the likelihood function of the approximate ( discretized ) model is evaluated via the aukf and , hence , the accuracy of the resultant estimate of the mle . in addressing the first aspect ,one has access to the existing literature in which various discretized versions of continuous time models are derived , to different orders of accuracy . with regard to the accuracy of the aukf evaluation of the likelihood function, we make reference to relevant results ( see , e.g. 
haykin , 2001 ) that document the higher - order accuracy ( relative , say , to the extended kf ) of the aukf - based estimates of the mean and variance of the filtered and predictive ( for both state and observed ) distributions. we also advocate here using any transformation of the measurement ( or state ) equation that renders the gaussian approximations invoked by the aukf more likely to be accurate .however , beyond that , the accuracy of the gaussian assumption embedded in the aukf - based likelihood estimate is case - specific . on a first - order euler discretization only . ]we now undertake a numerical exercise in which the accuracy of the score - based methods of abc ( both joint and marginal ) is compared with that of abc methods based on a set of summary statistics that are chosen without reference to a model .we conduct this exercise firstly within the context of the linear gaussian model , in which case the exact score is accessible via the kf .the results here thus provide evidence on two points of interest : 1 ) whether or not accuracy can be increased by accessing the asymptotic sufficiency of the ( exact ) mle in an abc treatment of a state space model ( compared to the use of other summary statistics ) ; and 2 ) whether or not the curse of dimensionality can be obviated via marginalization , or integration , of the exact likelihood . in section [ hest ]we then assess accuracy in the typical setting in which an approximating model is used to generate the score .the square root volatility model is used as the example , with the approximate score produced by evaluating the likelihood function of a discretized version of that model using the aukf . whilst recognizing that the results relate to one particular model and approximation thereof ,they do , nevertheless , serve to illustrate that score - based methods can dominate the summary statistic - based techniques ( overall ) , even when the approximating model used is not particularly accurate . in this sectionwe simulate a sample of size from the linear gaussian ( lg ) model in ( [ measlg ] ) and ( [ statelg ] ) , based on the parameter settings : , and , with the three - dimensional parameter to be estimated . the value of the measurement error variance , , is set in order to fix the sn ratio .we compare the performance of the ( joint and marginal ) score - based techniques with that of more conventional abc methods based on summary statistics that may be deemed to be a sensible choice in this setting . 
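For the linear Gaussian model the exact likelihood, and hence the exact MLE and score used by the score-based comparators, is available from the Kalman filter. The sketch below assumes the common parameterisation y_t = alpha_t + e_t, alpha_t = phi*alpha_{t-1} + v_t with |phi| < 1, since the stripped symbols of ( [ measlg ] ) and ( [ statelg ] ) are not reproduced in the text.

```python
import numpy as np

def kf_loglik(y, phi, sig2_v, sig2_e):
    """Exact log-likelihood of the linear Gaussian SSM via the Kalman filter
    (prediction-error decomposition), assuming a stationary state (|phi| < 1)."""
    a, p = 0.0, sig2_v / (1.0 - phi**2)   # stationary prior for the state
    ll = 0.0
    for yt in y:
        f = p + sig2_e                    # prediction-error variance
        v = yt - a                        # one-step-ahead prediction error
        ll += -0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p / f                         # Kalman gain
        a_filt, p_filt = a + k * v, p * (1.0 - k)
        a = phi * a_filt                  # predict the next state
        p = phi**2 * p_filt + sig2_v
    return ll
```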
given the relationship between the lg model and an observable ar(1 ) process , it seems sensible to propose a set of summary statistics that are sufficient for the latter , as given in ( [ ar1_summ_stats ] ) .two forms of distances are used .firstly , we apply the conventional euclidean distance , with each summary statistic also weighted by the inverse of the variance of the values of the statistic across the abc draws .that is , we define ^{1/2}\label{euclid}\ ] ] for abc iteration , where is the variance ( across ) of the , and is the observed value of the statistic .secondly , we use a distance measure proposed in fearnhead and prangle ( 2012 ) which , as made explicit in blum _( 2013 ) , is a form of dimension reduction method .we explain this briefly as follows .given the vector of observations , the set of summary statistics in ( [ ar1_summ_stats ] ) are used to produce an estimate of , which , in turn , is used as the summary statistic in a subsequent abc algorithm .the steps of the procedure ( as modified for this context ) described for selection of the scalar parameter , are as follows : 1 .simulate , , from and , subsequently , simulate from ( [ vol ] ) using the exact transitions , and pseudo data , using the conditional gaussian form of 2 . for , , calculate ^{\prime}\label{fp_stats}\ ] ] 3 .define {cccc}1 & 1 & \cdots & 1\\ \mathbf{s}^{1 } & \mathbf{s}^{2 } & \cdots & \mathbf{s}^{r}\end{array } \right ] ^{\prime} ] and is of dimension 4 .use ols to estimate ] 5 .define: where denotes the vector of summary statistics in ( [ fp_stats ] ) calculated from the vector of observed returns , and use: as the selection criterion for at each iteration .the joint score - based method uses the distance measure in ( [ dist_score ] ) , but with the score in this case computed from the _ exact _ model , evaluated using the kf .the weighting matrix is set equal to the hessian - based estimate of the covariance matrix of the ( joint ) mle estimator of , evaluated at the mle computed from the observed data , the marginal score - based method for estimating the marginal posterior for the element of , , is based on the distance where and is produced by integrating ( numerically ) the exact likelihood function ( evaluated via the kf ) with respect to all parameters other than , and taking the logarithm .we produce results that compare the performance of the four different methods : the joint score - based abc ( ` abc - joint score ' in all figures ) ; the marginal score - based abc ( ` abc - marg score ' ) ; the summary statistic - based abc using the euclidean metric in ( [ euclid ] ) ( ` abc - summ stats ' ) ; and the approach of fearnhead and prangle ( 2012 ) based on the metric in ( [ fp ] ) ( ` abc - fp ' ) .marginal density estimates are produced initially for a single run of abc , based on 50,000 replications of the accept / reject algorithm detailed in section 2.1 , and with defined as the 5th percentile of the 50,000 draws .the true data is generated from a process in which the sn ratio is high ( i.e. /\sigma_{e}^{2}=20 ] and the transition density for , conditional on , is where , , , , and is the modified bessel function of the first kind of order the conditional distribution function is non - central chi - square , , with degrees of freedom and non - centrality parameter . 
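A compact rendering of the Fearnhead and Prangle (2012) dimension-reduction step described above: the parameter draws are regressed on the candidate summary statistics by OLS, and the fitted linear combination is then used as a scalar summary for that parameter in the second-stage ABC comparison. Variable names are illustrative.

```python
import numpy as np

def fp_summary_weights(phi_draws, S):
    """OLS regression of the R parameter draws on the R x d matrix of candidate
    summaries (with an intercept); the fitted values approximate E[phi | s]."""
    R = S.shape[0]
    Z = np.column_stack([np.ones(R), S])          # design matrix as in step 3
    coef, *_ = np.linalg.lstsq(Z, phi_draws, rcond=None)
    return coef                                   # intercept plus weights

def fp_summary(coef, s):
    """Scalar summary b0 + s'b used in the second-stage selection criterion."""
    return coef[0] + np.asarray(s) @ coef[1:]
```

The second-stage distance is then simply the absolute difference between `fp_summary` evaluated at the observed and at the simulated statistics.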
the discrete - time model for in ( [ return ] )can be viewed as a discretized version of a diffusion process for returns ( or returns can be viewed as being _inherently _ discretely observed ) whilst we retain the diffusion model for the latent variance .that is , we eschew the discretization of the variance process that would typically be used , with simulation of in step 2 of the abc algorithm occurring exactly , through the treatment of the random variable as a composition of central and variables . for the purpose of this illustration we set the parameters in ( [ vol ] ) to values that produce simulated values of both and that match the characteristics of ( respectively ) daily returns and daily values of realized volatility ( constructed from 5 minute returns ) for the s&p500 stock index over the 2003 - 2004 period ,namely this relatively calm period in the stock market is deliberately chosen as a reference point , as the inclusion of price and volatility jumps , and/or a non - gaussian conditional distribution in the model would be an empirical necessity for any more volatile period , such as that witnessed during the recent 2008/2009 financial crisis . in order to implement the score - based abc method , using the aukf algorithm to evaluate an auxiliary likelihood ,we invoke the following discretization , based on ( an exact ) transformation of the measurement equation and an euler approximation of the state equation, where is treated as a truncated gaussian variable with lower bound, directing the reader to the appendix for the detailed outline of the aukf approach , we note that sigma points that span the support of are defined by calculating and using deterministic integration and the closed form of with the specification of adopted for convenience .those for , are defined as: where and respectively denote the mean and variance of the relevant distribution of ( marginal , filtered or predictive , depending on the particular step in the aukf algorithm ) .the sigma points for ( [ v_trunc ] ) are then defined using the mean and variance of the truncated normal distribution , with the value of in ( [ v_trunc ] ) represented using the relevant sigma point for , and ( with reference to the appendix ) specified .in order to evaluate the accuracy of the estimate of the posterior produced using the abc method , we produce the exact joint posterior distribution for via the deterministic non - linear filtering method of ng _ et al . _( 2013 ) . in brief, this method represents the recursive filtering and prediction distributions used to define the likelihood function as the numerical solutions of integrals defined over the support of in ( [ trans ] ) , with deterministic integration used to evaluate the relevant integrals , and the _ exact _ transitions in ( [ non - central ] ) used in the specification of the filtering and up - dating steps .whilst lacking the general applicability of the abc - based method proposed here , this deterministic filtering method is ideal for the particular model used in this illustration , and can be viewed as producing a very accurate estimate of the exact density , without any of the simulation error that would be associated with an mcmc - based comparator , for instance .we refer the reader to ng _ et al ._ for more details of the technique ; see also kitagawa ( 1987 ) . 
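Two pieces of the variance dynamics described above, sketched under generic parameter names (kappa, theta, sigma): exact simulation of the square-root process through its noncentral chi-square transition, as used when generating pseudo-data in the ABC algorithm, and the first-order Euler scheme truncated at zero that underlies the auxiliary (discretized) state equation. The truncation rule and the illustrative parameter values are assumptions, not the paper's calibration.

```python
import numpy as np

def simulate_sqrt_vol_exact(v0, kappa, theta, sigma, dt, n, rng):
    """Exact simulation via the known transition: V_{t+dt} | V_t = c * X with
    X ~ noncentral chi-square(df, lam), c = sigma^2 (1 - e^{-kappa dt}) / (4 kappa),
    df = 4 kappa theta / sigma^2, lam = V_t e^{-kappa dt} / c."""
    c = sigma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)
    df = 4.0 * kappa * theta / sigma**2
    v = np.empty(n + 1)
    v[0] = v0
    for t in range(n):
        lam = v[t] * np.exp(-kappa * dt) / c
        v[t + 1] = c * rng.noncentral_chisquare(df, lam)
    return v

def simulate_sqrt_vol_euler(v0, kappa, theta, sigma, dt, n, rng):
    """First-order Euler approximation of the same process, truncated at zero so the
    square root stays real (the paper's handling of the boundary may differ)."""
    v = np.empty(n + 1)
    v[0] = v0
    for t in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt
                       + sigma * np.sqrt(max(v[t], 0.0)) * dw, 0.0)
    return v

rng = np.random.default_rng(0)
v_exact = simulate_sqrt_vol_exact(0.04, 3.0, 0.04, 0.4, 1.0 / 252, 252, rng)
v_euler = simulate_sqrt_vol_euler(0.04, 3.0, 0.04, 0.4, 1.0 / 252, 252, rng)
```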
in the current setting, in which the model is specified parametrically, the known form of the transition distribution is used directly in the evaluation of the relevant integrals. we refer the reader to section 2.2 of that paper for a full description of the algorithm. preliminary experimentation with the number of grid points used in the deterministic integration was undertaken in order to ensure that the resulting estimate of the likelihood function/posterior stabilized, with 100 grid points underlying the final results documented here. the likelihood function, evaluated via this method, is then multiplied by a uniform prior that imposes the stated restrictions on the parameters. the three marginal posteriors are then produced via deterministic numerical integration (over the parameter space), with a very fine grid being used to ensure accuracy. we compare the auxiliary model-based abc technique with abc approaches based on summary statistics, as discussed in the linear gaussian context in section [ lg ]. for want of a better choice we use the vector of statistics given attention therein, as well as the two alternative distance measures described there. we also compute marginal posterior densities estimated using the aukf-based approximation (of the likelihood) itself. that is, using the aukf to evaluate the likelihood function associated with the discretized model in ( [ trans ] ) and ( [ v_state ] ) and normalizing (using a uniform prior) produces an approximation of the posterior which can, in principle, be invoked as an approximation in its own right, independently of its subsequent use as a score generator within an abc algorithm. finally, we compute the marginal posteriors (again, based on a uniform prior) using the likelihood function of the euler approximation in ( [ trans ] ) and ( [ v_state ] ) evaluated using the ng _et al._ (2013) filtering method, with gaussian transitions used in the filtering and updating steps. when normalized, this density can be viewed as the quantity that a typical mcmc scheme (as based on the equivalent prior) would be targeting, given that the tractability of gaussian approximations to the transitions would typically be exploited in structuring an mcmc algorithm. as a concluding note on computational matters, we reiterate that the time taken to evaluate the aukf-based approximate likelihood function at any point in the parameter space is roughly comparable to that required for kf evaluation, thus rendering it a feasible method to be inserted within the abc algorithm. in contrast, evaluation of the euler-based likelihood via the ng _et al._ (2013) technique, whilst producing, in the main (as will be seen in the following section), a more accurate estimate of the exact posterior than the aukf method, is many orders of magnitude slower and, hence, simply infeasible as a score generator within abc.
in order to abstract initially from the impact of dimensionality on the abc methods , we first report results for each single parameter of the heston model , keeping the remaining two parameters fixed at their true values .three abc - based estimates of the relevant exact ( univariate ) posterior , invoking a uniform prior , are produced in this instance .three matching statistics are used , respectively : 1 ) the ( uni - dimensional ) auxiliary score based on the approximating model ( abc - score ) ; 2 ) the summary statistics in ( [ ar1_summ_stats ] ) , matched via the euclidean distance measure in ( [ euclid ] ) ( abc - summ stats ) ; and 3 ) the summary statistics in ( [ ar1_summ_stats ] ) , matched via the fp distance measure in ( [ fp ] ) ( abc - fp ) .we produce representative posterior ( estimates ) in each case , to give some visual idea of the accuracy ( or otherwise ) that is achievable via the abc methods .we then summarize accuracy by reporting the average ( over the 100 runs ) of the root mean squared error ( rmse ) of each abc - based estimate of the exact posterior for a given parameter , computed as: where is the ordinate of the abc density estimate and the ordinate of the exact posterior density , at the grid - point used to produce the plots .all single parameter results are documented in panel a of table 1 .we also tabulate there , as benchmarks of a sort , the rmses associated with the ( one - off ) aukf- and euler - based approximations of each univariate density .sq_uni_paper_graph__5.pdf[sq_fig ] figure [ sq_fig ] , panel a reproduces the exact posterior of ( the single unknown parameter ) , the posteriors associated with the aukf- and euler - based approximations , and the three abc - based estimates . as is clear , the aukf - based approximation is reasonably inaccurate , in terms of replicating the location and shape of the exact posterior - an observation that is interesting in its own right , given the potential for such a simple and computationally efficient approximation method to be used to evaluate likelihood functions ( and posterior distributions ) in non - linear state space models such as the one under consideration .however , once the approximation is embedded within an abc scheme , in the manner described in section [ model ] , the situation is altogether different , with the ( pink ) dotted line ( denoted by ` abc - score ' in the key ) providing a remarkably accurate estimate of the exact posterior , using only 50,000 replications of the simplest rejection - based abc algorithm , and fifteen minutes of computing time on a desktop computer .it is worth noting that the abc - based estimate is also more accurate in this case than the euler approximation , where we highlight , once again , that production of the latter still requires the application of the much more computationally burdensome non - linear filtering method .most notably , the abc method based on the summary statistics , combined using a euclidean distance measure , performs very badly , although the dimensional reduction technique of fearnhead and prangle ( 2012 ) , applied to this same set of summary statistics , yields a reasonable estimate of the exact posterior in this instance .comparable graphs are produced for the single parameters and in panels b and c respectively of figure [ sq_fig ] , with the remaining pairs of parameters ( and , and and respectively ) held fixed at their true values . 
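The accuracy measure used throughout the results is simply the root mean squared difference between an estimated and the exact marginal posterior evaluated over a common grid of ordinates, e.g.:

```python
import numpy as np

def density_rmse(p_est, p_exact):
    """RMSE between two density estimates evaluated on the same grid of points,
    matching the accuracy criterion described in the text."""
    p_est, p_exact = np.asarray(p_est), np.asarray(p_exact)
    return np.sqrt(np.mean((p_est - p_exact) ** 2))
```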
in the case of ,the score - based method arguably provides the best representation of the shape of the exact posterior , despite being slightly inaccurate in terms of location .the fearnhead and prangle ( 2012 ) method also provides a reasonable estimate , whilst the summary statistic approach using the euclidean distance , once again performs very poorly . for the parameter ,_ only _ the score based method yields a density with a well - defined shape , with the two summary statistic - based techniques essentially producing uniform densities that reflect little more than the restricted support imposed on the parameter draws .interestingly , the euler approximation itself provides a quite poor representation of the exact marginal , a result which has not , as far as we know , been remarked upon in the literature , given that a typical mcmc scheme ( as noted above ) would in fact be targeting the euler density itself , as the best representation of the true model - the exact posterior remaining unaccessed due to the difficulty of devising an effective mcmc scheme which uses the exact transitions . for neither nor is the aukf approximation _ itself _ particularly accurate , despite the fact that it respects the non - linearity in the true state space model .the rmse results recorded in panel a of table 1 confirm the qualitative nature of the single - run graphical results . for and ,all three abc - based estimates are seen to produce lower rmse values ( sometimes an order of magnitude lower ) than the aukf approximation , indicating that _ any _ of the abc procedures would yield gains over the use of the unscented filtering method itself . for ,the aukf approximation is better than that of the summary statistic - based abc estimate , but in part because the latter is so poor . for and ,the score - based abc method is the most accurate , and is most notably _ very _ precise for the case of the persistence parameter . for the parameter the fp method is the most accurate according to this measure although , as indicated by the nature of the graphs in panel b of figure [ sq_fig ] , this result does tend to understate the ability of the score - based method to capture the basic shape of the exact posterior . for and ,the ( euclidean ) summary - statistic method is an order of magnitude more inaccurate than the other two abc methods , whilst also exhibiting no ability to identify the shape of the true posterior in the case of once again as is consistent with the graphs in figure [ sq_fig ] , the euler approximation for is reasonably accurate , but not as accurate as the score - based abc estimate . for and the euler approximation , whilst being more accurate that the aukf approximation , is dominated by all three abc estimates .[ table1 ] table 1 : rmse of an estimated marginal and the exact marginal : average rmse value over multiple runs of abc using 50,000 replications .` score ' refers to the abc method based on the score of the aukf model ; ` ss ' refers to the abc method based on a euclidean distance for the summary statistics in ( [ ar1_summ_stats ] ) ; ` fp ' refers to the fearnhead and prangle abc method , based on the summary statistics in ( [ ar1_summ_stats ] ) . 
for the single parameter case ,the ( single ) score method is documented in the row denoted by ` abc - marginal score ' , whilst in the multi - parameter case , there are results for both the joint and marginal score methods .the rmse of the aufk and euler approximations ( computed once only , using the observed data ) are recorded as benchmarks , in the top two rows of each panel . for the single and dual parameter cases ,100 runs of abc were used to produce the results , whilst for the three parameter case , 50 runs were used . the smallest rmse figure in each columnis highlighted in bold .[ c]lllc|l|l|ll|lll & & & & + & & & & + & & & & & & & & & & + approximate density & & & & & & & & & & + & & & & & & & & & & + aukf & & & 0.0935 & 0.0529 & 0.0370 & 0.0185 & 0.0798 & 0.0201 & 0.0862 & 0.0287 + euler & & 0.0434 & 0.0664 & 0.0072 & 0.0308 & 0.0175 & 0.0818 & 0.0120 & 0.0459 & 0.0242 + abc - joint score & & & - & 0.0054 & * 0.0217 * & 0.0101 & 0.0441 & * 0.0063 * & * 0.0124 * & * 0.0166 * + abc - marginal score & & & * 0.0392 * & * 0.0045 * & 0.0219 & * 0.0048 * & 0.0480 & 0.0085 & 0.0381 & 0.0167 + abc - ss & & & 0.0427 & 0.0310 & 0.0234 & 0.0119 & * 0.0312 * & 0.0109 & 0.0389 & 0.0170 + abc - fp & & & 0.0431 & 0.0145 & 0.0233 & 0.0093 & 0.0358 & 0.0124 & 0.0407 & 0.0168 + & & & & & & & & & & + in panels b and c respectively of table 1 , we record all rmse results for the case when two , then all three parameters are unknown , with a view to gauging the relative performance of the abc methods when multiple matches are required . in these multiple parameter cases , a preliminary run of abc , based on uniform priorsdefined over the domains defined above , has been used to determine the high mass region of the joint posterior .this information has been used to further truncate the priors in a subsequent abc run , and final marginal posterior estimates then produced .the results recorded in panels b and c highlight that when two parameters are unknown ( either and or and ) , the score - based abc method produces the most accurate density estimates in three of the four cases .marginalization produces an improvement in accuracy for for the other two parameters marginalization does not yield an increase in accuracy ; however , the differences between the joint and marginal score estimates are minimal . only in one case ( as pertains to ) does an abc method based on summary statistics outperform the score - based methods . 
in all four cases ,the aukf estimate is inferior to all other comparators , and the euler approximation also inferior to all abc - based estimates in three of the four cases .as is seen in panel d , when all three parameters are to be estimated , the score - based abc estimates remain the most accurate , with the joint score method superior overall and yielding notably improvements in accuracy over both the aukf and euler approximations .this paper has explored the application of approximate bayesian computation in the state space setting .certain fundamental results have been established , namely the lack of reduction to finite sample sufficiency and the bayesian consistency of the auxiliary model - based method .the ( limiting ) equivalence of abc estimates produced by the use of both the maximum likelihood and score - based summary statistics has also been demonstrated .the idea of tackling the dimensionality issue that plagues the application of abc in high dimensional problems via an integrated likelihood approach has been proposed .the approach has been shown to work extremely well in the case in which the auxiliary model is exact , and to yield some benefits otherwise . however , a much more comprehensive analysis of different non - linear settings ( and auxiliary models ) would be required for a definitive conclusion to be drawn about the trade - off between the gain to be had from marginalization and the loss that may stem from integrating over an _ inaccurate _ auxiliary model .indeed , the most important challenge that remains , as is common to the related frequentist techniques of indirect inference and efficient methods of moments , is the specification of a computationally efficient and accurate approximating model . given the additional need for parsimony , in order to minimize the number of statistics used in the matching exercise , the principle of aiming for a large nesting model , with a view to attaining full asymptotic sufficiency , is not an attractive one .we have illustrated the use of one simple approximation approach based on the unscented kalman filter .the relative success of this approach in the particular example considered , certainly in comparison with methods based on other more _ ad hoc _ choices of summary statistics , augers well for the success of score - based methods in the non - linear setting .further exploration of approximation methods in other non - linear state space models is the subject of on - going research .( see also creel and kristensen , 2014 , for some contributions on this front . ) finally , we note that despite the focus of this paper being on inference about the static parameters in the state space model , there is nothing to preclude marginal inference on the states being conducted , at a second stage .specifically , conditional on the ( accepted ) draws used to estimate , existing filtering and smoothing methods ( including the recent methods that exploit abc at the filtering / smoothing level ; see , for example , jasra _ et al . , _ 2010 ,calvet and czellar , 2014 , martin _ et al . , _ 2014 ) could be used to yield draws of the states , and ( marginal ) smoothed posteriors for the states produced via the usual averaging arguments . 
with the asymptotic properties of both approaches established ( under relevant conditions ) , of particular interest would be a comparison of both the finite sample accuracy and computational burden of the abc - pmcmc method developed martin _( 2014 ) , with that of the method proposed herein , in which is targeted more directly via the score - based approach .cornuet , j - m ., santos , f. , beaumont , m.a . ,robert , c.p . ,marin , j - m ., balding , d.j . ,guillemard , t. and estoup , a. 2008 . inferring population history with diy abc : a user - friendly approach to approximate bayesian computation , _ bioinformatics _ 24 , 2713 - 2719 .fearnhead , p , prangle , d. 2012 .constructing summary statistics for approximate bayesian computation : semi - automatic approximate bayesian computation . _ journal of the royal statistical society , series b. _ 74 : 419474 .forbes c.s . , martin , g.m . and wright j. 2007 .inference for a class of stochastic volatility models using option and spot prices : application of a bivariate kalman filter , _ econometric reviews , special issue on bayesian dynamic econometrics _ , _ __ _ 26 : 387 - 418 .julier , s.j . , uhlmann , j.k . and durrant - whyte , h.f . 2000 . a new method for the nonlinear transformation of means and covariances in filters and estimators , ieee _ transactions on automatic control _ 45 , 477 - 481 .le gland , f. and oudjane , n. 2006 .a sequential particle algorithm that keeps the particle system alive . in _stochastic hybrid systems : theory and safety critical applications_ ( h. blom and j. lygeros , eds ) , lecture notes in control and information sciences 337 , 351389 , springer : berlin .ng , j. , forbes , c , s . ,martin , g.m . and mccabe , b.p.m . 2013 . non - parametric estimation of forecast distributions in non - gaussian , non - linear state space models , _ international journal of forecasting _ 29 , 411430 pritchard , j.k . ,seilstad , m.t ., perez - lezaun , a. and feldman , m.w .population growth of human y chromosomes : a study of y chromosome microsatellites , _ molecular biology and evolution _16 1791 - 1798 .toni , t. , welch , d. , strelkowa , n. , ipsen , a. and stumpf , m.p.h . 2009 .approximate bayesian computation scheme for parameter inference and model selection in dynamical systems , _ jrss ( interface ) _ 6 , 187 - 202 . given the assumed invariance ( over time ) of both and in ( [ discrete_meas ] ) and ( [ discrete_state ] ) , the sigma points are determined as: and respectively , and propagated at each through the relevant non - linear transformations , and the values , , and are chosen according to the assumed distribution of and , with a gaussian assumption for both variables yielding values of as being ` optimal ' .different choices of these values are used to reflect higher - order distributional information and thereby improve the accuracy with which the mean and variance of the non - linear transformations are estimated ; see julier _et al . _ ( 2000 ) and ponomareva and date ( 2010 ) for more details .restricted supports are also managed via appropriate truncation of the sigma points .the same principles are applied to produce the mean and variance of the time varying state , except that the sigma points need to be recalculated at each time to reflect the up - dated mean and variance of as each new value of is realized. 1 . 
use the ( assumed ) marginal mean and variance of , along with the invariant mean and variance of and respectively , to create the matrix of augmented sigma points for , , as follows .define : {c}e(x_{t})\\ e(v_{t})\\ e(e_{t } ) \end{array } \right ] \text { , } p_{a0}=\left [ \begin{array } [ c]{ccc}var(x_{t } ) & 0 & 0\\ 0 & var(v_{t } ) & 0\\ 0 & 0 & var(e_{t } ) \end{array } \right ] , \text { } \label{e_var}\ ] ] and as the column of the cholesky decomposition ( say ) of given the diagonal form of ( in this case ) , we have{c}\sqrt{var(x_{t})}\\ 0\\ 0 \end{array } \right ] ; \text { } \sqrt{p_{a0}}_{2}=\left [ \begin{array } [ c]{c}0\\ \sqrt{var(v_{t})}\\ 0 \end{array } \right ] ; \text { } \sqrt{p_{a0}}_{1}=\left [ \begin{array } [ c]{c}0\\ 0\\ \sqrt{var(e_{t})}\end{array } \right ] .\ ] ] the seven columns of are then generated by where , and , and the corresponding notation is used for , 2 .propagate the sigma points through the transition equation as and estimate the predictive mean and variance of as: where denotes the element of the vector and the associated weight , determined as an appropriate function of the and see ponomareva and date ( 2010 ) .3 . produce a new matrix of sigma points , for generated by using the updated formulae for the mean and variance of from ( [ pred_e ] ) and ( [ pred_var ] ) respectively , in the calculation of and .4 . propagate the sigma points through the measurement equation as and estimate the predictive mean and variance of as: where denotes the element of the vector and is as defined in step 3 .estimate the first component of the likelihood function , , as a gaussian distribution with mean and variance as given in ( [ e_y ] ) and ( [ var_y ] ) respectively .6 . given observation produce the up - dated filtered mean and variance of via the usual kf up - dating equations: where: and the , are as computed in step 3continue as for steps 2 to 6 , with the obvious up - dating of the time periods and the associated indexing of the random variables and sigma points , and with the likelihood function in evaluated as the product of the components produced in each implementation of step 5 , and the log - likelihood in ( [ approx_like ] ) produced accordingly .
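For a Gaussian input, the sigma-point recipe in the appendix reduces to the standard unscented transform: deterministic points built from the mean and a square root of the covariance are pushed through the non-linear map and recombined with fixed weights. The minimal sketch below uses the classical weighting with a single tuning constant kappa; the augmented filter applies the same construction to the stacked state and noise vector, and the paper's specific weights and truncations may differ.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=2.0):
    """Build 2d+1 sigma points from (mean, cov), propagate through f, and return the
    weighted approximations to the mean and covariance of f(X)."""
    mean = np.atleast_1d(np.asarray(mean, dtype=float))
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    d = mean.size
    L = np.linalg.cholesky((d + kappa) * cov)          # columns give the offsets
    sigma = np.vstack([mean, mean + L.T, mean - L.T])  # (2d+1) x d sigma points
    w = np.full(2 * d + 1, 1.0 / (2.0 * (d + kappa)))
    w[0] = kappa / (d + kappa)
    fx = np.array([np.atleast_1d(f(s)) for s in sigma])
    m = w @ fx                                         # approximate mean of f(X)
    dev = fx - m
    P = (w[:, None] * dev).T @ dev                     # approximate covariance
    return m, P

# example: moments of exp(Z) for Z ~ N(0,1); the exact mean is exp(0.5) ~ 1.6487
m, P = unscented_transform([0.0], [[1.0]], np.exp)
```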
a new approach to inference in state space models is proposed , based on approximate bayesian computation ( abc ) . abc avoids evaluation of the likelihood function by matching observed summary statistics with statistics computed from data simulated from the true process ; exact inference being feasible only if the statistics are sufficient . with finite sample sufficiency unattainable in the state space setting , we seek asymptotic sufficiency via the maximum likelihood estimator ( mle ) of the parameters of an auxiliary model . we prove that this auxiliary model - based approach achieves bayesian consistency , and that - in a precise limiting sense - the proximity to ( asymptotic ) sufficiency yielded by the mle is replicated by the score . in multiple parameter settings a separate treatment of scalar parameters , based on integrated likelihood techniques , is advocated as a way of avoiding the curse of dimensionality . some attention is given to a structure in which the state variable is driven by a continuous time process , with exact inference typically infeasible in this case as a result of intractable transitions . the abc method is demonstrated using the unscented kalman filter as a fast and simple way of producing an approximation in this setting , with a stochastic volatility model for financial returns used for illustration . _ keywords : _ likelihood - free methods , latent diffusion models , linear gaussian state space models , asymptotic sufficiency , unscented kalman filter , stochastic volatility . _ jel classification : _ c11 , c22 , c58
[ sec : urban ] we draw our analysis upon a dataset collected from the largest location - based social network , foursquare .the dataset features 35,289,629 movements of 925,030 users across 4,960,496 places collected during six months in 2010 . by movements we express the indication of presence at a place that a user gives through the system ( in the language of location - based social networks , a location broadcast is referred as a _ checkin _ ) . for each placewe have exact gps coordinates . in order to confirm the large scale results reported in , we have computed the distribution of human displacements in our dataset ( figure [ alltrans ] ): we observe that the distribution is well approximated by a power law with exponent ( ) .this is almost identical to the value of the exponent calculated for the dollar bills movement ( ) and very proximate to the estimated from cellphones calls analysis of human mobility . with respect to these datasets ,we note that the foursquare dataset is planetary , as it contains movements at distances up to 20,000 kilometres ( we measure all distances using the great - circle distance between points on the planet ) . on the other extreme , small distances of the order of tens of meterscan also be approximated thanks to the fine granularity of gps technology employed by mobile phones running these geographic social network applications .indeed , we find that the probability of moving up to 100 meters is uniform , a trend that has also been shown in for a distance threshold .each transition in the dataset happens between two well defined venues , with data specifying the city they belong to .we exploit this information to define when a transition is urban , that is , when both start and end points are located within the same city .figure [ intracity ] depicts the probability density function of the about 10 million displacements within cities across the globe .we note that a power - law fit does not accurately capture the distribution .first of all , a large fraction of the distribution exhibits an initial flat trend ; then , only for values larger than 10 km the tail of distribution decays , albeit with a very large exponent which does not suggest a power - law tail .overall , power - laws tend to be captured across many orders of magnitude , whereas this is not true in the case of urban movements .the estimated parameter values are and exponent ( ) .since the distribution of urban human movements can not be approximated with a power law distribution nor with a physically relevent functional relation , how can we represent displacements of people in a city more appropriately ?we start by comparing human movements across different cities . in figure [ manycitiesmovements ] , we plot the distribution of human displacements for a number of cities .the shapes of the distributions , albeit different , exhibit similarities suggesting the existence of a common underlying process that seems to characterize human movements in urban environments .there is an almost uniform probability of traveling in the first 100 meters , that is followed by a decreasing trend between 100 meters and a distance threshold $ ] km , where we detect an abrupt cutoff in the probability of observing a human transition . the threshold could be due to the reach of the _ borders _ of a city , where maximum distances emerge . 
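For reference, displacement lengths between check-ins can be computed with the haversine great-circle formula, and a power-law exponent fitted by maximum likelihood above a chosen lower cutoff. The estimator below is the standard continuous-case MLE and is offered as a plausible stand-in, since the text does not state which fitting procedure was used.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance (km) between two GPS points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def powerlaw_mle(x, x_min):
    """Continuous power-law exponent by maximum likelihood:
    alpha = 1 + n / sum(log(x_i / x_min)) over the tail x_i >= x_min."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))
```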
while the distributions exhibit similar trends in different cities , scales and functional relation may differ , thus suggesting that human mobility vary from city to city .for example , while comparing houston and san francisco ( see figure [ manycitiesmovements ] ) , different thresholds are observed .moreover , the probability densities can vary across distance ranges .for instance , it is more probable to have a transition in the range 300 meters and 5 kilometers in san francisco than in singapore , but the opposite is true beyond 5 kilometers .this difference could be attributed to many potential factors , ranging from geographic ones such as area size , density of a city , to differences in infrastructures such as transportation and services or even socio - cultural variations across cities . in the following paragraphs we present a formal analysis that allow to dissect these heterogeneities . inspired by stouffer s theory of intervening opportunities which suggests that _ the number of persons traveling a given distance is directly proportional to the number of opportunities at that distance and inversely proportional to the number of intervening opportunities _, we explore to what extend the density of places in a city is related to the human displacements within it .as a first step , we plot the place density of a city , as computed with our checkin data , against the average distance of displacements observed in a number of cities . in figure [ density ] one observes that the average distance of human movements is _ inversely proportional _ to the city s density . hence , in a very dense metropolis , like new york , there is a higher expectation of shorter movements .we have measured a coefficient of determination .intuitively , this correlation suggests that while distance is a cost factor taken into account by humans , the range of available places at a given distance is also important .this availability of places may relate to the availability of resources while performing daily activities and movements : if no super markets are around , longer movements might be more probable in order to find supplies . as a next step ,we explore whether the geographic area size covered by a city affects human mobility by plotting the average transition in a city versus its area size ( see fig .[ area ] ) .our data indicates no apparent linear relationship , with a low correlation , thus indicating that density is a more informative measure . to shed further light on the hypothesis that density is a decisive factor in human mobility , for every movement between a pair of places in a city we sample the rank value of it .the rank for each transition between two places and is the number of places that are closer in terms of distance to than is .formally : the rank between two places has the important property to be invariant in scaled versions of a city , where the relative positions of the places is preserved but the absolute distances dilated . 
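A brute-force sketch of the rank just defined, together with a sampler that anticipates the movement model introduced below, in which the probability of choosing a destination decays as a power of its rank. Planar coordinates, the distance measure and the exponent value are illustrative placeholders (the empirical analysis uses great-circle distances and the exponent measured from the data); ranks are taken as 1, 2, ... from the nearest candidate onwards so that the power law is defined for every destination.

```python
import numpy as np

def rank_of_destination(coords, u, v):
    """rank_u(v): number of places strictly closer to u than v is (u itself excluded)."""
    d = np.linalg.norm(coords - coords[u], axis=1)
    return int(np.sum(d < d[v])) - 1

def sample_destination(coords, u, a, rng):
    """Draw the next place with probability proportional to rank_u(v)**(-a)."""
    d = np.linalg.norm(coords - coords[u], axis=1)
    order = np.argsort(d)                  # order[0] is u itself (distance 0)
    ranks = np.empty(len(coords))
    ranks[order] = np.arange(len(coords))  # 0 for u, 1 for nearest candidate, ...
    probs = np.where(ranks > 0, ranks.clip(min=1) ** (-a), 0.0)
    probs /= probs.sum()
    return rng.choice(len(coords), p=probs)

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(500, 2))   # synthetic set of places in a 'city'
u, trajectory = 0, [0]
for _ in range(1000):
    u = sample_destination(coords, u, a=0.85, rng=rng)   # exponent chosen for illustration
    trajectory.append(u)
```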
in figure [ threecitiesranks ]we plot for the three cities the rank values observed for each displacement .the fit of the rank densities on a log - log plot , shows that the rank distribution follows linear trend similar to that of a power - law distribution .this observation suggests that the probability of moving to a place decays when the number of places nearer than a potential destination increases .moreover , the ranks of all cities collapse on the same line despite the variations in the probability densities of human displacements .we have fit the rank distribution for the thirty - four cities under investigation and have measured an exponent .this is indicative of a universal pattern across cities where density of settlements is the driving factor of human mobility .we superimpose the distribution of ranks for all cities in figure [ manycitiesranks ] .interestingly enough , a parallel of this finding can be drawn with the results in , where it is found that the probability of observing a user s friend at a certain distance in a geographic social network is inversely proportional to the number of people geographically closer to the user .the universal mobility behaviour emerging across cities paves the way to a new model of movement in urban environments .given a set of places in a city , the probability of moving from place to a place is formally defined as \propto \frac{1 } { rank_{u}(v)^{a } } \ ] ] where we run agent based simulation experiments ( see detailed description in ) where agents transit from one place to another according to the probabilities defined by the model above .averaging the output of the probability of movements by considering all possible places of a city as potential starting points for our agents , we present the human displacements resulting from the model in figure [ manycitiesmovementsfits ] : as shown , despite the simplicity of the model , this is able to capture with very high accuracy the real human displacements in a city .our model does not take into account other parameters such as individual heterogeneity patterns or temporal ones that have been studied in the past in the context of human mobility and yet it offers very accurate matching of the human traces of our dataset .a common parameter ( empirical average ) has been set for the simulations of all cities .we have observed movement fits to deteriorate as we move away from values measured empirically .in general , large values overestimate trips to nearby places and inversely very low values blur the effect of distance in human movements .moreover , our analysis provides empirical evidence that while human displacements across cities may differ , these variations are mainly due to the spatial distribution of places in a city instead of other potential factors such as social - cultural or cognitive ones .indeed , the agent based simulations are run with the same rules and parameters in each city , except for the set of places that is taken from the empirical dataset .the variation across the spatial organization of cities is illustrated in fig .[ manycitiesmaps ] , where we plot thermal maps of the density of places within cities and in fig .[ placecities ] , where we plot the probability density function that two random places are at a distance .our analysis highlights the impact of geography , as expressed through the spatial distribution of places , on human movements , and confirms at a large - scale the seminal analysis of stouffer who studied how the spatial distribution of places in 
the city of cleveland affected the migration movements of families .our analysis does not indicate that distance does not play a deterring role , but that it is not sufficient to express human mobility through universal laws .a proper modeling should account for place distribution , as rank does or possibly by complementing distance with information about place locations , as in constrained gravity models .plots for all thirty four cities that we have evaluated can be found in si .the empirical data on human movements provided by foursquare and other location - based services allows for unprecedented analysis both in terms of scale and the information we have about the details of human movements .the former means that mobility patterns in different parts of the world can be analyzed and compared surpassing cultural , national or other organizational borders .the latter is achieved through better location specification technologies such as gps - enabled smartphones , but also with novel online services that allow users to layout content on the geographical plane such as the existence of places and semantic information about those . as those technologies advance our understanding on human behaviorcan only become deeper . in this article, we have focused on human mobility in a large number of metropolitan cities around the world to perform an empirical validation of past theories on the driving factors of human movements .as we have shown , stouffer s theory of intervening opportunities appears to be a plausible explanation to the observed mobility patterns .the theory suggests that the distance covered by humans is determined by the number of opportunities ( i.e. , places ) within that distance , and not by the distance itself .this behaviour is confirmed in our data where we observed that physical distance does not allow for the formulation of universal rules for human mobility , whereas a universal pattern emerges across all cities when movements are analyzed through their respective rank values : the probability of a transition to a destination place is inversely proportional to the relative rank of it , raised to a power , with respect to a starting geographical point .moreover , presents minor variations from city to city .we believe that our approach opens avenues of quantitative exploration of human mobility , with several applications in urban planning and ict .the identification of rank as an appropriate variable for the deterrence of human mobility is in itself an important observation , as it is expected to lead to more reliable measurements in systems where the density of opportunities is not uniform , e.g. in a majority of real - world systems .the realization of universal properties in cities around the globe also goes along the line of recent research on urban dynamics and organization , where cities have been shown to be scaled versions of each other , despite their cultural and historical differences .contrary to previous observations where size is the major determinant of many socio - economical characteristics , however , density and spatial distribution are the important factors for mobility .moreover , the richness of the dataset naturally opens up new research directions , such as the identification of the needs and motives driving human movements , and the calibration of the contact rate , e.g. density- vs frequency - dependent , in epidemiological models . 
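To make the rank-based model and the agent-based simulations discussed above concrete, here is a minimal sketch of one agent: at each step it jumps from its current venue to a destination chosen with probability proportional to 1 / rank_u(v)^a. This is not the authors' simulator; the exponent a is left as a free parameter, and places and great_circle_km are as in the previous snippets.

import numpy as np

def simulate_walk(places, a, n_steps, start_idx, seed=0):
    # places: array of (lat, lon) venue coordinates; a: rank exponent of the mobility model
    rng = np.random.default_rng(seed)
    places = np.asarray(places, dtype=float)
    u = start_idx
    trajectory = [u]
    for _ in range(n_steps):
        d = np.array([great_circle_km(places[u, 0], places[u, 1], p[0], p[1]) for p in places])
        d[u] = np.inf                          # exclude the current venue
        ranks = d.argsort().argsort() + 1      # rank 1 = nearest venue to u
        w = 1.0 / ranks.astype(float) ** a
        w[u] = 0.0                             # no self-transitions
        u = int(rng.choice(len(places), p=w / w.sum()))
        trajectory.append(u)
    return trajectory

Binning the great-circle lengths of the simulated jumps as before gives distributions that can be compared against the empirical ones, in the spirit of figure [ manycitiesmovementsfits ].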
in a city .we have enumerated 11808 , 15970 , 15617 unique venues for houston , san francisco and singapore respectively .the probability is increasing with , as expected in two dimensions before falling due to finite size effect .it is interesting to note that the probability for two randomly selected places to be the origin and destination of a jump monotonically decreases with distance ( see si ) . ]the mobility dataset used in this work is comprised from _checkins _ made by foursquare users and become publicly available through twitter s streaming api .the collection process lasted from the 27th of may 2010 until the 3rd of november of the same year . during this periodwe have observed 35,289,629 _ checkins _ from 925,030 unique users over 4,960,496 venues .in addition , _ locality _ information together with exact gps geo - coordinates for each venue has become available through the foursquare website allowing us to associate a given venue with a city . by considering only consecutive _checkins _ that take place within the same city we have extracted almost 10 million _ intracity _ movements analysed in figure [ intracity ] .detailed statistics including the number of checkins and venues in each city can be found in .we have employed the methods detailed in to apply goodness - of - fit tests on the probability density functions of global and urban transitions observed in figures [ alltrans ] and [ intracity ] . in particular , we have measured the corresponding using the kolmogorov - smirnov test by generating 1000 synthetic distributions , while the maximum - likelihood estimation technique has been used to estimate the parameters of the power - laws . exceptionally , we have resorted to a _ least squares _ based optimization to measure the exponent of the rank values shown in figure [ threecitiesranks ] . while power - laws are not well defined for exponents smaller than , we are confident of the values estimated due to the excellent movement fits produced during our simulations .onnela , j. saramki , j. hyvnen , g. szab , d. lazer , k. kaski , j. kertsz , a .-barabsi ( 2006 ) structure and tie strengths in mobile communication networks ._ proceedings of the national academy of sciences _ , 104(18):73327336 .k. nicholson and r.g .webster , textbook of influenza ( blackwell , malden , massachusetts , 1998 ) l. hufnagel , d. brockmann and t. geisel ( 2004 ) forecast and control of epidemics in a globalized world , _ proc natl acad sci usa _ * 101 * , 15124 .v. colizza , a. barrat , m. barthlemy and a. vespignani ( 2007 ) predictability and epidemic pathways in global outbreaks of infectious diseases : the sars case study , _ bmc med . _* 5 * , 34 . v. carrothers ( 1956 ) a historical review of the gravity and potential concepts of human interaction , _j. of the am .inst . of planners_ * 22 * , pp .wilson ( 1967 ) a statistical theory of spatial distribution models , _ transportation research _ * 1 * , pp .253 - 269 .* 54 * , pp. 68 - 78 s. erlander and n. f. stewart ( 1990 ) the gravity model in transportation analysis : theory and extensions , brill academic publishers , utrecht .m. levy ( 2010 ) scale - free human migration and the geography of social networks , _ physica a _ * 389 * , 4913 - 4917 .krings , f. calabrese , c. ratti and v. d. blondel ( 2009 ) urban gravity : a model for inter - city telecommunication flows , _ journal of statistical mechanics : theory and experiment _ , l07003 .jung , f. 
wang and h.e .stanley ( 2008 ) gravity model in the korean highway , _ europhys .lett . _ * 81 * , 48005 .e. miller ( 1972 ) a note on the role of distance in migration : costs of mobility versus intervening opportunities , _ j. reg .* 12 * , 475478 .haynes , d. poston and p. sehnirring ( 1973 ) inter - metropolitan migration in high and low opportunity areas : indirect tests of the distance and intervening opportunities hypotheses , _ econ .geogr . _ * 49 * , 68 - 73 .wadycki ( 1975 ) stouffer s model of migration : a comparison of interstate and metropolitan flows , em demography * 12 * , 121 - 128 .freymeyer and p.n .ritchey ( 1985 ) spatial distribution of opportunities and magnitude of migration : an investigation of stouffer s theory , _ sociological perspectives _ * 28 * , 419 - 440 .c. cheung and j. black ( 2005 ) residential location - specific travel preferences in an intervening opportunities model : transport assessment for urban release areas , _ journal of the eastern asia society for transportation studies _ * 6 * , 3773 - 3788 . m. c. gonzalez , c. a. hidalgo and a .-barabsi ( 2008 ) understanding individual human mobility patterns , _ nature _ 453 , 779 - 782 .d. brockmann , l. hufnagel and t. geisel ( 2006 ) the scaling laws of human travel , _ nature _ * 439 * , 462 - 465 .smith , s. telfer , e.r .kallio , s. burthe , a.r .cook , x. lambin and m. begon ( 2009 ) host - pathogen time series data in wildlife support a transmission function between density and frequency dependence , _ proc .natl acad .* 106 * , 7905 - 7909 .
the advent of geographic online social networks such as foursquare , where users voluntarily signal their current location , opens the door to powerful studies on human movement . in particular the fine granularity of the location data , with gps accuracy down to 10 meters , and the worldwide scale of foursquare adoption are unprecedented . in this paper we study urban mobility patterns of people in several metropolitan cities around the globe by analyzing a large set of foursquare users . surprisingly , while there are variations in human movement in different cities , our analysis shows that those are predominantly due to heterogeneous distribution of places across different urban environments . moreover , a universal law for human mobility is identified , which isolates the rank distance as a key component , factoring in the number of places between origin and destination , rather than pure physical distance , as considered in previous works . building on our findings , we also show that a rank - based movement accurately captures real human movements in different cities . our results shed new light on the driving factors of urban human mobility , with potential applications to urban planning , location - based advertisement and even social studies . [ sec : intro ] since the seminal works of ravenstein , the movement of people in space has been an active subject of research in the social and geographical sciences . it has been shown in almost every quantitative study and described in a broad range of models that a close relationship exists between mobility and distance . people do not move randomly in space , as we know from our daily lives . human movements exhibit instead high levels of regularity and tend to be hindered by geographical distance . the origin of this dependence of mobility on distance , and the formulation of quantitative laws explaining human mobility remains , however , an open question , the answer of which would lead to many applications , e.g. improve engineered systems such as cloud computing and location - based recommendations , enhance research in social networks and yield insight into a variety of important societal issues , such as urban planning and epidemiology . in classical studies , two related but diverging viewpoints have emerged . the first camp argues that mobility is directly deterred by the costs ( in time and energy ) associated to physical distance . inspired by newton s law of gravity , the flow of individuals is predicted to decrease with the physical distance between two locations , typically as a power - law of distance . these so - called gravity - models " have a long tradition in quantitative geography and urban planning and have been used to model a wide variety of social systems , e.g. human migration , inter - city communication and traffic flows . the second camp argues instead that there is no direct relation between mobility and distance , and that distance is a surrogate for the effect of _ intervening opportunities _ . the migration from origin to destination is assumed to depend on the number of opportunities closer than this destination . a person thus tends to search for destinations where to satisfy the needs giving rise to its journey , and the absolute value of their distance is irrelevant . only their ranking matters . displacements are thus driven by the spatial distribution of places of interest , and thus by the response to opportunities rather than by transport impedance as in gravity models . 
the first camp appears to have been favoured by practitioners on the grounds of computational ease , despite the fact that several statistical studies have shown that the concept of intervening opportunities is better at explaining a broad range of mobility data . this long - standing debate is of particular interest in view of the recent revival of empirical research on human mobility . contrary to traditional works , where researchers have relied on surveys , small - scale observations or aggregate data , recent research has taken advantage of the advent of pervasive technologies in order to uncover trajectories of millions of individuals with unprecedented resolution and to search for universal mobility patterns , such to feed quantitative modelling . interestingly , those works have all focused on the probabilistic nature of movements in terms of physical distance . as for gravity models , this viewpoint finds its roots in physics , in the theory of anomalous diffusion . it tends to concentrate on the distributions of displacements as a function of geographic distance . recent studies suggest the existence of a universal power - law distribution , observed for instance in cell tower data of humans carrying mobile phones or in the movements of where is george " dollar bills . this universality is , however , in contradiction with observations that displacements strongly depend on where they take place . for instance , a study of hundreds of thousands of cell phones in los angeles and new york demonstrate different characteristic trip lengths in the two cities . this observation suggests either the absence of universal patterns in human mobility or the fact that physical distance is not a proper variable to express it . in this work , we address this problem by focusing on human mobility patterns in a large number of cities across the world . more precisely , we aim at answering the following question : do people move in a substantially different way in different cities or , rather , do movements exhibit universal traits across disparate urban centers ? " . to do so , we take advantage of the advent of mobile location - based social services accessed via gps - enabled smartphones , for which fine granularity data about human movements is becoming available . moreover , the worldwide adoption of these tools implies that the scale of the datasets is planetary . exploiting data collected from public _ check - ins _ made by users of the most popular location - based social network , foursquare , we study the movements of 925,030 users around the globe over a period of about six months , and study the movements across 5 million places in 34 metropolitan cities that span four continents and eleven countries . after discussing how at larger distances we are able to reproduce previous results of and , we also offer new insights on some of the important questions about human urban mobility across a variety of cities . we first confirm that mobility , when measured as a function of distance , does not exhibit universal patterns . the striking element of our analysis is that we observe a universal behavior in all cities when measured with the right variable . we discover that the probability of transiting from one place to another is inversely proportional to a power of their _ rank _ , that is , the number of intervening opportunities between them . this universality is remarkable as it is observed despite cultural , organizational and national differences . 
this finding comes into agreement with the social networking parallel which suggests that the probability of a friendship between two individuals is inversely proportional to the number of friends between them , and depends only indirectly on physical distance . more importantly , our analysis is in favour of the concept of intervening opportunities rather than gravity models , thus suggesting that trip making is not explicitly dependent on physical distance but on the accessibility of objectives satisfying the objective of the trip . individuals thus differ from random walkers in exploring physical space because of the motives driving their mobility . our findings are confirmed with a series of simulations verifying the hypothesis that the place density is the driving force of urban movement . by using only information about the distribution of places of a city as input and by coupling this with a rank - based mobility preference we are able to reproduce the actual distribution of movements observed in real data . these results open new directions for future research and may positively impact many practical systems and application that are centered on mobile location - based services .
the analysis of financial data by methods developed for physical systems has a long tradition , and has attracted the interest of physicists .one of the most motivated reasons is that it is a great scientific challenge to understand the dynamics of a strongly fluctuating complex system with a large number of interacting elements .in addition , it may be possible that the experience gained in the process of studying complex physical systems might yield new results in economics .there are many observables generated from financial markets , and one central issue of the research on the dynamics of financial markets is the statistics of price changes which determine losses and gains .the price changes of a time series of quotations are commonly measured by returns : , log - returns , or increments : at a time scale . in 1900 , bachelier proposed the first model for the stochastic process of returns an uncorrelated random walk with independent identically distributed gaussian random variables .however , prices do not follow a signal random walk process . for example, the daily correlation has been known as the daily log - returns correlated with themselves in such a way that positive returns are followed by positive returns as well .many considerations have been aroused by this effect recently and meanwhile the related research has been reported , not only for daily data but also for high frequency data .the langevin equation ( le ) which distinguishes the development of sample path into the deterministic and random terms , has been used to deal with the brownian motion problem .recently , the langevin approach was used to analyze the financial time series on scale . have investigated how price changes on different time scales are correlated motivated by hierarchical structure of financial time series , which is similar to the energy cascade in hydrodynamic turbulence .they derived a multiplicative langevin equation from a fokker - planck equation ( fpe ) in the variable scale and performed the statistical way to distinguish and quantify the deterministic and the random influence on the hierarchical structure of the financial time series in terms of the drift and diffusion parameters , and , respectively .different from the former study in which the le is used to analyze the scales evolutions of finance , yet , in this paper , with the langevin description , a new insight on the dynamics of the process will be obtained by investigating the time - dependence of log - returns .the time - dependent properties of prices evolution are derived in the way of estimating drift parameter of sampled local periods in the sliding window .then , the relation between and autocorrelation , average return , from which the practical significance of can be recognized , are resulted both from the statistical time - dependence of and some theoretical analyses .besides , our langevin description contains , as a particular case with flat- , the effect of daily correlation in log - returns . on the other hand , the form of diffusion parameter got in this paper , to some extent , explains the heavy tailed probability densities of price changes .the research are mainly carried out from the samples of the daily log - returns of s index from to , containing days , thus covering a wide time range with many different economic and political situations .the paper is organized as follows . 
in sec .ii , taking the daily log - returns as an example , we generally discuss the application of langevin approach to log - returns series . in sec .iii , we show the results and discussions . finally , the summary and the outlook of this paper are given in sec .for a time series of prices or market index values , the log - return over a time scale is defined as the forward change in the logarithm of , the behavior of daily log - return as a stochastic variable is described by the following le : where the drift parameter and diffusion parameter respectively describe the deterministic and the random influences on the time process of log - returns , and denotes the increment of a standard wiener process .it is assumed that , within each sampling window the parameters may depend on the log - returns , but not explicitly on time ( stationary ) .thus , the drift and diffusion parameters of the sampled period can be extracted from the sampled data by simply using the definition , here denotes the averaging operator and is a realization of the le ( [ eq01 ] ) . from eq.([eq02 ] ) , it is obvious that the drift parameter is the average increment of unit time under the condition , which represents the deterministic influences . is the deviation of which pictures the random influences .it has been known that the autocorrelation of the log - returns decays very fast which is usually characterized by a correlation time much shorter than a trading day .when the time increment is larger than day , the daily log - returns can be considered as the result of many uncorrelated ` shocks ' .thus , in this paper , is mainly set as day .compared to the length of time window , approximately accords with the limit in eq.([eq02 ] ) and eq.([eq03 ] ) .based on the samplings all over the long series , the statistical results of and which are respectively estimated by eq .( [ eq02 ] ) and eq .( [ eq03 ] ) have their simple and general forms .the results of are close to a linear form , and that of are close to a parabolic form , fig .[ fig1 ] presents the statistical results of and ( circles ) which are estimated from the daily log - returns time series of s index from may . to jun . ( window length ) .the statistical results for large are more noisy and uncertain than the points near the origin , because these border points are visited rarely by the trajectory . on the contrary , as viewed from statistics , the more one given is visited , the more times the averaging operator in eq .( [ eq02 ] ) works , which would produce more accurate and reasonable .thus , while approximating ( ) with linear form and ( ) with parabolic form by least - squares fit , the effect of the visited probability for each should be considered , with each corresponding to its own weight .it is natural for a physical scientist to define the weighted factor of as its probability , where the is the frequency of within the time window of length .[ fig1 ] shows the approximations of the statistical results of and with equal - weight ( dashed lines ) and weighted factor ( solid lines ) .the values of the fitting error and ( the mean standard deviations from approximations of and ) with weighted factor are visibly lower than those with equal - weight .therefore , while approximating the statistical results , the point at is endued the weight which is correlated to the frequency of within the window . 
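A minimal sketch of the conditional-moment estimates described above (not the authors' code; function and variable names are assumptions): inside one window the drift is estimated as the conditional mean increment per unit time, the diffusion as the conditional variance of the increment, and every bin of the return value carries its visit frequency, which serves as the weighting factor in the subsequent fits.

import numpy as np

def conditional_coefficients(x, dt=1.0, n_bins=30):
    # x: log-return series inside one window; returns bin centers, drift, diffusion and visit weights
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), n_bins + 1)
    idx = np.clip(np.digitize(xc, edges) - 1, 0, n_bins - 1)
    centers, d1, d2, weights = [], [], [], []
    for b in range(n_bins):
        m = idx == b
        if not m.any():
            continue
        mu = dx[m].mean() / dt                              # drift: conditional mean increment per unit time
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        d1.append(mu)
        d2.append(np.mean((dx[m] - mu * dt) ** 2) / dt)     # diffusion: conditional variance per unit time
        weights.append(m.sum() / len(xc))                   # visit frequency of the bin (weighting factor)
    return np.array(centers), np.array(d1), np.array(d2), np.array(weights)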
for the given return value , the frequency also depends on the length of time windows .thus , it is worth mentioning the influences from the length on the approximations .the relations between and the fitting errors of and are investigated . to estimate the error purely made by window length , not by information the time series embodied ,the daily log - returns series is randomly rearranged and the fitting errors and vs. are calculated ( fig .[ fig2 ] ) .it was found that the weighted fitting errors and of the approximations with weighted factor decline quickly with a perfect power - law behavior , and .this behavior suggests that the statistical results of ( or ) with larger is more feasible to be approximated with linear ( or parabolic ) form .note incidentally that , only the results with the same window length could be compared since different corresponds to different fitting errors .the algorithm for the detection of the time - dependence of drift term in this paper can be described as follows : sample the long log - returns series with a sliding window of short length t , and compute for each location .the results estimated from the time window which samples a given local period , present the corresponding local characters of financial markets .the algorithm is more sensitive than merely studying transient behavior .the comment for the selection of window length is to choose long enough so that the averages in eq .( [ eq02 ] ) and eq .( [ eq03 ] ) are statistically meaningful but not so long as to lose the temporal resolution . in the results presented below , and window lengths and overlapping ( window shift by days per time )the corresponding fitting errors are , , , and , .it is expected that the variation of as a function of time can accurately indicate interesting dynamical changes in financial process .in the le ( [ eq01 ] ) , the drift parameter could be seen as an action of potential , with . noted that has a linear form with negative slope , it could be interpreted as the effect of linear state - dependent restoring force with symmetrical potential well , where presents the position of one given particle enslaved to it .the sketch maps of and are showed in fig .[ fig3 ] in which the equilibrium position ( ) is .[ fig4 ] shows the time series of log - returns , restoring force and potential from may . to jan .it is easy to find that the vibrancy of log - returns presented seems to be similar to force and potential , which can be clearly seen from several large events marked in this figure , and the restoring force always presents converse effect to log - returns . in some large events ,the potential is exceedingly large . in the following, the time dependence of restoring force will be discussed including the equilibrium position and the slope coefficient .in addition , discussions of the diffusion parameter and the error analysis will be given . from langevin equation , the so called equilibrium position , which is the zero value of negative - sloped linear drift term , corresponds to the minimum of the potential well . 
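The sliding-window procedure and the weighted linear fit of the drift can then be sketched as follows, reusing conditional_coefficients from the previous snippet (again an assumption-laden sketch, not the authors' code). In numpy's polyfit the probability weights enter through their square root, because the weight multiplies the unsquared residual. Each window yields a slope coefficient and an equilibrium position, the zero of the fitted linear drift.

import numpy as np

def sliding_restoring_force(returns, window, shift, dt=1.0):
    slopes, equilibria = [], []
    for start in range(0, len(returns) - window + 1, shift):
        seg = returns[start:start + window]
        xc, d1, _, w = conditional_coefficients(seg, dt=dt)
        b, c = np.polyfit(xc, d1, 1, w=np.sqrt(w))   # weighted fit of the linear drift b * x + c, with b < 0
        slopes.append(b)
        equilibria.append(-c / b)                    # equilibrium position: zero of the fitted drift
    return np.array(slopes), np.array(equilibria)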
from a physical point of view , the average displacement of an oscillating particle in the potential well defined by eq .( [ eq09 ] ) should also be the minimum of the potential well , .thus , we get , the average displacement was directly obtained from the log - returns series , and the equilibrium position was calculated from eq .( [ eq04 ] ) .the time dependence of and coincide with each other very well all over the ranges [ see fig .[ fig5](a ) and [ fig5](b ) ] , and the plots of as a function of in fig .[ fig5](c ) was excellent agreement with eq .( [ eq07 ] ) . in language of finance , the average log - return describes the macroscopical trend of the price movement : indicates going up , and indicates going down .therefore , based on eq .( [ eq07 ] ) , the rising trend of prices could be estimated if the statistical result ; otherwise , the falling trend could be estimated if .thus , the equilibrium position of restoring force is comprehended as the ` trend index ' of stock prices .furthermore , the stock prices and their log - returns are macroeconomic indicators which are widely used because of the strong correlation between financial markets and economic development . in this case , the equilibrium position , which is derived from the les description of financial time series , would be another important indicator of macroeconomics . in fig .[ fig6](c ) , the time dependence of the slope coefficient calculated from the daily log - returns of s with and are plotted .the ranges of the slope coefficient with and are ] respectively , both of which are close to the value . as shown in fig .( [ fig3 ] ) we know that in a certain given position , steeper ( flatter ) slope of corresponds to larger ( smaller ) restoring force .thus , one can imagine the mechanism of our model : it would take few times for larger forces ( slope : ) to draw particles from one side of the equilibrium position to another side , which we called ` -crossing ' action for the moment , and more times for smaller forces ( slope : ) . to discuss the aforementioned mechanism more accurately ,normalized daily log - returns are used , which has zero mean value , . here the standard deviation of log - returns is defined as the time averaged volatility and the denotes an average over the entire length of the series within time window . from eq .( [ eq07 ] ) , has the equilibrium position equaling zero , so that the -crossing action could be reduced to the sign convert of . thus the mechanism can be described as follows : the sign of changes frequently while the slope is quite steep ; on the contrary , same signs congregate together and sequences of consecutive ` ' or ` ' appear when the slope is flat . the sign series of daily log - returns has been considered to study the daily correlation in log - returns .those researchers got the conditional dynamics from the sequence of consecutive ` ' and ` ' . in this paper , however , we will compare the sequence of the same sign with the converting sign of two neighboring days to check the relationship between and -crossing .the sign - cases of a given day and its previous day are : ` ' , ` ' , ` ' and ` ' .one can define ` ' and ` ' as the sign - sustained cases , and ` ' and ` ' as the sign - convert cases .incidentally , the contribution of ` ' to the sign - sustained cases was shown to be a little more than ` ' on average over the whole series . 
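The bookkeeping used in the next step can be sketched as follows (names are mine; since the symbols in the text were lost, the ratio of sign-sustained to sign-convert pairs is used here, which is an assumption about the exact normalization of the reported proportion). Consecutive normalized daily log-returns with equal signs count as sign-sustained, those with opposite signs as sign-convert.

import numpy as np

def sign_statistics(returns):
    z = returns - returns.mean()          # zero-mean log-returns; dividing by the volatility would not change the signs
    s = np.sign(z)
    prod = s[:-1] * s[1:]
    n_sustained = int(np.sum(prod > 0))   # '++' or '--' pairs
    n_convert = int(np.sum(prod < 0))     # '+-' or '-+' pairs
    return n_sustained, n_convert, n_sustained / max(n_convert, 1)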
then , the time series of signs of is investigated by counting the frequencies of sign - sustained cases , , and sign - convert cases , .the time dependence of the proportion and the slope coefficient calculated from the daily log - returns of s with and are plotted in figs . [fig6](a ) and ( c ) .the good similarity of and proves that the slope coefficient is related to the -crossing action and reflects the correlation of neighboring daily log - returns . in detail , in one sampled period , while the slope of the restoring force is flat , the given day s sign of log - return is more likely to be the same as its previous day ; on the country , while the slope is steep , the given day s sign is more likely to be different from its previous day .thus , the mechanism of the daily correlation in log - returns is qualitatively explained by the restoring force .it s easy to notice from the time dependence of and [ fig .[ fig6](a)(c ) ] over the whole series , most of the periods have their slope larger than , and larger than , with mean values , , and , . these imply that , in practice , the flat - slope restoring force is more prevalent than the steep - slope one , and the case of sign - sustained is more than that of the sign - convert .consequently , from the general appearance of sign - sustained cases , the same conclusion was reached as that of : the return of the price during a given day can be correlated with the previous day , in particular with the sign of the previous day .however , it is worth noticing that , the analysis with langevin approach is more general because it contains the positive daily correlation as the particular case with flat - slope restoring force , and non - correlation with steep - slope restoring force . on the other hand , the autocorrelation function , a typically important statistics of stochastic processes , is always used to investigate pairwise correlation of the log - returns of a financial asset . in the following ,we compared it with the slope coefficient from mathematical relations and statistical results .it is known that , where denotes time averaging over all the trading days within the sampled local period with length , is time increment . for the weighted linear least - squares fit , which has been used to approximate the statistical results of , the weighted objective function can be written as , ^{2}}. \label{eq14}\ ] ] where presents the weight of the point .the values of and corresponding to the minimal values of function are something to be sought for .thus from eq .( [ eq14 ] ) , the solution of slope coefficient in is achieved , in the previous discussion , the weight of was defined as its probability ( eq .( [ eq06 ] ) ) .thus one get , in this paper , is calculated by this statistic formula . 
substituting eq .( [ eq02 ] ) and eq .( [ eq06 ] ) into eq .( [ eq10 ] ) , we get , .\nonumber\\ \label{eq11}\end{aligned}\ ] ]when , compared to , is sufficiently long , the stationary assumption of financial time series is : .then , compared eq .( [ eq11 ] ) with eq .( [ eq08 ] ) , a simple relation between and will be found , \label{eq12}\end{aligned}\ ] ] which is valid for any value of because of no limit to during the derivation .however , since the lack of correlation for , only the case with is analyzed .the statistical result is compared with the slope coefficient , and the correlation function between and exhibited the good effectiveness of eq .( [ eq12 ] ) [ showed in fig .[ fig6 ] ] .the analytical results indicate that the sign - cases , slope coefficient , and autocorrelation function reflect the similar properties of time series .known that correlations observed in financial time series show the incompleteness of the efficient market hypothesis , the three coefficients , , and may probably indicate the degree of market efficiency . from the same tendency of the three coefficients showed in fig .[ fig6 ] with ( right ) , two conclusions will be arrived at : ( i ) the market lost efficiency from to relatively , since the values is much larger than the remaining years ; ( ii ) the market tended to be more and more efficient from to because of the decreasing trend of the value . from the preceding analysis of the equilibrium position and the slope coefficient , the final form of restoring force will be got by substituting eq .( [ eq07 ] ) and eq .( [ eq12 ] ) into eq .( [ eq04 ] ) , this new form as a function of the traditional qualities , and , is a more direct way to understand the dynamical behavior of time series . to the financial data ,the information given by mixes features of the macroscopical properties together with the detail of the prices evolution : the macroscopical trend of prices is presented by the equilibrium position , and the detail correlation between two neighboring days is exhibited by the slope coefficient .( [ eq13 ] ) can be informatively rewritten as . is the characteristic relaxation time , , and reflects the same properties of the log - returns as the slope coefficient does , but is more visualized .the maximum values of , which were calculated from the sliding windows of various length , are all less than days , and the average values of with and are and .thus , the effect of the restoring force of one given day decays with the characteristic relaxation time less than days .the diffusion parameter , which usually has the form shown in eq .( [ eq05 ] ) , corresponds to a state - dependent linear multiplicative noise term in eq .( [ eq01 ] ) . that is to say, the langevin description of the log - returns requires a linear multiplicative noise term to describe the variability of the log - returns , which can be interpreted as the variability of log - returns increases with log - returns itself .thus , we conjecture that the heavy tailed probability densities are due to the form of diffusion parameter in our langevin description .it is valuable to investigate the weighted fitting error of and .[ fig7](b ) shows the time dependence of and calculated from the sliding window with down the log - returns series .the time averaged volatility of log - returns , which can measure the degree that the market is liable to fluctuate , is calculated from the same sliding window [ see fig . [ fig7](c ) ] . 
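The time-averaged volatility mentioned above can be extracted window by window with a one-line estimator (a sketch under the same sliding-window conventions as before; the function name is an assumption):

import numpy as np

def sliding_volatility(returns, window, shift):
    # standard deviation of the log-returns in each sliding window (time-averaged volatility)
    return np.array([returns[s:s + window].std()
                     for s in range(0, len(returns) - window + 1, shift)])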
compared with the original daily log - returns series [ fig .[ fig7](a ) ] , one can easily find that the variation of and [ fig .[ fig7](b ) ] , together with the time averaged volatility [ fig .[ fig7](c ) ] , show sudden jumps when very volatile periods enter or leave the time window .for example , as pointed out by arrows , the jumps at and are caused by the crashes in may . and on the ` black monday ' , oct . . based on the above analysis , one can conclude that the fitting error , and , sensitively respond to the volatility of the financial markets , i.e. , the information of historical events in real market would influence the accuracy of the fitting results presented in this paper . in the way of our langevin approach to the log - return series with also studied .it can be concluded that , the forms of and [ eq .( [ eq04 ] ) and eq .( [ eq05 ] ) ] , the correlation function between the equilibrium position and the average log - return [ eq . ( [ eq07 ] ) ] , the correlation function between slope and autocorrelation [ eq . ( [ eq12 ] ) ] , and the new form of restoring force [ eq . ( [ eq13 ] ) ]are all effective for these series .table lists the results from the log - return series of the s index on the time scale from to ..[tab : table2 ] the results from the log - return series of the s index on the time scale from to . the average slope , the average value of autocorrelation , the average proportion of the sign - sustained and sign - convert cases , the characteristic relaxation time with and are listed in this table . [ cols="^,^,^,^,^,^,^,^,^,^,^,^ " , ]in this paper , we present a coarse - grain time - dependent langevin description of the dynamics of stock prices , which is proved to be effective by the results obtained from analyzing the s index .the time dependence of drift parameter , which was considered as the restoring force , was investigated by the simple sliding windows algorithm .significantly , while choosing the right weighted factor ( eq .( [ eq06 ] ) ) to approximate the statistical results of , the linear approximations of can reflect both the macroscopical and the detail properties of the price evolution , and the final form of the restoring force eq .( [ eq13 ] ) can be achieved from analytical methods .the macroscopical trend of price could be investigated from the equilibrium position , and the daily correlation in log - return was exhibited by the flat slope coefficient .the mechanism of our model is discussed by analyzing the sign series of log - returns . therefore ,from the restoring force in langevin approach , one can get the properties of experimental data or the properties of financial markets .furthermore , it must be pointed out that the random force also plays an important role in the dynamics of financial markets , which will be addressed further in the future study .
we present a time-dependent langevin description of the dynamics of stock prices. based on a simple sliding-window algorithm, the fluctuation of stock prices is discussed in terms of a time-dependent linear restoring force, which is the linear approximation of the drift parameter of the langevin equation estimated from the financial time series. by choosing a suitable weighting factor for the linear approximation, the relation between the dynamical effect of the restoring force and the autocorrelation of the financial time series is deduced. we especially analyze the daily log-returns of the s index from to. the significance of the restoring force for the price evolution is investigated through its two coefficients, the slope coefficient and the equilibrium position. the new simple form of the restoring force, obtained from both statistical and theoretical analyses, suggests that the langevin approach can effectively capture both the macroscopic and the detailed properties of the price evolution.
a natural language understanding system is a machine that produces an action as the result of an input sentence ( speech or text ) .there are examples of systems that are able of modeling and learning the relationship between the input sentence and the action in a direct way . however ,when the task is rather complicated , i.e. the set of possible actions is extremely large , we believe that it is necessary to rely on a intermediate symbolic representation . fig .[ fig : transl ] depicts a natural language understanding as composed of two components .the first , called _ semantic translator _ analyzes the input sentence in natural language _( n - l ) _ and generates a representation of its meaning in a formal semantic language _ ( s - l)_. the _ action transducer _ converts the meaning representation into statements of a given computer language _ ( c - l ) _ for executing the required action .+ although there are several and well established ways of performing the semantic translation with relatively good performance , we are interested in investigating the possibility of building a machine that can learn how to do it from the observation of examples. traditional _non - learning _ methods are based on grammars ( i.e. set of rules ) both at the syntactic and semantic level .those grammars are generally designed by hand .often the grammar designers rely on corpora of examples for devising the rules of a given application .but the variety of expressions that are present in a language , even though it is restricted to a very specific semantic domain , makes the task of refining a given set of rules an endless job .any additional set of examples may lead to the introduction of new rules , and while the rate of growth of the number of rules decreases with the number of examples , larger and larger amounts of data must be analyzed for increasing the coverage of a system . moreover , if a new different application has to be designed , very little of the work previously done can be generally exploited .the situation is even more critical for spoken rather than written language .written language generally follows _ standard _ grammatical rules more strictly than spoken language , that is often ungrammatical and idiomatic .besides , in spoken language , there are phenomena like false starts and broken sentences that do not appear in written language . 
the following is a real example from a corpus of dialogues within the airline information domain (darpa atis project, see section [ sect : implement ]):

_ from uh sss from the philadelphia airport um at ooh the airline is united airlines and it is flight number one ninety four once that one lands i need ground transportation to uh broad street in phileld philadelphia what can you arrange for that _

it is clear from this example that rules for analyzing spontaneously spoken sentences can hardly be foreseen by a grammarian. we believe that a system that learns from examples will ease the work of the designer of a text or speech understanding system, giving the possibility of analyzing big corpora of sentences. the question remains how to collect those corpora, what kind of annotation is needed, and what amount of manual work has to be carried out.

the basis of the work exposed in this paper is a semantic translator, called _ chronus _. chronus is based on a stochastic representation of conceptual entities resulting from the formalization of the speech/text understanding problem as a communication problem. the paper is structured as follows. in section [ sect : model ] we formalize the language understanding problem and propose an algorithm based on maximum a posteriori decoding. in section [ sect : implement ] we explain how the described algorithm for conceptual decoding can be part of a complete understanding system, and we give a short description of all the modules that were implemented for an information retrieval application. in section [ sect : practice ] we discuss the experimental performance of the system as well as issues related to the training of the conceptual decoder.
finally in section [ sect : conclusion ] we conclude the paper with a discussion on the open problems and the future developments of the proposed learning paradigm .in this section we propose a formalization of the language understanding problem in terms of the noisy channel paradigm .this paradigm has been introduced for formalizing the general speech recognition problem and constitutes a basis for most of the current working speech recognizers .recently , a version of the paradigm was introduced for formalizing the problem of automatic translation between two languages .the problem of translating between two languages has the same flavor of the problem of understanding a language . in the former ,both the input and the output are natural languages , while in the latter the output language is a formal semantic language apt to represent meaning .the first assumption we make is that the meaning of a sentence can be expressed by a sequence of basic units and that there is a _ sequential correspondence _ between each and a subsequence of the acoustic observation , so that we could actually segment the acoustic signal into consecutive portions , each one of them corresponding to a phrase that express a particular .the second assumption consists in thinking of the acoustic representation of an utterance as a version of the original sequence of meaning units corrupted by a noisy channel whose characteristics are generally unknown .thus , the problem of understanding a sentence can be expressed in this terms : given that we observed a sequence of acoustic measurements we want to find which semantic message most likely produced it , namely the one for which the a posteriori probability is maximum .hence the problem of understanding a sentence is reduced to that of maximum a posteriori probability decoding ( map ) . for the actual implementation of this ideawe need to represent the meaning of a sentence as a sequence of basic units .a simple choice consists in defining a unit of meaning as a _ keyword / value _ pair , where , is a conceptual category ( i.e. a _concept _ like for instance _ origin of a flight , destination , meal _ ) and is the _ value _ with which is instantiated in the actual sentence ( e.g. 
_ boston , san francisco , breakfast _ ) .given a certain application domain we can define a concept dictionary and for each concept we can define a set of values .examples of meaning representation for phrases in the airline information domain are given in table [ tab : mean - examples ] ..example of keyword / pair representations of simple phrases within the atis domain.[tab : mean - examples ] [ cols="<,<",options="header " , ] from this analysis it results that most of the errors are due to the parts of the system that are not trained .the template generator errors reflect a lack of entries in the look - up tables .the dialog manager errors are due to the fact the the simple strategy for merging the context and the current template should be refined with more sophisticated rules .the sql translator should be an error - free module .its only function is that of translating between two different representation in a deterministic fashion .however , in this test , the sql module faced two kinds of problems .the first is that the interpretation rules used for generating the answers were not exactly the same ones used for the official test , and this accounts for roughly half of the errors .the other half of the error is due to the limited power of the template representation , and this will be discussed in section [ sect : conclusion ] . the conceptual model , as explained in section [ sect : model ] ,is defined by two sets of probabilities , namely the _ concept conditional bigrams _ and the _ concept transition probabilities _ . in the first experimentsthese probabilities were estimated using a set of 532 sentences whose conceptual segmentation was provided by hand . the accuracy of the system in the experiments carried out using the model estimated with such a small training set , although surprisingly high , shows a definite lack of training data .smoothing the estimated model probability provides an increase of the performance .the knowledge of the task can be introduced through a _ supervised smoothing _ of the concept conditional bigrams .the supervised smoothing is based on the observation that , given a concept , there are several words that carry the same meaning .for instance , for the concept * origin * , the words depart(s ) leave(s ) arrive(s ) can be considered as synonyms , and can be interchanged in sentences such as : _ the flight that depart(s ) from dallas _ + _ the flight that leave(s ) from dallas _ + _ the flight that arrive(s ) from dallas . _ a number of groups of synonyms were manually compiled for each concept .the occurrence frequencies inside a group were equally shared among the constituting words , giving the same bigram probability for synonymous words .+ if one wants to use a larger corpus than the initial handlabeled few hundred sentences and wants to avoid an intensive hand segmentation labor , one has to capitalize on all the possible information associated to the sentences in the corpus .unfortunately , when the corpus is not expressively designed for learning , like the atis corpus , the information needed may not be readily available . in the remaining of this sectionwe analyze solutions that , although particularly devised for atis , could be generalized to other corpora and constitute a guideline for the design of new corpora .a training token consists of a sentence and its associated meaning .the meaning of sentences in the atis corpus is not available in a declarative form . 
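Before turning to how such training data can be obtained without hand-labeling, here is a minimal Viterbi sketch of the conceptual decoding driven by those two probability sets. This is not the chronus implementation: the dictionary layouts, the flooring constant, and the simplification of conditioning the word bigram on the previous word even across concept boundaries are all assumptions.

import math

def decode_concepts(words, concepts, p_init, p_trans, p_bigram, floor=1e-8):
    # p_init[c]: initial concept probability; p_trans[c1][c2]: concept transition probability
    # p_bigram[c][(w_prev, w)]: concept-conditional word bigram (w_prev is None for the first word)
    lp = lambda p: math.log(max(p, floor))
    delta = [{c: lp(p_init.get(c, 0.0)) + lp(p_bigram[c].get((None, words[0]), 0.0))
              for c in concepts}]
    back = []
    for i in range(1, len(words)):
        row, ptr = {}, {}
        for c in concepts:
            prev, score = max(((cp, delta[-1][cp] + lp(p_trans[cp].get(c, 0.0)))
                               for cp in concepts), key=lambda t: t[1])
            row[c] = score + lp(p_bigram[c].get((words[i - 1], words[i]), 0.0))
            ptr[c] = prev
        delta.append(row)
        back.append(ptr)
    c = max(concepts, key=lambda k: delta[-1][k])
    path = [c]
    for ptr in reversed(back):
        c = ptr[c]
        path.append(c)
    return list(reversed(path))   # one concept per word; runs of equal labels form the conceptual segments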
instead, each sentence is associated with the _ action _ resulting from the _ interpretation _ of the meaning , namely the correct answer .one way of using this information for avoiding the handlabeling and segmentation of all the sentences in the corpus consists in creating a training loop in which the provided correct answer serves the purpose of a feedback signal . in the training loopall the available sentences are analyzed by the understanding system obtained with an initial estimate of the conceptual model parameters .the answers are then compared to the reference answers and the sentences are divided into two classes .the _ correct _ sentences , for which we assume that the conceptual segmentation obtained with the current model is correct , and the _ problem sentences_. then the segmentation of the correct sentences is used for reestimating the model parameters , and the procedure is repeated again .the procedure can be repeated until it converges to a stable number of correct answers .eventually , the remaining _ problem sentences _ are corrected by hand and included in the set of correct sentences for a final iteration of the training algorithm .this procedure proved effective for reducing the amount of handlabeling . in the experiment described in we showed that the performance increase obtained with the described training loop , without any kind of supervision ( the remaining _ problem_ sentences were excluded from the training corpus ) is equivalent to that obtained with the supervised smoothing .this means that the training loop , although is not able to learn radically new expressions or new concepts , is able to reinforce the acquired knowledge and to _ infer _ the meaning of semantically equivalent words . in a set of 4500 sentences the training loop automatically classified almost 80% of the sentences , leaving the remaining 20% to the manual segmentation .+ in section [ sect : model ] we based our formalization of the speech understanding problem on the assumption that there is a sequential correspondence between the representation of a sentence ( words or acoustic measurements ) and the corresponding representation of meaning .this assumption is not generally true for any translation ( semantic or not ) task .an interesting example ( reported in ) of a task where there is no sequential correspondence between a message and its semantic representation , is that of roman numbers ( e.g. i , xxiv , xcix ) and their correspondent decimal representation ( e.g. 
in section [ sect : model ] we based our formalization of the speech understanding problem on the assumption that there is a sequential correspondence between the representation of a sentence ( words or acoustic measurements ) and the corresponding representation of meaning . this assumption is not generally true for any translation ( semantic or not ) task . an interesting example ( reported in ) of a task where there is no sequential correspondence between a message and its semantic representation is that of roman numerals ( e.g. i , xxiv , xcix ) and their corresponding decimal representation ( e.g. 1 , 24 , 99 ) . fortunately , in a natural language understanding task , we may have the freedom of choosing the semantic representation , as we did in the implementation of chronus explained above . but in general , if we are dealing with a large corpus of sentences that have not been expressly designed for the purpose of learning a semantic translator , and we would like to take advantage of some kind of semantic annotation already available , we may have to face the problem of the non - sequentiality of the representation . for instance , in the atis corpus , each sentence is associated with the intermediate representations used by the annotators for obtaining the reference correct answers . in fact the annotators rephrase each valid sentence in an artificial language that is a very restricted form of english . this _pseudo - english_ rephrasing ( called _win_ or _wizard input_ ) constitutes the input of a parser , called nlparse , that unambiguously generates the sql query . for instance , for a sentence like :

_i'd like to find the cheapest flight from washington d c to atlanta_

the _win_ rephrasing is :

_list cheapest one direction flights from washington and to atlanta_

and the corresponding sql statement is :

( select distinct flight.flight_id from flight where ( flight.flight_id in ( select flight_fare.flight_id from flight_fare where flight_fare.fare_id in ( select fare.fare_id from fare where fare.one_direction_cost = ( select min ( fare.one_direction_cost ) from fare where fare.fare_id in ( select flight_fare.fare_id from flight_fare where flight_fare.flight_id in ( select flight.flight_id from flight where ( flight.from_airport in ( select airport_service.airport_code from airport_service where airport_service.city_code in ( select city.city_code from city where city.city_name = washington ) ) and flight.to_airport in ( select airport_service.airport_code from airport_service where airport_service.city_code in ( select city.city_code from city where city.city_name = atlanta ) ) ) ) ) ) ) ) and ( flight.from_airport in ( select airport_service.airport_code from airport_service where airport_service.city_code in ( select city.city_code from city where city.city_name = washington ) ) and flight.to_airport in ( select airport_service.airport_code from airport_service where airport_service.city_code in ( select city.city_code from city where city.city_name = atlanta ) ) ) ) ) ;

both the sql query and the _win_ sentence can be considered semantic representations of the original sentence . in fact the sql query is the final target of the understanding system and can be unequivocally obtained from the _win_ sentence through an existing parser . obviously the sequential correspondence assumption is strongly violated for the sql representation . however a sequential correspondence can easily be found between the pseudo - english _win_ sentence and the original message , at least for the examples shown . since all the valid sentences in the atis corpus have a _win_ annotation , the pseudo - english language can be thought of as an alternate candidate for the meaning representation in our learning framework . using _win_ for representing the meaning may lead to two different solutions . in the first , we can think of developing a system that learns how to translate natural language sentences into pseudo - english sentences and then uses the existing parser for generating the sql query .
in the second solution each _win_ sentence in the corpus can be translated into the corresponding conceptual representation used for chronus . this translation is unambiguous ( _win_ is an unambiguous artificial language by definition ) . a parser can easily be designed for performing the translation or , more simply , chronus itself can be used to perform it . unfortunately , also for the _win_ representation the sequential correspondence assumption is violated for a good percentage of the sentences in the corpus . a typical example is the following sentence :

_could you please give me information concerning american airlines a flight from washington d c to philadelphia the earliest one in the morning as possible_

whose corresponding _win_ annotation is :

_list earliest morning flights from washington and to philadelphia and american_ .

the problem of reordering the words of the _win_ representation to align it with the original sentence is a complex one that cannot be solved optimally . suboptimal solutions with satisfactory performance can be developed based on effective heuristics . we will not discuss the details of how the reordering can be put into practice . rather , we want to emphasize the fact that an iterative algorithm based on a model similar to that explained in section [ sect : model ] led to almost 91% correct alignments between english sentences and the corresponding _win_ representations on a corpus of 2863 sentences . with additional refinements this technique can be used , integrated in the training loop , for automatically processing the training corpus of the conceptual model .

in this paper we propose a new paradigm for language understanding based on a stochastic representation of semantic entities called concepts . an interesting way of looking at the language understanding paradigm is in terms of a language translation system . the first block in fig . [ fig : transl ] translates a sentence in natural language ( _n - l_ ) into a sentence expressed in a particular semantic language ( _s - l_ ) . the natural language characteristics are generally unknown , while the semantic language , designed to cover the semantics of the application , is completely known and described by a formal grammar . the second step consists in translating the _s - l_ sentence into computer language code ( _c - l_ ) for performing the requested action .
this second module can be generally ( but not necessarily ) designed to cover all the possible sentences in _ s - l _ , since both _s - l _ and _ c - l _ are known .however , the boundary between the first and the second module is quite arbitrary . in , for instance, an automatic system is designed for translating atis english sentences directly into the sql query , and in there is an example of a system that goes from an english sentence to the requested action without any intermediate representation of the meaning .however , the closer we move the definition of _ s - l _ to _n - l _ , the more complicate becomes the design of the _ action transducer _ , reaching in the limit the complexity of a complete understanding system .conversely , when we move the definition of _s - l _ closer to _ c - l _ , we may find that learning the parameters of the _ semantic translator_ becomes quite a difficult problem when the application entails a rather complex semantics .the subject of this paper deal with the investigation of the possibility of automatizing the design of the first block ( i.e. the _ semantic translator _ ) starting from a set of examples .the semantic language chosen for the experiments reported in this paper is very simple and consists of sequences of keyword / value pairs ( or _ tokens _ ) .there is no syntactic structure in the semantic language we use .two sentences for which the difference in the semantic representation is only in the order of the tokens are considered equivalent . in this way we cover a good percentage of sentences in the domain , but still there are sentences that would require a structured semantic language .for instance the two following sentences are indistinguishable when represented by our semantic language , and obviously they have a different meaning .although the system we propose uses a very simple intermediate semantic representation , we showed that it can successfully handle most of the sentences in a database query application like the atis task . when this simple representation is used and when the problem of _ semantic translation _ is formalized as a communication problem , a map criterion can be established for decoding the _ units of meaning _ from text or speech .the resulting decoder can then be integrated with other modules for building a speech / text understanding system .an understanding system based on a learning paradigm , like the one proposed in this paper , can evolve according to different dimensions of the problem .one dimension goes with the increase in complexity of the semantic language _s - l_. rather than using a sequential representation on could think of a tree representation of the meaning .however , this poses additional problems both in the training and decoding stage , and requires the use of algorithms designed for context - free grammars , like for instance the _ inside - outside _algorithm that have a higher complexity that those explained in this paper .another dimension of the problem goes toward a complete automatization of the system , also for those modules that , at the moment , require a manual compilation of some of the knowledge sources .one of these modules is the _ template generator_. 
both and report examples of systems where the decision about the actual values of the conceptual entities ( or an equivalent information ) is drawn on the basis of knowledge acquired automatically from the examples in the training corpus .the kind of annotation required for the training corpus is also another dimension along with the research on learning to understand language should move .a strategy for learning the understanding function of a natural language system becomes really effective and competitive to the current non - learning methods when the amount of labor required for annotating the sentences in a training corpus is comparable or inferior to the amount of work required for writing a grammar in a traditional system .this requires the development of a learning system the does not require any other information than the representation of the meaning associated to each sentence ( e.g. it does not require an initial segmentation into conceptual units , like in chronus , for bootstrapping the conceptual models ) .moreover , the representation of the meaning should be made using a _ pseudo - natural _ language , for making easier and less time consuming the work of the annotators .an example of this kind of annotation was introduced in section [ sect : pseudoe ] with the pseudo - english _ win _ rephrasing .this suggests a possible evolution of the learning strategy for understanding systems toward a system starting with the limited amount of knowledge required for understanding a small subset of the whole language ( e.g. the _ win _ language ) .then the system can evolve to understanding larger subsets of the language using the language already acquired for rephrasing new and more complex examples .but , of course , the science of learning to understand is still in its infancy , and many more basic problems must be solved before it becomes an established solution to the design of a language interface .baker , j. , `` trainable grammars for speech recognition , '' in wolf , j. j. , klatt , d. h. , editors , _ speech communication papers presented at the 97th meeting of the acoustical society of america , _ mit , cambridge , ma , june 1979 .katz , s. m. , `` estimation of probabilities from sparse data for the language model component of a speech recognizer , '' _ ieee trans . on acoustic , speech and signal processing , _ vol .assp-35 , no .3 , march 1987 .fissore , l. , laface , p. , micca , g. , pieraccini , r. , `` lexical access to large vocabularies for speech recognition , '' _ ieee trans . on acoustics , speech and signal processing , _37 , no . 8 , august 1989 .brown , p. f , cocke , j. , della pietra , s. a. , della pietra , v. j. , jelinek , f. , lafferty , j. , mercer , r. l. , roosin , p. s. , `` a statistical approach to machine translation , '' _ computational linguistics _ ,volume 16 , number 2 , june 1990 .fissore , l. , kaltenmeier , a. , laface , p. , micca , g. , pieraccini , r. , `` the recognition algorithms , '' in _ advanced algorithms and architectures for speech understanding , _g. pirani editor , springer - verlag brussels -luxemboug , 1990 .hemphill , c. t. , godfrey , j. j. , doddington , g. r. , `` the atis spoken language systems , pilot corpus , '' _ proc . of 3rd darpa workshop on speech and natural language _102 - 108 , hidden valley ( pa ) , june 1990 .pieraccini , r. , lee , c. h. , giachin , e. , rabiner , l. r. 
, `` an efficient structure for continuous speech recognition , '' in _ speech recognition and understanding , recent advances , trends and applications , _ p. laface and r. de mori editors , springer - verlag berlin heidelberg , 1992 .ostendorf , m. , kannan , a. , austin , s. , kimball , o. , schwartz , r. , rohlicek , j. r. , `` integration of diverse recognition methodologies through reevaluation of n - best sentence hypotheses , '' _ proc . of 4th darpa workshop on speech and natural language _ , pacific grove , ( ca ) , february 1991 .pieraccini , r. , levin , e. , lee , c .- h . , `` stochastic representation of conceptual structure in the atis task , '' _ proc .of 4th darpa workshop on speech and natural language _ , pacific grove , ( ca ) , february 1991 .giachin , e. p , `` automatic training of stochastic finite - state language models for speech understanding '' , _ proc . of international conference on acoustics , speech and signal processing _ , icassp-92 , san francisco , ca , march 1992 .pallett , d. s. , dahlgren , n. l. , fiscus , j. g. , fisher , w. m. , garofolo , j. s. , tjaden , b. c. , `` darpa february 1992 atis benchmark test results , '' _ proc . of fifth darpa workshop on speech and natural language _ , harriman , ny , feb 1992 .pieraccini , r. , tzoukermann , e. , gorelov , z. , levin , e. , lee , c .-h , gauvain , j .-l , `` progress report on the chronus system : atis benchmark results , '' _ proc . of fifth darpa workshop on speech and natural language _ , harriman , ny , feb 1992 .pieraccini , r. , gorelov , z. , levin .e. , tzoukermann , e. , `` automatic learning in spoken language understanding , '' _ proc . of 1992 international conference on spoken language signal processing _ , icslp 92 , banff , alberta , canada , october 1992 .tzoukermann , e. , pieraccini , r. , gorelov , z. , `` natural language processing in the chronus system , '' _ proc . of 1992 international conference on spoken language signal processing _, icslp 92 , banff , alberta , canada , october 1992 .oncina , j. , garca , p. , vidal , e. , `` learning subsequential transducers for pattern recognition interpretation tasks , '' _ ieee trans . on pattern analysis and machineintelligence _ ,5 , pp . 448 - 458 , may 1993 .kuhn , r. , de mori , r. , `` learning speech semantics with keyword classification trees , '' _ proc . of ieee international conference on acoustics , speech and signal processing , _icassp-93 , minneapolis , minnesota , april 27 - 30 , 1993 ,
in this paper we propose a learning paradigm for the problem of understanding spoken language . the basis of the work is a formalization of the understanding problem as a communication problem . this results in the definition of a stochastic model of the production of speech or text starting from the meaning of a sentence . the resulting understanding algorithm consists in a viterbi maximization procedure , analogous to that commonly used for recognizing speech . the algorithm was implemented for building a module , called the _conceptual decoder_ , for decoding the conceptual content of sentences in an airline information domain . the decoding module is the basis on which a complete prototypical understanding system was implemented , and whose performance is discussed in the paper . the problems , the possible solutions and the future directions of the learning approach to language understanding are also discussed .
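the viterbi maximization mentioned above can be sketched as follows for a word - level model with concept transition probabilities and concept - conditional bigrams . the data structures, the flooring constant and the handling of concept boundaries are illustrative assumptions , not the actual implementation .

```python
import math

def viterbi_concepts(words, concepts, trans, bigram, floor=1e-12):
    """Most likely concept sequence for a word sequence.

    trans[(c_prev, c)]     -> P(c | c_prev), with c_prev = None for the sentence start
    bigram[c][(w_prev, w)] -> P(w | w_prev, c), with w_prev = None at a concept start
    """
    lp = lambda x: math.log(max(x, floor))
    # delta[c] = (log prob of the best segmentation ending in concept c, concept path)
    delta = {c: (lp(trans.get((None, c), 0.0)) + lp(bigram[c].get((None, words[0]), 0.0)), [c])
             for c in concepts}
    for w_prev, w in zip(words, words[1:]):
        new_delta = {}
        for c in concepts:
            # stay in the same concept: concept-conditional word bigram
            best = (delta[c][0] + lp(bigram[c].get((w_prev, w), 0.0)), delta[c][1] + [c])
            # or enter concept c from another concept: transition times concept-initial word
            for c_prev in concepts:
                if c_prev == c:
                    continue
                cand = (delta[c_prev][0] + lp(trans.get((c_prev, c), 0.0))
                        + lp(bigram[c].get((None, w), 0.0)), delta[c_prev][1] + [c])
                if cand[0] > best[0]:
                    best = cand
            new_delta[c] = best
        delta = new_delta
    return max(delta.values(), key=lambda t: t[0])
```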
the observation of its structures shows that our universe is not homogeneous . we see voids , groups of galaxies , clusters , superclusters , walls , filaments , etc . however , it is usually argued in the literature that the universe should be nearly homogeneous at large scales , which is supposed to validate the use of friedmannian models . but how large these scales are and what ` nearly ' implies is never precisely stated . it has however become , during the last few years , a widely accepted fact that the effect of the inhomogeneities cannot be ignored when one wants to construct an accurate cosmological model up to the regions where structures start forming and their evolution becomes non - linear . three different methods have been proposed to deal with this issue : 1 . linear perturbation theory . however , this method is only valid when _both_ the curvature and density contrasts remain small , which is not the case in the non - linear regime of structure formation and where the sne ia are observed . 2 . averaging methods à la buchert , promising , but needing to be improved ( see and references therein ) . 3 . exact inhomogeneous solutions , valid at all scales , which are exact perturbations of the friedmann background and can reproduce it as a limit with any precision . the use of such exact solutions shows that well - established physics can explain several of the phenomena observed in astrophysics and cosmology without introducing highly speculative elements , like dark matter , dark energy , exponential expansion at densities never attained in any experiment ( inflation ) , and the like . here , we focus on the application of a couple of exact solutions of general relativity to structure formation and evolution and to the reproduction of cosmological data . very few exact inhomogeneous solutions of einstein's equations have been used for these purposes . those which appear most often are : 1 . the lemaître tolman ( l t ) models , which are spherically symmetric dust solutions of einstein's equations . they are determined by one coordinate choice and two free functions of the radial coordinate chosen among three independent ones : the energy per unit mass , E(r) , of the particles contained within the comoving spherical shell at a given r , the gravitational mass , M(r) , contained in that shell , and the bang time function , t_B(r) , meaning that the big bang occurred at different times at different values of r . the homogeneous flrw model is one sub - case . 2 . the lemaître model ( usually known as misner - sharp ) is not an explicit solution but a metric determined by a set of two differential equations . it represents a spherically symmetric perfect fluid with a pressure gradient . its solution is obtained by numerical integration . 3 . the quasi - spherical szekeres ( qss ) models are dust solutions of einstein's equations with no symmetry at all . they are defined by one coordinate choice and five free functions of the radial coordinate . l t and flrw models are sub - cases . 4 .
the spherically symmetric stephani models have also been used for cosmological purpose .they are exact solutions with homogeneous - energy density and inhomogeneous - pressure .l t models have been the most widely used in cosmology since they are the most tractable among the few available ones cited above .however , qss models are currently slightly coming into play .but caution with l t models is required since : * an origin , or centre of spherical symmetry , occurs at where for all ( here , is the areal radius ) .the conditions for a regular centre were derived by mustapha and hellaby . *shell crossings , where a constant shell collides with its neighbour , create undesirable singularities while the density diverges and changes sign .the conditions on the 3 arbitrary functions of the model that ensure none be present anywhere in an l t model are given in .* the assumption of central observer , generally retained for simplicity , can be considered as grounded on the observed quasi - isotropy of the cmb temperature , and thus as a good working approximation at large scales . at smaller scales, it gives simplified models of the universe averaged over the angular coordinates around the observer , i. e. , with the relax of only one degree of symmetry as regards the homogeneity assumption .the central observer location is just an artefact of this angular averaging .however , models assuming a non - central observer and l t swiss - cheeses have also been studied to get rid of possible misleading features of spherical symmetry .the l t class of solutions have been used to reproduce the formation and evolution of structures from the near homogeneity seen in the cmb temperature . in , the evolution from an initial density profile to a galaxy cluster whose density profile approximate the ` universal profile ' of an abell cluster implies a density amplitude at the initial time which differs from the observed values at last scattering by three orders of magnitude . however , the velocity amplitude at the initial time obtained by the same process is on the border of the observationally implied range .this shows that velocity perturbations generate structures much more efficiently than density perturbations .moreover , it is demonstrated in that a smooth evolution can take an initial condensation to a void .this implies that the initial density distribution does not determine the final structure which will emerge from it .the velocity distribution can obliterate the initial setup .however , a void consistent with the observational data ( density contrast less than , smooth edges and high density in the surrounding regions ) is very hard to obtain with l t models without shell crossing . but adding a realistic distribution of radiation ( using tolman models ) helps forming such voids . in the works reported in the above subsection , the spherically symmetric l t and lematre models were used to study structure formation .however , the structures observed in the universe are far from being spherical and the analysis needs thus to be refined by considering a wider variety of astrophysical objects with different shapes .the investigation of the evolution of small voids inside compact clusters and of large voids surrounded by walls or filaments has been performed with qss models .a void with an adjourning supercluster evolved inside an homogeneous background ( figure 1 ) has been considered in . 
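for reference , the l t class underlying the works discussed here is commonly written ( with G = c = 1 , a vanishing cosmological constant , and the arbitrary functions named as in the enumeration above ) as

ds^2 = dt^2 - \frac{R'^2(t,r)}{1 + 2E(r)}\, dr^2 - R^2(t,r)\left( d\vartheta^2 + \sin^2\vartheta\, d\varphi^2 \right) ,
\qquad
\left( \frac{\partial R}{\partial t} \right)^2 = 2E(r) + \frac{2M(r)}{R} ,

with the mass density \rho = M'(r) / \left( 4\pi R^2 R' \right) and the bang time t_B(r) entering as the integration constant of the evolution equation . this is a standard form quoted here for convenience rather than taken from the text .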
to estimate how two neighbouring structures influence each other , the evolution of a double structure in qss models ( figure 2 ) has been compared with that of a single structure in l t models and in linear perturbation theory . in the qss models studied , the growth of the density contrast is 5 times faster than in the corresponding l t models and 8 times faster than in the linear approach . this could imply a strong improvement of the phenomenon of structure formation which , in the standard cdm model , is too slow to form the structures corresponding to the observations made at larger and larger redshifts . however , the evolution of the void is slower within the szekeres model than it is in the l t model . this suggests that single , isolated voids evolve much faster than the ones which are in the neighbourhood of large overdensities , where the mass of the perturbed region is above the background mass . ( figure : present - day density distribution of a double - structure model in background units ; the left panel shows the iso - density curves in the spatial section x - y , the right panel the colour - coded density distribution in the spatial section y - z , white representing high - density and black low - density regions . ) the model of triple structure is composed of an overdense region at the origin , followed by a small void which spreads to a given coordinate . at a larger distance from the origin , the void is huge and its larger side is adjacent to an overdense region ( figure 3 ) . where the void is large , it evolves much faster than the underdense region closer to the ` centered ' cluster . the exterior overdense region close to the void along a large area evolves much faster than the more compact supercluster at the centre ( figure 4 ) . this confirms that , in the universe , small voids surrounded by large high densities evolve much more slowly than large isolated voids . ( figure : density distribution of a triple - structure model in background units , showing a slice through the origin . )

since its discovery during the late 1990s , the ` dimming ' of distant type ia supernovae has been mostly ascribed to the influence of a mysterious dark energy component , i.e. an unknown ` fluid ' or ` field ' with negative pressure . formulated in a friedmannian framework , based upon the ` cosmological principle ' , this interpretation has given rise to the ` concordance ' model where the universe expansion is accelerated by the dark energy pressure . however , what we observe is _not an accelerated expansion_ ( this is only the outcome of the friedmannian assumption ) but the dimming of the supernovae with respect to the luminosity predicted by the previous einstein - de sitter standard model of cosmology . more exactly , we establish their luminosity distance - redshift relation , itself inferred from the flux measurement of their light curves . shortly after this discovery , it was proposed by a small number of authors that this effect could be due to the large - scale inhomogeneities of the universe . after a period of relative disaffection , this proposal experienced a renewed interest about five years ago .
now , the accelerated expansion interpretation was sufficiently misleading to induce some authors to try to derive or rule out no - go theorems , i.e. , theorems stating that a locally defined expansion cannot be accelerating in inhomogeneous models satisfying the strong energy condition . but , as we have seen above , this is not the point . other authors stressed , more accurately , that the definition of a deceleration parameter in an inhomogeneous model is tricky and has nothing to do with reproducing the supernova data . it is well known , from the work of mustapha , hellaby and ellis , that an infinite class of l t models can fit a given set of observations isotropic around the observer . this has been used by célérier to exemplify her demonstration that models which are spherically symmetric around the observer can fit the supernova data and that the problem is completely degenerate . this is the reason why many different central observer l t models have been proposed in the literature and shown to succeed rather well . thus , to constrain the model further , it is mandatory to fit it to other cosmological data . there exist two procedures for trying to explain away dark energy with ( l t ) models : 1 . the direct way uses a smaller number of degrees of freedom than allowed . here , one first guesses the form of the parameter functions defining a class of models supposed to represent our universe with no cosmological constant or dark energy , and writes the dependence of these functions in terms of a limited number of constant parameters . then one fits these constant parameters to the observed sn ia data or to the luminosity distance - redshift relation of the model . 2 . the inverse problem is more general . it amounts to considering the luminosity distance as given by observations or by the model as an input and trying to select a specific l t model with zero cosmological constant best fitting this relation . then , to avoid degeneracy , one must jump to a further step and try to reproduce more and possibly all the available observational data . l t solutions with a central observer have been used as a first step in the process of reproducing cosmological inhomogeneities . they model the universe with the inhomogeneities smoothed out over angles around the observer , whose location can be anywhere ( this does not contradict any copernican principle ) . this is analogous to the smoothing out over the whole space in homogeneous models . the use of such models must be regarded as a first approach which will be followed in the future by more precise ways of dealing with the observed inhomogeneities . as it has been recalled in the second section of this article , an l t model is defined by two independent arbitrary functions of the radial coordinate , which must be fitted to the observational data . however , in most of the proposals available in the literature , the generality of the models has been artificially limited by giving the initial - data functions a handpicked algebraic form depending on the authors' feelings about which kind of model would best represent our universe . only a few constant parameters are left arbitrary to be adapted to the observations . another way in which the generality of the l t models has been artificially limited is the assumption that the age of the universe is everywhere the same , i.e. that the l t bang - time function is constant .
with being constant , the only single - patch l t model that fits observations is one with a giant void .the argument brought in defense of the constant assumption is that a non - constant generates decreasing modes of perturbation of the metric , so any substantial inhomogeneity at the present time stemming from would imply huge perturbations of homogeneity at the last scattering .this , in turn , would contradict the cmb observations and the implications of inflationary models . in the recent years , we have thus seen the increase in popularity of a large , then huge , then giant void model , where the observer is located at or near the centre of a large , huge , giant l t void of size of up to a few gpc .the most achieved example of this class of models is the gbh model , specified by its matter content and its expansion rate , governed by 5 free constant parameters and matching to an einstein - de sitter universe at large scales .it is fitted to a series of observations ( cmb , lss , bao , sn ia , hst measure of , age of the globular clusters , gas fraction in clusters , kinematic sunyaev - zeldovich effect for 9 distant galaxy clusters ) used to constrain its parameters . with this class of models ,the conclusion is that the possibility that we leave close to the centre of a large ( around 2.5 gpc ) void within an einstein - de sitter universe with no dark energy is not excluded .however , as many other central void models , the gbh model is constructed with a central underdensity defined a priori by a set of constant parameters ( the underdensity at the centre of the void , the size of the void and the transition width of the void profile ) .hence , the outcome of the fit can be nothing but a central void .why , then , several researchers have been led astray by the frequent claims of a giant void being implied by an l the reason might be twofold . from the results of an analysis of 44 sn ia distances they performed in the framework of a flrw model with and , zehavi _ et al ._ suggested a monopole in the peculiar velocity field they interpreted as marginal evidence for a local void of radius 70 mpc and % underdensity surrounded by a dense shell roughly coinciding with the local great walls .they called this putative local void the ` hubble bubble ' , and , under this name , it became popular in the literature .strangely , a mere toy model can become nearly as strong a reference as actual observations . in a series of articles ,tomita used a simple toy model to explain in a very pedagogical way how ` dark energy ' could in principle be mimicked by a local void .he assumed a low - density inner homogeneous region is connected at some redshift to an outer homogeneous region of higher density .both regions decelerate , but since the inner void expands faster than the outer region , an apparent acceleration is experienced by the observer located inside this void and looking at supernovae bursting in the outer region .such a toy model was only designed to stress that an actual accelerated expansion is not needed to reproduce the sn ia data .however , many authors have taken this example literally and cited this model as if it were evidence for the existence of a local void . 
since ,as we show below , the arguments in favour of the limitations imposed above on the arbitrary functions of the radial coordinate lack sufficient strength , there is no need to restrict a priori their degrees of freedom .the arguments in favour of or against a local ` hubble bubble ' have been disputed at length within the astronomical community and the latest results seem to point to our being located in a ` local sheet ' ( 7 mpc long ) which bounds a ` local void ' , a nearly empty region with radius at least 23 mpc .this seems to contradict the putative existence of a giant local void .note , however ( see below ) that this assumed local void does not exist on our past light cone but in hypersurfaces constant and it is thus in a space - like relation to us .the arguments brought in defense of the constant assumption are only expectations that should not be treated as objective truth unless they are verified by calculations .such calculations have already been done , and it turns out that the inhomogeneities in needed to explain a structure of present radius 30 mpc is of the order of a few hundred years and that this age difference between the oldest and youngest region would generate cmb temperature fluctuations equal to and , respectively .this is well - hidden in the observational errors at the current level of precision .therefore , there is no justification to the assumption constant .hence , to determine the l t model best fitting the observations the two independent functions of the radial coordinate defining this model must be left totally arbitrary .we thus impose neither an hand - picked algebraic form to these functions , nor = constant . using a set of data the angular diameter distance together with the mass density in redshift space both assumed to have the same form on our past light cone as in the model , the two left arbitrary l t functions are determined and give the mass distribution in spacetime . in this case , the current density profile does not exhibit a giant void but a giant hump and we are located in a shallow and wide basin on top of this hump ( figure 5 ) .note , however , that as the giant void , the giant hump is not directly observable .it exists in the space now , of events simultaneous with our present instant in the cosmological synchronization , i.e. it is in a space - like relation to us .however , contrary to the giant void of the gbh type which does not reproduce the function and can thus be expected to be testable by high redshift galaxy counts , the giant hump is not observable in or in the number count data , since , by construction , it is designed to reproduce them faithfully .why such a difference between the density distribution on our past light cone and in the now space ?it is due to one basic feature of the l t model ( and in fact of all inhomogeneous models ) : on any initial data hypersurface , whether it is a light cone or a constant space , _ the density and velocity distributions are two algebraically independent functions of the radial position_. thus the density on a later hypersurface may be quite different , since it depends on both initial functions .whatever initial density distribution can be completely transformed by the velocity distribution .for example , as predicted by mustapha and hellaby and explicitly demonstrated by krasiski and hellaby , any initial overdensity can evolve into a void and vice versa . 
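this last point can be illustrated with a minimal numerical experiment , not taken from the works cited above : starting from the same ( uniform ) initial density , two different choices of the energy function E(r) drive the central region either towards a void or towards a condensation . the profiles , units and parameter values below are purely illustrative , and the cosmological constant is set to zero .

```python
import numpy as np

def evolve_LT(E, M, r, t_end, n_steps=2000):
    """Integrate (dR/dt)^2 = 2E(r) + 2M(r)/R for each shell (RK4), with R(0, r) = r."""
    R = r.copy()
    dt = t_end / n_steps
    rhs = lambda R_: np.sqrt(np.maximum(2.0 * E + 2.0 * M / R_, 0.0))  # expanding branch
    for _ in range(n_steps):
        k1 = rhs(R)
        k2 = rhs(R + 0.5 * dt * k1)
        k3 = rhs(R + 0.5 * dt * k2)
        k4 = rhs(R + dt * k3)
        R += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return R

r = np.linspace(0.05, 2.0, 400)
M = (4.0 * np.pi / 3.0) * r**3                               # uniform initial density rho = 1
E_fast_centre = 0.2 * r**2 * np.exp(-(r / 0.5)**2)           # extra expansion energy at the centre
E_slow_centre = 0.2 * r**2 * (1.0 - np.exp(-(r / 0.5)**2))   # centre expands more slowly

for label, E in [("void forms at centre", E_fast_centre),
                 ("condensation forms at centre", E_slow_centre)]:
    R = evolve_LT(E, M, r, t_end=2.0)
    rho = np.gradient(M, r) / (4.0 * np.pi * R**2 * np.gradient(R, r))
    print(f"{label}: rho(centre)/rho(edge) = {rho[5] / rho[-5]:.2f}")
```

the same initial density thus evolves into qualitatively different structures depending only on the velocity ( energy ) distribution , which is the point made in the text .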
in flrw models , there are no physical functions of position , and all worldlines evolve together .thus , while dealing with an l t ( or any inhomogeneous ) model , one must forget all robertson walker - inspired prejudices and expectations .it is also worth emphasizing that the existence of the giant hump is most probably a feature of the particular l t models that we ended up with .this result must not become the starting point of a new paradigm in observational cosmology , aimed at detecting the hump .before this happens , it must be decided at the theoretical level whether the hump is a necessary implication of l t models properly fitted to other observations .the model is a lattice of l t bubbles with radius mpc in an einstein - de sitter ( eds ) background .initially , the void at the center of each hole is dominated by negative curvature and a compensating overdensity matches smoothly the density and curvature eds values at the border of the hole . since the voids expand faster than the cheese , the overdense regions contract and become thin shells at the borders of the bubbles while underdense regions turn into emptier voids , eventually occupying most of the volume .this shows once more that the evolution of the voids bends the photon paths and affects more photon physics than the geometry of the inhomogeneities .but , here , the inhomogeneities of the model are only able to partly mimic the effect of dark energy .these results have induced us to consider qss swiss - cheese models which exhibit enhanced structure evolution ( see the subsection ` structure evolution with quasi - spherical szekeres models ' ) and with which we thus hope to increase the fit to the ` dark energy ' component .the inverse problem of deriving the arbitrary functions of a l t model from observations is very much involved .this is the reason why most of the authors who have tried to deal with this issue have added some a priori constraints to the model .however , lu and hellaby , mcclure and hellaby and hellaby and alfedeed have initiated a program to extract the metric from a set of observations .this is the full inverse problem and it is not degenerate . to dateit has assumed the metric has the l t form , as a relatively simple case to start from , though the long term intention is to remove the assumption of spherical symmetry .these authors have developed and coded an algorithm that generates the l t metric functions , given observational data on the redshifts , apparent luminosities or angular diameters , number counts of galaxies , estimates for the absolute luminosities or true diameters and source masses , as functions of .this allows both of the physical functions of an l t model to be determined without any a priori assumption on their form .the increasing precision of observational data implies that flrw models must now be considered just a zeroth order approximation , and linear perturbation theory a first order approximation whose domain of validity is an early , nearly homogeneous universe . in the nonlinear regime , which was entered since structuresformed , there is no escape from the use of exact methods ( or of averaging schemes aiming at investigating this issue from the standpoint of backreaction ) . 
in the era of ` precision cosmology ' , the effect of the inhomogeneities on the determination of the cosmological models cannot be ignored . inhomogeneous models constitute an exact perturbation of the friedmann background and can reproduce it as a limit with any precision . this is the reason why they are fully adapted for the purpose of studying astrophysical and cosmological effects and for constructing precise models of the universe . while using l t models with a central observer to represent our ` local ' universe averaged over angles around us , a giant void is not mandatory to explain away dark energy : a giant overdensity can also do the job . however , while neither the void nor the overdensity is directly observable , the giant void alone can be tested with observations of the density function on our past light cone . the giant hump will need more and more precise data to be constrained . exact inhomogeneous solutions can be employed not only to study the geometry and dynamics of the universe , but also to investigate the formation and the evolution of structures . they give enhanced formation efficiency and might therefore help solve the problem of structure formation pertaining to the standard model . while the l t models have been mostly used up to now for modeling the inhomogeneities of the universe , the need to get rid of spherical symmetry for this purpose will lead the cosmological community to consider other solutions , and among them qss models , for the future developments of inhomogeneous cosmology .
it is commonly stated that we have entered the era of precision cosmology , in which a number of important observations have reached a degree of precision , and a level of agreement with theory , that is comparable with that of many earth - based physics experiments . one of the consequences is the need to examine at what point our usual , well - worn assumption of homogeneity , associated with the use of perturbation theory , begins to compromise the accuracy of our models . it is now a widely accepted fact that the effect of the inhomogeneities observed in the universe cannot be ignored when one wants to construct an accurate cosmological model . well - established physics can explain several of the observed phenomena without introducing highly speculative elements , like dark matter , dark energy , exponential expansion at densities never attained in any experiment ( i.e. inflation ) , and the like . two main classes of methods are currently used to deal with these issues . averaging , sometimes linked to fitting procedures à la stoeger and ellis , provides one promising way of solving the problem . another approach is the use of exact inhomogeneous solutions of general relativity . this will be developed here .
the theory of general relativity ( gr ) is currently the most commonly accepted theory describing space - time and gravitation .the theory has been accurately tested within the weak - field , stationary regime ( _ e.g. _ through solar system tests ) , and no deviations from gr have been conclusively found . however , by the very nature of gr , gravitation is a dynamical phenomenon , a key aspect being the prediction of gravitational waves ( gw ) .the first clue towards this dynamical aspect came with the discovery of the hulse - taylor binary pulsar .its orbital motion is in agreement with the assumption that gws carry away energy and angular momentum as predicted by gr .this test , as well as similar test performed on other binary pulsars that were subsequently discovered , however still involve indirect observations of gws .furthermore , even for the most relativistic binary pulsar that is currently known ( psr j0737 - 3039 ) , one has ( where is the total mass , and is the orbital separation ) and a typical orbital speed of . the observed binary pulsars are still very far from merger .however , given access to a sufficiently large volume of space , one should find compact binaries in the final stages of inspiral . at the nominal last stable orbit, these will have a separation of , and .the process of inspiral has been modelled in detail in the context of the so - called post - newtonian ( pn ) approximation ( _ cf_. and references therein ) .the direct detection of the gravitational wave signals from such binaries would enable us to probe the genuinely strong - field , dissipative dynamics of gr .compact binary systems that are close to merger are in fact among the primary targets for kilometer - sized interferometric detectors .these include the virgo detector in italy , the two ligo detectors in the united states and geo600 in germany .both virgo and ligo are in the process of being upgraded to advanced virgo and advanced ligo , which are expected to be completed around 2015 .furthermore , an interferometer named kagra ( formerly know as lcgt ) in japan is in a planning stage , and a detector in india is also being considered . with the current estimates for the advanced detectors ,the rate of detection of inspiralling compact binaries is expected be around a few tens per year .quite a number of alternative theories to gr have been discussed in the literature , and the accuracy with which some of these could be probed with gravitational waves has been studied within the fisher matrix formalism ; for scalar - tensor theories , see , for a varying newton constant , for modified dispersion relation theories ( commonly referred to as ` massive gravity ' ) , for violations of the no hair theorem , for violations of cosmic censorship , and for parity violating theories .practical bayesian methods for performing tests of gr on actual gravitational wave data when they become available include the work by del pozzo _ in the context of massive gravitons , that of cornish _ et al . _ , which employed the so - called parameterized post - einsteinian ( ppe ) waveform family , and that of gossan _ et al . _ which focussed on the ringdown signal . a proposal by arun _ is to measure various quantities within the phase and to check their consistency with the predictions of gr .this could lead to a very generic test , in that one would not be looking for particular ( classes of ) alternative theories . 
however , so far its viability was only explored through fisher matrix studies . inspired by the method of arun _et al._ , the authors of the present paper developed a new bayesian framework , with the following features : * contrary to previous bayesian treatments such as , it addresses the question `` do _one or more_ testing parameters characterizing deviations from gr differ from zero ? '' as opposed to `` do _all_ of them differ from zero ? '' in practice this comes down to testing a number of auxiliary hypotheses , in each of which only a subset of the set of testing parameters is allowed to be non - zero ; * precisely because in most of the auxiliary hypotheses a smaller set of testing parameters is used , this method will be more suited to a scenario where most sources have low signal - to - noise ratio , as we expect to be the case with advanced ligo / virgo ; * as with most bayesian methods , information from multiple sources can easily be combined ; * the framework is not tied to any particular waveform family or even any particular part of the coalescence process . however , in we focussed on the inspiral part and chose the testing parameters to be shifts in the lower - order inspiral phase coefficients , as we will also do here . besides establishing a theoretical framework , that work also showed results for a few simple example deviations from gr . in particular , it was illustrated how the method can be sensitive to deviations which in principle cannot be accommodated by the model waveforms . in fact , it is reasonable to assume that the technique will be able to pick up _generic_ deviations from gr , on condition that their effect on the phasing is of the same magnitude as that of a simple shift in one or more of the low - order phasing coefficients of the standard post - newtonian waveform . more precisely , as long as the change in the phase at frequencies where the detectors are the most sensitive is comparable to the effect of a shift of a few percent in a phase coefficient beyond leading order , we expect the deviation from gr to be detectable by our method . in this paper , we will show some striking examples to provide further support for this claim . subsequent sections of this paper are structured as follows . in section [ sec : method ] we recall the theory and the implementation of the method introduced in . section [ sec : results ] shows results from simulations done with some specific examples of modifications to the waveform phase . a discussion and conclusions are presented in section [ sec : conclusion_discussion ] .

at the heart of the method we proposed in lies the question `` to what degree do we believe gr is the correct theory describing the detected signals ? '' this question is best answered within the framework of bayesian model selection . the cornerstone of bayesian analysis is the comparison between the probabilities of two hypotheses given the available data . this is quantified by the _odds ratio_

O^{i}_{j} = \frac{ P( \mathcal{H}_i | d , I ) }{ P( \mathcal{H}_j | d , I ) } ,

where \mathcal{H}_i , \mathcal{H}_j are the hypotheses of the models to be compared , d represents the data and I is the relevant background information . using bayes' theorem , we can then write this odds ratio as

O^{i}_{j} = \frac{ P( d | \mathcal{H}_i , I ) }{ P( d | \mathcal{H}_j , I ) } \, \frac{ P( \mathcal{H}_i | I ) }{ P( \mathcal{H}_j | I ) } . \qquad ( [ oddsratio ] )

the odds ratio is thus the product of two ingredients . the first factor is the ratio of the so - called _evidences_ , P( d | \mathcal{H}_i , I ) / P( d | \mathcal{H}_j , I ) , which is also known as the _bayes factor_ B^{i}_{j} .
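as a toy numerical illustration of these quantities ( not of the actual gravitational - wave analysis ) , one can compare a model with one free parameter against a model in which that parameter is pinned to zero , computing each evidence , i.e. the integral of likelihood times prior over the model's parameters as written out explicitly just below , by brute force .

```python
import numpy as np

# toy data: noisy measurements with a possible offset; "gr" pins the offset to zero,
# "alt" lets it vary with a flat prior. purely illustrative, unrelated to real waveforms.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.1, scale=1.0, size=50)

def log_likelihood(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * data.size * np.log(2.0 * np.pi)

# evidence of the "alt" model: integral of likelihood * flat prior over mu in [-1, 1]
mu = np.linspace(-1.0, 1.0, 4001)
log_like = np.array([log_likelihood(m) for m in mu])
prior_density = 1.0 / (mu[-1] - mu[0])
peak = log_like.max()
log_Z_alt = peak + np.log(np.sum(np.exp(log_like - peak)) * (mu[1] - mu[0]) * prior_density)

# evidence of the "gr" model: no free parameter, just the likelihood at mu = 0
log_Z_gr = log_likelihood(0.0)

print("log Bayes factor (alt vs gr): %.2f" % (log_Z_alt - log_Z_gr))
```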
the evidence ( also called the marginal likelihood ) for , e.g. , the hypothesis \mathcal{H}_i is given by

P( d | \mathcal{H}_i , I ) = \int d\vec{\theta}\; p( \vec{\theta} | \mathcal{H}_i , I )\, p( d | \vec{\theta} , \mathcal{H}_i , I ) ,

where \vec{\theta} are the parameters associated with the hypothesis \mathcal{H}_i , and p( \vec{\theta} | \mathcal{H}_i , I ) is the prior probability distribution of the parameters . the second factor in eq . ( [ oddsratio ] ) is the ratio of the prior probabilities , P( \mathcal{H}_i | I ) / P( \mathcal{H}_j | I ) , and is often referred to as the _prior odds_ . it should be noted that the prior probability distribution is uniquely determined by the prior information . the assignment of the prior probability will be further explained in subsection [ subsec : odds_ratio ] . details on the calculation of the bayes factor can be found in subsection [ subsec : implementation ] . before moving on to define the odds ratio for the problem at hand , let us explain the model waveforms used in this paper . in the inspiral regime of coalescing compact binary systems , the waveforms are accurately described by the post - newtonian approximation . this approximation describes important quantities such as the energy and the flux as expansions in powers of v / c , where v is the characteristic velocity of the binary system . to illustrate our method we will use the analytic , frequency domain taylorf2 waveform model , which is implemented in the ligo algorithms library in the following way :

\tilde{h}(f) = \frac{ \mathcal{A}( \theta , \varphi , \iota ) }{ D }\, \mathcal{M}^{5/6} f^{-7/6}\, e^{ i \Psi(f) } , \qquad ( [ taylorf2 ] )

where D is the distance , ( \theta , \varphi ) the sky position in the detector frame , and \iota the orientation of the orbital plane with respect to the direction of the line of sight , the factor \mathcal{A} collecting the antenna response and numerical constants . \mathcal{M} is the so - called chirp mass , and \eta is the symmetric mass ratio ; in terms of the component masses one has \mathcal{M} = ( m_1 m_2 )^{3/5} / ( m_1 + m_2 )^{1/5} and \eta = m_1 m_2 / ( m_1 + m_2 )^2 . the phase takes the form

\Psi(f) = 2 \pi f t_c - \varphi_c - \frac{\pi}{4} + \sum_{i=0}^{7} \left[ \psi_i + \psi^{(l)}_i \ln f \right] f^{ ( i - 5 ) / 3 } ,

with t_c and \varphi_c the time and phase at coalescence , respectively . central to the method are the phase coefficients \psi_i and \psi^{(l)}_i . these either have the functional dependence on the component masses as predicted by gr ( cf . for the explicit expressions ) , or are allowed to deviate from the value predicted by gr . the ` frequency sweep ' \dot{F} entering the derivation of eq . ( [ taylorf2 ] ) is itself an expansion in powers of the frequency with mass - dependent coefficients . note that \dot{F} is related to the phase and we could in principle allow it to deviate from the gr prediction . however , for stellar mass binaries and with advanced detectors , we do not expect to be particularly sensitive to sub - dominant contributions to the amplitude , so we will keep \dot{F} fixed to its gr expression . we note that in the case of binary neutron stars , which are the sources we will in fact focus on , taylorf2 waveforms are likely to already suffice for a first test of gr . indeed , in the relevant mass range , taylorf2 has a match and fitting factor close to unity with effective one - body waveforms modified for optimal agreement with numerical simulations . spins are unlikely to be very important in this case . one might worry about finite size effects , but as shown in , even for the most extreme neutron star equations of state and for sources as close as 100 mpc , advanced detectors will not be sensitive to these at frequencies below roughly 400 hz ; hence one could cut off the recovery waveforms at 400 hz , in which case the loss in snr would be less than a percent . however , if one also wanted to test gr using systems composed of a neutron star and a black hole , or two black holes , then dynamical spins , sub - dominant signal harmonics and merger / ringdown would become important , and in that case more sophisticated waveform models will be called for . the latter is currently the subject of investigation .
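to make the role of the phase coefficients concrete , the following sketch evaluates a phase of the above form and the dephasing produced by a fractional shift in one coefficient . the coefficient values are placeholders rather than the actual post - newtonian expressions for a given binary , so the numbers it prints are purely illustrative .

```python
import numpy as np

def inspiral_phase(f, t_c, phi_c, psi, psi_log, shifts=None):
    """Phase 2*pi*f*t_c - phi_c - pi/4 + sum_i [psi_i + psi_i^(l) ln f] f^((i-5)/3).

    psi, psi_log : arrays of phasing coefficients (index i = 0..7)
    shifts       : optional fractional deviations, psi_i -> psi_i * (1 + shift_i)
    """
    psi = np.asarray(psi, dtype=float).copy()
    psi_log = np.asarray(psi_log, dtype=float)
    if shifts is not None:
        psi *= 1.0 + np.asarray(shifts, dtype=float)
    i = np.arange(len(psi))
    terms = (psi[None, :] + psi_log[None, :] * np.log(f)[:, None]) \
            * f[:, None] ** ((i[None, :] - 5) / 3.0)
    return 2 * np.pi * f * t_c - phi_c - np.pi / 4 + terms.sum(axis=1)

# placeholder coefficients, NOT the PN expressions for any particular binary
psi = np.array([1.0e4, 0.0, 5.0e2, -1.0e2, 3.0e1, 0.0, 1.0e1, 1.0])
psi_log = np.zeros(8); psi_log[5] = 2.0; psi_log[6] = -0.5
f = np.linspace(20.0, 400.0, 500)              # rough band of an advanced detector, in Hz

phi_gr = inspiral_phase(f, t_c=0.0, phi_c=0.0, psi=psi, psi_log=psi_log)
shifts = np.zeros(8); shifts[3] = 0.1          # 10% shift in the coefficient with i = 3
phi_mod = inspiral_phase(f, t_c=0.0, phi_c=0.0, psi=psi, psi_log=psi_log, shifts=shifts)
print("dephasing at 150 Hz: %.2f rad" % np.interp(150.0, f, phi_mod - phi_gr))
```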
_ keeping the amplitude fixed to its gr - predicted value , we consider within the bayesian model selection framework the following two hypotheses : * : the waveform has a phase with the functional dependence on as predicted by gr ; * : one or more of the phase coefficients do not have the functional dependence on as predicted by gr . the gr hypothesis , , is the hypothesis that our gr waveform model ( taylorf2 in this case ) correctly describes the signal originating from the inspiral of two compact objects . ideally , would simply have been the negation of . however , _ a priori _ , deviations from gr can occur in an infinite number of ways . what we will argue is that for the core question that we want to address , _ i.e. _ whether or not the observed phase deviates from gr , it will be sufficient to allow for a _ limited _ set of possible deviations in the recovery waveforms . for taylorf2 , we take the set of deviations only to be within the known phase coefficients . to date , the taylorf2 phase has ten known phase coefficients ( , and two additional coefficients and associated with logarithmic contributions ) . here we will not use as a variable coefficient ; even so , if one were to consider all the subsets of the set of remaining coefficients , one would have to take into account ways in which a deviation can occur . apart from this being computationally demanding , we do not expect to be sensitive to the highest - order coefficients ; hence it makes sense to limit oneself to all the subsets of where is the number of phase coefficients one chooses to consider . we thus allow one or more of the coefficients to vary freely , instead of following the functional dependence on as predicted by gr . the choice of will in part be influenced by the required generality of the test , measurability of phase coefficients , and computational limitations . finally , we quantify our belief in whether one or more phase coefficients deviate from gr by means of auxiliary hypotheses , which are defined as follows : is the hypothesis that the phasing coefficients do not have the functional dependence on as predicted by general relativity , but all other coefficients , _ do _ have the dependence as in gr .
it is important to note that by definition of the hypotheses , they are mutually , logically disjoint , _ i.e. _ , is always false for . for a signal to be inconsistent with gr , we require that one or more phase coefficients deviate from gr . in terms of hypotheses , we are thus interested in the logical ` or ' of the sub - hypotheses , , defined above . with this in hand , we can now define to be : the odds ratio for coefficients is given by : using the fact that the auxiliary hypotheses are mutually , logically disjoint , one can write applying bayes theorem , one finds where at this point , one has to set the values for the relative prior probabilities , . when one is devoid of prior information as to which of the test coefficients are inconsistent with gr , one can choose to invoke total ignorance and assign to each an equal weight , _ i.e. _ despite the choice of total ignorance , however , one more quantity needs to be set . the overall relative prior , , describes the prior belief in whether gr is the correct theory or not . the choice of this quantity is left to the reader . for convenience , however , we write as will become apparent below , will end up being just an overall scaling of the odds ratio . later on , for the purposes of showing results , we will set . the equality ( [ eq : totalignorance ] ) , together with ( [ eq : generalprioroddsratios ] ) and the logical disjointness of the hypotheses implies in terms of the , the odds ratio can then be written as up to an overall prefactor , the odds ratio is thus a straightforward average of the bayes factors from the individual sub - hypotheses , . although the detection rate for compact binary coalescences is still rather uncertain , we expect advanced instruments to detect several events per year . it is therefore important to take advantage of multiple detections to provide tighter constraints on the validity of gr . the extension of the odds ratio to include observations from several independent sources can be found in . here we simply state the result , referring the interested reader to these papers for details . if one assumes independent measurements and the events are labelled by , one can write the odds ratio as where with being the data associated to the detection . from a theoretical point of view , the data favours the hypothesis compared to the hypothesis when . the relative degree of belief in the two hypotheses is encapsulated in the magnitude of the odds ratio . however , in the case of advanced ground - based detectors , the signals will be buried deep inside the noise . this introduces the problem that the noise itself can mimic the effect of a deviation from gr that is non - negligible . hence we need to study the effect of noise on the odds ratio . for this purpose , we constructed a so - called _ background _ , _ i.e. _ a distribution of log odds ratios from a large number of catalogues , collectively denoted by , of simulated signals consistent with and embedded within noise . the background distribution of log odds ratios for catalogues of gr sources can be seen as the blue dotted histogram in the right hand panel of fig . [ fig : dphia2_histograms ] below .
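as an illustration of the combination rules sketched above , the following python fragment ( not the authors' code ) assembles a catalogue log odds ratio from per - source , per - sub - hypothesis log bayes factors . it assumes , consistently with the description in the text , that independent sources multiply bayes factors within each disjoint sub - hypothesis before the average over sub - hypotheses is taken , and that total ignorance assigns equal prior weight to each sub - hypothesis ; the function and variable names are hypothetical .

```python
import numpy as np

def combined_log_odds(log_bayes, log_alpha=0.0):
    """Combine per-source log Bayes factors into a catalogue log odds ratio.

    log_bayes : array of shape (n_subhypotheses, n_sources); entry [j, A] is
        ln B^{H_j}_GR computed from the data of source A.
    log_alpha : ln of the overall prior odds for modified GR vs GR, which only
        shifts the result by a constant.

    Independent sources multiply within each disjoint sub-hypothesis H_j, so the
    catalogue factor for H_j is sum_A ln B^{H_j}_GR(d_A); the odds ratio is then
    the equal-weight average over the 2^NT - 1 sub-hypotheses.
    """
    log_bayes = np.asarray(log_bayes, dtype=float)
    per_hyp = log_bayes.sum(axis=1)              # ln prod_A B^{H_j}_GR(d_A)
    m = per_hyp.max()                            # stabilised log of the average
    log_avg = m + np.log(np.exp(per_hyp - m).sum()) - np.log(per_hyp.size)
    return log_alpha + log_avg

# toy usage: 3 sub-hypotheses, 15 sources, log Bayes factors drawn at random
rng = np.random.default_rng(1)
lnB = rng.normal(loc=0.5, scale=2.0, size=(3, 15))
print(combined_log_odds(lnB))
```

in practice the entries of ` log_bayes ` would be produced by the evidence calculations described below , one per source and per sub - hypothesis .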
in the advanced detector era, one will only have access to a single catalogue of detected _ foreground _ events .the associated measured log odds ratio should subsequently be compared with the background distribution in order to quantify our belief in a deviation from gr .to do this , in sec .[ subsec : catalogue_size ] we will introduce a maximum tolerable false alarm probability , which together with the background distribution sets a threshold for the measured log odds ratio to overcome . for specific examples of gr violations, we will want to know how likely it is that the catalogue of foreground sources will have an odds ratio that is above threshold .for this reason we will also simulate large numbers of foreground catalogues , collectively denoted by . for a given false alarm rate , onecan then calculate what fraction of the simulated foreground catalogues has a log odds ratio above the associated threshold ; this fraction we will call the _efficiency_. a few remarks have to be made regarding the implementation of the aforementioned method .first , we use testing parameters .the varying of these phase coefficients was parameterised in the following fashion : , \label{eq : deltapsi}\ ] ] with the functional form of the dependence of on according to gr , and the dimensionless is a fractional shift in .the 0.5pn case , , however can not be implemented in a similar way , as gr predicts .instead , deviations from are modelled as and the interpretation of a fractional shift is not adequate ; rather , is related to the magnitude of the deviation itself .for the computation of the odds ratio defined in eq . andeq . , one needs to compute the relevant bayes factors via the evidences . in high - dimensional problems , brute force methods to calculate the integral in eq .are computationally too expensive .one can , however , make use of more efficient methods to make this calculation computationally feasible . in this paper , we resort to an algorithm called nested sampling . more specifically , an implementation tailored to ground - based observations of coalescing binaries by veitch and vecchio was used .both the model waveforms and the nested sampling algorithm were appropriately adapted from existing code in the ligo algorithms library .in this section , we want to lend further support to the claim in that the method is in principle sensitive to deviations that are not considered within the model waveforms , as long as the phase shift at hz , where the detectors are the most sensitive , is comparable to , say , a shift of at 1.5pn order ( corresponding to a shift in the overall phase of radians at the given frequency ) . to this endwe use two heuristic examples where the change in phase of the signals can not be accomodated by the model waveforms , yet the deviation from gr turns out to be detectable . 
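to make the fractional - shift parameterization of eq . ( [ eq : deltapsi ] ) concrete , the sketch below evaluates a stationary - phase inspiral phase up to 1.5pn with shifts applied to individual coefficients , treating the 0.5pn term as an absolute rather than fractional deviation , as described above . the pn coefficients are the standard published taylorf2 expressions ; this is an illustrative reimplementation rather than the ligo algorithms library code used in the paper , and the component masses in the example are an assumption .

```python
import numpy as np

GMSUN_S = 4.92549e-6          # G*Msun/c^3 in seconds (solar mass in time units)

def taylorf2_phase(f, m1, m2, tc=0.0, phic=0.0, dchi=(0.0, 0.0, 0.0, 0.0)):
    """Frequency-domain inspiral phase up to 1.5PN with shifts dchi_i.

    m1, m2 in solar masses; f in Hz.  dchi = (dchi0, dchi1, dchi2, dchi3):
    dchi1 is an absolute deviation (GR predicts a vanishing 0.5PN term),
    the others are fractional shifts of the corresponding GR coefficients.
    """
    M = (m1 + m2) * GMSUN_S            # total mass in seconds
    eta = m1 * m2 / (m1 + m2) ** 2     # symmetric mass ratio
    v = (np.pi * M * f) ** (1.0 / 3.0) # PN expansion parameter
    a0 = 1.0 * (1.0 + dchi[0])
    a1 = dchi[1]                       # 0.5PN: zero in GR, deviation added directly
    a2 = (3715.0 / 756.0 + 55.0 * eta / 9.0) * (1.0 + dchi[2])
    a3 = (-16.0 * np.pi) * (1.0 + dchi[3])
    series = a0 + a1 * v + a2 * v ** 2 + a3 * v ** 3
    return (2.0 * np.pi * f * tc - phic - np.pi / 4.0
            + 3.0 / (128.0 * eta * v ** 5) * series)

# example: phase change at 150 Hz from a 10% shift in the 1.5PN coefficient
f = 150.0
dpsi = taylorf2_phase(f, 1.5, 1.5, dchi=(0, 0, 0, 0.1)) - taylorf2_phase(f, 1.5, 1.5)
print(abs(dpsi), "radians")
```

under the assumption that the middle of the mass range corresponds to two 1.5 solar - mass stars , this sketch gives roughly 12.9 radians , close to the 12.8 radians quoted in the text for the same constant 10% shift at 150 hz .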
the first example , in subsection [ subsec : mass_dep_freq_power ] , considers an additional term in the phase associated with a power of frequency which _ itself _ depends on the total mass of the system .this power is chosen in such a way that within the range of total masses we consider , the frequency dependence of the anomalous contribution varies from effectively being 0.5pn at the lower end to 1.5pn at the higher end .clearly , our model waveforms are in no way designed to capture such a deviation from gr .the second case , in subsection [ subsec : quadratic_curvature ] , considers a deviation at a pn order ( 2pn ) that is higher than the orders at which we allow phase coefficients to vary in our model waveforms ( 0.5pn , 1pn , and 1.5pn ) . after presenting the main results, we study the effects of the number of detected sources on our confidence in a deviation from gr .for this investigation we use the example in subsection [ subsec : quadratic_curvature ] . from fisher matrix analyses , it has been shown that the phase coefficients in eq .are best measured as the total mass of the system goes down .therefore , the signals were chosen to originate from neutron stars with masses between and .for such systems , it has been shown that contributions from the spin interactions and the sub - dominant signal harmonics are negligible , and merger / ringdown do not have a significant impact .the aim is to simulate the situation at advanced virgo and ligo as realistically as possible .we have assumed an advanced detector network with detectors at hanford and livingston , both with the advanced ligo noise curve , and a detector at cascina with the advanced virgo noise curve .three data streams were produced , containing stationary , gaussian noise coloured by these respective noise curves , to which simulated signals were added .events were placed uniformly in volume ( _ i.e. _ probability density proportional to , where is the luminosity distance ) , between mpc and mpc , to reflect the estimates of the number of detectable sources and the appropriate horizon distance . a lower cut - off of 8 was imposed on the _ network _ snr , defined as the quadrature sum of the individual detector snrs , so as to be consistent with the ligo / virgo minimum for an event to be claimed as detected .the waveforms were chosen to go up to 2pn in phase both for the injected signals and the model waveforms .the test coefficients were taken to be , and , so that the hypothesis contains logically disjoint sub - hypotheses .the priors given to the deviations were chosen to be flat and centered around zero , with a total width of .the priors on the remaining parameters were taken to be the same as in , with the exception that the distance is allowed to go up to mpc .it should be stressed that the choice of waveform approximant , test coefficients , and priors on the deviations were , to a large extent , arbitrary . 
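the source population described above can be mimicked schematically as follows . since the advanced ligo / virgo noise curves are not reproduced here , the per - detector snr is replaced by a placeholder 1 / distance scaling with an arbitrary normalization , and the distance bounds are illustrative stand - ins for the values used in the paper ( which were lost in extraction ) ; only the uniform - in - volume sampling and the quadrature - sum network snr cut of 8 are taken directly from the text .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sources(n, d_min=100.0, d_max=400.0, snr_at_dmin=40.0, snr_cut=8.0):
    """Draw sources uniform in volume (p(D) ~ D^2) between d_min and d_max (Mpc)
    and keep those whose network SNR (quadrature sum over detectors) exceeds snr_cut.

    The SNR model is a placeholder: each detector's SNR scales as 1/D times a
    random projection factor; a real analysis uses the detector noise PSDs.
    """
    u = rng.uniform(size=n)
    d = (d_min**3 + u * (d_max**3 - d_min**3)) ** (1.0 / 3.0)  # inverse CDF of p ~ D^2
    proj = rng.uniform(0.2, 1.0, size=(n, 3))                  # three detectors (H, L, V)
    snr_det = snr_at_dmin * d_min / d[:, None] * proj
    snr_net = np.sqrt((snr_det ** 2).sum(axis=1))
    keep = snr_net >= snr_cut
    return d[keep], snr_net[keep]

d, snr = sample_sources(5000)
print(len(d), "detected; median network SNR", np.median(snr))
```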
in the advanced detector era , one would seek to perform the most general test that computational resources will allow .this will include the most accurate waveforms available at that time , the highest number of test coefficients one can handle , and the least restrictive priors that are in accordance with our prior information at that moment .in our first example , the signals are given a deviation in the phase that has a mass dependent frequency power .specifically , the deviation is of the form : where denotes the total mass of the binary system .we note that for a system with component masses in the middle of our range , , the change in phase at hz is about the same as for a 10% shift in .more precisely , for these masses the change in is 13.3 radians , to be compared with the 12.8 radians change induced by a constant 10% shift in . in order to assess the statistics of the odds ratio , a large number or signalswere simulated , with the parameter distribution explained above .for each of the signals , we calculated the odds ratio as defined in eq . .the distribution of the odds ratio as a function of snr can be seen in fig .[ fig : dphia2_oddsvssnr ] .the separation between ` foreground ' and ` background ' is more or less complete already below snr . in the top panel of fig .[ fig : dphia2_histograms ] , the odds ratios for the sources with a deviation from gr are compared with a ` background ' distribution ( in the sense defined above ) .next we collected sources into ` catalogues ' of 15 sources each and computed the _ combined _ odds ratio of eq .( [ eq : oddscombined ] ) for all of these catalogues ; the distribution of these odds ratios for ` background ' and ` foreground ' are shown in the bottom panel of fig .[ fig : dphia2_histograms ] .clearly , the ability to combine information from multiple sources is a powerful tool in increasing one s confidence in a violation of gr . for the given deviation, a violation of gr can be established with near - certainty .( blue dotted ) or have a shift with a mass dependent frequency behaviour given in eq .( red striped ) .right : normalised distributions of logs of the combined odds ratios for the same injections as at the top , but collected into independent catalogues of 15 sources each .the effect of combining sources is to separate the distribution of gr injections and anomalous injections , increasing one s confidence in a deviation from gr.,title="fig : " ] ( blue dotted ) or have a shift with a mass dependent frequency behaviour given in eq .( red striped ) .right : normalised distributions of logs of the combined odds ratios for the same injections as at the top , but collected into independent catalogues of 15 sources each .the effect of combining sources is to separate the distribution of gr injections and anomalous injections , increasing one s confidence in a deviation from gr.,title="fig : " ] our testing coefficients are , , , so that the model waveforms can only have shifts in pn phase contributions up to 1.5pn order . to show that we can nevertheless be sensitive to anomalies at higher pn order, we now consider signals with a constant shift at 2pn .we note in passing that theories with quadratic curvature terms in the action tend to introduce extra contributions at 2pn .thus , we consider injections with , \label{eq : quadratic_curvature}\ ] ] where the magnitude is set to be . 
for comparison , at hz and for a system with component masses , the change in the phase caused by such a deviation is comparable to the one caused by a negative relative shift in of 3.5% ( namely , the shift in at hz is radians ) .[ fig : dphi4_20pc_oddsvssnr ] shows the odds ratio as a function of the optimal snr , both for gr injections and anomalous ones .this time , as opposed to the example considered in subsection [ subsec : mass_dep_freq_power ] , separation between the odds ratios of signals with the deformations of eq . andthe noise induced distribution of odds ratios for gr injections becomes apparent at snr .this can be attributed the fact that the deviations are in more subdominant contributions to the phase compared to the case considered earlier . in fig .[ fig : dphi4_20pc_histograms ] we show the odds ratio for individual sources and for random catalogues with 15 sources each . for individual sources ,the separation between the background and the foreground is present but weak .however , when one assumes random catalogues of 15 sources each , the separation becomes very significant .this further illustrates the importance of combining information from multiple sources .( blue dotted ) or have a deviation of the form given in eq .( red striped ) .right : normalised distribution of logs of the combined odds ratios for the same signals as at the top , but randomly arranged in catalogues of 15 sources each .the effect of combining sources is in this case is profound .only a small difference between background and foreground is visible when considering individual sources . for catalogues of 15 sources ,the differentiation becomes significant.,title="fig : " ] ( blue dotted ) or have a deviation of the form given in eq .( red striped ) .right : normalised distribution of logs of the combined odds ratios for the same signals as at the top , but randomly arranged in catalogues of 15 sources each .the effect of combining sources is in this case is profound .only a small difference between background and foreground is visible when considering individual sources . for catalogues of 15 sources ,the differentiation becomes significant.,title="fig : " ] we have seen in the previous section that constructing the odds ratio from a catalogue of sources greatly increases the confidence in a deviation from gr .however , the rates of binary inspiral observed in the advanced detector era are highly uncertain .it is therefore instructive to study the effect of the catalogue size on our confidence in detecting a deviation . to characterise such a confidence, we introduce the concept of _ efficiency_. assume one has two distributions of log odds ratios : the _ background distribution _ of log odds ratio obtained when the simulated catalogues , collectively denoted by , are in agreement with , , and the _ foreground distribution _ obtained when the simulated catalogues adhere to some alternative theory , .now choose a maximum tolerable _ false alarm probability _ .this sets a threshold for the measured log odds ratio to overcome , as follows : we now define the _ efficiency _ , , as the fraction of foreground with a false alarm probability of or less , _ i.e. _ the portion that lies above the threshold : the efficiency can be viewed as the chance that if there is a deviation from gr corresponding to , the catalogue of sources that is actually detected will have a log odds ratio above threshold , _i.e. _ that it will have a false alarm probability of or less . 
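the false alarm probability , threshold , and efficiency just defined translate directly into a few lines of code ; the sketch below assumes that the background and foreground log odds ratios are already available as arrays ( for instance from catalogues of simulated gr and non - gr sources , as above ) .

```python
import numpy as np

def threshold_from_background(log_odds_background, max_fap):
    """Threshold such that a fraction max_fap of GR ('background') catalogues
    exceeds it, i.e. the (1 - max_fap) quantile of the background distribution."""
    return np.quantile(log_odds_background, 1.0 - max_fap)

def efficiency(log_odds_foreground, log_odds_background, max_fap=0.01):
    """Fraction of 'foreground' (non-GR) catalogues whose log odds ratio lies
    above the background threshold, i.e. has false alarm probability <= max_fap."""
    thr = threshold_from_background(log_odds_background, max_fap)
    return np.mean(np.asarray(log_odds_foreground) >= thr)

# toy example with two overlapping distributions
rng = np.random.default_rng(2)
bkg = rng.normal(0.0, 1.0, size=5000)
fg = rng.normal(3.0, 1.5, size=5000)
print(efficiency(fg, bkg, max_fap=0.01))
```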
note that with these definitions , the efficiency is independent of the overall prior odds ratio in eqs .( [ eq : totalignorance ] ) and ( [ eq : oddscombined ] ) , as it corresponds to the same shift of in the background distribution , the threshold , and the foreground distribution . in fig .[ fig : sig_catsize ] , we show the efficiency for the example shown in subsection [ subsec : quadratic_curvature ] as a function of the catalogue size , for . which sources are placed together in a catalogue is determined randomly . to understand the statistical fluctuations in the efficiency when collecting sources into catalogues in different ways , for the same set of signals we considered 5000 random orderings in which the signals are combined into catalogues .the resulting median and the 68 confidence levels are shown as the central curve and the error bars , respectively .( ) .5000 random orderings of the same set of sources were split into catalogues .the mean ( central curve ) and the 68 confidence intervals ( error bars ) are plotted .the efficiency rises sharply as a function of the number of sources , underscoring the importance of coherently considering all detected signals events . ] as can be seen in fig .[ fig : sig_catsize ] , the acceptance probability rises sharply as a function of the catalogue size .this underscores the importance of considering all the detected source in a coherent fashion , as was explained in subsection [ subsec : multiple_sources ] . even though a single detection might not yield confidence in a deviation from gr , coherently adding information from multiple sourcescan rapidly increase this confidence . to put the numbers in fig .[ fig : sig_catsize ] into perspective , the predicted rate for binary inspiral in the so - called ` realistic ' case is 40 per year .we have given two striking examples to support the claim that our method proposed in can distinguish deviations that are not captured by the limited model waveforms , as long as the phase shift in the frequency range where the detectors are the most sensitive ( hz ) is comparable to one caused by a shift of at least , _i.e. _ , radians . in the first example, signals were studied that have a deviation in the phase with a mass dependent power of frequency , effectively ranging from 0.5pn to 1.5pn as the total mass is varied from the lowest to the highest value we consider .the magnitude of the effect was such that at hz , the change in phase ( radians ) was about the same as that induced by a constant relative shift .the odds ratio for individual sources already showed confidence that such deviations can be measured .when sources were combined into catalogues of 15 each , the confidence in having detected a deviation improved drastically , and a deviation of this kind will be measurable with a false alarm probability of essentially zero .we further showed results for signals with a shift in the 2pn phase coefficient , . setting ,the induced change in phase at hz is comparable to a constant shift at 1.5pn , namely 4.5 radians .the choice of a modification at 2pn was inspired by corrections to the phase if one considers a modified einstein - hilbert action containing terms that are quadratic in the riemann tensor , as calculated in . 
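the procedure of regrouping a fixed set of foreground sources into random catalogues and reporting the median efficiency with a 68% interval , as used for fig . [ fig : sig_catsize ] , can be sketched as follows ; ` catalogue_log_odds ` repeats the combination rule given earlier , and the background threshold is assumed to have been computed beforehand as in the previous sketch .

```python
import numpy as np

def catalogue_log_odds(lnB_cols):
    """Combined log odds of one catalogue from per-source, per-sub-hypothesis
    log Bayes factors (columns = sources), up to the constant prior-odds offset."""
    per_hyp = lnB_cols.sum(axis=1)
    m = per_hyp.max()
    return m + np.log(np.mean(np.exp(per_hyp - m)))

def efficiency_statistics(lnB_fg, threshold, cat_size=15, n_orderings=5000, seed=0):
    """Median and 68% interval of the efficiency over random regroupings of the
    foreground sources into catalogues of size cat_size, for a fixed threshold."""
    rng = np.random.default_rng(seed)
    n_src = lnB_fg.shape[1]
    n_cat = n_src // cat_size
    effs = np.empty(n_orderings)
    for k in range(n_orderings):
        perm = rng.permutation(n_src)[: n_cat * cat_size]
        odds = [catalogue_log_odds(lnB_fg[:, perm[i * cat_size:(i + 1) * cat_size]])
                for i in range(n_cat)]
        effs[k] = np.mean(np.asarray(odds) >= threshold)
    lo, med, hi = np.percentile(effs, [16, 50, 84])
    return med, (lo, hi)
```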
as can be seen from fig .[ fig : sig_catsize ] , the efficiency for a maximum false alarm probability of 1% is essentially unity for catalogues comprising more than 25 sources .lastly , we investigated the effect of the catalogue size on our confidence in detecting a deviation .in general , this confidence rises sharply with the number of sources in the catalogue , underscoring the necessity to combine information from multiple sources in the advanced detector era .finally , we want to mention some necessary future developments .first and foremost , the most accurate waveforms will need to be incorporated in order to distinguish between genuine effects predicted by gr , and possible deviations from gr . especially for systems consisting of two black holes , or a neutron star and a black hole, these waveforms will need to include dynamical spins , sub - dominant signal harmonics , residual eccentricity , a description of the merger and ringdown , _ etc_. the development of waveforms including these effects is ongoing .furthermore , the effects of realistic detector noise , instead of idealised stationary , gaussian noise , need to be studied in detail .once the advanced detectors have reached their design sensitivities and a number of detections have been made ( _ e.g. _ using template - based searches ) , a test of general relativity using compact binary coalescences could go as follows .starting from the best available gr waveforms , one introduces parameterised deformations in phase as well as in amplitude , leading to disjoint hypotheses , the logical ` or ' of all of which is .next , many injections are performed of gr waveforms into real or realistic data and collected into ` catalogues ' to establish a background distribution for the log odds ratio , and a suitable threshold is set below which a deviation from gr will not be accepted .then is computed for the catalogue of sources that were actually detected .if this number is above threshold , a violation of gr is likely . the number of testing parameters one can consider will be limited , mainly by the computational restrictions one will have in the advanced detector era .our method is meant , first and foremost , to establish whether or not a violation of gr is plausible , _ of whichever kind _ and _ not _ mainly to pinpoint the eventual alternative theory of gravity responsible for the gw signal , nor to estimate the parameters of the alternative model , as it is unlikely that low - snr signals as those expected for the advanced stage of ligo / virgo will enable a detection of a gr deviation _ and _ an identification of its nature .however , once a deviation is found , a follow - up investigation can be performed with our inference method in an attempt to find out its precise nature by trying different alternatives to gr , _i.e. _ using waveforms inspired by specific ( families of ) alternative theories of gravity .a version of the so - called parameterised post - einsteinian waveform family could be useful in this respect . in thisregard we recall that our framework is not tied to any particular waveform family . 
the results of , and the further investigations presented here , motivate the construction of a full data analysis pipeline based on the method we have presented .although much work remains to be done on the data analysis side , the advanced detectors will enable us to go well beyond the tests performed using the observed binary pulsars , and give us our very first empirical access to the genuinely strong - field dynamics of space - time .tgfl , wdp , sv , cvdb and ma are supported by the research programme of the foundation for fundamental research on matter ( fom ) , which is partially supported by the netherlands organisation for scientific research ( nwo ) .jv s research was funded by the science and technology facilities council ( stfc ) , uk , grant st / j000345/1 .kg , ts and av are supported by science and technology facilities council ( stfc ) , uk , grant st / h002006/1 .the work of rs is supported by the ego consortium through the vesf fellowship ego - dir-41 - 2010 .it is a pleasure to thank n. cornish , b.r .iyer , b.s .sathyaprakash and n. yunes for their valuable comments and suggestions .the authors would also like to acknowledge the ligo data grid clusters , without which the simulations could not have been performed . specifically ,these include the computing resources supported by national science foundation awards phy-0923409 and phy-0600953 to uw - milwaukee .also , we thank the albert einstein institute in hannover , supported by the max - planck - gesellschaft , for use of the atlas high - performance computing cluster .
in this paper we elaborate on earlier work by the same authors in which a novel bayesian inference framework for testing the strong - field dynamics of general relativity using coalescing compact binaries was proposed . unlike methods that were used previously , our technique addresses the question whether _ one or more _ ` testing coefficients ' ( _ e.g. _ in the phase ) parameterizing deviations from gr are non - zero , rather than all of them differing from zero at the same time . the framework is well - adapted to a scenario where most sources have low signal - to - noise ratio , and information from multiple sources as seen in multiple detectors can readily be combined . in our previous work , we conjectured that this framework can detect _ generic _ deviations from gr that can in principle not be accomodated by our model waveforms , on condition that the change in phase near frequencies where the detectors are the most sensitive is comparable to that induced by simple shifts in the lower - order phase coefficients of more than a few percent ( radians at 150 hz ) . to further support this claim , we perform additional numerical experiments in gaussian and stationary noise according to the expected advanced ligo / virgo noise curves , and coherently injecting signals into the network whose phasing differs structurally from the predictions of gr , but with the magnitude of the deviation still being small . we find that even then , a violation of gr can be established with good confidence .
as an increasing number of artificial satellites or spacecrafts have been and are being launched into deeper space since 1960s , the problem of controlling the translational motion of a spacecraft in the gravitational field of multiple celestial bodies such that some cost functionals are minimized or maximized arises in astronautics .the circular restricted three - body problem ( crtbp ) , which though as a degenerate model in celestial mechanics can capture the chaotic property of -body problem , is extensively used in the literature in recent years to study optimal trajectories in deeper space .the controllability properties for the translational motion in crtbps are studied by caillau _ , showing that there exist admissible controlled trajectories in an appropriate subregion of state space .the present paper is concerned with the -minimization problem for the translational motion of a spacecraft in a crtbp , which aims at minimizing the -norm of control . therefore ,if the control is generated by propulsion systems which expel mass in a high speed to generate an opposite reaction force according to newton s third law of motion , the -minimization problem is referred to as the well - known fuel - optimal control problem in astronautics .the existence of the -minimization solutions in crtbps can be obtained by a combination of filippov theorem in ref . and the technique in ref . if we assume that admissible controlled trajectories remain in a fixed compact , see ref . . while in the planar case where the translational motion is restricted in a 2-dimensional ( 2d ) plane , the singular extremals and the corresponding chattering arcs are analyzed by zelikin and borisov in ref . , the synthesis of the solutions of singular extremals in 3-dimensional ( 3d ) case , to the author s knowledge , is not covered up to the present time .therefore , in this paper , in addition to an emphasis on the necessary conditions arising from the pontryagin maximum principle ( pmp ) , which reveals the existences of bang - bang and singular controls , the solutions of singular extremals are investigated to show that the -minimization trajectories in 3d case can exhibit fuller or chattering phenomena according to the theories developed by marchal in ref . as well as by zelikin and borisov in ref . .even though one does not consider singular and chattering controls , the bang - bang type of control as well as the chaotic property in crtbps makes the computation of the -minimization solutions a big challenge . to address this challenge , various numerical methods ,e.g. , _ direct methods , indirect methods , and hybrid methods , have been developed recently . in this paper ,the indirect method , proposed by caillau _et al . _ in refs . to combine a shooting method with a continuation method , is employed to compute the extremal trajectories of the -minimization problem .based on this method , some kinds of fuel - optimal trajectories in a crtbp are computed recently as well in ref . .whereas , one can notice that the extremal trajectories computed by this indirect method can not be guaranteed to be at least locally optimal unless sufficient optimality conditions are satisfied .thus , it is indeed crucial to test sufficient conditions to check if a computed trajectory realizes a local optimality , which is what is missing in the research of optimal trajectories in crtbps .the sufficient conditions for optimal control problems are widely studied in the literature in recent years , see refs . 
and the references therein . through defining an accessory finite dimensional problem in refs . , some sufficient conditions are developed for optimal control problems with a polyhedral control set . in ref . , two no - fold conditions are established for the -minimization problem , which generalises the results of refs . . assuming the endpoints are fixed , these two no - fold conditions are sufficient to guarantee a bang - bang extremal of the -minimization problem to be a strong local optimizer ( cf .subsection [ subse : sufficient1 ] ) .whereas , in addition to the two no - fold conditions , a third condition has to be established once the dimension of the constraint submanifold of final states is not zero , see refs . . in this paper , a parameterized family of extremals around a given extremalis constructed such that the third condition is managed to be related with jacobi field under some regularity assumptions ( cf .subsection [ subse : sufficient2 ] ) .then , it is shown that the propagation of jacobi field is enough to test the sufficient optimality conditions ( cf .[ se : procedure ] ) . the paper is organized as follows . in sect .[ se : problem_formulation ] , the -minimization problem is formulated in crtbps .then , the necessary conditions are derived with an emphasis on singular solutions in sect .[ se : necessary ] . in sect .[ se : sufficient ] , a parameterized family of extremals is first constructed . under some regularity assumptions ,the sufficient conditions for the strong - local optimality of the nonsingular extremals with bang - bang controls are established . in sect .[ se : procedure ] , a numerical implementation for the optimality conditions is derived . in sect .[ se : numerical ] , consider the earth - moon - spacecraft system as a crtbp , a transfer trajectory of a spacecraft from a circular geosynchronous orbit of the earth to a circular orbit around the moon is calculated to provide a bang - bang extremal , whose local optimality is tested thanks to the second - order optimality conditions developed in this paper .a crtbp in celestial mechanics is generally defined as an isolated dynamical system consisting of three gravitationally interacting bodies , , , and , whose masses are denoted by , , and , respectively , such that 1 ) the third mass is so much smaller than the other two that its gravitational influence on the motions of the other two is negligible and 2 ) the two bodies , and , move on their own circular orbits around their common centre of mass . without loss of generality , we assume and consider a rotating frame such that its origin is located at the barycentre of the two bodies and , see fig . [ fig : rotating_frame ] .the unit vector of -axis is defined in such a way that it is collinear to the line between the two primaries and and points toward , the unit vector of -axis is defined as the unit vector of the momentum vector of the motion of and , and the -axis is defined to complete a right - hand coordinate system .it is advantageous to use non - dimensional parameters .let be the distance between and , and let , we denote by and the unit of length and mass , respectively .we also define the unit of time in such a way that the gravitational constant equals to one .accordingly , one can obtain through the usage of kepler s third low .then , denote by the superscript " the transpose of matrices , if , the two constant vectors and denote the position of and in the rotating frame , respectively . 
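for concreteness , the non - dimensionalization described above can be carried out numerically ; the constants below are textbook values for the earth - moon system rather than the numbers quoted in the paper ( which were lost in extraction ) , so the resulting mass parameter and time unit should be read as illustrative .

```python
import math

# Textbook Earth-Moon values (NOT the stripped numbers of the paper):
G = 6.67430e-20            # km^3 / (kg s^2)
m1 = 5.9722e24             # Earth mass, kg
m2 = 7.342e22              # Moon mass, kg
d = 384400.0               # mean Earth-Moon distance, km (unit of length)

mu = m2 / (m1 + m2)                          # mass parameter, ~0.0121
t_unit = math.sqrt(d**3 / (G * (m1 + m2)))   # unit of time via Kepler's third law, s
v_unit = d / t_unit                          # unit of velocity, km/s

# positions of the primaries in the rotating, non-dimensional frame
r1 = (-mu, 0.0, 0.0)       # larger primary (Earth)
r2 = (1.0 - mu, 0.0, 0.0)  # smaller primary (Moon)

print(f"mu = {mu:.6f}, time unit = {t_unit:.0f} s, velocity unit = {v_unit:.4f} km/s")
```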
in this paper ,we denote the space of -dimensional column vectors by and the space of -dimensional row vectors by .let be the non - dimensional time and let and be the non - dimensional position vector and velocity vector of , respectively , in the rotating frame .then , consider a spacecraft as the third mass point controlled by a finite - thrust propulsion system and let , its state ( ) consists of position vector , velocity vector , and mass , i.e. , .denote by the two constants and the radiuses of the two bodies and , respectively , and denote by the constant the mass of the spacecraft without any fuel , we define the admissible subset for state as where " denotes the euclidean norm .then , the differential equations for the controlled translational motion of the spacecraft in the crtbp in the admissible set for positive times can be written as with \v , \\boldsymbol{g}(\r ) = \left[\begin{array}{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\end{array}\right ] \r - \frac{1- \mu}{\|\r-\r_1\|^3}(\r - \r_1 ) - \frac{\mu}{\|\r -\r_2\|^3}(\r - \r_2),\nonumber % \left[\begin{array}{c } % x - ( 1 - \mu)(x + \mu)/r_1 ^ 3 - \mu ( x + \mu - 1)/r_2 ^ 3\\ % y - ( 1 - \mu ) y /r_1 ^ 3 - \mu y /r_2 ^ 3\\ % -(1-\mu)z / r_1 ^ 3 - \muz /r_2 ^ 3 % \end{array}\right],\end{aligned}\ ] ] where is a scalar constant determined by the specific impulse of the engine equipped on the spacecraft and is the thrust vector , taking values in where the constant , in unit of , denotes the maximum magnitude of the thrust of the engine .denote by ] , we say is the admissible set for the control .let us define the controlled vector field on by where then , the dynamics in eq .( [ eq : sigma ] ) can be rewritten as the control - affine form given an such that , we define the -codimensional constraint submanifold on final state as where denotes a twice continuously differentiable function of and its expression depends on specific mission requirements , see an explicit example in eq .( [ eq : function_phi ] ) .then , given a fixed initial state and a fixed final time , the -minimization problem for the translational motion in the crtbp consists of steering the system in by a measurable control on ] is an optimal one of the -minimization problem , there exists a nonpositive real number and an absolutely continuous mapping on ] there holds {1,0,0 } { \begin{cases } \dot{\x}(t ) = \frac{\partial h}{\partial \p}(\x(t),\p(t),p^0,\boldsymbol{u}(t)),\\ \dot{\p}(t ) = -\frac{\partial h}{\partial \x}(\x(t),\p(t),p^0,\boldsymbol{u}(t ) ) , \end{cases } % } \label{eq : cannonical}\end{aligned}\ ] ] and where + p^0 \rho , \label{eq : hamiltonian}\end{aligned}\ ] ] is the hamiltonian .moreover , the transversality condition asserts {1,0,0}\color[rgb]{0,0,1}{\boldsymbol{p}(t_f)\perp t_{\x(t_f)}\mathcal{m } } , % 0&=&\frac{\partial \phi^t(\x(t_f),t_f)}{\partial t}\boldsymbol{\nu } + h({\x}(t_f),{\p}(t_f),{p}^0,\boldsymbol{\tau}(t_f)),\label{eq : transversality_2}\end{aligned}\ ] ] where is a constant vector whose elements are lagrangian multipliers .the 4-tuple on ] , the corresponding extremal control is a function of on ] .thus , in the remainder of this paper , with some abuses of notations , we denote by and on ] the maximized hamiltonian of the extremal on ] , this extremal is called a bang - bang one . along a bang - bang extremal on ] with called a maximum - thrust ( or burn ) arc if , otherwise it is called a zero - thrust ( or coast ) arc . 
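a minimal implementation of the controlled dynamics is sketched below , reconstructing the gravitational term from the ( partially garbled ) expression quoted above and using the standard coriolis term of the rotating frame ; the exact sign conventions and the value of the exhaust - velocity constant are assumptions , and the example merely integrates a coast arc with scipy .

```python
import numpy as np
from scipy.integrate import solve_ivp

def crtbp_controlled_rhs(t, x, tau_func, mu, beta):
    """Right-hand side of the controlled CRTBP equations (state = [r, v, m]).

    tau_func(t, x) returns the thrust vector in non-dimensional units; beta is the
    exhaust-velocity constant fixed by the specific impulse, so mdot = -||tau||/beta.
    g(r) matches the expression quoted in the text; h(v) is the Coriolis term.
    """
    r, v, m = x[0:3], x[3:6], x[6]
    r1 = np.array([-mu, 0.0, 0.0])
    r2 = np.array([1.0 - mu, 0.0, 0.0])
    d1, d2 = r - r1, r - r2
    g = (np.array([r[0], r[1], 0.0])
         - (1.0 - mu) * d1 / np.linalg.norm(d1) ** 3
         - mu * d2 / np.linalg.norm(d2) ** 3)
    h = np.array([2.0 * v[1], -2.0 * v[0], 0.0])
    tau = tau_func(t, x)
    return np.concatenate([v, h + g + tau / m, [-np.linalg.norm(tau) / beta]])

# example: a short coast arc (zero thrust)
mu = 0.012150
x0 = np.concatenate([[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0]])
sol = solve_ivp(crtbp_controlled_rhs, (0.0, 1.0), x0,
                args=(lambda t, x: np.zeros(3), mu, 1.0), rtol=1e-9, atol=1e-9)
print(sol.y[:3, -1])
```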
an extremal on ] with .note that the maximum condition in eq .( [ eq : maximum_condition ] ) is trivially satisfied for every ] with , assume on ] is said to realize a weak - local optimality in -topology ( resp .a strong - local optimality in -topology ) if there exists an open neighborhood of in -topology ( resp . an open neighborhood of in -topology ) such that for every admissible controlled trajectory in associated with the measurable control in on ] ) with the boundary conditions and , there holds we say it realizes a strict weak - local ( resp .strong - local ) optimality if the strict inequality holds .[ de : optimality ] note that if a trajectory on ] is an extremal .note that at this moment we do not restrict any conditions on the final point of the extremal on ] , let be an open neighbourhood of , we say the subset ,\ \p_0\in\mathcal{p}\big\},\nonumber % \label{eq : family}\end{aligned}\ ] ] is a -parameterized family of extremals around the extremal on ] if the projection of the family loses its local diffeomorphism at .we say the projection of the family at ] , without loss of generality , let the positive integer be the number of switching times ( ) such that .along the extremal on ] keeps as and the -th switching time of the extremals on ] .let ,\ \p_0 \in \mathcal{p}\big\ } , \nonumber\end{aligned}\ ] ] for with and . if the subset is small enough , there holds let on ] , i.e. , ,\ ] ] on ] .[ as : disconjugacy_bang ] though this condition guarantees that both the restriction of on for and the restriction of on \times\mathcal{p} ] is a diffeomorphism as well , as fig .[ fig : trans ] shows that the flows may intersect with each other near a switching time .the behavior that the projection of at a switching time is a fold singularity can be excluded by a transversal condition established by noble and schttler in ref .this transversal condition is reduced as by chen __ in ref . . for each switching time for .[ as : transversality ] if this condition is satisfied , the projection of the family around each switching time is a diffeomorphism at least for a sufficiently small subset , see ref . .given the extremal on ] for does not contain conjugate points .then , for every , we are able to construct a perturbed lagrangian submanifold ( cf .theorem 21.3 in ref . or appendix a in ref . ) around the extremal on ] .[ re : lagrangian ] as a result of this remark , one obtains the following remark .if the subset is small enough , let it follows that the projection of onto its image is a diffeomorphism ; the projection of is a tubular neighborhood of the extremal trajectory on ] .[ re : neighborhood ] then , directly applying the theory of field of extremals ( cf .theorem 17.1 in ref . ) , one obtains the following result .given the extremal on ] . then , if _ conditions [ as : disconjugacy_bang ] and [ as : transversality ] _ are satisfied and if the subset is small enough , every extremal trajectory on ] with the same endpoints and , i.e. , where the equality holds if and only if on ] for realizes a strict minimum cost among every admissible controlled trajectory on ] for every , one proves this theorem .note that the endpoints of the -minimization problem are fixed if . 
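for the nonsingular bang - bang case discussed above , the pointwise maximization of the hamiltonian over the thrust ball has the familiar primer - vector structure : the thrust is aligned with the velocity adjoint and its magnitude switches between zero and the maximum according to the sign of a switching function . the sketch below is one common formulation of this result and is offered as an illustration only ; the paper's exact normalization of the adjoint and of the switching function may differ .

```python
import numpy as np

def extremal_control(x, p, tau_max, beta, p0=-1.0):
    """Pointwise maximiser of the L1-minimisation Hamiltonian over ||tau|| <= tau_max.

    x = [r, v, m], p = [p_r, p_v, p_m].  With cost integrand p0*||tau|| (p0 <= 0),
    the thrust-dependent part of H becomes (||p_v||/m - p_m/beta + p0) * ||tau||
    once tau is aligned with p_v, hence a bang-bang magnitude whenever the
    switching function does not vanish (nonsingular case).
    """
    p_v, m, p_m = p[3:6], x[6], p[6]
    s = np.linalg.norm(p_v) / m - p_m / beta + p0      # switching function
    if np.linalg.norm(p_v) == 0.0:
        return np.zeros(3), s
    direction = p_v / np.linalg.norm(p_v)
    rho = 1.0 if s > 0.0 else 0.0                      # burn arc vs. coast arc
    return tau_max * rho * direction, s
```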
as a combination of remark [ re : neighborhood ] and theorem [ co : cor1 ], one obtains that _ conditions _ [ as : disconjugacy_bang ] and [ as : transversality ] are sufficient to guarantee the extremal trajectory on ] such that each switching point is regular ( cf .assumption [ as : regular_switching ] ) , conjugate points can occur not only on each smooth bang arc at a time if but also at each switching time if .the fact that conjugate points can occur at switching times generalizes the conjugate point theory developed by the classical variational methods for totally smooth extremals , see refs . . in this subsection ,we establish the sufficient optimality conditions for the case that the dimension of the final constraint submanifold is not zero . if , to ensure the extremal trajectory on ] , not only with the same endpoints and but also with the boundary conditions and , has a bigger cost than the extremal trajectory on ] is an admissible controlled trajectory of the -minimization problem . [ re : admissible_control_trajectory ] given the extremal on ] a twice continuously differentiable curve on such that . [ de : smooth_curve ] given the extremal on ] , there exists a smooth path on ] . [le : smooth_path ] note that the mapping restricted to the subset is a diffeomorphism under the hypotheses of the lemma .then , according to the _ inverse function theorem _ , the lemma is proved .define a path \rightarrow t^*_{\y(\cdot)}\mathcal{x},\ \eta\mapsto \boldsymbol{\lambda}(\eta) ] .then , for every ] the integrand of the poincar - cartan form along the extremal lift on ] such that each switching point is regular ( cf .assumption [ as : regular_switching ] ) and _ conditions _ [ as : disconjugacy_bang ] and [ as : transversality ] are satisfied , assume is small enough .then , the extremal trajectory on ] .[ co : j_xi_j_0 ] let us first prove that , under the hypotheses of this proposition , eq .( [ eq : lemma1_compare ] ) is a sufficient condition for the strict strong - local optimality of the extremal trajectory on ] be an admissible controlled trajectory with the boundary conditions and .let and on ] , respectively . according to definition [ de : smooth_curve ] and lemma [ le : smooth_path ] , for every final point , there must exist a \backslash\{0\} ] such that and .since the trajectory on ] , according to theorem [ co : cor1 ] , one obtains where the equality holds if and only if on ] , on ] , and on ] .then , taking into account eq .( [ eq : hamiltonian ] ) , a combination of eq .( [ eq : compare1111 ] ) with eq .( [ eq : rho_*_rho_xi ] ) leads to \nonumber\\ & = & - j(\xi ) + \int_0^{t_f}\big[\p(t,\p_0(\xi))\dot{\x}(t,\p_0(\xi ) ) - h(\x(t,\p_0(\xi)),\p(t,\p_0(\xi)))\big]dt\nonumber\\ & = & - j(\xi ) + \int_0^{t_f}\rho(t,\p_0(\xi))dt\nonumber\\ & \leq & - j(\xi ) + \int_0^{t_f}\rho_*(t)dt .\label{eq : lemma1_compare_new}\end{aligned}\ ] ] since , eq .( [ eq : lemma1_compare ] ) implies the strict inequality holds if or . for the case of , eq .( [ eq : compare111 ] ) is satisfied as well according to theorem [ co : cor1 ] , which proves that eq .( [ eq : lemma1_compare ] ) is a sufficient condition .next , let us prove that eq .( [ eq : lemma1_compare ] ) is a necessary condition .assume eq .( [ eq : lemma1_compare ] ) is not satisfied , i.e. 
, there exists a smooth curve on ] such that .then , according to eq .( [ eq : lemma1_compare_new ] ) , one obtains note that the extremal trajectory in is an admissible controlled trajectory of the -minimization problem ( cf .remark [ re : admissible_control_trajectory ] ) .thus , the proposition is proved .given the extremal on ] is a necessary condition ( resp .a sufficient condition ) for the strict strong - local optimality of the extremal trajectory on ] , we have since is a tangent vector of the submanifold at .then , according to proposition [ co : j_xi_j_0 ] , this proposition is proved .given the extremal on ] such that each switching point is regular ( cf .assumption [ as : regular_switching ] ) , assume _ conditions _ [ as : disconjugacy_bang ] and [ as : transversality ] are satisfied .then , the inequality ( resp . strict inequality ) is satisfied for every smooth curve on ] there holds ^t\big\{\frac{\partial \p^t(t_f,\bar{\p}_0)}{\partial \p_0 } \left [ \frac{\partial \x(t_f,\bar{\p}_0)}{\partial \p_0}\right]^{-1 } - \bar{\boldsymbol{\nu}}d^2\phi(\bar{\x}(t_f ) ) \big\}\y^{\prime}(0).\nonumber\end{aligned}\ ] ] note that the vector can be an arbitrary vector in the tangent space , one proves this proposition .given the extremal on ] such that every switching point is regular ( cf .assumption [ as : regular_switching ] ) , let . then , if _ conditions _ [ as : disconjugacy_bang ] , [ as : transversality ] , and [ as : terminal_condition ] are satisfied , the extremal trajectory on ] is computed , according to definition [ de : lagrangian_multiplier ] , the vector of lagrangian multipliers in condition [ as : terminal_condition ] can be computed by ^{-1}.\label{eq : numerical_nu}\end{aligned}\ ] ] we define by a full - rank matrix such that its columns constitute a basis of the tangent space .[de : definition_c ] then , one immediately gets that condition [ as : terminal_condition ] is satisfied if and only if there holds ^{-1 } - \bar{\boldsymbol{\nu}}d^2\phi(\bar{\x}(t_f))\right\}\boldsymbol{c } \succ 0 .\label{eq : positive_definite}\end{aligned}\ ] ] note that the matrix can be computed by a simple gram schmidt process once one derives the explicit expression of the matrix .thus , it suffices to compute the matrix on ] .thus , taking derivative of eq .( [ eq : cannonical ] ) with respect to on each segment , we obtain = \left[\begin{array}{cc } h_{\p\x}(\bar{\x}(t),\bar{\p}(t ) ) & h_{\p\p}(\bar{\x}(t),\bar{\p}(t))\\ -h_{\x\x}(\bar{\x}(t),\bar{\p}(t ) ) & - h_{\x\p}(\bar{\x}(t),\bar{\p}(t ) ) \end{array}\right ] \left [ \begin{array}{c } \frac{\partial \x}{\partial \p_0}(t,\bar{\p}_0)\\ \frac{\partial \p^t}{\partial \p_0}(t,\bar{\p}_0 ) \end{array } \right ] .\label{eq : homogeneous_matrix}\end{aligned}\ ] ] since the initial point is fixed , one can obtain the initial conditions as where and denote the zero and identity matrix of .note that the two matrices and are discontinuous at the each switching time . comparing with the development in refs . 
, the updating formulas for the two matrices and at each switching time can be written as where .up to now , except for , all necessary quantities can be computed .note that for every there holds taking into account , see eq .( [ eq : h01 ] ) , and differentiating eq .( [ eq : h_1(t_i ) ] ) with respect to yields according to assumption [ as : regular_switching ] , there holds for .thus , we obtain /{h}_{01}(\bar{\x}(t_i),\bar{\p}(t_i)).\nonumber\end{aligned}\ ] ] therefore , in order to compute the two matrices and on ] , is a constant on every zero - thrust arc .hence , to test focal points ( or conjugate points for ) , it suffices to test the zero of on each maximum - thrust arc and to test the non - positivity of at each switching time .in this numerical section , we consider the three - body problem of the earth , the moon , and an artificial spacecraft . since the orbits of the earth and the moon around their common centre of mass are nearly circular , i.e. , the eccentricity is around , and the mass of an artificial spacecraft is negligible compared with that of the earth and the moon , the earth - moon - spacecraft ( ems ) system can be approximately considered as a crtbp , see ref .then , we have the below physical parameters corresponding to the ems , , km , seconds , and kg . the initial mass of the spacecraft is specified as kg , the maximum thrust of the engine equipped on the spacecraft is taken as n , i.e. , such that the initial maximum acceleration is m/s .the spacecraft initially moves on a circular earth geosynchronous orbit lying on the -plane such that the radius of the initial orbit is km .when the spacecraft moves to the point on -axis between the earth and the moon , i.e. , , we start to control the spacecraft to fly to a circular orbit around the moon with radius km such that the -norm of control is minimized at the fixed final time days . accordingly, the initial state is given as where is the non - dimensional velocity of the spacecraft on the initial orbit , and the explicit expression of the function in eq .( [ eq : final_manifold ] ) can be written as ^t\parallel^2 - \frac{1}{2}(r_m / d_*)^2 \\\frac{1}{2}\parallel \v(t_f ) \parallel^2 - \frac{1}{2}v_m^2 \\\v^t(t_f)\cdot(\r(t_f ) - [ 1-\mu,0,0]^t ) \\ \r^t(t_f)\cdot \1_{z } \\\v^t(t_f)\cdot \1_z \end{array } \right ] , \label{eq : function_phi}\end{aligned}\ ] ] where ^t ]. it suffices to solve a shooting function corresponding to a two - point boundary value problem .a simple shooting method is not stable to solve this problem because one usually does not know a priori the structure of the optimal control , and the numerical computations of the shooting function and its differential may be intricate since the shooting function is not continuous differentiable .we use a regularization procedure by smoothing the control corner to get an energy - optimal trajectory firstly , then use a homotopy method to solve the real trajectory with a bang - bang control .note that both the initial point and the final constraint submanifold lie on the -plane , it follows that the whole trajectory lies on the -plane as well .[ fig : transferring_orbit3_1 ] illustrates the non - dimensional profile of the position vector along the computed extremal trajectory . of the -minization trajectory in the rotating frame of the ems system .the thick curves are the maximum - thrust arcs and the thin curves are the zero - thrust arcs . 
the bigger dashed circle and the smaller one are the initial and final circular orbits around the earth and the moon , respectively .] the profiles of , , and with respect to non - dimensional time are shown in fig .[ fig : transferring_orbit4_1 ] , from which we can see that the number of maximum - thrust arcs is 15 with 29 switching points and that the ragularity condition in assumption [ as : regular_switching ] at every switching point is satisfied .since the extremal trajectory is computed based on necessary conditions , one has to check sufficient optimality conditions to make sure that it is at least locally optimal . according to what has been developed in section [ se : sufficient ], it suffices to check the satisfaction of _ conditions _ [ as : disconjugacy_bang ] , [ as : transversality ] , and [ as : terminal_condition ] . using eqs .( [ eq : homogeneous_matrix][eq : update_formula_p ] ) , one can compute on ] is rescaled with respect to non - dimensional time along the -minimization extremal in ems . ] by , which can capture the sign property of on ] for every is a one - dimensional curve restricted on the final circular orbit around the moon .[ fig : j_xi ] and be the projection of the position vector on - and -axis of the rotating frame , respectively , and let and be the projection of the velocity vector on - and -axis of the rotating frame , respectively .the figure plots the profiles with respect to , , , and .the dots on each plot denote . ]illustrates the profile of with respect to in a small neighbourhood of .we can clearly see that on \backslash\{0\}$ ] .up to now , all the conditions in theorem [ th : optimality ] are satisfied .so , the computed -minimization trajectory realizes a strict strong - local optimality in -topology .in this paper , the pmp is first employed to formulate the hamiltonian system of the -minimization problem for the translational motion of a spacecraft in the crtbp , showing that the optimal control functions can exhibit bang - bang and singular behaviors . moreover , the singular extremals are of at least order two , revealing the existence of fuller or chattering phenomena . to establish the sufficient optimality conditions , a parameterized family of extremals is constructed . as a result of analyzing the projection behavior of this family , we obtain that conjugate points may occur not only on maximum - thrust arcs between switching times but also at switching times . directly applying the theory of field of extremals , we obtain that the disconjugacy conditions ( cf .conditions [ as : disconjugacy_bang ] and [ as : transversality ] ) are sufficient to guarantee an extremal to be locally optimal if the endpoints are fixed . for the casethat the dimension of the final constraint submanifold is not zero , we establish a further second - order condition ( cf . condition [ as : terminal_condition ] ) , which is a necessary and sufficient one for the strict strong - local optimality of a bang - bang extremal if disconjugacy conditions are satisfied .in addition , the numerical implementation for these three sufficient optimality conditions is derived .finally , an example of transferring a spacecraft from a circular orbit around the earth to an orbit around the moon is computed and the second - order sufficient optimality conditions developed in this paper are tested to show that the computed extremal realizes a strict strong - local optimum . 
the sufficient optimality conditions for open - time problems will be considered in future work .
zhang , c. , topputo , f. , bernelli - zazzera , f. , and zhao , y. , low - thrust minimum - fuel optimization in the circular restricted three - body problem , journal of guidance , control , and dynamics , 38(8 ) , 1501 - 1510 ( 2015 )
sussmann , h. j. , envelopes , conjugate points and optimal bang - bang extremals , in proc . 1985 paris conf . on nonlinear systems , fliess , m. , and hazewinkel , m. , eds . , reidel publishers , dordrecht , the netherlands ( 1987 )
bonnard , b. , caillau , j .- b . , and trélat , e. , second - order optimality conditions in the smooth case and applications in optimal control , esaim : control , optimisation and calculus of variations , 13(2 ) , 207 - 236 ( 2007 )
in this paper , the -minimization for the translational motion of a spacecraft in a circular restricted three - body problem ( crtbp ) is considered . necessary conditions are derived by using the pontryagin maximum principle , revealing the existence of bang - bang and singular controls . singular extremals are detailed , recalling the existence of the fuller phenomena according to the theories developed by marchal in ref . and zelikin _ et al . _ in refs . . the sufficient optimality conditions for the -minimization problem with fixed endpoints have been solved in ref . . in this paper , through constructing a parameterised family of extremals , some second - order sufficient conditions are established not only for the case that the final point is fixed but also for the case that the final point lies on a smooth submanifold . in addition , the numerical implementation for the optimality conditions is presented . finally , approximating the earth - moon - spacecraft system as a crtbp , an -minimization trajectory for the translational motion of a spacecraft is computed by employing a combination of a shooting method with a continuation method of caillau _ et al . _ in refs . , and the local optimality of the computed trajectory is tested thanks to the second - order optimality conditions established in this paper .
we assembled a model of a self - replicating cyanobacterial cell based on a genome - scale metabolic reconstruction of the cyanobacterium _ synechococcus elongatus _the model incorporates a manually curated representation of all key processes relevant to the energetics of phototrophic growth : photons are absorbed by light - harvesting antennae , the phycobilisomes , attached primarily to photosystem ii ( psii ) .the energy derived from absorbed photons drives water splitting at the oxygen - evolving complex ( oec ) and , via the photosynthetic electron transport chain ( etc ) , results in the regeneration of cellular atp and nadph .the etc consists of a set of large protein complexes , psii , cytochrome b complex ( cytb ) , photosystem i ( psi ) , and atp synthase ( atpase ) , embedded within the thylakoid membrane .inorganic carbon is taken up via co concentrating mechanisms ( ccms ) and assimilated via the calvin - benson cycle .the product of the ribulose-1,5-bisphosphate carboxylase / oxygenase ( rubisco ) , 3-phosphoglycerate ( 3pg ) , serves as a substrate for the biosynthesis of cellular components , such as dna , rna , lipids , pigments , glycogen , and amino acids .cellular metabolism is represented by a detailed genome - scale reconstruction of _ synechococcus elongatus _amino acids serve as building blocks for structural , metabolic , photosynthetic , and ribosomal proteins .all cellular components are represented by their known molecular composition .the model is depicted in figure [ fig : model ] and detailed in the materials and methods .it encompasses a total of macromolecules and reactions , including metabolic and exchange reactions , metabolic genes , as well as compound production reactions . to implement the conditional dependencies of phototrophic growth ,the rate of each process is constrained by the abundances of the respective catalyzing macromolecules and their respective catalytic efficiencies .for example , at any point in time , each individual metabolic reaction is constrained by the abundance of its catalyzing enzyme ( or enzyme complex ) and the respective catalytic turnover number .the latter values are globally sourced from databases , see materials and methods .protein synthesis is limited by the abundance of ribosomes and modeled according to general principles of peptide elongation , taking into account energy expenditure ( one atp and two gtps per amino acid ) and coupling to metabolism .light absorption at psii is constrained by the reported effective cross - section of phycobilisomes and depends on ( variable ) phycobilisome rod length .detachment of phycobilisomes from psii reduces energy transfer to the oec . for simplicity , light absorption at psiis assumed to take place in the absence of phycobilisomes using an effective cross - section per psi complex and energy spillover from psii is not considered ( see supplementary text for further discussion ) . for the photosynthetic and respiratory electron transport chains ,maximal catalytic rates per protein complex are sourced from the literature .we note that all aforementioned dependencies only constrain maximal rates of processes , actual rates may be lower due to ( unknown ) fractional saturation of reaction rates . 
during a full ld cycle , the capacity constraint induced by the abundance of catalyzing compounds on each maximal reaction ratemust be fulfilled at each point in time .catalyzing compounds , however , can be synthesized _ de novo _ , depending on available resources , and may therefore accumulate over a diurnal period , and hence increase the capacity of the respective reactions . to this end ,the abundances of macromolecules ( metabolic enzymes , transporters , photosynthetic and respiratory protein complexes , phycobilisomes , and ribosomes ) are time - dependent quantities that are governed by the respective differential mass - balance equations . to solve the global resource allocation problem , the mass - balance equations including the abundance - dependent rate constraints are cast into a linear programming ( lp ) problem .the lp - problem is supplemented by periodic boundary conditions for the macromolecules of the form where denotes ( absolute ) abundances of time - dependent cellular components at time , is the initial time , and the multiplication factor .the elements of at time are themselves an outcome of the resource allocation problem and not specified externally .time is discretized using a gau implicit method ( midpoint rule ) .we are primarily interested in diurnal dynamics , and hence a time - scale of several hours . following the arguments of rgen et al . and waldherr et al . , we therefore assume that internal metabolites are in quasi - steady - state .equation [ [ eq : growth ] ] represents balanced growth in a periodic environment .specifically , we assume stationary diurnal experimental conditions , such that the average measured cellular composition per unit biomass after a full diurnal period is invariant . equation [ [ eq : growth ] ] , in conjunction with the mass - balance constraints , the abundance - dependent rate constraints , and the growth objective , , define a self - consistent resource allocation problem for diurnal phototrophic growth of the cyanobacterium _ synechococcus elongatus _pcc 7942 . as input parameters, we require the stoichiometric composition of macromolecules in terms of their constituent amino acids and micro - nutrients , as well as their catalytic efficiencies per enzyme or enzyme - complex .we argue that reasonable approximation of both quantities exist for almost all cellular macromolecules . using this narrow and well - defined set of parameters ,we seek to derive the emergent properties of diurnal phototrophic growth without making use of any further ad - hoc assumptions about metabolic functioning or regulation . for details of the implementation and a discussion of the limits of applicabilitysee the materials and methods and the supplement . prior to the evaluation of diurnal dynamics ,we evaluate light - limited growth under constant light .the uptake of all other nutrients , in particular inorganic carbon , is described by simple michaelis - menten uptake reactions and only constrained by the availability of the respective transporters .carbon cycling is not considered explicitly , the respective energy expenditure is considered as part of general maintenance . 
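the following python / numpy sketch illustrates , for a toy problem with a single macromolecule pool and one synthesis flux , how the implicit midpoint ( gauss ) discretization , the midpoint capacity constraint and the periodic - growth condition can be assembled as linear constraint rows ; all names , sizes and parameter values are hypothetical , and the real model of course couples many pools , fluxes and constraints .

```python
import numpy as np

# Toy sketch (hypothetical names): one macromolecule pool E with one synthesis
# flux v, sampled on K intervals of a period T.  Decision vector:
# x = [E_0, ..., E_K, v_1, ..., v_K]  (fluxes live at interval midpoints).
K, T, mu, kcat = 8, 24.0, 2.0, 5.0
dt = T / K
nE, nV = K + 1, K

A_eq, b_eq = [], []
# implicit midpoint (Gauss) rule:  E_{k+1} - E_k - dt * v_{k+1} = 0
for k in range(K):
    row = np.zeros(nE + nV)
    row[k], row[k + 1], row[nE + k] = -1.0, 1.0, -dt
    A_eq.append(row); b_eq.append(0.0)
# periodic growth:  E_K - mu * E_0 = 0
row = np.zeros(nE + nV); row[K], row[0] = 1.0, -mu
A_eq.append(row); b_eq.append(0.0)

# capacity constraint at the midpoint:  v_{k+1} <= kcat * (E_k + E_{k+1}) / 2
A_ub, b_ub = [], []
for k in range(K):
    row = np.zeros(nE + nV)
    row[nE + k] = 1.0
    row[k] = row[k + 1] = -kcat / 2.0
    A_ub.append(row); b_ub.append(0.0)

A_eq, A_ub = np.array(A_eq), np.array(A_ub)
print(A_eq.shape, A_ub.shape)   # constraint blocks ready for an LP solver
```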
solving the global resource allocation problem , we obtain the multiplication factor and the growth rate as a function of light intensity , as well as the cellular composition for different growth rates .key results are shown in figure [ fig : constlight ] .for comparison with conventional flux balance analysis ( fba ) , we use a light intensity photons , resulting in the absorption of photons , a growth rate of ( multiplication factor ) , and an oxygen evolution rate of .these values are in excellent agreement with previous estimates using fba , and the respective experimental data .in particular , evaluating the metabolic reconstruction of _ synechococcus elongatus _pcc 7942 with conventional fba and a static biomass objective function ( bof ) using a light uptake of photons absorbed , results in an oxygen evolution rate of and a growth rate of .in contrast to the static bof used in fba , the cellular composition of the autocatalytic model is an emergent result of the global resource allocation problem ( figure [ fig : constlight]d ) , and is in good agreement with previously reported bofs .when evaluating different light intensities , the growth rate and oxygen evolution increase with increasing light ( figure [ fig : constlight]a and [ fig : constlight]b ) . we note that light uptake depends on the assumed maximal effective cross section of psii , reported to be the results shown in figure [ fig : constlight ] indicate that the reported value underestimates the actual effective cross section ( with no further impact on model results , see also supplementary text ) .similar to findings for models of heterotrophic growth , the relative amount of ribosomes increases with increasing growth rate ( figure [ fig : constlight]c ) .we observe that growth as a function of light saturates at a growth rate of ( multiplication factor ) , estimated using a monod growth equation ( supplementary figure s2 ) .the maximal growth rate is slightly slower than the maximal growth rate observed for _ synechococcus elongatus _pcc 7942 , reported as by yu et al .we therefore performed a sensitivity analysis of growth rate as a function of estimated parameters . while the sensitivity with respect to the catalytic efficiencies of invidual enzymes is rather low ( supplemental figures s3-s5 ) , a major determinant of maximal growth rate is the assumed ratio of non - catalytic ( quota ) proteins ( supplemental figure s6 ) .based on recent proteomics data for slow growing cells , the ratio was determined to be % of total protein .no experimental estimates exist for fast growing cells .if the actual percentage for fast growing cells is assumed to be % , the resulting growth rates ( , corresponding to a division time of ) are in good agreement and slightly exceed fastest known growth rates of _ synechococcus elongatus _pcc 7942 . 
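as an illustration of the growth - versus - light summary mentioned above , the sketch below fits a monod - type saturation curve to hypothetical ( light intensity , growth rate ) pairs with scipy ; the numerical values are invented for illustration and are not the model output .

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(I, mu_max, K_I):
    """Monod-type dependence of growth rate on light intensity (sketch)."""
    return mu_max * I / (K_I + I)

# hypothetical (light intensity, growth rate) pairs, e.g. as produced by the
# resource-allocation model at several constant light intensities
I = np.array([25., 50., 100., 200., 400., 800.])           # umol photons m^-2 s^-1
mu = np.array([0.021, 0.038, 0.060, 0.082, 0.097, 0.105])  # h^-1

(mu_max, K_I), _ = curve_fit(monod, I, mu, p0=(0.1, 100.0))
print(f"saturating growth rate ~ {mu_max:.3f} 1/h, half-saturation ~ {K_I:.0f}")
print(f"division time at saturation ~ {np.log(2)/mu_max:.1f} h")
```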
going beyond constant light conditions, we next evaluate the global resource allocation problem for diurnal light conditions as a dynamic optimization problem with the objective .after discretization , the problem is transformed into a sequence of linear optimization problems and solved to global optimality using a binary search .we emphasize that our approach does not impose any constraints on the timing of specific synthesis reactions .rather , the resulting time - courses as well as the cellular composition are emergent properties of the global resource allocation problem .the light intensity was modeled as a sinusoidal half - wave with a peak light intensity of photons .figure [ fig : referenceday]a shows the resulting flux values for a reference day as a function of diurnal time .we observe that most metabolic activity takes place during the light period . in the absence of light ,glycogen is mobilized and utilized for cellular maintenance , serving as a substrate for cellular respiration via the pentose phosphate pathway and ultimately cytochrome c oxidase .figure [ fig : referenceday]b shows selected metabolic fluxes as a function of time together with the respective enzymatic capacity ( dashed lines ) .the observed flux activity is in good agreement with known facts about metabolite partitioning during diurnal growth : in the presence of light , carbon is imported and assimilated via the calvin - benson cycle .carbon assimilation and photosynthesis follow light availability .synthesis of macromolecules is distributed over the light period ( figure [ fig : referenceday]b ) .firstly , at dawn , fluxes related to central metabolism , amino acids and pigment synthesis increase .secondly , reactions with respect to lipid synthesis , dna / rna synthesis , and peptidoglycan synthesis exhibit increased flux .finally , reactions related to _ de novo _ synthesis of co - factors ( nadph , thf , tpp , fad ) carry flux . 
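a minimal sketch of the outer optimization loop : bisection on the multiplication factor , assuming a `feasible(mu)` oracle that builds and solves the discretized lp for a fixed mu and reports whether a solution exists ( the oracle used below is a stand - in , not a real lp call ) .

```python
def max_multiplication_factor(feasible, lo=1.0, hi=16.0, tol=1e-4):
    """Largest growth factor mu for which the discretized resource-allocation
    LP is still feasible, found by bisection.  `feasible(mu)` is assumed to
    construct the LP for a fixed mu and return True iff a solution exists."""
    if not feasible(lo):
        return None                      # not even maintenance is possible
    while feasible(hi):                  # expand the bracket if needed
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# usage sketch with a stand-in oracle (a real oracle would call an LP solver)
print(max_multiplication_factor(lambda mu: mu <= 3.7))   # -> ~3.7
```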
at dusk ,almost all metabolic activities cease .dark metabolism is dominated by utilization of storage products and respiratory activity : stored glycogen is mobilized and consumed via the oxidative pentose phosphate pathway ( oppp ) , thereby generating nadph for the respiratory electron transport chain .the global cellular resource allocation problem gives rise to a highly coordinated metabolic activity over a diurnal period .the numerical results are highly robust with respect to changes in parameters .growth rates and overall cellular composition ( supplementary figure s7 ) depend on peak light intensity , the results ( supplementary figures s8 and s9 ) are qualitatively similar to the case of constant light shown in figure [ fig : constlight ] .glycogen is the main storage compound in cyanobacteria .cells accumulate glycogen during the light phase and mobilize it as a source of carbon and energy during the night .it was recently shown that the timing of glycogen accumulation is under tight control of the cyanobacterial circadian clock and disruption of the clock results in altered glycogen kinetics .we therefore evaluate the optimality of glycogen accumulation in the context of the global resource allocation problem .we note that our simulation does not impose any _ ad hoc _ constraints on the kinetics and timing of glycogen synthesis .rather , accumulation of glycogen is a systemic property that emerges as a consequence of optimal resource allocation .figure [ fig : glycogen]a shows the time course of glycogen accumulation obtained from the global resource allocation problem over two diurnal periods . figure [ fig : glycogen]b shows the optimal carbon partitioning during the light period . storedglycogen increases linearly within the light period , in good agreement with recent data from _ synechococcus elongatus _7942 , and _ synechocystis _ sp . pcc 6803 .we note that a linear slope is not self - evident , but emerges as a trade - off between at least two conflicting objectives : minimal withdrawal of carbon during the early growth period ( favoring carbon withdrawal later in the day ) versus a minimal capacity requirement for the synthesis pathway ( favoring constant withdrawal throughout the light period ) . to further highlight glycogen accumulation as a systemic property, we evaluate the minimal amount of accumulated glycogen for different light periods .figure [ fig : minglycogen ] shows the results for different lengths of day versus night periods . if the night period is doubled , slightly less than twice the glycogen is required to sustain night metabolism and the amount of glycogen required at dusk exhibits a certain plasticity .the latter fact corresponds to differences in resource allocation with no discernible effect on overall growth : certain synthesis tasks , in particular lipid synthesis , can be relegated to the end of the night period , thereby requiring less enzyme capacity during the day at the expense of an increased glycogen requirement at dusk ( supplementary figure s11-s12 ) .phototrophic growth under diurnal conditions requires a precise coordination of metabolic processes which is challenging to describe using conventional fba and related constraint - based approaches . in this work, we developed a genome - scale model that allows us to evaluate the stoichiometric and energetic constraints of diurnal phototrophic growth in the context of a global diurnal resource allocation problem . 
building upon previous works ,our approach is based on the fact that growth is inherently autocatalytic : the cellular machinery to sustain metabolism is itself a product of metabolism .our focus were the net stoichiometric and energetic implications of diurnal growth on a time - scale of several hours , in particular related to the _ de novo _ synthesis of proteins and other cellular macromolecules .faster time - scales , in particular a detailed representation of macromolecular assembly , were not considered .we consider our approach to be appropriate for cells with a division time of approximately 24h or faster under diurnal light conditions. for very slow growing cells , the importance of _ de novo _ synthesis of proteins is likely diminished , and other cellular processes become dominant , such as protein turnover , maintenance and repair mechanisms . given these limits of applicability ,our aim was an _ ab initio _ prediction of optimal diurnal resource allocation : how is metabolism organized over a full diurnal cycle ? how are the synthesis reactions of cellular macromolecules organized over a full diurnal cycle ?what is the optimal timing of glycogen accumulation during the light phase ?importantly , from the perspective of resource allocation , these questions can be evaluated without extensive knowledge of kinetic parameters and regulatory interactions .our analysis is based solely on knowledge of the stoichiometric compositions and the turnover numbers of catalytic macromolecules reasonable estimates for both quantities are available and the respective values were sourced from the primary literature and databases . our results , similar to time - independent fba ,are based on the assumption of optimality , and hence allow us to pinpoint the energetic trade - offs and constraints relating to diurnal growth .overall , the _ ab initio _ results obtained from the global resource allocation problem are in good agreement with previous knowledge and experimental observations about flux partitioning in _ synechococcus elongatus _ 7942 .growth predominantly takes place during the light phase . in the absence of light ,almost all metabolic activity ceases , and cellular metabolism is dominated by respiratory activity .carbon fixation and central metabolism largely follow light availability , whereas other synthesis reactions follow a specific temporal pattern including synthesis of macromolecules well before their utilization .we note that cessation of metabolic activity during darkness is itself already a result of a trade - off between idle enzymatic capacity versus the energy requirements for synthesis reactions .as shown , similar to the observation in a previous minimal model , an _ in silico _ experiment with artificially lowered enzyme costs for glycogen synthesis and mobilization , results in increased utilization of synthesis reactions at night thereby minimizing requirements for enzyme capacity at the expense of additional storage capacity . in this respect ,the function of glycogen is analogous to a cellular battery or capacitor and the timing of glycogen synthesis results as a trade - off between conflicting objectives : early withdrawal of carbon from an auto - catalytic system versus minimizing glycogen synthesis capacity versus extending the time span of enzyme utilization .we consider our approach to be a suitable general framework to evaluate the optimality of diurnal phototrophic growth . 
as a first test , we considered the maximal growth rate , as predicted by optimal resource allocation using independently sourced parameters only .the results show that model - derived values are indeed within the typical range of cyanobacterial growth rates .since several detrimental factors , such as possible photoinhibition , are not explicitly considered within our model , the close correlation between observed and model - derived growth rates suggests that cyanobacterial metabolism indeed operates close to optimality , in particular when considering high growth rates . in this respect ,an unknown factor is the relative amount of non - catalytic ( quota ) proteins , estimated to be up to 55% of total protein for slow growing cells .we conjecture that for fast growing cells this percentage is considerably lower .indeed , the importance of non - catalytic ( quota ) proteins as ( environment - specific ) niche - adaptive proteins ( naps ) on the maximal growth rate was already discussed by burnap . in future iterations, our approach can be significantly improved upon . of particular interestare the energetic implications of carbon cycling in growing cells , light damage and its repair , as well the temporal coordination of nitrogen fixation in certain cyanobacteria .more generally , we conjecture that the global resource allocation problem described here allows us to evaluate the cost of individual genes and genomes in the context of a growing cell , and thereby allows us to evaluate metabolic adaptations and the diversity of cyanobacterial metabolism ultimately aiming to understand the limits of phototrophic growth in complex environments .all simulations are based on a genome - scale conditional fba ( cfba ) model .the model is derived from a genome - scale metabolic reconstruction of the cyanobacterium _ synechococcus elongatus _the reconstruction covers genes and consists of metabolic reactions and metabolites .the reconstruction process was analogous to previous reconstructions .the original metabolic network reconstruction is provided as supplementary file 2 ( sbml ) .the cfba model consists of three types of components : steady - state metabolites , quota components , and components with catalytic function .quota components have no explicit catalytic function within the model but their synthesis contributes to overall energy and carbon expenditure .we note that , different from me models , we do not aim for a mechanistic representation of processes such as transcription , translation or assembly of macromolecules .rather , we focus on overall energetic and stoichiometric constraints on diurnal time - scales ( several hours ) . enzymes , ribosomes and several macromolecules are denoted as components with catalytic function . for each of these componentsa synthesis reaction is implemented .macromolecules ( e.g. photosystems ) assemble once all constituent compounds ( amino acids or protein subunits ) are available .all components are synthesized using their molecular stoichiometry , as derived from the amino acid sequence .special attention is paid to the stoichiometries of important photosynthesis and respiration complexes , such as the photosystems or the atpase .the respective stoichiometries are listed in the supplementary material tables a1-a8 .the amounts of all components with catalytic activity are time - dependent quantities and at each point in time their amount provides an upper limit to the rates of the reactions they catalyze . assuming that a component ( e.g. 
an enzyme ) catalyzes a reaction , we impose the capacity constraint where denotes the flux through reaction at time , denotes the concentration of enzyme at time , and is the turnover number of the enzyme for reaction . in case several reactions are catalysed by the same component or enzyme , the sum of their fluxes , weighted by the turnover rates , is bound by the enzyme amount .the capacity constraint holds analogously for all macromolecules , including the components of the electron transport chain and ribosomes .main quota components are the vitamins , several cofactors , lipids , cell wall , inorganic ions , dna , rna , as well as non - metabolic proteins .these components have to be produced at the same rate as catalytic components , although they do not reinforce the autocatalytic cycle .we enforce their synthesis by imposing an initial amount proportional to their fraction of the whole cell weight and require balanced growth ( equation [ [ eq : growth ] ] ) .non - metabolic ( quota ) proteins compete with catalytic proteins for ribosomal capacity .turnover of metabolic reactions is considerably faster than the _ de novo _ synthesis of proteins .following earlier work , we therefore assume internal metabolites to be at quasi - steady - state .the concentrations of internal ( non - exchange ) metabolites are not explicitly represented in equation [ [ eq : growth ] ] and the metabolic network is assumed to be balanced at all time points .similar to conventional fba , we neglect dilution by growth of internal metabolites .the turnover rates used in the capacity constraint equation [ [ eq : enzymecapacity ] ] are sourced from the brenda database .we computationally retrieved all wild type values from all organisms for each enzyme and assigned the median of the corresponding retrieved values as the turnover number of the respective enzyme . for enzymes with no turnover numbers available, we followed and assigned the median of all retrieved turnover numbers .turnover numbers for the macromolecules of the etc were sourced from the primary literature and are listed in table [ catatable ] .the ribosomal capacity is assumed to be amino acids per second ..parameters for the cfba model . solving the global resource allocation problemrequires knowledge of the catalytic turnover numbers of macromolecules .all values are sourced from the primary literature . [ cols="<,>,^",options="header " , ]
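a small sketch of how a shared - enzyme capacity constraint of the form sum_j v_j / kcat_j <= e_i can be encoded as a single inequality row of an lp , assuming all fluxes are stored in one vector ; the indices , kcat values and enzyme amount are hypothetical .

```python
import numpy as np

def capacity_row(n_fluxes, reaction_idx, kcats, enzyme_amount):
    """One inequality row  sum_j v_j / kcat_j <= E_i  for reactions sharing an
    enzyme (sketch; the flux-vector ordering is a modelling choice).  A kcat
    could be, e.g., the median of values retrieved for that EC number."""
    row = np.zeros(n_fluxes)
    for j, k in zip(reaction_idx, kcats):
        row[j] = 1.0 / k
    return row, enzyme_amount

# example: reactions 2 and 5 are catalysed by the same enzyme (kcat 120/s, 80/s)
row, bound = capacity_row(10, [2, 5], [120.0, 80.0], enzyme_amount=1e-6)
print(row, "<=", bound)
```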
cyanobacteria are an integral part of the earth s biogeochemical cycles and a promising resource for the synthesis of renewable bioproducts from atmospheric co . growth and metabolism of cyanobacteria are inherently tied to the diurnal rhythm of light availability . as yet , however , insight into the stoichiometric and energetic constraints of cyanobacterial diurnal growth is limited . here , we develop a computational platform to evaluate the optimality of diurnal phototrophic growth using a high - quality genome - scale metabolic reconstruction of the cyanobacterium _ synechococcus elongatus _ pcc 7942 . we formulate phototrophic growth as a self - consistent autocatalytic process and evaluate the resulting time - dependent resource allocation problem using constraint - based analysis . based on a narrow and well defined set of parameters , our approach results in an _ ab initio _ prediction of growth properties over a full diurnal cycle . in particular , our approach allows us to study the optimality of metabolite partitioning during diurnal growth . the cyclic pattern of glycogen accumulation , an emergent property of the model , has timing characteristics that are shown to be a trade - off between conflicting cellular objectives . the approach presented here provides insight into the time - dependent resource allocation problem of phototrophic diurnal growth and may serve as a general framework to evaluate the optimality of metabolic strategies that evolved in photosynthetic organisms under diurnal conditions . cyanobacterial photoautotrophic growth requires a highly coordinated distribution of cellular resources to different intracellular processes , including the _ de novo _ synthesis of proteins , ribosomes , lipids , as well as other cellular components . for unicellular organisms , the optimal allocation of limiting resources is a key determinant of evolutionary fitness in almost all environments . owing to the importance of cellular resource allocation for understanding cellular trade - offs , as well as its importance for the effective design of synthetic properties , the cellular economy and its implication for bacterial growth laws have been studied extensively albeit almost exclusively for heterotrophic organisms under stationary environmental conditions . for photoautotrophic organisms , including cyanobacteria , growth - dependent resource allocation is further subject to diurnal light - dark ( ld ) cycles that partition cellular metabolism into distinct phases . recent experimental results have demonstrated the relevance of time - specific synthesis for cellular growth . nonetheless the implications and consequences of growth in a diurnal environment on the cellular resource allocation problem are insufficiently understood , and computational approaches hitherto developed for heterotrophic growth are not straightforwardly applicable to phototrophic diurnal growth . here , we propose a computational framework to evaluate the optimality of diurnal resource allocation for diurnal phototrophic growth . we are primarily interested in the stoichiometric and energetic constraints that shape the cellular protein economy , that is , the relationship between the average growth rate and the relative partitioning of metabolic , photosynthetic , and ribosomal proteins during a full diurnal period . 
beyond the established constraint - based reconstruction and analysis methodologies , we aim to obtain an _ ab initio _ prediction of emergent properties that arise from a narrow and well - defined set of assumptions and parameters about cyanobacterial diurnal growth and to contrast these emergent properties with known and observed cellular behavior . to this end , we assemble and evaluate an auto - catalytic genome - scale model of cyanobacterial growth , based on a high - quality metabolic reconstruction of the cyanobacterium _ synechococcus elongatus _ pcc 7942 . our evaluation significantly improves upon a previous model of diurnal cyanobacterial growth and takes into account recent developments in constraint - based analysis . our approach is closely related to resource balance analysis ( rba ) , dynamic enzyme - cost flux balance analysis ( defba ) , as well as integrated metabolism and gene expression ( me ) models , but explicitly accounts for diurnal phototrophic growth . using _ synechococcus elongatus _ pcc 7942 as a model system , our starting point is the observation that almost all cellular processes are dependent upon the presence of catalytic compounds , typically enzymes and other cellular macromolecules . hence , a self - consistent description of cyanobacterial growth must take the synthesis of these macromolecules into account and reflect the fact that the abundance of these macromolecules limits the capacity of cellular metabolism at all times . _ de novo _ synthesis of cellular macromolecules increases the metabolic capacity the timing and amount of the respective synthesis reactions can therefore be described as a cellular resource allocation problem : what is the amount and temporal order of synthesis reactions to allow for maximal growth of a cyanobacterial cell in a diurnal environment ? to evaluate the respective stoichiometric and energetic constraints , we only require knowledge about the stoichiometric composition , and the catalytic efficiency of macromolecules quantities for which reasonable estimates are available . we therefore seek to evaluate the emergent properties of phototrophic diurnal growth , based on best _ a priori _ estimates of relevant parameters only . our key results include ( i ) a prediction of the timing of intracellular synthesis reactions that is in good agreement with known facts about metabolite partitioning during diurnal growth , ( ii ) limits on the estimated maximal rate of phototrophic growth that are close to observed experimental values , suggesting a highly optimized metabolism , ( iii ) a predicted optimal timing of glycogen accumulation that is in good agreement with recent experimental findings .
heavy snow events are among the most severe natural hazards in mountainous countries . every year, winter storms can hinder mobility by disrupting rail , road and air traffic .extreme snowfall can overload buildings and cause them to collapse , and can lead to flooding due to subsequent melting .deep snow , combined with strong winds and unstable snowpack , contributes to the formation of avalanches , and can cause fatalities and economic loss due to property damage or reduced mobility .the quantitative analysis of extreme snow events is important for the dimensioning of avalanche defence structures , bridges and buildings , for flood protection measures and for integral risk management .compared to phenomena such as rain , wind or temperature , extreme - value statistics of snow has been little studied . and analyzed three - day snowfall depth in the italian and swiss alps , and more recently analyzed extreme snowfall in switzerland .these articles derive characteristics of extreme snow events based on univariate extreme - value modeling which does not account for the dependence across different stations .the spatial dependence of extreme snow data has yet to be discussed in the literature .statistical modeling with multivariate extreme value distributions began around two decades ago with publications such as and , and has subsequently often been used for quantifying extremal dependence in applications .financial examples are currency exchange rate data [ ] , swap rate data [ ] and stock market returns [ poon , rockinger and tawn ( ) ] , and environmental examples are rainfall data [ schlather and tawn ( ) ] , oceanographical data [ ; ] and wind speed data [ ; ]. none of these articles treats the process under study as a spatial extension of multivariate extreme value theory . until recently ,a key difficulty in studying extreme events of spatial processes has been the lack of flexible models and appropriate inferential tools .two different approaches to overcome this have been proposed .the first and most popular is to introduce a latent process , conditional on which standard extreme models are applied [ ; ; ; ; ; ; ] .such models can be fitted using markov chain monte carlo simulation , but they postulate independence of extremes conditional on the latent process , and this is implausible in applications .one approach to introducing dependence is through a spatial copula , as suggested by , but although this approach is an improvement , show that it can nevertheless lead to inadequate modeling of extreme rainfall . a second approach now receiving increasing attention rests on max - stable processes , first suggested by and developed by , for example , and .recent applications to rainfall data can be found in , , and , and to temperature data in .max - stable modeling has the potential advantage of accounting for spatial dependence of extremes in a way that is consistent with the classical extreme - value theory , but is much less well developed than the use of latent processes or copulas . in the present paper ,we use data from a denser measurement network than for previous applications .owing to complex topography and weather patterns , the processes of and can not account for the joint distribution of the extremes , and we therefore propose more complex models . 
we begin with an exploratory analysis highlighting some of the peculiarities of the data , andthen in section [ sec : spat_max ] present the max - stable processes of and , which are extended in section [ sec : maxstab_snow ] to our extreme snow depth data . asfull likelihood inference is impossible for such models , in section [ sec : estimation ] we discuss how composite likelihood inference may be used for model estimation and comparison .the results of the data analysis are presented in section [ sec : appli ] and a concluding discussion is given in section [ sec : discussion ] . km grid ( left ) and of the stations ( right ) .color indicates altitude in meters above mean sea level . among the stations , ( denoted by circles in the map on the right and by the dashed part of the right - hand histogram )are excluded from the analysis for validation . dashed lines in the maps delimit the northern and southern slopes of the alps . ]we consider annual maximum snow depth from the stations whose locations are shown in figure [ fig : map_swiss ] .the stations belong to two networks run by the wsl institute for snow and avalanche research ( slf ) and the swiss federal office for meteorology and climatology ( meteoswiss ) .annual maxima are extracted from daily snow depth measurements , which are read off a measuring stake at around 7.30 am daily from november to april , for the 43 winters 19651966 to 20072008 ; we use the term `` winter '' for the months november to april , and so forth .examples of such time series can be found in the supplementary materials , .as figure [ fig : map_swiss ] shows , the stations are denser in the alpine part of the country , which has high tourist infrastructure and increased population density and traffic during the winter months .their elevations range from m to m above mean sea level , with only two stations above m. in order to validate our final model , we used stations to choose and fit the model and retained 15 stations for model validation .let denote the annual maximum snow depth at station of the set , which here denotes switzerland .data are only available at the stations , so modeling involves inference for the joint distribution of based on observations from , and extrapolation to the whole of .in particular , as the station elevations lie mainly below m , any results must be extrapolated to elevations higher than m. daily snow depths at a given location are obviously temporally dependent .however , time series analysis suggests that , for every location and every winter , daily snow depths show only short - range dependence .hence , distant maxima of daily snow depths seem to be near - independent and , therefore , the condition for independence of extremes that are well separated in time [ , section 3.2 ] should be satisfied .extreme value theory is then expected to apply to annual maximum snow depth : at a location may be expected to follow a generalized extreme - value ( gev ) distribution [ ] ,\ ] ] where and , and are , respectively , location , scale and shape parameters .characterizing the probability distribution of for all is equivalent to characterizing the probability distribution of for any bijective function , which may be easier for a well - chosen .a first step in our analysis is to transform the data at the stations to the unit frchet scale . whatever the values of the gev parameters , and , taking transforms into a spatial process having unit frchet marginal distributions , . 
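a minimal sketch of the marginal transformation : fit a gev to one station's annual maxima and map them to unit fréchet margins through the fitted cdf ( note that scipy's shape parameter has the opposite sign to the usual gev shape ) ; the simulated maxima below only stand in for real station data .

```python
import numpy as np
from scipy.stats import genextreme

def to_unit_frechet(maxima):
    """Fit a GEV to one station's annual maxima and map them to unit Frechet
    margins via z = -1/log(G(x)), with G the fitted GEV cdf (sketch)."""
    c, loc, scale = genextreme.fit(maxima)    # scipy's shape c = -xi
    u = genextreme.cdf(maxima, c, loc=loc, scale=scale)
    u = np.clip(u, 1e-12, 1 - 1e-12)          # guard against 0/1 probabilities
    return -1.0 / np.log(u)

rng = np.random.default_rng(1)
x = genextreme.rvs(-0.1, loc=80, scale=25, size=43, random_state=rng)  # fake maxima
z = to_unit_frechet(x)
print(z.min(), z.max())
```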
as it is easier to deal with in general discussion , we will assume below that the time series at each station has been transformed in this way .to do so , one might model the gev parameters , and as smooth functions of covariates indexed by , such as longitude , latitude and elevation [ ] .however , due to the very rough topography of switzerland and the influence of meteorological variables such as wind and temperature , snow depth exhibits strong local variation and additional covariates are necessary . a systematic discussion of such covariates and associated smoothing is given by .the focus in the present paper is spatial dependence , so rather than adopt their approach , here we simply use gev fits for the individual stations to transform at station into . diagnostic tools such as qq - plots showed a good fit even at low altitudes .a simple measure of the dependence of spatial maxima at two stations is the extremal coefficient .if is the limiting process of maxima with unit frchet margins , then [ , chapter 5 ] one interpretation of appears on noting that if , then the maxima at the two locations are perfectly dependent , whereas if , they are asymptotically independent as , so very rare events appear independently at the two locations .although they do not fully characterize dependence , such coefficients are useful summaries of the multidimensional extremal distribution .in particular , it may be informative to compute all extremal coefficients for a given station to see how extremal dependence varies .figure [ fig : krigmap_extcoeff ] depicts such maps for the snow depth data , for four different reference stations .extremal coefficients were estimated by the madogram - based estimator of , and then kriged to the entire area using a linear trend on absolute altitude difference between and .similar maps have been proposed for gridded data by .much information can be gleaned from figure [ fig : krigmap_extcoeff ] .a strong elevation effect is clearly visible .the map for adelboden also suggests a directional effect : for this mid - altitude station in the alps , there is more dependence with other middle - altitude stations in a roughly north - easterly direction .another striking feature visible in the two lower maps is near - independence between the northern and southern slopes of the alps .further such maps suggest the presence of the two weakly dependent regions separated by the black dotted line in figure [ fig : map_swiss ] .a similar north / south separation was seen in , for good reason : extreme snowfall events occurring in these two regions typically do not stem from the same precipitation systems . whereas extreme snowfall events on the northern slope of the alps usually arise from northerly or westerly airflows [ ] ,those in the southern slope usually come from the south or south - west . these are less frequent , but when they occur they can be very severe , due to the proximity of the mediterranean sea .as snow cover results from the accumulation of many snowfall events during the winter , one can expect annual maximum snow depths on the northern and southern slopes of the alps to be somewhat disconnected .the winter of illustrates this : little snow fell on the southern slope of the alps , while the northern slope received large amounts .figure [ fig : krigmap_extcoeff ] nevertheless suggests that these two regions are asymptotically weakly dependent , since is generally larger than , but not necessarily asymptotically independent . 
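the following sketch implements a madogram - type estimator of the pairwise extremal coefficient with empirical margins , using the usual relation theta = ( 1 + 2 nu ) / ( 1 - 2 nu ) with nu the f - madogram ; this is a generic reconstruction of the kind of estimator referred to above , not the authors' code .

```python
import numpy as np

def extremal_coefficient_madogram(z1, z2):
    """F-madogram estimate of the pairwise extremal coefficient (sketch):
    theta = (1 + 2*nu) / (1 - 2*nu),  nu = E|F(Z1) - F(Z2)| / 2, with empirical
    margins; theta ~ 1 means complete dependence, theta ~ 2 independence."""
    n = len(z1)
    F1 = (np.argsort(np.argsort(z1)) + 1) / (n + 1.0)   # empirical cdf via ranks
    F2 = (np.argsort(np.argsort(z2)) + 1) / (n + 1.0)
    nu = 0.5 * np.mean(np.abs(F1 - F2))
    return (1.0 + 2.0 * nu) / (1.0 - 2.0 * nu)

rng = np.random.default_rng(0)
a = rng.gumbel(size=43)
print(extremal_coefficient_madogram(a, a))                    # ~1 (dependent)
print(extremal_coefficient_madogram(a, rng.gumbel(size=43)))  # ~2 (independent)
```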
even between well - separated stations , is rarely very close to 2 , perhaps owing to the rather small area under study , in which the largest distance between stations is around km .the spatial dependence highlighted in section [ sec : spatial_dpce ] suggests that we model as a spatial process of extremes .a max - stable process with unit frchet margins is a stochastic process with the property that , if are independent copies of the process , then [ ] a consequence of this definition is that all finite - dimensional marginal distributions are max - stable : if is a finite subset of , then for all , such processes have several representations , two of which we now sketch .a general method of constructing max - stable processes is due to .let denote the points of a poisson process on with intensity , where is an arbitrary measurable set and is a positive measure on .let denote a nonnegative function for which , for all , then the random process is max - stable with unit frchet margins . gives a rainfall - storms interpretation of this construction .he suggests regarding as a space of storm centers , of as the shape of a storm centered at , and of as a storm magnitude .then represents the amount of rainfall received at location for a storm of magnitude centered at and in ( [ eq : constr_dehaan ] ) is the maximum rainfall received at over an infinite number of independent storms .additional assumptions are needed to get useful models from ( [ eq : constr_dehaan ] ) . proposes taking , letting be the lebesgue measure and be a multivariate normal density with mean and covariance matrix , that is , the resulting bivariate distribution of defined by ( [ eq : constr_dehaan ] ) at two stations and is then \\[-8pt ] & & \qquad = \exp\biggl\ { -\frac{1}{z_1 } \phi\biggl(\frac{a}{2}+\frac{1}{a}\log\frac{z_2}{z_1 } \biggr ) -\frac{1}{z_2 } \phi\biggl(\frac{a}{2}+\frac{1}{a}\log\frac{z_1}{z_2 } \biggr ) \biggr\},\nonumber\end{aligned}\ ] ] where is the standard normal distribution function and is the mahalanobis distance given by below we will call this model the smith process . .upper left and ( isotropic case ) .upper right , and ( anisotropic case ) .lower images : corresponding pairwise extremal coefficient . ]two simulated smith processes with different matrices are shown in the top row of figure [ fig : simu_smith ] .the anisotropic case arises when is not spherical , that is , not of the form , where and is the identity matrix of side .the resulting geometric anisotropy [ e.g. , ] can easily be seen by computing pairwise extremal coefficients .taking in ( [ eq : biv_smith ] ) gives , according to ( [ eq : extrcoeff ] ) , the mahalanobis distance appearing in ( [ eq : extcoeff_smith ] ) gives different weights to the different components of the vector .the limiting cases and correspond , respectively , to perfect dependence , , and independence , . for a given station ,surfaces are , according to ( [ eq : extcoeff_smith ] ) , such that ( [ eq : mahaldist ] ) is constant .if is spherical , then such surfaces are circles in two dimensions and spheres in three dimensions . 
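a small sketch of the smith - model pairwise extremal coefficient , theta(h) = 2 phi( a / 2 ) with a the mahalanobis distance , evaluated for a spherical and a non - spherical covariance matrix to contrast the isotropic and anisotropic cases ; the matrices and lag vectors are illustrative .

```python
import numpy as np
from scipy.stats import norm

def smith_extremal_coefficient(h, Sigma):
    """Pairwise extremal coefficient of the Smith model (sketch):
    theta(h) = 2 * Phi(a/2),  a = sqrt(h' Sigma^{-1} h) (Mahalanobis distance)."""
    h = np.asarray(h, dtype=float)
    a = np.sqrt(h @ np.linalg.solve(Sigma, h))
    return 2.0 * norm.cdf(a / 2.0)

iso   = np.array([[1.0, 0.0], [0.0, 1.0]])   # spherical: circular contours
aniso = np.array([[2.0, 0.9], [0.9, 0.6]])   # elliptic contours (anisotropy)
for h in ([1.0, 0.0], [0.0, 1.0], [3.0, 3.0]):
    print(h, smith_extremal_coefficient(h, iso), smith_extremal_coefficient(h, aniso))
```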
otherwise , they are ellipses and ellipsoids , respectively .a second method of construction of max - stable processes was proposed by .let denote the points of a poisson process on with intensity .let be a stationary nonnegative process that satisfies =1 ] for all .he shows that the corresponding bivariate distribution of at two stations and is \\[-8pt ] & & \qquad = \exp\biggl\{-\frac{1}{2 } \biggl(\frac{1}{z_1}+\frac{1}{z_2 } \biggr ) \biggl(1+\sqrt{1 - 2 ( \rho(h)+1 ) \frac{z_1 z_2}{(z_1+z_2)^2 } } \biggr)\biggr\},\nonumber\end{aligned}\ ] ] where is the euclidean distance between the two stations .below we call this max - stable model schlather s process .. left corresponding to switzerland is shown in figure [ fig : simu_schlather ] .the isotropy can be easily seen by computing pairwise extremal coefficients .taking in ( [ eq : biv_schlather ] ) gives , according to ( [ eq : extrcoeff ] ) , here the extremal coefficients involve the euclidean distance between the two locations . for a given station ,surfaces with the same extremal coefficents ] for a group of stations and different high levels .figure [ fig : risk_groupstat_bestmod ] plots such probabilities for different groups when is the -year return level of the unit frchet distribution .by back - transformation from equation ( [ eq : transf_frechet ] ) , this is equivalent to computing the joint survival distributions $ ] where denotes the -year return level at station , that is , the probability that all stations in receive more snow a given year than their -year return level . under independence , this probability equals for any possible set , where is the number of stations in , whereas it equals under full dependence .figure [ fig : risk_groupstat_bestmod ] shows very good agreement between the observed and predicted distributions using the model , whereas the risk is underestimated under the hypothesis of independent stations and overestimated under the hypothesis of full dependence .the underestimation is more striking for quite dependent stations , such as those in the left - hand panel of figure [ fig : risk_groupstat_bestmod ] . when distance increases , the difference between the dependent and independent cases is less striking but our max - stable model fits better even for pairs of stations that are climate distance units apart ; this is almost the largest climate distance between pairs of stations .the right - hand panel corresponds to a group of seven stations in the eastern plateau .our model clearly gives more realistic risk probabilities than does the independence assumption .extreme snow events in the low - elevation plateau generally occur over a large region due to the easy weather circulation .a typical example is the extraordinary snowfall event that occurred on march 5th 2006 over the entire plateau , with snow measurements of cm at zurich , cm at basel and cm at sankt gallen .this was the largest snow depth recorded since 1931 [ ] .stations indicated in green were not used for fitting . ]the models discussed here are a step toward modeling spatial dependence of extreme snow depth .they are based on the and max - stable representations , designed to model extreme snow depth explicitly . in particular , they can account in a flexible way for the presence of weakly dependent regions .they involve a climate transformation that enables the modeling of directional effects resulting from phenomena such as weather system movements . 
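a short sketch of the schlather - model extremal coefficient for an assumed exponential correlation function , together with the naive joint - exceedance probabilities of a t - year return level under independence and under complete dependence that the fitted max - stable model is compared against ; the correlation range , return period and group size are illustrative .

```python
import numpy as np

def schlather_extremal_coefficient(h, rho):
    """theta(h) = 1 + sqrt((1 - rho(h)) / 2) (sketch); with a non-negative
    correlation function theta stays below ~1.707, so exact independence is
    never reached."""
    return 1.0 + np.sqrt((1.0 - rho(h)) / 2.0)

rho_exp = lambda h, lam=50.0: np.exp(-h / lam)      # hypothetical correlation
for h in (0.0, 25.0, 100.0, 400.0):
    print(h, schlather_extremal_coefficient(h, rho_exp))

# joint exceedance of the T-year return level at |D| stations: the two bounds
T, nD = 50, 7
print("independent :", (1.0 / T) ** nD)   # far too optimistic for nearby stations
print("fully dep.  :", 1.0 / T)           # the other extreme
```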
in the proposed methodology ,model fitting is performed by using a profile - like method for maximizing the pairwise likelihood function , and model selection is performed using an information criterion .we applied this methodology to stations with recorded snow depth maxima .performance of the selected model at small and large scales was assessed on these stations , together with other stations , by comparing empirical and predicted distributions of group of stations . by accounting for spatial dependence ,our model gives clearly more realistic probabilities of extreme co - occurrence than would a nonspatial model .such quantities are important for adequate risk management . considered as a whole ,the max - stable models proposed in this paper constitute a family of flexible models that could potentially be applied to other kinds of climate data , in particular , extreme precipitation and temperature .further improvements could nevertheless be investigated , as discussed below . in this paperwe focus on modeling the spatial dependence of extremes , rather than on the marginal distributions .a first step was thus to transform maxima from their original scale to a common unit frchet distribution . in the application to snow depth data ,this transformation was done by using the gev distributions fitted to the time series , considered separately .a fuller spatial model would consider the three gev marginal parameters as response surfaces .using the models presented in this paper , one could then simultaneously estimate the spatial dependence and the spatial intensity of maxima , following and .these authors use simple functions of longitude , latitude and elevation , but the very complex alpine topography results in an extremely variable pattern of snow , and we were unable to find satisfactory marginal response surfaces for our application . describe other approaches that appear to be more satisfactory , but modeling of the margins requires more investigation .time could be used as a covariate in order to allow for the potential impact of climate change on extreme snow events ; for example , the retreat of the glaciers is strongly affecting microclimates at high altitudes .this notwithstanding , exploratory work suggests that although climate change has affected mean snow levels [ ] , its effect on extreme snow events is not yet discernible , except possibly at low elevation [ ] . a second improvement might be the consideration of event times , which could be incorporated into the pairwise maximum likelihood procedure [ stephenson and tawn , ( ) ] . for our data ,the co - occurrence of annual maxima is quite variable . for winters such as those of 1975 and 2006 ,snow depth reached its maximum almost simultaneously all over switzerland .for winters such as those of 1980 , 2007 and 2008 , the annual maxima occurred at quite different dates ; see the supplementary materials , . 
including this information by modifyingthe pairwise likelihood contribution of maxima occurring simultaneously at two stations might yield more precise inferences , as shown in .last but not least , this article has used only snow data gathered from measurements in flat , open and not too exposed fields .extrapolation to steep , windy and forest terrains may thus be unsatisfactory .in particular , preferential deposition of snow [ ] may imply that snow depth on slopes is more extreme than on representative flat fields .this could have important implications for avalanche risk [ ] but could not be considered here due to lack of data .this could be investigated using data from automatic stations located at higher elevations , mostly above 2,200 m , and in various terrains , though such data are unfortunately available only for about ten years .a spatial model for exceedances over high thresholds [ ] would be a valuable addition to the extreme - value toolkit for dealing with spatially - dependent short time series .we thank two referees , an associate editor , the editor and the other project participants , particularly michael lehning , christoph marty , simone padoan and mathieu ribatet , for helpful comments .most of the work of juliette blanchet was performed at the institute for snow and avalanche research , slf davos .
the spatial modeling of extreme snow is important for adequate risk management in alpine and high altitude countries . a natural approach to such modeling is through the theory of max - stable processes , an infinite - dimensional extension of multivariate extreme value theory . in this paper we describe the application of such processes in modeling the spatial dependence of extreme snow depth in switzerland , based on data for the winters 1966 - 2008 at 101 stations . the models we propose rely on a climate transformation that allows us to account for the presence of climate regions and for directional effects , resulting from synoptic weather patterns . estimation is performed through pairwise likelihood inference and the models are compared using penalized likelihood criteria . the max - stable models provide a much better fit to the joint behavior of the extremes than do independence or full dependence models .
the calculation of the basic properties of quantum systems ( eigenstates and eigenvalues ) remains a challenging problem for computational science .one of the most significant issues is the exponential scaling of the computational resource requirements with the number of particles and degrees of freedom , which for even a small number of particles ( ) exceeds the capabilities of current computer systems . in 1982feynman addressed this problem by proposing that it may be possible to use one quantum system as the basis for the simulation of another .this was the early promise of quantum simulation , and one of the original motivations for quantum computing . since that time, many researchers have investigated different approaches to quantum simulation .for example , abrams and lloyd have proposed a quantum algorithm for the efficient computation of eigenvalues and eigenvectors using a quantum computer .many of the investigations into quantum simulation have assumed ideal performance from the underlying components resulting in optimistic estimates for the quantum computer resource requirements ( number of qubits and time to completion ) .it is well known , however , that in order to address the effects of decoherence and other sources of faults and errors in the implementation of qubits and gates it is necessary to incorporate fault - tolerant quantum error correction into an estimate of the resource requirements . in this paperwe estimate the resource requirements for a quantum simulation of the ground state energy for the 1-d quantum transverse ising model , specifically incorporating the impact of fault - tolerant quantum error correction .we apply the general approach of abrams and lloyd , and compute estimates for the total number of physical qubits and computational time as a function of the number of particles ( n ) and required numerical precision ( m ) in the estimate of the ground state energy .we have chosen to study the resource requirements for computing the ground state energy for the 1-d quantum tim since this model is well studied in the literature and has an analytical solution .the relevant details of the tim are summarized in section [ sec : ising ] . in section [ sec : qsim ] , we map the calculation of the tim ground state energy onto a quantum phase estimation circuit that includes the effects of fault - tolerant quantum error correction .the required unitary transformations are decomposed into one qubit gates and two - qubit controlled - not gates using gate identities and the trotter formula .the one - qubit gates are approximated by a set of gates which can be executed fault - tolerantly using the solovay - kitaev theorem . in section[ sec : faulttolerance ] , the quantum circuit is mapped onto the quantum logic array ( qla ) architecture model , previously described by metodi , et al . .our final results , utilizing the qla architecture , are given in section [ sec : oned ] and a discussion of how improving the state of the art in the underlying technology affects the performance for executing the tim problem . in section [ sec : higherdim ] , we extend our resource estimate from 1-d to higher dimensions .since the qla architecture was developed to study the fault - tolerant resource requirements for shor s quantum factoring algorithm , we compare our present results for the tim quantum simulation with previous analysis of the the resource requirements for shor s algorithm , in section v. 
finally , our conclusions are presented in section [ sec : conclusion ] . the 1-d transverse ising model is one of the simplest models exhibiting a quantum phase transition at zero temperature . the calculation of the ground state energy of the tim varies from analytically solvable in the linear case to computationally inefficient for frustrated 2-d lattices . for example , the calculation of the magnetic behavior of frustrated ising antiferromagnets requires computationally intensive monte - carlo simulations . given the difficulty of the generic problem and the centrality of the tim to studies of quantum phase transitions and quantum annealing , the tim is a good benchmark model for quantum computation studies . the 1-d transverse ising model consists of -spin-1/2 particles at each site of a one - dimensional linear lattice ( with the spin axis along the -axis ) in an external magnetic field along the -axis . the hamiltonian for this system , , may be written as : where is the spin - spin interaction energy , is the coupling constant and is related to the strength of the external magnetic field along the -direction , and implies a sum only over nearest - neighbors . and are the pauli spin operators for the spin , and we set throughout this paper . in the present work we focus on the 1-d linear chain tim of n - spins with constant ising interaction energy . the ground state of the system is determined by the ratio of . for the large magnetic field case , the system is paramagnetic with all the spins aligned along the axis , and in the limit of small magnetic field , , the system has two degenerate ferromagnetic ground states , parallel and anti - parallel to the axis . in the intermediate range of magnetic field strength the linear 1-d tim exhibits a quantum phase transition at . the tim hamiltonian in equation [ eqn : isinghamiltonian ] , for the 1-d case with constant coupling can be rewritten as : where the pauli spin operators are replaced with their corresponding matrix operators . for the 1-d tim , the ground state energy can be calculated analytically in the limit of large n . in the case of a finite number of spins with non - uniform spin - spin interactions ( not constant ) , it is possible to efficiently simulate the tim using either the monte - carlo method or the density matrix renormalization group approach . the challenge for classical computers comes from the 2-d tim on a frustrated lattice where the simulation scales exponentially with .
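since the 1-d tim is small enough to diagonalize exactly for a handful of spins , the sketch below builds the hamiltonian with kronecker products and computes the ground - state energy per spin on either side of the transition ; the sign and boundary conventions used here are one common choice , not necessarily the ones adopted above .

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def op(single, site, N):
    """Embed a single-spin operator at `site` of an N-spin chain."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, single if i == site else I2)
    return out

def tim_hamiltonian(N, J, h, periodic=False):
    """H = -J sum sz_i sz_{i+1} - h sum sx_i (sign/boundary conventions vary)."""
    H = np.zeros((2 ** N, 2 ** N))
    bonds = [(i, i + 1) for i in range(N - 1)] + ([(N - 1, 0)] if periodic else [])
    for i, j in bonds:
        H -= J * op(sz, i, N) @ op(sz, j, N)
    for i in range(N):
        H -= h * op(sx, i, N)
    return H

for ratio in (0.5, 1.0, 2.0):        # h/J below, near, and above the transition
    E0 = np.linalg.eigvalsh(tim_hamiltonian(8, 1.0, ratio)).min()
    print(f"h/J = {ratio}:  E0/N = {E0 / 8:.4f}")
```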
applying the quantum phase estimation circuit to calculate the ground state energy of the tim requires physical qubit resources , which scale polynomially with , andthe number of computational time steps is also polynomial in .in addition , just as the complexity of the problem is independent of the lattice dimension and layout when applying classical brute force diagonalization , the amount of resources required to apply the quantum phase estimation circuit is largely independent of the dimensionality of the tim hamiltonian .our approach to estimating the resource requirements for the tim ground - state energy calculation with hamiltonian involves two steps .first , we follow the approach of abrams and lloyd and map the problem of computing the eigenvalues of the tim hamiltonian in equation [ eqn : isinghamiltonian2 ] onto a phase estimation quantum circuit .second , we decompose each operation in the phase estimation circuit into a set of universal gates that can be implemented fault - tolerantly within the context of the qla architecture .this allows us an accurate estimate of the resources in a fault - tolerant environment .the phase estimation algorithm allows one to calculate an -bit estimate of the phase of the eigenvalue of the time evolution unitary operator , where the time is constant throughout the implementation of the phase estimation algorithm .the desired energy eigenvalue of can be computed using by calculating .the value of is determined by the fact that the output from the phase estimation algorithm is the binary fraction , which is less than one . in order to ensure that this result is a valid approximation of the phase , we must set the parameter such that , which corresponds to . for the 1-d tim , the magnitude of the ground - state energy bounded by . in the region near the phase transition , we choose = , which satisfies .the quantum circuit for implementing the phase estimation algorithm is shown in figure [ fig : onecontrolqubit ] .the circuit consists of two quantum registers : an -qubit input quantum register prepared in an initial quantum state , and an output quantum register consisting of a single qubit recycled times .each of the qubits in the input register corresponds to one of the spin- particles in the tim model . at the beggining of each of the steps in the algorithm ,the output qubit is prepared into the state using a hadamard ( h ) gate .the h gate is followed by a controlled power of , denoted with , applied on the input register , where .letting denote to the step in the circuit , each time the output qubit is measured ( meter symbols ) the result is in the bit in the estimate of , following the rotation of the output qubit via the gate : where the gate corresponds to the application of the quantum fourier transform on the output qubit at each step .the result after each of the measurements is an -bit binary string , which corresponds the -bit approximation of given by . using this estimate of , the corresponding energy eigenvalue will be the ground - state energy with probability equal to , where is the ground eigenstate of . to maximize the probability of success , the initial quantum state should be an approximation of the ground state . for arbitrary hamiltoniansthe preparation of an approximation to is generally computationally difficult . 
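the one - control - qubit loop described above can be mimicked with ordinary arithmetic when one assumes the input register holds an exact eigenstate , since each controlled power of the evolution operator then only kicks a scalar phase back onto the control qubit . the sketch below recovers an m - bit phase in this idealized setting , taking each measurement outcome at its most likely value ; the function name , the feedback convention , and the example phase are illustrative , and the conversion from the measured phase back to an energy ( which depends on the chosen evolution time ) is omitted .

import numpy as np

def iterative_phase_estimation(phi, m):
    """recover an m-bit estimate of phi in [0, 1), assuming the input register
    holds an exact eigenstate so the controlled-U^(2^(k-1)) only contributes the
    scalar phase exp(2*pi*i*2^(k-1)*phi). bits are measured least significant first."""
    bits = [0] * (m + 1)                     # bits[k] holds the k-th binary digit of phi
    for k in range(m, 0, -1):
        theta = 2 * np.pi * (2 ** (k - 1)) * phi
        # feedback rotation built from the bits already measured
        omega = 2 * np.pi * sum(bits[j] / 2 ** (j - k + 1) for j in range(k + 1, m + 1))
        # control qubit (|0> + e^{i(theta-omega)}|1>)/sqrt(2), then hadamard and measure
        p0 = np.cos((theta - omega) / 2) ** 2
        bits[k] = 0 if p0 >= 0.5 else 1      # deterministic stand-in for the measurement
    return sum(bits[k] / 2 ** k for k in range(1, m + 1))

# a phase with an exact 8-bit expansion is recovered exactly
print(iterative_phase_estimation(81 / 256, 8))   # 0.31640625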
for certain cases , the preparation can be accomplished using classical approximation techniques to calculate an estimated wavefunction , or using adiabatic quantum state preparation techniques . if the state can be prepared adiabatically , the resource requirements for preparing are comparable in complexity to the resource requirements for implementing the circuit for the phase estimation algorithm shown in figure [ fig : onecontrolqubit ] . for this reason , we focus our analysis on estimating the number of computational time steps and qubits required to implement the circuit , assuming that the input register has already been prepared in the -qubit quantum state . figure [ fig : onecontrolqubit ] in section [ sec : peca ] shows the tim circuit at a high level , involving unitary operators . in this section , each unitary operation of the circuit is decomposed into a set of basic one - and two - qubit gates which can be implemented fault - tolerantly using the qla architecture . the set of basic gates used is , where measure is a single - qubit measurement in the basis , cnot denotes the two - qubit controlled - not gate , and the and gates are single - qubit rotations around the -axis by and radians , respectively . the high - level circuit operations which require decomposition are the controlled- gates and each gate . the controlled- gate can be decomposed using the second - order trotter formula . first , is broken into two terms : , representing the transverse magnetic field , and , representing the ising interactions . by considering the related unitary operators , where we set as discussed in section [ sec : ising ] , we can construct the trotter approximation of , denoted by , as : u(2^m\tau) = \left[\, u_x(\theta)\, u_{zz}(2\theta)\, u_x(\theta) \,\right]^k + \epsilon_t = \tilde{u}(2^m\tau) + \epsilon_t , where and is the trotter approximation error , which scales as . the trotter approximation error can be made arbitrarily small by increasing the integer trotter parameter . since the controlled- corresponds to the bit , must be less than , which is the precision of the measured bit in the binary fraction for the phase . thus , when approximating , is increased until is less than . for a given , we estimate a numerical value for the trotter parameter as a function of , with the constraint that . we thus find that for fixed , scales as . we use the exponent based on to extrapolate for larger . for , we set , which will satisfy the error bound based on the scaling of with . [ figure [ fig : utrotter ] : the controlled unitary approximated using the trotter formula . ] the circuit corresponding to the trotter approximation of is shown in figure [ fig : utrotter ] , where it can be seen that the controlled- is composed of two controlled- operations and a controlled- operation , repeated times and controlled on the instance of the output qubit denoted with . expanding the circuit in figure [ fig : utrotter ] , we can express as : \tilde{u}(2^m\tau) = u_x(\theta)\,\left[\, u_{zz}(2\theta)\, u_x(2\theta) \,\right]^{k-1}\, u_{zz}(2\theta)\, u_x(\theta) , which shows that approximating will require the sequential implementation of controlled- gates , controlled- gates , and two instances of controlled- gates , all controlled on the instance of the output qubit . [ figure [ fig : controlled_ux ] : decomposition of the controlled- gate into single - qubit gates and cnot gates . ] [ figure [ fig : uzz ] : decomposition of the controlled- gate into single - qubit gates and cnot gates . ] the quantum circuits for the decomposition of the controlled- and controlled- gates are shown in figures [ fig : controlled_ux ] and [ fig : uzz ] , respectively . the gates are decomposed into rotations about the -axis and cnot gates .
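to see the role of the trotter parameter numerically , the sketch below compares the exact evolution of a small tim chain against the collapsed second - order product u_x(theta) [ u_zz(2 theta) u_x(2 theta) ]^(k-1) u_zz(2 theta) u_x(theta) and prints the operator - norm error as k grows . the chain length , evolution time , and couplings are arbitrary illustrative values , and the helper names are not taken from the paper .

import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, site, n):
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def split_tim(n, j=1.0, gamma=1.0):
    """return (h_zz, h_x): the ising and transverse-field parts of the 1-d tim."""
    h_zz = sum(-j * op_at(SZ, i, n) @ op_at(SZ, i + 1, n) for i in range(n - 1))
    h_x = sum(-gamma * op_at(SX, i, n) for i in range(n))
    return h_zz, h_x

def trotter2(h_zz, h_x, t, k):
    """collapsed second-order trotter product with theta = t / (2k)."""
    theta = t / (2.0 * k)
    u_x = expm(-1j * theta * h_x)        # u_x(theta)
    u_x2 = u_x @ u_x                     # u_x(2*theta)
    u_zz = expm(-2j * theta * h_zz)      # u_zz(2*theta)
    u = u_x
    for _ in range(k - 1):
        u = u @ u_zz @ u_x2
    return u @ u_zz @ u_x

n, t = 4, 1.0
h_zz, h_x = split_tim(n)
exact = expm(-1j * t * (h_zz + h_x))
for k in (1, 2, 4, 8, 16):
    # the printed error should fall off roughly as 1/k**2
    print(k, np.linalg.norm(trotter2(h_zz, h_x, t, k) - exact, 2))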
qubits are used to prepare an -qubit cat state in order to parallelize each of the gates .the preparation of an -qubit cat state requires cnot gates , which can be implemented in time steps in parallel with the gates in figure [ fig : controlled_ux ] and in parallel with the gates in figure [ fig : uzz ] .the three single - qubit gates ( , , and ) can be approximated using basics gates ( , , ) with the solovay - kitaev theorem .the solovay - kitaev error ( ) is equivalent to a small rotation applied to the qubit . using the results of dawson and nielsen and , to compute the sequence of , , and gates required to approximate each of the three gates . we define as the length of the longest of these three sequences . for =30 , for example , we find that , requiring a sixth order solovay - kitaev approximation .the results of this calculation show that the solovay - kitaev error , in order that the total error , is less than the required precision , when we approximate . as a result scales as .we now have a complete decomposition of the controlled- into the basic gates given in equation [ eqn : basicgates ] . as a function of ,the number of time steps required to implement controlled- and is equal to , and , respectively .following equation [ eqn : utrotter2 ] , the number of time steps required to implement the entire controlled- is , where .each gate in figure [ fig : onecontrolqubit ] is equivalent to at most a rotation by and requires less than gates . in the next sectionwe include fault - tolerant qec into our circuit model and determine the resulting resource requirements , and .we also provide an estimate on how long it could take to implement the tim problem in real - time by taking into account the underlying physical implementation of each gate and qubit in the context of the qla architecture .incorporating quantum error correction and fault - tolerance into the tim circuit design will impact the resource requirements in two ways .first , each of the qubits becomes a _logical qubit _ , that is encoded into a state using a number of lower - level qubits .second , each gate becomes a _ logical gate _, realized via a circuit composed of lower - level gates applied on the lower - level qubits that make - up a logical qubit .each lower - level qubit may itself be a logical qubit all the way down to the physical level .thus , quantum error correction and fault - tolerance increases the number of physical time steps and qubits required to implement each basic gate and may even require additional logical qubits , depending on how each gate is implemented fault - tolerantly and the choice of error correcting code .the resource requirements necessary to implement encoded logical qubits and gates will depend on the performance parameters of the underlying physical technology , the type of error correcting code used , and the level of reliability required per logical operation .the physical technology performance parameters that are taken into account in the design of the qla architecture are the physical gate implementation reliability , time to execute a physical gate , and the time it takes for the state of the physical qubits to decohere .the qla architecture is a tile - based , homogeneous quantum computer architecture based on ion trap technology , employing 2-d surface electrode trap structures .each tile represents a single computational unit capable of storing two logical qubits and executing fault - tolerantly any logical gate from the basic gate set given in equation [ eqn : basicgates 
] . one of the key features of the qla architecture is the teleportation - based logical interconnect , which enables logical qubit exchange between any two computational tiles . the interconnect uses the entanglement - swapping protocol to enable logical qubit communication without adding any overhead to the number of time steps required to implement a quantum circuit . the qla was originally designed based on the requirement to factor -bit integers . this requirement resulted in the need to employ the second - order concatenated steane [[7,1,3]] code . this value was derived by metodi , et al . , by analysis of the ion - trap - based geometrical layout of each logical qubit tile . as the [[7,1,3]] code moves an average of steps during error correction , we find that each level gate has a failure probability of and each level gate has a failure probability of . in our failure probability estimates , we have assumed optimistic physical ion - trap gate error probabilities of per physical operation , consistent with recent ion - trap literature . we also determine the physical resources required for each logical qubit . each level qubit requires ion - trap qubits ( data qubits and ancilla to facilitate error correction ) and each level qubit requires level qubits . given that the duration of each physical operation on an ion - trap device is currently on the order , the time required to complete a single error correction step is approximately ms at level and seconds at level . the number of logical qubits directly maps to the number of computational tiles required by the qla , allowing us to estimate the size of the physical system . similarly , the number of time steps maps directly to the time required to implement the application , since the duration of a single time step in the qla architecture is defined as the time required to perform error correction , as discussed in reference . we define an aggregated metric called the problem size , equal to , which is an upper bound on the total number of logical gates executed during the computation . the inverse of the problem size , , is the maximum failure probability allowed in the execution of a logical gate , which ensures that the algorithm completes execution at least % of the time . taking into consideration the failure probabilities per logical gate , the maximum problem size which can be implemented in the qla architecture is at level error correction , at level , and at level . level error correction is not described in the design of the qla architecture ; however , its implementation is possible since a level qubit is simply a collection of level qubits and the architecture design does not change . the estimated failure probability for each level logical gate is . the parameters and for the tim problem were estimated in section [ sec : dftg ] , where was found to be and is on the order of . the fault - tolerant implementation of the gate , however , requires an auxiliary logical qubit prepared in the state for one time step , followed by four time steps composed of , cnot , , and measure gates , causing the value of and to increase . since many of the gates in the solovay - kitaev sequences approximating the gates are gates , when calculating using equation [ eqn : timcost ] , the value of must take into consideration the increased number of cycles for each gate . all other basic gates are implemented transversally and require only one time step .
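the level - by - level numbers quoted here follow the usual concatenation rule for a distance - 3 code : if a physical gate fails with probability p and the code threshold is p_th , a level - l logical gate fails with probability roughly p_th * ( p / p_th )^( 2^l ) , while the qubit and time overheads multiply by a fixed factor per level . the sketch below encodes that rule ; the threshold , the per - level overhead factors , and the sample physical error rate are illustrative stand - ins rather than the values derived for the qla .

def logical_failure(p_phys, p_th, level):
    """failure probability of a level-`level` logical gate under concatenation
    of a distance-3 code (standard threshold-theorem scaling)."""
    return p_th * (p_phys / p_th) ** (2 ** level)

def overheads(level, qubits_per_level=100, steps_per_level=100):
    """physical qubits per logical qubit and physical steps per logical cycle,
    assuming a constant multiplicative cost per concatenation level
    (both factors here are illustrative placeholders)."""
    return qubits_per_level ** level, steps_per_level ** level

p_phys, p_th = 1e-7, 1e-4          # assumed physical error rate and threshold
for level in (1, 2, 3):
    p_l = logical_failure(p_phys, p_th, level)
    qubits, steps = overheads(level)
    # 1/p_l approximates the largest problem size kq that still completes
    # with reasonable probability at this level of encoding
    print(level, p_l, 1.0 / p_l, qubits, steps)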
the resulting functional layout of the qla architecture for the tim problem is shown in figure [ fig : isingarch ] . the architecture consists of logical qubit tiles . the tiles labeled through are the data tiles which hold the logical qubits used in the -qubit input register , and the `` out '' tile is for the output register . the tiles labeled through are the qubit tiles for the cat state . the tiles are for the preparation of the auxiliary states in the event that gates are applied on any of the data qubits . all tiles are specifically arranged as shown in figure [ fig : isingarch ] in order to minimize the communication required for each logical cnot gate between the control and target qubits . for example , when preparing the cat state using all tiles and the `` out '' tile , cnot gates are required only between the `` out '' tile , , and . similarly , interacts via a cnot gate only with , while interacts only with , during the cat state preparation . [ figure [ fig : ising100_20 ] : number of time steps ( solid line ) and days of computation necessary , assuming a -spin tim problem , as a function of the desired maximum precision . ] the resource requirements for implementing the 1-d tim problem using the qla architecture are given in figure [ fig : ising100_20 ] , where we show a logarithmic plot of the number of time steps ( calculated using equation [ eqn : timcost ] ) as a function of the energy precision , assuming . the figure clearly shows the exponential dependence of on . the dependence of on the number of spins is negligible and appears only in the term in equation [ eqn : timcost ] as , as discussed in section [ sec : dftg ] . in fact , since , we expect very little increase in the value of the total problem size as increases . we see that for no error correction is required . this is because the required reliability per gate of is still below the physical ion - trap gate reliability of . without error correction , the architecture is composed entirely of physical qubits and all gates are physical gates . this means that each single - qubit gate can be implemented directly without the need to approximate it using the solovay - kitaev theorem , resulting in in equation [ eqn : timcost ] , and the total number of qubits becomes instead of . for , error correction is required , resulting in a sudden jump in the number of time steps at , with an additional scaling factor of in due to the dependence of on . in fact , increases so quickly that at that level error correction is required instead of level . at level error correction is required , and while there is no increase in , each time step is much longer , so there is a jump in the number of days of computation . the solovay - kitaev order for is three and increases to order five for . our resource estimates for the 1-d tim problem indicate that requiring multiple levels of error correction , even for modest precision requirements , results in long computational times . as shown in figure [ fig : ising100_20 ] , it takes longer than days , even for , when level error correction is required . when level error correction is required , the estimated time is greater than years . the number of logical cycles , which grows exponentially with , contributes to the long computational times . however , the primary factor contributing to the long computational time is the time it takes to implement a single logical gate using error correction . presently , it is difficult to see how one might reduce the value of , short of implementing a different approach for solving quantum simulation problems .
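a rough way to reproduce the shape of these estimates is to multiply three quantities : the number of logical time steps , the duration of a logical cycle at the chosen error - correction level , and a conversion to days . the sketch below does only that ; the trotter scaling constant , the solovay - kitaev sequence length , the per - slice gate count , and the cycle times are illustrative placeholders , not the fitted values behind equation [ eqn : timcost ] .

def logical_time_steps(n, m, c_trotter=1.0, s_sk=100):
    """toy count of logical cycles for the one-control-qubit tim circuit:
    m phase bits, a trotter parameter assumed to grow like 2**m, of order n
    controlled rotations per trotter slice, and a solovay-kitaev expansion
    of s_sk basic gates per rotation. all constants are stand-ins."""
    k_t = c_trotter * 2 ** m
    rotations_per_slice = 3 * n
    return int(m * k_t * rotations_per_slice * s_sk)

def wall_clock_days(steps, ec_level, level1_cycle_seconds=4e-3):
    """assume each extra level of concatenation lengthens the logical cycle
    by a factor of ~100 (illustrative), then convert seconds to days."""
    cycle = level1_cycle_seconds * 100 ** (ec_level - 1)
    return steps * cycle / 86400.0

steps = logical_time_steps(n=100, m=20)
print(steps, wall_clock_days(steps, ec_level=2))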
on the other hand , the logical gate time can be improved by implementing small changes in three parameters : decreasing the physical gate time , increasing the threshold failure probability , and decreasing the underlying physical failure probability . [ figure [ fig : days_scale ] : days of computation as the physical gate time ( square markers ) , physical failure probability ( starred markers ) , threshold failure probability ( diamond markers ) , and all together ( circular markers ) are improved by a factor of two over 10 iterations . ] the effect of these three parameters on the overall computational time for the 1-d tim problem is shown in figure [ fig : days_scale ] . the figure shows how the total time , in days , for varies as we improve each of the three parameters by a factor of 2 during each of the iterations shown . the starting values for each parameter in the figure are for , for , and for . decreasing the physical failure probability and increasing the threshold values by a factor of during each iteration causes the number of days to decrease quadratically whenever a lower error correction level is required ; otherwise the number of days remains constant from one iteration to the next . a single change in the error correction level from level to level occurs by increasing by a factor of , but there is no gain from additional increases in the threshold alone . decreasing only by a factor of 512 yields two changes in the error correction level . from this analysis , we see that in order to reach a computational time on the order of days with only level error correction , we need to achieve parameter values of , , and ns , or better . this provides goals for the improvement in the device technologies necessary for quantum simulation . it should also be noted that these parameters are not completely independent , and improvements in one of them may result in improvements in the others . for example , improving the physical failure probability may lead to a better threshold failure probability by allowing some of the underlying operations to be weighted against one another . similarly , improving the threshold failure probability may require choosing a more efficient quantum error correcting code , which may have a fundamentally shorter logical time . the 1-d tim ground state energy can be efficiently computed using classical computing resources by taking advantage of the linear geometry of the spin configuration , significantly reducing the effective state space to a polynomial in . a 2-d tim with ferromagnetic and antiferromagnetic ising couplings can be difficult to solve due to spin frustration . many reductions to this problem still yield an exponential number of states with near degenerate energy .
as a result ,the problem size scales exponentially with the size of the lattice .in contrast , the implementation of the quantum phase estimation circuit in figure [ fig : onecontrolqubit ] is largely independent of the geometry of the spin states and the values of and , which suggests that it can be used for implementing efficiently higher - dimensional tim problems .consider , for example , the calculation of the ground state energy for the 2-d villain s model using the phase estimation circuit .villain s model is a 2-d square lattice ising model with spin sites in which the rows have all ferromagnetic coupling and the columns alternate between ferromagnetic and antiferromagnetic .each of the sites in villain s model are represented by qubits in a grid .the only change to the circuit for the phase estimation algorithm is the application of ising interaction , which must be decomposed into two successive steps .first the rows of spin states are treated as the 1-d tim problem in parallel , followed by the columns .since the operations within each step are done in parallel , we still require additional qubits for the cat - states .given that the remaining operations , including the quantum fourier transform implementation , remain the same , the increase in the number of time steps to implement an -spin 2-d tim problem , compared to the 1-d tim problem , is by less than a factor of two .similarly , the increase in the resource requirements between a 1-d and a 3-d tim problem will be by less than a factor of three .since the qla architecture was initially evaluated in the context of shor s quantum factoring algorithm , it would be interesting to consider how the resource requirements for implementing the tim problem compare to those for implementing the factoring algorithm . in this section, we compare the implementation of the two applications on the qla architecture and highlight some important differences between each application .even though both applications employ the phase estimation algorithm , there are several important differences .first , the precision requirements are different . for shor s quantum factoring algorithm ,the precision must scale linearly with the size of the -bit number being factored , where for modern cryptosystems . for quantum simulations , the desired precision is independent of the system size n , and the required m is small compared to factoring .the second difference lies in the implementation cost of the repeated powers of the controlled- gates for each application . in shors algorithm , the gate is defined as .higher order powers of the unitary can be generated efficiently via modular exponentiation .the result is that the implementation of requires times the number of gates used for . for generic quantum simulation problems ,the implementation cost of equals times , because of the trotter parameter .the implementation of the control unitary gates for quantum simulation is not as efficient as that for the modular exponentiation unitary gates .the third difference lies in the preparation of the initial -qubit state .the preparation of for the tim problem by adiabatic evolution is comparable in resource requirements to the phase estimation circuit . for shor s quantum factoring algorithm in the computational basis and is easily prepared. 
corresponds to decimal precision of digits , respectively ., scaledwidth=100.0% ] finally , factoring integers large enough to be relevant for modern cryptanalysis requires several orders of magnitude more logical qubits than the scale of quantum simulation problems considered in this paper . at minimum ,the factoring of an -bit number requires qubits , using the same one - control qubit circuit given in figure [ fig : onecontrolqubit ] .as shown later in this section , however , choosing to use only the minimum number of qubits required for factoring leads to very high error correction overhead .a more reasonable implementation of the factoring algorithm requires number of logical qubits , which corresponds to millions of logical qubits for factoring a -bit number .quantum simulation problems require significantly less computational space and the problems considered in this paper require less than logical qubits .we examine how these differences affect the relative size of the qla architecture required to implement each application . in particular , figure [ fig : testplot ] shows the performance of qla - based quantum computers in kq space with fixed physical resources .each horizontal line corresponds to the kq limit for a qla - based architecture modeled for factoring a -bit number ( top - most horizontal dashed line ) , a -bit number , a -bit number , and an -bit number , respectively .the physical resources for each qla - n quantum computer ( where bits ) are determined by how many logical qubits at level error correction are required to implement the quantum carry look - ahead adder ( qcla ) factoring circuit , which requires logical qubits and logical cycles .the plateaus in each qla- line of figure [ fig : testplot ] represent using all of the qubits at a specific level of encoding , with the top - most right - hand plateau representing level .where the lines are sloped , the model is that only a certain number of the lower level encoded qubits can be used . 
once this reaches the number of qubits that can be encoded at the next level , the quantum computer is switched from encoding level to by using all the available level qubits .figure [ fig : testplot ] shows that a qla- quantum computer is capable of executing an application using level encoded qubits if the application instance is mapped _ underneath _ the line representing the computer at level .factoring a -bit number , for example , falls directly on the level portion of the qla- line ( see the square markers ) .anything above that line can not be implemented with the qla- computer .similarly , factoring a -bit number maps under the qla- line , but can be accomplished using level qubits .the tim problem is mapped onto figure [ fig : testplot ] for , , and several binary precision instances : .as expected , factoring requires many more logical qubits , however , both applications require similar levels of error correction .a decimal precision of up to digits of accuracy ( ) can be reached by using a quantum computer capable of factoring an -bit number at level error correction , however higher precision quickly requires level error correction .the resources for implementing quantum factoring with one - control - qubit were calculated following the circuit in figure [ fig : onecontrolqubit ] , where the unitary gates are replaced with the unitary gates corresponding to modular exponentiation , as discussed in reference .the results are shown with the diamond - shaped markers in figure [ fig : testplot ] .while this particular implementation is the least expensive factoring network in terms of logical qubits , the high precision requirement of makes this network very expensive in terms of time steps .in fact , the number of time steps required pushes the reliability requirements into level error correction for factoring even modestly - sized numbers .in this paper , the tim quantum simulation circuit was decomposed into fault - tolerant operations and we estimated the circuit s resource requirements and number of logical cycles as a function of the desired precision in the estimate of the ground state energy .our resource estimates were based on the qla architecture and underlying technology parameters of trapped ions allowing us to estimate both , as a function of the level of the error correction level , and the total length of the computation in real - time .our results indicate that even for small precision requirements is large enough to require error correction .the growth of is due to its linear dependence on the the trotter parameter , which scales exponentially with the maximum desired precision . in order for to scale polynomially with the precision ,new quantum simulation algorithms are required or systems must be chosen where the phase estimation algorithm can be implemented without the trotter formula . the linear dependence of the number of time steps on is due to the fact that and the do not commute .however , there are some physical systems , whose hamiltonians are composed of commuting terms , such as the nontransversal classical ising model , which has a solution to the partition function in two dimensions but is np - complete for higher dimensions . in those cases ,trotterization is unnecessary . in future work , we intend to generalize the calculations of the resource requirements to other physical systems and consider different ways to implement the phase estimation algorithm that limit its dependence on the trotter formula .
we estimate the resource requirements , namely the total number of physical qubits and the computational time , required to compute the ground state energy of a 1-d quantum transverse ising model ( tim ) of spin-1/2 particles , as a function of the system size and the numerical precision . this estimate is based on analyzing the impact of fault - tolerant quantum error correction in the context of the quantum logic array ( qla ) architecture . our results show that , due to the exponential scaling of the computational time with the desired precision of the energy , a significant amount of error correction is required to implement the tim problem . a comparison of our results to the resource requirements for a fault - tolerant implementation of shor s quantum factoring algorithm reveals that the required logical qubit reliability is similar for both the tim problem and the factoring problem .
the present document introduces the reader to the angle , where is the golden ratio , and its involvement , most notably , in the construction of several interesting aggregates of regular tetrahedra . in the sections below, we will perform geometric rotations on tetrahedra arranged about a common central point , common vertex , common edge , as well as those of a linear , helical arrangement known as the boerdijk - coxeter helix ( tetrahelix ) . in each of these transformations, the angle above appears in the projections of coincident tetrahedral faces .noteworthy about these transformations is that they have a tendency to bring previously separated faces of `` adjacent '' tetrahedra into contact and to impart a periodic nature to previously aperiodic structures .additionally , after performing the rotations described below , one observes a reduction in the total number of _ plane classes _ , defined as the total number of distinct facial or planar orientations in a given aggregation of polyhedra .in this section we describe the construction of several interesting aggregates of regular tetrahedra .the aggregates of sections [ s : comedge ] and [ s : comvertex ] initially contain gaps of various sizes . by performing special rotations of these tetrahedrathese gaps are `` closed '' ( in the sense that faces of adjacent tetrahedra are made to touch ) , and , in each case , the resulting angular displacement between coincident faces is either identically equal to or is closely related . in section [ s : helix ] , a rotation by is imparted to tetrahedra arranged in a helical fashion in order to introduce a periodic structure and previously unpossessed symmetries . 0.3 is produced in the projection of a `` face junction.'',title="fig : " ] 0.3 is produced in the projection of a `` face junction.'',title="fig : " ] 0.3 is produced in the projection of a `` face junction.'',title="fig : " ] consider aggregates of _ n _ regular tetrahedra , , arranged about a common edge ( so that an angle of is subtended between adjacent tetrahedral centers , see figure [ f : fga ] for an example with five tetrahedra ) . in each of these structures , gaps exist between tetrahedra that may be `` closed '' ( i.e. 
, faces are made to touch ) by performing a rotation of each tetrahedron about an axis passing between the midpoints of its central and peripheral edges through an angle given by where is the tetrahedral dihedral angle and .when this is done , an angle , , is established in the `` face junction '' between coincident pairs of faces such that [ f : tg ] 0.3 case ) to `` close up '' gaps between adjacent tetrahedra .when this operation is performed , the angle is produced in the projection of a `` face junction.'',title="fig : " ] 0.3 case ) to `` close up '' gaps between adjacent tetrahedra .when this operation is performed , the angle is produced in the projection of a `` face junction.'',title="fig : " ] 0.3 case ) to `` close up '' gaps between adjacent tetrahedra .when this operation is performed , the angle is produced in the projection of a `` face junction.'',title="fig : " ] [ f : twg ] 0.3 about an axis passing from the central vertex through each tetrahedron s exterior face .when this is done , an angle of is produced in the projection of faces in a `` face junction.'',title="fig : " ] 0.3 about an axis passing from the central vertex through each tetrahedron s exterior face .when this is done , an angle of is produced in the projection of faces in a `` face junction.'',title="fig : " ] 0.3 about an axis passing from the central vertex through each tetrahedron s exterior face .when this is done , an angle of is produced in the projection of faces in a `` face junction.'',title="fig : " ] the present document is focused on the angle , which is , in fact , the angle obtained in the `` face junction '' produced by executing the above procedure for tetrahedra ( see figure [ f : fivegroup ] ) .it is interesting , however , that a simple relationship may be established between this angle and . by evenly arranging three tetrahedra about an edge and rotating each through the axis extending between the central and peripheral edge midpoints , the face junction depicted in figure [ f : tgc ]is obtained .the angle between faces in this junction , , may be related to in the following way : to see this , note that gives the solution , which reduces the right hand side of to . in this section , we have produced two aggregates of tetrahedra whose face junctions bear a relationship to the angle . in the section that follows, we will locate this angle in the face junctions produced through rotations of tetrahedra about a common vertex .0.3 0.3 0.3 consider the icosahedral aggregation of 20 tetrahedra depicted in figure [ f : twga ] .the face junction of figure [ f : twgc ] is obtained when each tetrahedron is rotated by an angle of about an axis extending between the center of its exterior face and the arrangement s central vertex .as above , this operation `` closes '' gaps between tetrahedra by bringing adjacent faces into contact .interestingly , the face junction obtained here consists of tetrahedra with a rotational displacement equal to the one obtained in the case of five tetrahedra arranged about a common edge above , i.e. , .( it should be noted , however , that and are not produced by equations and , respectively , as those formul are only valid for . ) in all of the cases described above , gaps are `` closed '' and `` junctions '' are produced between adjacent tetrahedra in such a way that the angle appears in some fashion in the angular displacement between coincident faces . 
for the case of 5 tetrahedra about a central edge and 20 tetrahedra about a common vertex , this angle is observed directly . for the case of 3 tetrahedra about a central edge , the angular displacement between faces is closely related : . we now turn to an arrangement obtained by directly imparting an angular displacement of between adjacent pairs of tetrahedra in a linear , helical fashion known as the boerdijk - coxeter helix . an interesting result of performing this action is that a previously aperiodic structure is transformed into one with translational and rotational symmetries . in sections [ s : comedge ] and [ s : comvertex ] we described a procedure by which initial arrangements of tetrahedra were transformed so that adjacent pairs of tetrahedra were brought together to touch . in each of these structures , coincident faces are displaced by an angle equal to or closely related to . here , we will construct two periodic , helical chains of tetrahedra by directly inserting an angular offset of between each successive member of the chain . for their close relationship with the boerdijk - coxeter helix , we refer to these structures by the term _ modified bc helices _ . the construction of a modified bc helix is depicted in figure [ f : philix ] . starting from a tetrahedron , a face is selected onto which an interim tetrahedron , , is appended . the tetrahedron is obtained by rotating through an angle about an axis normal to , passing through the centroid of . ( note that this automatically produces an angular displacement of between two faces in a `` junction , '' see figure [ f : bcjunction ] . ) the structure that results from this process depends on the sequence of faces selected in order to construct the helical chain . this sequence determines an _ underlying chirality _ of the helix ( i.e . , the chirality of the helix formed by the tetrahedral centroids ) and plays a pivotal role in the determination of the structure s eventual symmetry . ( however , it should be noted , of course , that some sequences of faces do not result in helical structures . faces can not be chosen arbitrarily or randomly ; they must be selected so as to build a helix . ) by performing the procedure depicted in figure [ f : philix ] , using an angular displacement of between successive tetrahedra , periodic structures are obtained with 3- or 5-fold symmetry ( upon their projections , see figures [ f:5bchelix ] and [ f:3bchelix ] ) , depending on the relative chiralities between the rotational displacement and the underlying helix : when * like * chiralities are used one obtains 5-fold symmetry ; when * unlike * chiralities are used one obtains 3-fold symmetry . in addition to rotational symmetry , these structures are given a linear period , which we quantify here as the number of appended tetrahedra necessary to return to an initial angular position on the helix . for a modified bc helix with a period of _ m _ tetrahedra , we use the term _m_-bc helix . accordingly , the procedure described above produces 3- and 5-bc helices , which are shown in figure [ f : mbchelices ] . ( see for a proof of these structures symmetries and periodicities . ) [ table [ t : planeclasses ] : plane class numbers for the aggregates described in section [ s : aggregates ] . ] we have seen the construction of several aggregates of tetrahedra . each of these structures contains tetrahedra with coincident faces , offset angularly by or a closely related angle ( ) . we will now explore some of the interesting features of these structures .
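the appending - and - twisting step described above is easy to state concretely : reflect the apex of the current tetrahedron through the chosen face to obtain the face - sharing interim tetrahedron , then rotate that interim tetrahedron about the face normal through the face centroid . the sketch below implements one such step ; the starting tetrahedron , the function names , and the demonstration value of the twist angle are illustrative ( the golden - ratio - based angle used in the text is not reproduced here ) , and the face - selection sequence needed to build an entire helix is not shown .

import numpy as np

def reflect_through_plane(p, q0, normal):
    """reflect point p through the plane containing q0 with unit normal."""
    return p - 2.0 * np.dot(p - q0, normal) * normal

def rotate_about_axis(p, origin, axis, angle):
    """rodrigues rotation of point p about the line through `origin` along `axis`."""
    k = axis / np.linalg.norm(axis)
    v = p - origin
    return (origin + v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1 - np.cos(angle)))

def append_tetrahedron(tet, face, beta):
    """append a regular tetrahedron onto the face given by vertex indices `face`,
    then twist it by `beta` about the face normal through the face centroid."""
    face_pts = tet[list(face)]
    apex = tet[[i for i in range(4) if i not in face]][0]
    centroid = face_pts.mean(axis=0)
    n = np.cross(face_pts[1] - face_pts[0], face_pts[2] - face_pts[0])
    n = n / np.linalg.norm(n)
    new_apex = reflect_through_plane(apex, centroid, n)   # interim tetrahedron's apex
    new_tet = np.vstack([face_pts, new_apex])
    return np.array([rotate_about_axis(p, centroid, n, beta) for p in new_tet])

beta = 0.3   # arbitrary demonstration angle; substitute the paper's golden-ratio angle
tet0 = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
tet1 = append_tetrahedron(tet0, (0, 1, 2), beta)
print(tet1)   # a regular tetrahedron whose bottom face is coplanar with, and
              # twisted by beta relative to, the chosen face of tet0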
as already noted , it is interesting that appears in the face junctions of structures generated by `` closing '' gaps between tetrahedral aggregates .it is additionally interesting that when this angle is employed in the construction of a helical chain of tetrahedra , a periodic structure emerges ( whereas the canonical bc helix has no non - trivial translational or rotational symmetries ) . the features we will highlight in this section involve the reduction in the overall number of `` plane classes '' and the linear displacements between facial centers in a face function . here, we say that two planes belong to the same _ plane class _ if and only if their normal vectors are parallel .the _ number of plane classes _ for a collection of tetrahedra , then , is defined as the number of distinct plane classes comprising the collection s two - dimensional faces . by rotating tetrahedra so as to bring faces into contact ( as in sections [ s : comedge ] and [ s : comvertex ] ) , or rotating tetrahedra to obtain periodicity ( as in section [ s : helix ] ) the overall number of plane classes for an aggregateis reduced .clearly , as the values of , , and are such that they bring faces of adjacent tetrahedra into contact , we would expect to see a reduction in the number of plane classes in the corresponding aggregations of tetrahedra featured in sections [ s : comedge ] and [ s : comvertex ] .it is interesting , however , that rotation of the tetrahedra in a bc helix by ( observed in the face junctions of figures [ f : fgc ] , [ f : tgc ] , and [ f : twgc ] ) obtains a reduction from an arbitrarily large number of plane classes ( , where is the number of tetrahedra in the helix ) to relatively small numbers : 9 plane classes in the case of the 3-bc helix , 10 plane classes in the case of the 5-bc helix .table [ t : planeclasses ] provides the numbers of plane classes for the tetrahedral aggregates described in section [ s : aggregates ] before and after their transformations . , where is the tetrahedron edge length , the displacement between tetrahedra of a face junction for 5 tetrahedra about a common edge ( pictured center right ) .from left to right the displacements between the tetrahedra are , , , and . ] finally , an appealing feature is observed in the face junction projections of the tetrahedral aggregates discussed in this paper .figure [ f : facejunctions ] provides a side - by - side comparison of these face junctions .as the angular displacement between tetrahedra in all face junctions is related to , we can see that translation of a tetrahedron in one junction can produce any of the other junctions .let the displacement between tetrahedra in the face junction of 5 tetrahedra about a common edge be denoted by , where is the tetrahedron edge length .starting from the face junction of a 3 or 5-bc helix , the remaining face junctions corresponding to 20 tetrahedra about a vertex , 5 tetrahedra about an edge , and 3 tetrahedra about an edge may be obtained by translating a tetrahedron of the junction by , , and , respectively .in this paper we have presented the construction of several aggregates of tetrahedra . 
in each case , the construction process involved rotations of tetrahedra by a value related to . the structures produced here have several notable features : faces of tetrahedra are made to touch ( `` closing '' previously existing gaps between tetrahedra ) , aperiodic structures are imparted with periodicity , and the total number of plane classes is reduced . the purpose of the present document , however , is not merely descriptive ; it is hoped that these notable features have generated the reader s interest in rotational transformations of tetrahedra involving the angle . in particular , it is hoped that further transformations may be found which bring the faces of tetrahedral aggregates into contact and reduce the number of plane classes .
in this paper we present the construction of several aggregates of tetrahedra . each construction is obtained by performing rotations on an initial set of tetrahedra that either ( 1 ) contains gaps between adjacent tetrahedra , or ( 2 ) exhibits an aperiodic nature . following this rotation , gaps of the former case are `` closed '' ( in the sense that faces of adjacent tetrahedra are brought into contact to form a `` face junction '' ) while translational and rotational symmetries are obtained in the latter case . in all cases , an angular displacement of ( or a closely related angle ) , where is the golden ratio , is observed between faces of a junction . additionally , the overall number of _ plane classes _ , defined as the number of distinct facial orientations in the collection of tetrahedra , is reduced following the transformation . finally , we present several `` curiosities '' involving the structures discussed here with the goal of inspiring the reader s interest in constructions of this nature and their attending , interesting properties . = 1
auction theory ( cf . ) typically seeks to optimize the seller s _ expected _ revenue , which presumes that the seller is _ risk - neutral_. the focus of this work is to identify good auction mechanisms for sellers who care about the riskiness of the revenue in addition to the magnitude of the revenue .there is an inherent trade - off between the magnitude and riskiness of revenue .consider the auction of a single - item to a bidder whose valuation is drawn from the uniform distribution over the interval ] .the concavity of the utility function models _ risk - aversion_. for instance , the optimal single - bidder mechanism for the utility function sets a price and maximizes the expected utility .increasing the concavity of the utility function increases the emphasis on risk - aversion the optimal price for the cube - root utility function is .the linear utility function models a _ risk - neutral _ seller . _the goal of this paper is to identify truthful mechanisms that are simultaneously good for the class of all risk - averse agents , i.e. , we look for mechanisms that yield near - optimal expected utility for all possible concave utility functions ._ a useful byproduct of such a guarantee is that we do not need to know the seller s utility function in order to deploy the mechanism .this is useful when the auctioneer is conducting the auction on behalf of a seller ( as in the case of ebay ) , when the seller does not know its utility function precisely , or when the seller s risk attitude changes with time .the following example illustrates the challenge in the context of a single - item single - bidder auction .consider two sellers with utility functions , which expresses risk - neutrality , and for some very small , which expresses strong risk - aversion .suppose , as before , that there is a single bidder whose valuation is drawn from the uniform distribution with support ] .notice that the expectation is over the bids ( or valuations ) , which is the standard auction objective in bayesian revenue maximization .we model the risk - attitude of a specific seller by endowing the seller with a concave utility function .we will assume throughout that this utility function is monotone and normalized in the sense that .then the expected utility of w.r.t .a utility function is ] ( or ] , i.e. 
, at a price of at least , which implies that we get at most utility for .we first show that the virtual value based approach employed by myerson for the risk - neutral case extends to risk - averse single - item auctions , but not ( to the best of our knowledge ) to auctions of two or more items ( see section [ sec : benchmark ] ) .we then present three results .first , when the supply is unlimited ( or equivalently , the number of items is equal to the number of bidders ) , we identify a mechanism called the hedge mechanism that is a universal -approximation ( see theorem [ thm : unlimited ] ) .the ratio improves to nearly with the assumption that the distribution satisfies a standard hazard rate condition .the hedge mechanism is a posted - price mechanism , which offers every bidder a take - it - or - leave - it offer in a sequential order so long as supply lasts .we choose the price to be less than the optimal price for a risk - neutral seller so as to guarantee a good probability of sale to any bidder at a good revenue level .moreover , this mechanism is the best possible in the sense that no mechanism can be a universal -approximation for ( see theorem [ thm : lb ] ) .this impossibility result identifies a certain heavy - tailed regular distribution , called the left - triangle distribution that exhibits the worst - case trade - off between riskiness and magnitude of revenue over all regular distributions .second , when the supply is limited ( number of items is less than the number of bidders ) , we identify a sequential posted - price mechanism that gives a universal -approximation by modifying the hedge mechanism to handle the supply constraint ( see theorem [ thm : limited - regular-1 ] ) .the key to this modification is to use a certain limited supply auction to guide the choice of the posted price .third , we will show that the vcg mechanism yields a universal approximation ratio close to under moderate competition , i.e. , when is a reasonable multiple of ( see theorem [ thm : vcg ] ) .recall that for a -item auction the vcg mechanism is a -st price auction , in which the top bidders win and get charged the -st highest bid .we prove our result by establishing a probability bound for the -st order statistic of i.i.d. draws from a regular distribution .myerson identifies the optimal single - item mechanism for a risk - neutral seller and has inspired a large body of work ( cf .chapter 13 from ) .there is some work that deals with risk in the context of auctions .eso identifies an optimal mechanism for a risk - averse seller , which always provides the same revenue _ at every _bid vector by modifying myerson s optimal mechanism ; unfortunately , this mechanism does not satisfy ex - post ( or even ex - interim ) individual rationality , and charges bidders even when they lose .maskin and riley identifies the optimal bayesian - incentive compatible mechanism for a risk - neutral seller when the _ bidders _ are risk - averse . in our model , we identify mechanisms that are ex - post incentive compatible .so the buyers optimize their utility bidding truthfully for every realization of the valuations , and thus have no uncertainty or risk to deal with .hu et al . 
studies risk - aversion in single - item auctions .specifically , they show for both the first and second price mechanisms that the optimal reserve price reduces as the level of risk - aversion of the seller increases .in contrast , we identify the optimal truthful mechanism for a risk - averse seller in a single - item auction in section [ sec : benchmark ] ( it happens to be a second price mechanism with a reserve ) , study auctions of two or more items and identify mechanisms that are simultaneously approximate for all risk - averse sellers . an alternative simpler model of risk different from the one we adopt is to optimize for a trade - off between the mean and the variance of the auction revenue , i.e. , - t \cdot var[r] ] , is equal to the expected virtual valuation served ] , which is equal to , which is ] is upper - bounded by the utility function applied to the expected revenue ) ] , because a utility function is monotone .so we have the following : [ fact : upperbound ] for any mechanism , and any concave utility function , the expected utility of is upper - bounded by the utility function applied to the expected revenue of myerson s mechanism , i.e. , \leq u(e_{\mathbf{v}}[rev(mye,\mathbf{v})]) ] .we now use the bounds in the previous two lemmas to complete the proof of the theorem .[ thm : unlimited]in a multi - unit auction with unlimited supply , where bidders valuations are drawn i.i.d . from a regular ( or m.h.r ) distribution ,the mechanism is a universal ( or )-approximation .we prove for the regular case ; for the proof of the m.h.r .case we simply use the bound from lemma [ lem : mhr ] instead of the bound from lemma [ lem : regular ] . fix a concave utility function . for each bidder ,let 0 - 1 random variable indicate whether bidder s bid is at least . \\ & \geq & e[\frac{\sum_{i}x_{i}}{n}]\cdot u(np^{*}q^ { * } ) \\ & \geq & 0.5\cdot u(np^{*}q^ { * } ) \\ & \geq & 0.5\cdot \text{optimal expected utility}\end{aligned}\ ] ] the first step is because the sale price is .the second step is by monotonicity and concavity of and because .the third step is by lemma [ lem : regular ] , and hence \geq n/2 ] by . as revenue is monotonically decreasing as price goes down from to ,the revenue of is minimized when the offer price is . by lemma [ lem : regular ]the resulting revenue is at least ; integrating over all and completes the proof . [ cla : q_hat]let be the allocation probability of any fixed bidder .then lies in the interval ] and hence , . by definition of ,each bidder s bid is at least with probability at least , and so , \geq 0.5 n ] .now we can define our mechanism ( for the limited - supply case ) .the hedge mechanism is an spm which makes a take - it - or - leave - it offer at price to bidders one by one , as long as the supply lasts .[ thm : limited - regular-1]in a multi - unit auction with items and bidders , where bidders valuations are drawn i.i.d . from a regular distribution ,the mechanism is a universal -approximation to optimal expected utility .notice that the revenue of is , where is the number of bidders who bid at least , which is a binomial variable with parameter .hence =qn\geq 0.5 k ] .clearly =qn\geq 0.5 k ] , which is at least .next let , and hence . 
by a result of , one of the median of , and hence \geq0.5 ] , and our claim follows .we now complete the proof of theorem [ thm : limited - regular-1 ] .( of theorem [ thm : limited - regular-1 ] ) the expected utility of : & \geq & e_{\mathbf{v}}[u(p\cdot\min(y , qn ) ) ] \\ & \geq & e_{\mathbf{v}}[u(pqn)\cdot\frac{\min(y , qn)}{qn}]\\ & \geq & 1/4 \cdot u(pqn ) \\ & \geq & 1/4 \cdot u(rev(vcg_{r } ) ) \\ & \geq & 1/8 \cdot u(rev(vcg_{p^{*}}))\\ & \geq & 1/8 \cdot \mbox{optimal expected utility } \\\end{aligned}\ ] ] the second step is by concavity of , the fourth step is by monotonicity of the utility function with the following additional justification . for any bidder , she wins with probability in . on the other hand ,the optimal way to maximize expected revenue subject to the constraint that she wins with probability is to set a single price and get expected revenue .the fifth step is by lemma [ lem : vcg - discount ] .applying fact [ fact : upperbound ] completes the proof .we do not have an analog of theorem [ thm : lb ] for the limited supply case .we do not know if our analysis is tight ( though we can tweak various parameters to improve the ratio slightly ) or if it possible to identify a better posted - price mechanism .in this section , we quantify the universal approximation ratio of the vcg mechanism in multi - unit auctions .this is useful because the vcg mechanism ( -st price auction ) or a variation of it with a reserve price is often used in practice .we first restrict our attention to single - item auctions .the main result of this subsection is that the vickrey mechanism is a universal -approximation when there are bidders .[ thm : vickrey ] for a single item auction with bidders , when valuations are drawn i.i.d . from a regular distribution ,the vickrey mechanism is a universal -approximation to optimal expected utility .this theorem is a generalization of a result of dughmi et al . , which was for the risk - neutral case .most of the proof steps are similar , and so we only mention the proof structure , which is also used in the next section .let be the mechanism which first runs the utility - optimal mechanism on the bidders , and then allocates the item for free to the other bidder in case it is still available .our theorem follows from three statements .first , the revenue ( and hence utility ) of on bidders is equal to that of on bidders .second , among all mechanisms that always sell the item , including vickrey and , vickrey maximizes the winner s valuation and hence virtual utility , and therefore by the characterization of lemma [ lem : virtual_utility ] , vickrey on bidders has a higher expected utility than that of on bidders .third , as we will show more more generally in lemma [ lem : removing - k - bidders ] , the optimal expected utility from bidders is at least fraction of that from bidders .these three statements altogether imply our theorem . in this sectionwe prove a result analogous to theorem [ thm : vickrey ] for multi - unit auctions .[ thm : vcg]in a multi - unit auction with items and bidders , where bidders valuations are drawn i.i.d . 
from a regular distribution ,the vcg mechanism is a universal -approximation to optimal expected utility .the result implies that as long as the number of bidders is a small multiple of the number of items , the universal approximation ratio of vcg mechanism is close to .the proof structure is similar to that of theorem [ thm : vickrey ] , but the details are different because lemma [ lem : virtual_utility ] does not extend to the multi - unit case ( as discussed in section [ sec : benchmark ] ) .recall that the revenue of the vcg mechanism is exactly times the -st highest bid ( let the -th highest bid be 0 ) .the following probability bound on the -st highest bid is crucial to our analysis .[ lem : tail ] for any regular distribution , and , let be the -th largest of i.i.d .random draws from , then \geq 1/4 ] , where is the -th largest valuation of i.i.d. draws from .this new distribution has corresponding revenue function for ] for such distributions . given any regular distribution , let ) ] .in other words , is the line segment that is tangent with at . by concavity of , we have for all ] .therefore for all , =pr[q_{t , n}\leq 1-f(y)] ] . then to show that \geq pr[\tilde{y}\geq e[\tilde{y}]] ] , or simply that .recall that for all .therefore for all , and hence \geq e[y] ] .so .now we prove that \geq 1/4 ] .therefore )=\frac{t-1}{n} ] .note that for i.i.d. draws from the uniform distribution over ] , and by properties of binomial distribution , ] , which by the classic bulow - klemperer result and the monotonicity of is at most ) ] . by lemma [ lem :tail ] , we have \geq1/4 ] .the following claim bounds the loss of optimal utility in dropping bidders .[ lem : removing - k - bidders]suppose valuations of bidders are drawn i.i.d . from a regular distribution .the optimal expected utility when selling items to bidders is at least fraction of the optimal expected utility when selling bidders to bidders .let be a utility - optimal mechanism for selling items to bidders .for any subset of bidders , let random variable be the revenue we collect from in .then the expected utility of running on all bidders is $ ] .suppose we randomly select a set of size .then we have : \\ & \geq & e_{\mathbf{v}}[e_s[u(r_{n})\cdot\frac{r_{s}}{r_{n}}]]\\ & = & e_{\mathbf{v}}[u(r_{n})\cdot e_s[\frac{r_{s}}{r_{n}}]]\\ & = & ( 1-\frac{k}{n})\cdot e_{\mathbf{v}}[u(r_{n})]\end{aligned}\ ] ] here the inequality is by the concavity of and that , and the second equality is due to the fact that every bidder s revenue is accounted in with probability . by an averaging argument , for some set of bidders , and for some fixed bids of bidders outside of , the mechanism induced on has expected utility that is at least fraction of the expected utility of running on all bidders .our lemma follows because the utility - optimal mechanism on bidders can only do better than this induced mechanism .now theorem [ thm : vcg ] follows by chaining the inequalities from lemma [ lem : bk - utility ] and claim [ lem : removing - k - bidders ] .in this paper , we identify truthful mechanisms for multi - unit auctions that offer universal constant - factor approximations for all risk - averse sellers , no matter what their levels of risk - aversion are . we hope that this paper spurs interest in the design and analysis of mechanisms for risk - averse sellers .we see several open directions . 
for instance , identifying better mechanisms for the auction settings studied in this paper , identifying mechanisms for more combinatorial auction settings , and designing online mechanisms that adapt prices based on previous sales .we conclude by singling out a specific challenge : can we characterize the utility - optimal mechanism for a seller with a fixed known utility function ?what if the seller s utility function has additional structure for instance , it satisfies constant ( absolute or relative ) risk aversion ?( section [ sec : benchmark ] discusses how the standard approach from myerson does not work for multi - item auctions . )10 j. bulow and p. klemperer .auctions versus negotiations . , 86(1):180194 , 1996 .s. chawla , j. hartline , d. malec , and b. sivan .sequential posted pricing and multi - parameter mechanism design . in _ proc .41st acm symp . on theory of computing ( stoc ) _ ,e. h. clarke .multipart pricing of public goods ., 11:1733 , 1971 .p. dhangwatnotai , t. roughgarden , and q. yan .revenue maximization with a single sample . in _ proc .12th acm conf . on electronic commerce ( ec ) _ , 2010 .s. dughmi , t. roughgarden , and m. sundararajan .revenue submodularity . in _ec 09 : proceedings of the tenth acm conference on electronic commerce _ , pages 243252 , new york , ny , usa , 2009 .p. eso and g. futo .auction design with a risk averse seller ., 65(1):7174 , october 1999 . c. ewerhart. optimal design and -concavity .working paper , 2009 .a. goldberg and j. hartline .collusion - resistant mechanisms for single - parameter agents . in _ proc .16th acm symp . on discrete algorithms _ ,2005 . t. groves .incentives in teams . , 41:617631 , 1973 .a. hu , s. a. matthews , and l. zou .risk aversion and optimal reserve prices in first and second - price auctions . working paper , 2010 .e. s. maskin and j. g. riley .optimal auctions with risk averse buyers ., 52(6):14731518 , november 1984 .r. myerson .optimal auction design ., 6(1):5873 , 1981 .n. nisan , t. roughgarden , e. tardos , and v. v. vazirani . .cambridge university press , new york , ny , usa , 2007 .j. b. r. kaas .mean , median and mode in binomial distributions ., 34:1318 , 1980 .m. rothschild and j. e. stiglitz . increasing risk : i. a definition ., 2(3):225243 , september 1970 .m. rothschild and j. e. stiglitz . increasing risk ii : its economic consequences . ,3(1):6684 , march 1971 .w. vickrey .counterspeculation , auctions , and competitive sealed tenders ., 16:837 , 1961 .
the existing literature on optimal auctions focuses on optimizing the _ expected revenue _ of the seller, and is appropriate for risk-neutral sellers. in this paper, we identify good mechanisms for _ risk-averse _ sellers. as is standard in the economics literature, we model the risk-aversion of a seller by endowing the seller with a monotone concave utility function. we then seek robust mechanisms that are approximately optimal for all sellers, no matter what their levels of risk-aversion are. we have two main results for multi-unit auctions with unit-demand bidders whose valuations are drawn i.i.d. from a regular distribution. first, we identify a posted-price mechanism called the hedge mechanism, which gives a universal constant-factor approximation; we also show for the unlimited-supply case that this mechanism is in a sense the best possible. second, we show that the vcg mechanism gives a universal constant-factor approximation even when the number of bidders is only a small multiple of the number of items. along the way we point out that myerson's characterization of the optimal mechanisms fails to extend to utility maximization for risk-averse sellers, and establish interesting properties of regular distributions and monotone hazard rate distributions. [ economics ]
the imaging of bone structures is usually done using x - ray or computed tomography ( ct ) . however , ionizing radiation , scanner time cost , and lack of portability are the limitations of these modalities. ultrasound may address these issues in many applications .ultrasound imaging of bone tissue has been investigated in different clinical procedures , _e.g. _ , registration of bone in neurosurgeries and orthopedics , guidance for diagnosis of skeletal fractures in emergency rooms , and pain management interventions . particularly , in some applications , dealing with the spine is of interest , _guidance for minimal invasive ( mi ) procedures in spinal surgery , and for administration of spinal anesthesia .+ ultrasound imaging is a valuable modality for enhancing the safety of different puncture techniques in regional anesthesia .these procedures are mostly performed landmark based or blind .ultrasound can facilitate these routines by visualizing the spinal anatomy , assisting to locate the puncture region before performing the injection procedure .further , ultrasound can be used as a real - time modality for needle trajectory control , or more effective placement of medication . however , in epidural injections , the spinal structures obstruct the ultrasound beams and makes the images noisy . +another potential application of ultrasound is computer - assisted minimally invasive ( mi ) spinal surgery .the procedure may require the registration of the patient positioned for surgery with preoperatively acquired images .the restriction of minimal invasiveness , together with limited radiation exposure , point at ultrasound imaging as a good candidate .the other important procedure in mi spine surgery is the accurate localization of the target vertebra .conventionally , localizing a vertebral level is performed by manual palpation and direct fluoroscopy .thus , surgeons identify a specific anatomical landmark such as the sacrum , and , then , start counting under fluoroscopic control up to the targeted vertebral level .this approach exposes the patient to an undesirable level of radiation , and is prone to counting errors due to the similar appearance of vertebrae in projection images .alternatively , ultrasound can improve patient safety and decrease the risk of wrong level surgery .+ in general , bone imaging using conventional ultrasound techniques is prone to higher level of artifacts in comparison with soft tissue imaging . in the case of the spine, images are filled with acoustical noise , and artifacts that can impede visualization of important features , and also make it hard to detect and segment the bone structure .image enhancement , where bone structures stand out more distinctly from surrounding soft tissue , helps to isolate the bone surface out of the b - mode ultrasound .+ to automate segmentation of the bone structure , image intensity or gradient - based methods are common , but results are sensitive to the parameters of image acquisition , e.g. frequency and dynamic range . pattern recognition or statistical shape models provide more robust results but require learning sets , and fail to identify traumatic cases as the pattern searched for is disrupted .+ the visual interpretation of images is strongly related to the phase of the underlying signal . such that the image features ( e.g. edges , corners , etc . )occur at parts of the image where the fourier components are maximally in phase with one another . 
based on local phase information ,a research group has presented a robust method for bone surface detection .they use 2-d log - gabor filters to derive the phase symmetry ( ps ) measure a ridge detector for bone localization and automatic segmentation of bone surfaces in ultrasound images .this technique detects the major axis of symmetry of the signals , and its performance may degrade with the performance of the reconstruction method .+ in standard medical ultrasound the images are reconstructed based on the das beamforming technique . in this technique received signals from active channels are dynamically delayed and summed in the beamformer . in this case , the achievable resolution , sidelobes level and contrast are limited . instead , using an adaptive method , such as minimum variance ( mv ) based beamforming techniques , can enhance the image quality as a result of lower sidelobes , a narrower beamwidth , and superior definition of edges . in the mv approach , for each time sample , the delayed received signal from each element is weighted adaptively before summing up in the beamformer .this approach was initially developed by capon for passive narrow - band applications .+ several researchers have previously investigated the mv approach in medical ultrasound .they have reported appreciable enhancements in the resolution and contrast in comparison with das beamforming .further , in a simulation study an eigenspace - based mv ( esmv ) technique has been employed in order to improve the contrast of the mv beamforming in medical ultrasound imaging .this technique has been developed based on earlier studies in radar imaging .previous work by our group has demonstrated that in bone imaging scenarios , the robustness of the mv beamformer degrades due to a poor estimation of the covariance matrix .the forward backward ( fb ) averaging technique has been proposed in order to enhance the covariance matrix estimation against signal misalignment due to the shadowing .more recently , we have investigated the potential of an esmv beamforming technique to enhance the edges of the acoustically hard tissues .we have also shown that by reducing the signal subspace rank the bone edges are improved . since the rank estimation is a challenge in esmv beamformers , in this studywe show that the use of a rank one signal subspace can reasonably well preserve the vertebra anatomy and enhance the bone edges in spinal imaging .the constructed images may be less appealing from a visual perspective , but the goal here is to achieve advantages for post - processing methods such as phase symmetry . 
in simulation , in - vitro , and in - vivo studies , we demonstrate that the extracted surfaces from the rank-1 esmv images are sharper , and the anatomy of the spine is better defined in comparison with their corresponding das images .+ the rest of this paper is organized as follows : in the next section , we first review the beamformer techniques , and the phase symmetry ridge detection method that are employed in this study ; then , simulation and experimental setups are introduced .we present the results from simulated data of a point scatterer and vertebra phantoms , followed by results from ct - us registration of a vertebra phantom , and in - vivo images of the spine .this section is followed by the discussion on the results .the minimum variance beamformer employs an element weight vector which minimizes the variance of the beamformer output under the constraint that the signal arriving from a point of interest is unaffected by the beamformer . in this method , the optimized weights are estimated as : where is the spatial covariance matrix , is the steering vector , and stands for hermitian transpose .a common estimator for the data covariance matrix is the sample covariance matrix .therefore , using a method called subarray technique , the sample covariance matrix is estimated as : \,{{{\bf{\bar x } } } _ { l } } { { [ { n - k}]}^{h } } } } , \label{eq : eq2}\ ] ] where = { \left [ { \begin{array}{*{20}{c } } { { { x } _ { l}}[n ] } & { { { x } _ { { l } + 1 } } [ { n } ] } & \ldots & { { { x } _ { { l } + l - 1 } } [ { n } ] } \\ \end{array } } \right]^t.}\ ] ] the sample covariance matrix has dimension , , the subarray length , $ ] is a time sampled signal from element of a uniformly spaced linear array with elements , and is transpose operator . in general , there is a time averaging over index which has been found to be necessary in order to get proper speckle statistics in the image . the subarray techniquecan be combined with forward - backward averaging to improve the covariance matrix estimation .the new estimate is expressed as : where is an exchange matrix , the left / right flipped version of the identity matrix , with the same dimension as , and denotes the complex conjugate of . substituting with either or in ( [ eq : eq1 ] ), the beamformer output is obtained as a coherent average over subarrays by : = \frac{1}{m -l + 1}\sum\limits _ { l = 1}^{m -l+1 } { { { \bf{w}}^h}}{{{\bf{\bar x } } } _ { l } } [ n ] , \label{eq : eq4}\ ] ] where , is a vector of time varying complex weights of size .also , in order to enhance the robustness of the mv estimate a term , , is added to the diagonal of the covariance matrix before evaluating ( [ eq : eq1 ] ) .there are many details about mv beamforming algorithms applied to medical ultrasound imaging , which have been addressed in previous publications . in this paperwe use the method that is described in .the eigenspace - based beamformer ( esmv ) utilizes the eigen structure of the covariance matrix to estimate mv weights . with assumption of , the sample covariance matrix defined by ( [ eq : eq2 ] )is eigendecomposed as : where ,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\bf{e}_{\rm{n } } } = [ { { \bf{e}}_{{\it j } + 1}}, ... ,{{\bf{e}}_{{\it l}}}],\\ & { \bf{\lambda}_{\rm{s } } } = { \it diag}[{\it{\lambda _,{\lambda _ j}}],\,\,{{\bf{\lambda}_{\rm{n } } } } = { \it diag}[{\it{\lambda _ { j + { \rm 1}}}, ... 
,{\lambda _ { l } } } ] , \label{eq : eq6 } \end{split}\ ] ] and , are eigenvalues in descending order , and , are the corresponding orthonormal eigenvectors .we refer to the subspace spanned by the columns of as the signal subspace and to that of as the noise subspace .ideally , the direction of the steering vector and the noise subspace are orthogonal , i.e. .this will result in a weight vector as : equation [ eq : eq7 ] can be interpreted as the projection of on the signal subspace of .we select the rank of the signal subspace employing the cross - spectral metric .the output signal power of the minimum variance beamformer can be expressed based on the cross - spectral metric as given in chapter 6.8.2 of . where is the cross - spectral metric for the eigenvalue .we select the rank of by identifying the largest eigenvalues for which the sum of their cross - spectral metric is times smaller than the total output signal power ( ) .the images were obtained from different beamforming techniques using matlab ( the mathworks , natick , ma , u.s ) , and resampled to isotropic pixels to form the basis for further image processing by phase symmetry filtering .we also implemented the phase symmetry algorithm in matlab .a log - gabor filter was defined in polar coordinates as the product of a radial factor by an angular factor : \cdot \exp \left [ { - \frac{1}{2 } \cdot { { \left ( { \frac{{\alpha ( \theta , { \theta _ 0})}}{{{\sigma _ \alpha } } } } \right)}^2 } } \right ] , \label{eq : eq9}\ ] ] where and are the coordinates in the fourier - transformed image , the characteristic radius , and the radial standard deviation .for the radial part , we choose empirically as 0.15 .the angular factor , shows the angle between the position vector and the direction of the filter , and is angular standard deviation that is assumed to be in this study .the two - dimensional fast fourier transform of the image is multiplied by the filter and their product is inversely transformed by . + a bank of filters is used with different and in order to enhance features of the image of different sizes and orientations . for each image , we use filters consisting of the combinations of several characteristic radius exponentially distributed from to in pixel space , and 6 characteristic orientations ( 0 , , ) , distributed around the main direction of the ultrasound beam ( downward ) .the filters and the corresponding filtered images were marked by the index .+ at each point of the imaging field , the real and imaginary parts of the filtered images are combined to form a metric of the phase symmetry ( ps ) : where denotes and and are the even and odd ( real and imaginary ) part of the image processed by filter , is a noise threshold ( dimensionless ) and is included simply to avoid division by zero ( ) .the asymmetrical treatment of even and odd components reflects a polarity choice where only dark - to - light - to - dark features are detected .there are more details about the ps method which have been addressed in previous publications .+ the threshold , angular and radial standard deviations are chosen empirically to provide images with the least noise ; yet retaining the most information . they are maintained identical for all images .the central radial frequency combinations are adjusted to best fit different applications . 
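a compact sketch of this filtering step is given below. the radii, orientations, noise threshold and radial bandwidth ratio in the sketch are illustrative placeholders rather than the tuned values used in this study, and only the positive part of the even response is kept, matching the dark-to-light-to-dark polarity choice described above.

import numpy as np

def phase_symmetry(img, radii=(0.05, 0.10, 0.20), thetas=None,
                   kappa=0.15, sigma_alpha=np.pi / 6.0, T=10.0, eps=1e-3):
    # 2-d log-gabor filter bank applied in the frequency domain; the real and
    # imaginary parts of each filtered image are the even and odd responses.
    # kappa is the ratio sigma_r / w0 of the radial gaussian on a log axis.
    if thetas is None:
        # six orientations distributed around the beam direction (downward)
        thetas = np.pi / 2.0 + np.linspace(-np.pi / 3.0, np.pi / 3.0, 6)
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho = np.hypot(fx, fy) + 1e-12            # radial frequency coordinate
    phi = np.arctan2(fy, fx)                  # angular frequency coordinate
    F = np.fft.fft2(img)
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for w0 in radii:
        radial = np.exp(-0.5 * (np.log(rho / w0) / np.log(kappa)) ** 2)
        for t0 in thetas:
            dphi = np.angle(np.exp(1j * (phi - t0)))        # wrapped angle difference
            G = radial * np.exp(-0.5 * (dphi / sigma_alpha) ** 2)
            resp = np.fft.ifft2(F * G)
            even, odd = resp.real, resp.imag
            num += np.maximum(even - np.abs(odd) - T, 0.0)  # keep only positive even responses
            den += np.hypot(even, odd)
    return num / (den + eps)

applying this function to a beamformed envelope image and rescaling the result reproduces the kind of ps images shown in the figures, up to the parameter tuning described next.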
for the patient imaging of the sagittal lamina view, we used and , for simulations , the sagittal spinous process view , and the transversal lamina view , to , and for the water bath images we used , to .further , we set the noise threshold for the water bath images , and for the rest of images . in this study ,we simulate two different phantoms using field ii : a single point scatterer phantom , and a vertebra phantom .the vertebra phantom consists of a vertebra body that is embedded in the soft tissue .we use the simulation scenario proposed in for the vertebra phantom .we assume that the bone structure is completely attenuating .therefore , it shadows point scatterers and surfaces which are not directly visible to the imaging aperture .the 3d geometry of the vertebra body is obtained by ct scanning of a human lumbar vertebra specimen ( fig .[ fig1 ] ) . by utilizing matlab and vtk( kitware , new york , ny , u.s ) the 3d vertebra dataset has been segmented into triangular surfaces .then , equally weighted and spaced point scatterers are generated on the triangulated surfaces with a concentration of 200 scatterers / mm .the soft tissue is modeled by equal amplitude point scatterers that are uniformly distributed in a region of mm .the number of scatterers per resolution cell exceeds 10 , which is recommended to simulate speckle .the scatterers that are inside the vertebra body are identified and removed from the phantom .the image of the shadowed surfaces and point scatterers are modified by introducing a binary apodization - based shadowing model .this model is applied to field ii in order to make an image of the vertebra phantom .+ we simulate images employing a linear array with 128 elements and a center frequency of 5 mhz ( ) with 60 percent db fractional bandwidth. the array s elevation focus is 19 mm , and its pitch equals 0.308 mm .the maximum accessible aperture size for this array transducer is 38.70 mm ( = 128 ) .the array is excited by 1.5 periods of a square wave at the center frequency of the array . in all simulations , a beam density of 1 beam per element , a fixed transmit focus , and dynamic receive focusingin addition , the f number in the transmit is set to fn = 2.8 , while the receive f number is set to fn = 2.5 for the point scatterer phantom , and fn = 1.5 for the vertebra phantom .we select a large fn for the point scatterer phantom imaging scenario in order to achieve a wide enough beam width to ease further analysis .the transmit focal depth is set to 15 mm unless otherwise specified .the channel data are acquired for each scan line with a sampling frequency of 100 mhz .for all beamformers after applying delays the channel data are down - sampled to 20 mhz .we computed the analytic signals by applying the hilbert transform to the channel data .consequently , in the das approach the delayed received channel data are summed up for each scan line , without any apodization , whereas for mv - based beamformers the optimal aperture weights are estimated for each time sample before summation . in the adaptive approaches ,we use diagonal loading with in all simulations .we have 2 different experimental cases : registration of a single vertebra , and imaging the spine of a volunteer . 
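in both the simulations and the experiments that follow, the adaptive images are formed sample by sample as described above. a minimal numpy sketch of this per-sample computation (subarray covariance estimation, forward-backward averaging, diagonal loading, and the rank-1 eigenspace projection) is given below; the subarray length and the particular form of the loading term are illustrative assumptions rather than the exact settings of this study.

import numpy as np

def esmv_sample(x, L, rank=1, delta=1e-2):
    # x: pre-delayed complex channel data (length M) for one image sample
    M = len(x)
    K = M - L + 1
    X = np.column_stack([x[l:l + L] for l in range(K)])    # L x K subarray snapshots
    R = X @ X.conj().T / K                                 # sample covariance estimate
    J = np.fliplr(np.eye(L))
    R = 0.5 * (R + J @ R.conj() @ J)                       # forward-backward averaging
    R = R + delta * np.real(np.trace(R)) / L * np.eye(L)   # diagonal loading (assumed form)
    a = np.ones(L)                                         # steering vector for pre-delayed data
    Ri_a = np.linalg.solve(R, a)
    w_mv = Ri_a / (a.conj() @ Ri_a)                        # minimum-variance weights
    vals, vecs = np.linalg.eigh(R)                         # eigenvalues in ascending order
    Es = vecs[:, -rank:]                                   # signal subspace (rank 1 here)
    w = Es @ (Es.conj().T @ w_mv)                          # eigenspace projection of the weights
    return np.mean(w.conj() @ X)                           # coherent average over subarrays

# usage on a synthetic pre-delayed channel vector; the das value for the same
# sample is the plain mean of the delayed channels (the sum up to a scale factor)
rng = np.random.default_rng(0)
x = 3.0 + 0.5 * (rng.standard_normal(48) + 1j * rng.standard_normal(48))
print(abs(esmv_sample(x, L=16)), abs(np.mean(x)))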
in the first experiment we use a human lumbar vertebra specimen ( l3 ) and align the 3d - ct dataset to the 3d - us one .thus , we secure the vertebra specimen in a rigid holder and glue 4 small plastic balls ( fiducials ) with a diameter of 2 mm on the vertebra body ; two on the spinous process ( top ) and two on the lamina. positions of these fiducials are illustrated in fig .[ fig1](a ) . + the 3d us volumeis constructed from 2d us slices acquired from imaging the vertebra specimen in a water bath , and by moving the probe using a 2d robot in the elevation direction by a step of 0.5 mm [ fig .[ fig1](a ) ] .the constructed 3d - us volume consists of voxels with a dimension of 0.077 mm.077 mm.5 mm .subsequently a ct dataset of the vertebra specimen is prepared using a high - resolution ct imager ( siemens , somantom definition flash ) .this results in a ct volume of voxels with a resolution of ( 0.19 mm.19 mm.3 mm ) . for registration ,the coordinates of the fiducials tip are manually selected both in us and ct datasets .a landmark - based rigid registration algorithm is used to transform the ct dataset in order to match the 3d - us volume .the ct slices are resampled to the in - plane us resolution .since ct - us registration is performed , the bone iso - surfaces are extracted from the ct volume employing the marching cubes algorithm in vtk .this is expected to match the ones in us and can be used as the gold standard ( gs ) reference .an empirically chosen thresholding value of -524 hounsfield unit ( hu ) is used to extract the surface profile from the ct slices .we measure the registration accuracy by calculating the fiducial registration error ( fre ) .+ to compare our images with the gold standard ct surface profile , a signed distance distribution of the us intensity values is computed .first , us images are mapped to their corresponding ct image , and the normal distance of non - zero intensity pixels are computed with respect to the extracted gs profile .the pixels located inside the gs profile have positive and the pixels located outside the gs profile have negative distance values .this produces a set of intensity/ signed distance pairs .the high intensity values around the zero distance indicates the bone localization accuracy , and the concentration of the intensity values in positive / negative distances shows the noise level inside / outside of the bone surface .+ in the in - vivo experiments , we use a male healthy volunteer . his lumbar vertebra ( l2 ) is scanned in three different planes : sagittal plane of the spinous process , and sagittal and transversal planes of the lamina . for scanning the spinous process , we use a 10 mm stand - off ( sonaraid , wolhusen , lucerne , switzerland ) in order to improve the matching between probe and skin .the scans were preformed after obtaining signed consent from the volunteer . 
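the signed-distance analysis used to compare the beamformed images against the registered ct gold standard, described earlier in this section, can be sketched as follows. the gold-standard surface is assumed to be available as a filled binary mask of the bone interior on the same pixel grid as the us image, the euclidean distance to the nearest surface pixel is used as a proxy for the normal distance, and the function and variable names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_pairs(us_img, bone_mask, pixel_mm=0.077):
    # bone_mask: boolean array, True for pixels inside the gold-standard profile.
    # the signed distance is positive inside the profile and negative outside,
    # converted to mm assuming isotropic in-plane pixels.
    sd = distance_transform_edt(bone_mask) - distance_transform_edt(~bone_mask)
    sel = us_img > 0                      # keep non-zero intensity pixels only
    return us_img[sel], sd[sel] * pixel_mm

# example of the "region A" selection around the bone surface used below
# intensities, dists = signed_distance_pairs(ps_img, bone_mask)
# region_a = intensities[(dists >= -0.5) & (dists <= 2.1)]

the distribution of these intensity/signed-distance pairs is what is reported in the figures and the table below.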
+ in the experimental studies , channel data are acquired using a sonixmdp scanner ( ultrasonix medical corporation , vancouver , british columbia , canada ) , along with a linear array transducer ( l14 - 5/38 ) with 128 elements , centre frequency of 5 mhz , and pitch of 0.308 mm .we use 256 imaging beams which are transmitted with fn = 2.8 , and received with fn = 1.5 .further , the receive aperture walks with the transmit aperture , meaning that the active receive elements are centered on the transmit beam axes .the sonixdaq ( ultrasonix medical corporation , vancouver , british columbia , canada ) is used to capture the channel data .this module allows us to store rf data acquired from 128 elements simultaneously . for the beamforming ,the channel data related to each beam is first determined and delayed .then , the esmv beamforming method is applied to construct images of interest . as for the simulations, is used for the diagonal loading purpose .further , after construction of the images a 2d median filter with a window size of is applied to smooth the images .fig . [ fig2 ] demonstrates the effect of using only the largest eigenvalue on the image of a point scatterer .[ fig2](a ) shows the das image of the simulated point scatterer .[ fig2](b ) presents the esmv image when only the largest eigenvalue is used for estimating the signal subspace ( ) . in comparison with fig .[ fig2](a ) , the point scatterer is defined with higher resolution and the sidelobe level is decreased . in fig .[ fig2](c ) , it is assumed that all eigenvalues contribute in the signal subspace except the largest one ( ) . in this scenario , the image of the point scatterer is completely distorted . in fig .[ fig2](d ) the beam profiles corresponding to figs .[ fig2](a ) - ( c ) are compared . in this figureit can be seen that using in the esmv beamformer results in a -12 db beamwidth of 0.35 mm .this value is about 0.8 mm for das .ideally , the sidelobe levels are decreased from -30 db in das to -95 db for esmv .also , it can be seen that when is excluded a major part of the mainlobe between 0 and -20 db is removed [ fig .[ fig2](d ) ] . fig .[ fig3 ] shows simulated images of the vertebra phantom introduced in the simulation setup section and its corresponding phase symmetry images for different beamformers . in this imaging scenario ,the transmit focal depth is 15 mm .[ fig3](a ) shows the das image of the vertebra phantom .[ fig3](b ) - ( d ) present esmv images for different eigenvalue threshold values ( ) . in fig .[ fig3](d ) , is selected to ensure that just the largest eigenvalue is used . it can be seen that by decreasing the speckle pattern in the neighboring region of the vertebra body is distorted , and for it is almost removed , especially between the depths of 25 mm and 32 mm .this effect can be partly seen around the spinous process ( top of the vertebra ) at a depth of 15 mm .[ fig3](e ) - ( h ) show ps images related to figs [ fig3](a ) - ( d ) .it can be seen that by decreasing the bone boundaries become sharper . in fig .[ fig3](h ) , we observe that a larger segment of the lamina is detected between a depth of 26 mm and 29 mm in box a , and between a depth of 23 mm and 26 mm in box b in comparison with fig .[ fig3](e ) . figs .[ fig4 ] and [ fig5 ] show the ct gold standard surface profile overlaid on the ultrasound images for the two different vertebra slices of fig .[ fig1](b ) . 
in this registration setup ,the fre value is calculated as 0.13 mm .[ fig4](a ) and ( b ) show the das and esmv images of slice 1 .the ct profile matches well on outer boundary of the vertebra in both images . in the das image [ fig .[ fig4](a ) ] the sidelobe noise is clearly observed around the spinous process between a depth of 15 mm and 20 mm . also , the sidewall boundaries are stretched due to the shadowing effect , whereas in the esmv image the sidelobe noise is decreased and the boundaries are enhanced . in figs . [ fig4](c ) and ( d ) a deviation of the surface from the gold standard surface is observed , particularly on spinous process ( top of the vertebra ) . in fig . [ fig4](c ) the curvature of spinous process profile has been distorted , whereas the anatomy of the vertebra is preserved reasonably well in fig .[ fig4](d ) .further , the acoustical noise observed inside the bone , between a depth of 35 mm and 40 mm , is reduced in fig .[ fig4](d ) than that of fig .[ fig4](c ) .+ figs .[ fig5](a ) and ( b ) show the das and esmv images of slice 2 , and figs .[ fig5](c ) and ( d ) demonstrate their corresponding ps images . comparing the das and esmv based us images ,the bone edges are improved in fig .[ fig5](b ) in comparison with fig .[ fig5](a ) .further , comparing to the gold standard surface profile , in fig .[ fig5](d ) the anatomy of the spinous process is preserved whereas it is distorted in fig .[ fig5](c ) . also , in fig .[ fig5](d ) , the detected surface is sharper in comparison with fig .[ fig5](c ) . + fig .[ fig6 ] presents the distribution of intensity values and their corresponding signed distances for the images in fig .each graph is divided into three regions : a , b , and c. the region a indicates the intensity distribution around the bone surface , defined between -0.5 mm and 2.1 mm in us images , and between 0.1 mm and 2.1 mm in ps images . both us and ps images corresponding to the esmv beamforming technique have less noise level in the regions b and c ( table . [tab : tabel0 ] ) , and a narrower distribution in the region a in comparison with those of the das beamforming technique . comparing the ps images ,the mean surface localization error , calculated in region a , is 0.90 mm ( std = 0.85 mm ) for das and 0.79 mm ( std = 0.77 mm ) for esmv .+ fig .[ fig7 ] presents the distribution of intensity values with their corresponding signed distance for the images in fig .the regions a , b , and c are defined the same as in fig .the noise levels in the regions b and c of the esmv image are almost 16 and 13 of that in the das image . comparing figs . [ fig6](c ) and ( d ) , we observe that the concentration of intensity values are much less in regions b , and c in the esmv image ( ps ) than in the das image ( ps ) .the mean localization error is 0.97 mm ( std = 0.57 mm ) for das and 0.95 mm ( std=0.45 mm ) for esmv .table [ tab : tabel0 ] shows quantitative results for the image quality assessment of the vertebra slices in figs .[ fig4 ] and [ fig5 ] .this table shows the bone surface localization errors and the noise level in the us and ps images obtained from the das and esmv beamforming techniques . in fig .[ fig8 ] , we present two different image lines of the us and ps images presented in fig .these lines are marked in fig .[ fig4](c ) . in figs .[ fig8](a ) - ( d ) the location of the bone surface obtained from the gold standard reference is marked by vertical dash - dot lines . 
in fig .[ fig8](a ) , there is a peak bias of 1.12 mm for both das and esmv , which is measured relative to the gold standard reference .the mean intensity of acoustical noise , measured between a depth of 30 mm and 40 mm , is decreased from 69.01 in das to 14.33 in esmv .the profile widths at an intensity value of 200 are 1.03 mm for das and 0.73 mm for esmv .[ fig8](b ) shows a horizontal image line at = 29.80 mm .the peaks at 18.30 mm and 24 mm indicate the left - hand and right - hand sidewalls at the corresponding depth .the profile width at a pixel intensity of 150 is 0.75 mm for esmv and 1.39 mm for das for the right - hand sidewall . fig .[ fig8](c ) presents the ps scan - lines corresponding to fig .[ fig8](a ) . in this figure ,the profile width measured at an intensity value of 200 is 0.44 mm for das , and 0.30 mm for esmv .further , the intensity level drops by 129 for das and by 218 for esmv between a depth of 17.45 mm and 18 mm .also , the mean noise level is 24.50 for das and 4.67 for esmv between a depth of 30 mm and 40 mm . in fig .[ fig8](d ) , the profile width at an intensity level of 100 is 0.51 mm for das and 0.34 mm for esmv , measured around the right - hand sidewall . .[ cols="^,^,^,^,^,^,^,^,^,^ " , ] [ tab : tabel0 ] fig .[ fig9 ] presents two different image lines of the us and ps images in fig .these lines are marked in fig .[ fig5](c ) . in fig .[ fig9](a ) , the mean noise level , measured between a depth of 30 mm and 40 mm , is decreased from 69.01 in das to 14.33 in esmv .[ fig9](b ) shows a horizontal image line at = 31 mm . in this figure , the peaks at 19.1 mm and 23.5 mm indicate the left - hand and right - hand sidewalls at the corresponding depth .the signal sensitivity at the right - hand and left - hand sidewalls are 126 and 136 for esmv , and 55 and 81 for das . in fig .[ fig9](c ) , the profile width measured at an intensity value of 150 is 0.59 mm for das and 0.29 mm for esmv .the mean noise levels measured between a depth of 17.45 mm and 18 mm are 20.35 and 4.62 for das and esmv . in fig .[ fig8](d ) , the profile width at an intensity level of 50 is 0.71 m for das and 0.45 mm for esmv for the left - hand sidewall .[ fig10 ] - [ fig12 ] demonstrate a qualitative comparison between ps images obtained from das and esmv beamformers .[ fig10 ] shows images of a lamina in the sagittal direction .[ fig10](a ) corresponds to the das image and fig .[ fig10](b ) demonstrates the esmv image for .it can be seen that in the esmv image , the amount of the speckle around the bone surface is reduced .[ fig10](c ) and ( d ) show ps images obtained from figs .[ fig10](a ) and ( b ) .it is observed that the esmv beamformer improves the bone surface and results in a thinner definition of the bone boundary . also on the left - hand side of the das image ( marked with a white arrow ) some unwanted features are observed , which have been removed in fig .[ fig10](d ) .+ figs .[ fig11 ] shows sagittal plane images of spinous process .11(a ) shows the das image .11(b ) demonstrate the esmv image with . comparing with fig .10(a ) , in this image the speckle around the bone surface is reduced while the structure of the bone is preserved .11(c ) shows the ps image obtained from the das image . in this image the bone surface is smeared out and the boundaries are not well delineated , whereas in fig .11(d ) the bone surface is reasonably well isolated from the connective tissue on the top of the surface . 
in fig .10(c ) , the bone boundary , on both side of the spinous process marked with white arrows , is thick and unclear . in comparison , in fig .11(d ) the bone boundary is sharper and a prolongation of the surface is observed .in a similar manner in fig .[ fig11](d ) the sharpness of the bone surface is increased for smaller , and the surface is somewhat better isolated from the connective tissue in comparison with fig .[ fig11](c ) .[ fig12 ] shows an image of the lamina in the transversal direction . comparing the us images, we observe a superior isolation of the bone surface from surrounding soft tissue in the esmv image .comparing the ps images , in fig .[ fig12](d ) , we observe a delineation of the facet joint on left -hand side , and the lamina boundary on the right - hand side .further , in fig . [ fig12](d ) , an improved isolation between the facet joint and the lamina , and a sharper definition of the bone boundaries is observed .there is a potential for the esmv beamformer to enhance the bone edges in us images , but the performance of this beamformer depends on the signal subspace estimation . from figs .[ fig3](b ) - ( d ) , we observe that by using a small threshold value the bone structure is preserved while the speckle in its neighborhood is reduced .this effect which has been discussed in can give rise to images with enhanced edges but distorted speckle patterns [ figs .[ fig3](a ) - ( d ) ] . a very small thresholding value results in a rank-1 signal subspace [ fig .[ fig3](d ) ] , i.e. just the largest eigenvalue is used for the signal subspace estimation in ( [ eq : eq7 ] ) .thus , since detection of edges is the main purpose , regardless of the speckle pattern , a rank-1 signal subspace can enhance the bone edges images obtained from the esmv beamformer [ figs .[ fig10](b ) - [ fig12](b ) ] .this is beneficial for post - processing techniques , e.g. the phase symmetry method , for extracting or locating the bone surfaces .+ in the simulated images , fig .[ fig3 ] , because of the specular reflection , some parts of the vertebra sidewalls are missed . in fig .[ fig3](a ) , the coherent scattering from the perpendicular surfaces to the beams result in echoes with the higher intensities , e.g. , the the spinous process top , and parts of the lamina located at mm . in this simulation setup , for each triangle , the scatterers are equally spaced and located in - plane , and all have equal scattering strengths .that is , roughness effects are not considered in these images .however , the angle between the triangular surface elements can partly introduce the roughness to our simulation model .+ from fig .[ fig3](h ) , and figs .[ fig10](d ) - [ fig12](d ) we observe that the bone surfaces which are extracted from the esmv are sharper , the bone boundaries are thinner , and they are reasonably well isolated from the connective tissue in comparison with the das one . also , this setup shows more details of the vertebra geometry , e.g. 
in figs .[ fig4](d ) - [ fig5](d ) the spinous process geometry is well preserved ( at top of the images ) .+ the registration with ct - contours , shown in figs .[ fig4 ] and [ fig5 ] , suggests that the ultrasound bone response appears within the ct - contours .the ps filtered bone surface is delineated at the maximum of the ultrasound bone response , which places it even further inside the ct - surface .this behavior of the ps - filter is expected from its mathematical formulation as it identifies the maximum of the response in the signal rather than its rising side .the other reason for the observed bias is due to the registration error as pinpointing the balls tip accurately in the us images was more difficult than in the ct dataset . in thisstudy the ps parameters are assigned empirically , and finding an optimal setup for the log - gabor filter may be a challenge . in ,an automated procedure for selecting the filter parameters has been investigated , which can ease the filter tuning procedure .+ from fig .[ fig3 ] , fig .[ fig4 ] , and table [ tab : tabel0 ] , we observe that the surface localization error in both das and esmv are in the same order of magnitude , but narrower distribution of the intensity values in region a indicates shaper boundaries in the esmv images .further , the noise level is much lower inside / outside of the bone in the esmv images .+ the post - processed images in figs .[ fig10 ] - [ fig12 ] , demonstrate that there is a potential for the phase symmetry technique to reasonably well exploit the spinal structure from us images .this can result in an enhanced 3d reconstruction of the spinal anatomy , which facilitates level detection procedure in minimally invasive spinal surgeries , and registration of preoperative ct or mr images to intraoperative us in neuro - navigation surgeries .furthermore , the superior separation of the bone surface from the connective tissues achieved in fig .[ fig11](d ) , can ease the model - based automated segmentation of the spine anatomy .+ the use of direction - dependent thresholding as designed in , was not implemented as preservation of minute anatomical structures was considered more important than further noise removal .further , the automatic adaptive parameterization suggested in was not tested for this work .manual tuning of the different parameters provided satisfying results .the automated approach will be implemented in our further work .we have explored the potential of a rank-1 esmv beamformer , together with the phase symmetry post - processing method to enhance the spinal anatomy in ultrasound images .the suggested beamformer is independent of the thresholding factor , and its complexity is in the same order as for minimum variance beamformer .this beamforming setup can locate the spinal structure reasonably well while reducing the speckle from the surrounding tissue .therefore , the phase symmetry filtering of these images can result in an improved definition of the boundaries and enhanced separation of the spinal anatomy from the neighboring connective tissues in comparison with the das technique .this shows that beamforming which is optimized for good visual appearance is not always optimal for feature extraction .this is therefore one of the first examples which demonstrates that it can be beneficial to do beamforming in a way which does not give the best visual appearance , but rather one that gives the best feature detection .if good optimization criteria can be defined , then future work could take this 
one step further by actually doing a joint optimization of the two operations in order to improve feature detection .+ s. winter , b. brendel , i. pechlivanis , k. schmieder , and c. igel , `` registration of ct and intraoperative 3-d ultrasound images of the spine using evolutionary and gradient - based methods , '' _ ieee transactions on evolutionary computation _ ,12 , no . 3 , pp .284296 , 2008 .m. gofeld , a. bhatia , s. abbas , s. ganapathy , and m. johnson , `` development and validation of a new technique for ultrasound - guided stellate ganglion block , '' _ regional anesthesia and pain medicine _ , vol .34 , no . 5 ,pp . 475479 , 2009 .m. m. bonsanto , r. metzner , a. aschoff , v. tronnier , s. kunze , and c. r. wirtz , `` 3d ultrasound navigation in syrinx surgery - a feasibility study , '' _ acta neurochirurgica _ , vol .147 , no . 5 , pp .533541 , 2005 .f. kolstad , o. m. rygh , t. selbekk , g. unsgaard , and o. p. nygaard , `` three - dimensional ultrasonography navigation in spinal cord tumor surgery , '' _ journal of neurosurgery : spine _ , vol . 5 , no . 3 , pp .264270 , 2006 .c. arzola , s. davies , a. rofaeel , and j. c. carvalho , `` ultrasound using the transverse approach to the lumbar spine provides reliable landmarks for labor epidurals , '' _ anesthesia and analgesia _ , vol . 104 , no . 5 , pp. 118892 , 2007 , comparative study .d. tran , k .- w .hor , v. a. lessoway , a. a. kamani , and r. n. rohling , `` adaptive ultrasound imaging of the lumbar spine for guidance of epidural anesthesia , '' _ computerized medical imaging and graphics _ , vol .33 , no . 8 , pp . 593601 , 2009 .y. otake , s. schafer , j. w. stayman , w. zbijewski , g. kleinszig , a. graumann , a. j. khanna , and j. h. siewerdsen , `` automatic localization of target vertebrae in spine surgery using fast ct - to - fluoroscopy ( 3d-2d ) image registration , '' _ spie medical imaging _ , vol . 8316 , 2012 .f. mauldin , k. owen , m. tiouririne , and j. hossack , `` the effects of transducer geometry on artifacts common to diagnostic bone imaging with conventional medical ultrasound , '' _ ultrasonics , ferroelectrics and frequency control , ieee transactions on _ , vol .59 , no . 6 , pp .1101 1114 , june 2012 .j. kowal , c. amstutz , f. langlotz , h. talib , and m. g. ballester , `` automated bone contour detection in ultrasound b - mode images for minimally invasive registration in computer - assisted surgery - an in vitro evaluation , '' _ the international journal of medical robotics and computer assisted surgery _ , vol . 3 , no . 4 ,pp . 341348 , 2007 .a. k. jain and r. h. taylor , `` understanding bone responses in b - mode ultrasound images and automatic bone surface extraction using a bayesian probabilistic framework , '' in _ medical imaging 2004 : ultrasonic imaging and signal processing _ , vol .5373.1em plus 0.5em minus 0.4emsan diego , ca , usa : spie , 2004 , pp .131142 .i. hacihaliloglu , r. abugharbieh , a. j. hodgson , and r. n. rohling , `` bone surface localization in ultrasound using image phase - based features , '' _ ultrasound in medicine & biology _ , vol .35 , no . 9 , pp . 14751487 , 2009 .j. f. synnevag , a. austeng , and s. holm , `` benefits of minimum - variance beamforming in medical ultrasound imaging , '' _ ieee transactions on ultrasonics , ferroelectrics and frequency control _ , vol .56 , no . 9 , pp . 18681879 , 2009 .f. vignon and m. r. 
burcher , `` capon beamforming in medical ultrasound imaging with focused beams , '' _ ieee transactions on ultrasonics , ferroelectrics and frequency control _ , vol .55 , no . 3 , pp .619628 , 2008 .b. mohammadzadeh asl and a. mahloojifar , `` eigenspace - based minimum variance beamforming applied to medical ultrasound imaging , '' _ ieee transactions on ultrasonics , ferroelectrics and frequency control _57 , no . 11 , pp . 23812390 , 2010 .s. mehdizadeh , a. austeng , t. johansen , and s. holm , `` minimum variance beamforming applied to ultrasound imaging with a partially shaded aperture , '' _ ieee transactions on ultrasonics , ferroelectrics and frequency control _ , vol .59 , no . 4 , pp . 683693 , 2012 .w. featherstone , h. j. strangeways , m. a. zatman , and h. mewes , `` a novel method to improve the performance of capon s minimum variance estimator , '' in _ antennas and propagation , tenth international conference on ( conf .vol . 1 , 1997 , pp .322325 .i. hacihaliloglu , r. abugharbieh , a. j. hodgson , and r. n. rohling , `` automatic adaptive parameterization in local phase feature - based bone segmentation in ultrasound , '' _ ultrasound in medicine & biology _ , vol .37 , no . 10 , pp . 16891703 , 2011 .r. f. wagner , m. f. insana , and s. w. smith , `` fundamental correlation lengths of coherent speckle in medical ultrasonic images , '' _ ieee transactions on ultrasonics , ferroelectrics and frequency control _ , vol .35 , no . 1, pp . 3444 , 1988 .j. m. fitzpatrick , `` fiducial registration error and target registration error are uncorrelated , '' in _ society of photo - optical instrumentation engineers ( spie ) conference series _ ,7261 , feb 2009 .i. hacihaliloglu , r. abugharbieh , a. j. hodgson , r. n. rohling , and p. guy , `` automatic bone localization and fracture detection from volumetric ultrasound images using 3-d local phase features , '' _ ultrasound in medicine & biology _ , vol .38 , no . 1 ,pp . 128144 , 2011 .
we propose a framework for extracting the bone surface from b - mode images employing the eigen - space minimum variance ( esmv ) beamformer and a ridge detection method . we show that an esmv beamformer with a rank-1 signal subspace can preserve the bone anatomy and enhance the edges , despite an image which is less visually appealing due to some speckle pattern distortion . the beamformed images are post - processed using the phase symmetry ( ps ) technique . we validate this framework by registering the ultrasound images of a vertebra ( in a water bath ) against the corresponding computed tomography ( ct ) dataset . the results show a bone localization error in the same order of magnitude as the standard delay - and - sum ( das ) technique , but with approximately 20 smaller standard deviation ( std ) of the image intensity distribution around the bone surface . this indicates a sharper bone surface detection . further , the noise level inside the bone shadow is reduced by 60 . in in - vivo experiments , this framework is used for imaging the spinal anatomy . we show that ps images obtained from this beamformer setup have sharper bone boundaries in comparison with the standard das ones , and they are reasonably well separated from the surrounding soft tissue .
cryptography , the science and the art of communicating messages secretly has been the subject of intense research for the last 50 years . the field itself is much older , dating back to as old as 1900 bc , when egyptian scribes , a derived form of the standard hieroglyphics were used for secure communication . in 1949 ,shannon , the father of information theory , wrote a seminal paper ( see ) on the theory of secrecy systems , where he established the area on a firm footing by using concepts from his information theory . in this lucid paper , among other important contributions , he established the perfect secrecy of the vernam cryptographic system , popularly known as the one - time pad or otp for short. otp happens to be the only known _ perfectly secure _ or _ provably , absolutely unbreakable _ cipher till date .shannon s work meant that otps offer the best possible mathematical security of any encryption scheme ( under certain conditions ) , anywhere and anytime an astonishing result .there have been a number of other cryptographic algorithms in the last century , but none can provide shannon security ( perfect security ) .this is one of our motivations to probe into the otp and investigate its properties .to the best of our knowledge there has been very little work on the otp since shannon .recently , raub and others describe a statistically secure one time pad based crypto - system .dodis and spencer show that the difficulty of finding perfect random sources could make achieving perfect security for the otp an impossibility .we shall not deal with the issue of random sources in this paper .the question we intend to ask in this paper is what can we say about the length of the otp to be transmitted across the secure channel ?we prove a counter - intuitive result in this paper the length of the otp to be transmitted need not always be equal to the length of the message and that it is possible to achieve shannon security even if the transmitted otp length is actually smaller than the message length .note that we treat the otp as perfectly random and uncompressible .however , the length of the otp is one piece of information that is not exploited and is always compromised in its traditional usage .we construct a protocol where this piece of information can be used effectively to reduce the length of the otp to be transmitted while not losing shannon security for any of the bits of the message .we then give an alternate interpretation of the otp encryption and follow it up with a new paradigm of cryptography called private - object cryptography .the paper is divided as follows . in the next section ,we describe the otp and its traditional interpretation as xor operation by means of a simple example . in section 3 ,we prove the central theoretical result of the paper that it is possible to have the transmitted otp length less than the message length while still retaining perfect secrecy .we first prove a 1-bit reduction of the transmitted otp length and then generalize for a -bit reduction for a message of length bits .we also give an alternative method of compressing the otp based on the length information which is universally known . in section 4, we provide our new alternate interpretation of the otp as a private - object and the encrytpion / decryption as equivalent to making statements about the object . 
section 5 talks more about the new paradigm of private - object cryptography .we claim that every private - key cryptography is essentially a form of private - object cryptography and can provide theoretical security for at least one message of length equal to the entropy of the crypto - system .we then ask the important question how should we invest bits of secret ?we hint towards the use of formal axiomatic systems ( fas ) for this purpose .we conclude in section 6 .in 1917 , gilbert vernam of at&t invented the first electrical one time pad .the vernam cipher was obtained by combining each character in the message with a character on a paper tape key .captain joseph mauborgne ( then a captain in the united states army and later chief of the signal corps ) recognized that the character on the key tape could be made completely random .together they invented the first one time tape system .there were other developments in the 1920s which resulted in the paper pad system .the germans had paper pads with each page containing lines of random numbers .a page would be used to encrypt data by simple addition of the message with these random numbers .the recipient who had a duplicate of the paper pad would reverse the procedure and then destroy his copy of the page .an otp was used for encrypting a teletype hot - line between washington and moscow .otps were also used successfully by the english in world war ii .these were especially useful in battlefields and remote regions where there were no sophisticated equipments for encryption , all that they used were otps printed on silk .the final discovery of the significance and theoretical importance of the otp was made by claude shannon in 1949 .we describe the encryption and decryption of an otp by a simple example .alice and bob have shared an otp ( ) in complete secrecy ( assume that they have met in private and shared the key ) .one fine day , alice wants to invite bob to her house and wishes to send the message ` come at 8 pm ' to him .but she is afraid of the interception of the message by eve whom she dislikes .she therefore encrypts her message as follows .she first converts her message into binary ( assume that she has a dictionary which converts the message into the bits ) .she then performs an xor operation to yield the cipher - text .she transmits this across a public channel .bob receives the cipher - text .since he has the otp with him , he does the xor operation of the cipher - text with the otp to yield the correct message .he then looks up at the dictionary ( this need not be secret ) and converts this to the more readable message ` come at 8 pm ' to summarize ( refer to fig .[ fig : otp ] ) : 1 .the otp is a random set of bits which is used as a private - key known only to alice and bob .the otp encryption involves an xor operation of the message with the otp to yield the cipher - text .the otp decryption involves an xor of the cipher - text with the otp to get back the original message .the classical interpretation of the otp as xor implies the following two important observations . 
1 .the length of the otp is completely compromised in the process of encryption .one bit of the otp is employed to encrypt exactly one bit of the message and this requires one xor operation .all bits of the message require the same amount of effort to encrypt and decrypt .we shall have more to say about the above observations later .but what can we say about the security of the otp encryption ?shannon , in his seminal 1949 paper on the theory of secrecy systems defined perfect secrecy as the condition that the _ a posteriori _ probabilities of all possible messages are equal to the _ a priori _ probabilities independently of the number of messages and the number of possible cryptograms .this means that the cryptanalyst has no information whatsoever by intercepting the cipher - text because all of her probabilities as to what the cryptogram contains remain unchanged .he then argued that there must be at least as many of cryptograms as the messages since for a given key , there must exist a one - to - one correspondence between all the messages and some of the cryptograms .in other words , there is at least one key which transforms any given message into any of the cryptograms .in particular , he gave an example of a perfect system with equal number of cryptograms and messages with a suitable transformation transforming every message to every cryptogram .he then showed that the otp actually achieves this .in other words , the best possible mathematical security is obtained by the otp .incidently , this is the only known method that achieves shannon security till date .it has generally been believed that the otps that are transmitted are required to have a length equal to that of the message in order for shannon s argument to hold . in this section ,we show this is not the case .although the length of the otp _ while _ encryption need to be equal to the length of the message , the otp that is transmitted could be less .but this sounds quite paradoxical because the otp is assumed to have been derived from a perfect random source and hence uncompressible .even if we are able to construct a compression algorithm that compresses some of the generated otps , it has to expand some other otps , it ca nt losslessly compress _ all _ otps .this is because of the _ counting argument _ which states that every lossless compression algorithm can compress only some messages while expanding others .however , we prove the central theoretical result of this paper that the transmitted otp length can be bits less than the message length while still retaining perfect secrecy .although we might not be able to achieve this reduction all the time , our method _ never expands _ the transmitted otp .at worst , our transmitted otps are of the length of the message .we first prove an easier case where the otp could be less than the message by 1-bit and the same idea is employed for the reduction .we make use of our earlier observation that the otp encryption compromises its length in its traditional usage which we can actually avoid . +* theorem 1 : * _ for every message of length bits , it is equally likely that the transmitted otp was of length or bits while still retaining perfect theoretical secrecy . _ + * proof : * we shall prove this result by constructing a ( modified ) protocol ( fig . 
[fig : figotp ] ) where alice and bob exchange a message of length by using an otp .however , in this modified protocol , there is a 50% probability that the transmitted otp had a length of or while still retaining perfect secrecy .we guarantee perfect secrecy for all the bits of the message .the protocol works as follows : + + * step 1 : * alice performs a coin flip with a perfect coin . if it falls * heads * , she constructs an otp of length and if it falls * tails * she constructs an otp of length .it is assumed that alice has access to a perfect random source to construct the otp in either events .+ + * step 2 : * alice communicates the otp through a secure channel to bob .+ + * step 3 : * on some later day , alice intends to send a message of length bits to bob .if the otp she generated has bits , she appends an additional bit at the end of the otp .this additional bit is set to if the length is * odd * and to if is * even*. in case the otp already has bits , alice forces the bit to if is * odd * and to if is * even*. + + * step 4 : * alice then performs the xor operation of the message with the resulting otp to yield a cipher - text which has bits .she transmits on the insecure public channel to bob .+ + * step 5 : * bob receives . bob checks to see if the otp he had earlier received from alice has sufficient bits to decrypt the message . in other words , does it have bits or bits . in casethe otp has bits , he does the exact same trick which alice did i.e. appends an additional bit and sets it to or depending on whether is * odd * or * even * respectively .if the otp already has bits , bob forces the bit to if is * odd * and to if is * even*. + + bob decrypts by performing an xor with the modified otp and obtains the message .+ we need not prove the perfect secrecy of the first bits as shannon s arguments hold . we need to prove that the bit is perfectly secure .we shall analyze the situation from the eavesdropper eve s perspective .eve knows of this entire protocol .eve intercepts the cipher - text which is of length bits .she knows that there is a 50% probability that it came from an otp which originally had bits or bits .she has no other strategy but to make a random guess and the probability of success is 50% .hence , her guess of the bit is no better than a 50% success .this proves the perfect secrecy of the bit .while this result seems highly theoretical and of little practical value , it actually shows an interesting aspect of the otp which has been taken for granted . the fact that the length of the otp contains _ information _ is usually neglected .our proof was aimed at achieving theoretical security for one additional bit by using the least significant bit ( lsb ) of the length of the otp ( by * odd * we mean lsb and * even * we mean lsb ) and we can do this half of the time .the natural question to ask is can we make use of the other bits of the length ? * theorem 2 : * _ for every message of length , it is possible that the transmitted otp had one of the lengths or with respective probabilities or while still retaining perfect theoretical secrecy . 
_ + * proof : * we generalize the aforementioned argument for a reduction in the length of the transmitted otp. assume that the message length is and let the binary representation of the numbers be the following: where each of the is binary for all and. also (note that since). alice has a -sided biased coin which produces otps of length with probabilities. for the encryption of a bit message, alice forces the last bits of the , , , otps to the bits respectively. only in the instance when alice generates an otp of length bits does she ensure that its last bits never have the same sequence as the other otps before sending it to bob on the secure channel. moreover, she ensures that the remaining available combinations, which are in number, each occur with probability. this way, the last bits of the otps are perfectly random, because the probability of obtaining any particular sequence of bits is. the rest of the protocol remains unchanged. with this, we have proved by construction that it is possible for the transmitted otp to have a length less than the message length with a non-zero probability while still attaining perfect theoretical secrecy. it is interesting to see that for larger reductions (larger values of), the probability of obtaining a reduction decreases. the best average reduction is for and, where we get bits of reduction. the average reduction is given by. the upper bound on is given by the condition which implies. note that in our protocol we have not violated the assumption that the otp is perfectly random and uncompressible. alternatively, we can say that the transmitted otp is compressible to the extent that the length information allows. we provide a method of compressing the transmitted otp given the fact that the messages to be encrypted are always of length, which is publicly known. alice generates an bit otp. if the last bit is 1, she deletes it to create an bit otp. if the last bit is 0, she deletes all bits which are zeros from the end, up to and including the bit which is 1. if the otp has no 1s in it, then alice transmits it as is. as an example, consider the bit otp ` 1011001001 '. since the last bit is 1, alice deletes it to create the 9-bit otp ` 101100100 '. if the bit otp happens to be ` 1011001000 ', by the above rule alice obtains the 6-bit otp ` 101100 '. alice transmits the resulting _ compressed _ otp across the secure channel to bob. since the length of the messages to be encrypted is always, bob decompresses the received otp to bits by reversing the rule. in other words, if bob receives a bit otp, he appends a 1 to make it bits. if the received otp is of length bits, where, he appends a 1 followed by zeros. thus, the otp is correctly decompressed by bob in all instances. an interesting thing to observe is that the otp is compressed in all instances except the case when it has no 1s. there is only one such otp (all 0s) which is left uncompressed by this scheme. at first glance, one might wrongly infer that we are contradicting the counting argument. however, this is not the case. the counting argument applies only to _ memoryless _ lossless compression algorithms; in our case, bob has _ a priori _ information about the length, and hence the scheme is not memoryless. what are the reductions obtained by this method? we can see that for 50% of the instances there is a reduction by 1 bit only (the last bit is 1 for 50% of the cases).
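to make the compression and decompression rule concrete, here is a minimal python sketch (the function names are illustrative only), assuming the pad is given as a string of '0'/'1' characters and that the message length is public; the worked examples from the text are used as test cases.

def compress_otp(otp):
    # if the last bit is 1, drop it; otherwise drop the trailing zeros
    # together with the 1 that precedes them; an all-zero pad is sent as is
    if otp.endswith('1'):
        return otp[:-1]
    stripped = otp.rstrip('0')
    if stripped == '':              # the pad contains no 1s
        return otp
    return stripped[:-1]            # also remove the final 1

def decompress_otp(received, n):
    # reverse the rule, using the publicly known message length n
    if len(received) == n:          # only the all-zero pad arrives uncompressed
        return received
    return received + '1' + '0' * (n - len(received) - 1)

if __name__ == '__main__':
    for pad in ['1011001001', '1011001000', '0000000000']:
        sent = compress_otp(pad)
        assert decompress_otp(sent, len(pad)) == pad
        print(pad, '->', sent)

running the sketch reproduces the 9-bit and 6-bit compressed pads of the example above, and the round-trip assertion checks that bob always recovers the original pad.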
among the remaining 50% , one instance is uncompressed ( the otp with all bits 0s ) and one instance has a maximum reduction of all bits ( the otp with a 1 followed by bits ) . for the remaining otps ,the compression ratios vary depending on the number of 0s in the end .for example , an otp with zeros in the end has a reduction of bits .there are such otps which will compress to an otp of length bits , a reduction by bits .in the previous section , we saw how we made use of the length of the otp in obtaining a reduction in its length .the length happens to be a particular feature of the otp , as if it were an _object_. this leads us to the notion of a private - object which we define as follows . +* private - object : * any object which is known only to the sender and the receiver is defined as a _ private - object_. + the above definition is very broad .the object may have any embodiment , not necessarily digital in nature .the object could be a real physical thing or it could be an one time pad ( could even be multi - dimensional ) .an important thing to note is that every private - object enables theoretically secure communication .the entropy of the private - object is determined by the _ number of independent true / false statements _ that can be made about the object without revealing any information about it . the way a messageis transmitted by means of a private - object is described below .alice and bob share a private - object , known only to them .alice intends to send a message ( as an example , the statement ` come at 8 pm ' to bob ) .the protocol is as follows : + * step 1 : * alice converts message into binary representation ( using a publicly known dictionary ) .say ` come at 8 pm ' translates to .+ * step 2 : * alice substitutes and .therefore . +* step 3 : * for each bit of the message , alice makes statements about the private - object which is true ( if the bit is t ) or false ( if the bit is f ) to obtain the cipher - text . in other words where is true , is true , is false etc . as a crude example , assume that the private - object is a physical object which has 3 eyes , 2 hands , 5 legs etc .alice could make a statement like ` has 3 eyes ' which is true or a statement like ` has 4 legs ' which is false ( the number of legs and hands in this hypothetical object are independent of each other ) . + * step 4 : * bob receives the cipher - text which is a collection of statements about .he verifies each statement and determines whether they are true ( ) or false ( ) .he obtains a string of and by this process ( ) . +* step 5 : * bob substitutes and in to obtain the binary message . + * step 6 : * bob looks up at the dictionary for to obtain the message ` come at 8 pm ' .+ the otp can be thought of as a private - object and the above protocol can be used for secure communication . for our previous example of section 2 ,the set of statements which alice would make are ` the first bit of the otp is 1 ' , ` the second bit of the otp is 0 ' ` the tenth bit of the otp is 0 ' .bob verifies these statements since he has the otp with him and obtains the correct message .in the previous section , we saw how the otp could be viewed as a private - object and statements about the object can be made to transmit information securely .so long as the statements are _ independent _ of each other , we are guaranteed to achieve perfect secrecy .this is because every statement encrypts one bit of the message and is making use of a unique feature of the private - object . 
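as a concrete illustration of the private-object protocol, the following minimal sketch (our illustration; the names and data layout are assumptions) treats the shared otp itself as the private-object, as in the example just given: for each message bit, alice emits a statement of the form ` bit i of the pad is v ', chosen to be true when the message bit is t and false when it is f, and bob verifies each statement against his copy of the pad.

import secrets

def make_statements(message_bits, pad_bits):
    # each statement is a pair (i, v), read as `` bit i of the pad is v ''
    cipher = []
    for i, m in enumerate(message_bits):
        v = pad_bits[i] if m == 1 else 1 - pad_bits[i]   # true statement for 1, false for 0
        cipher.append((i, v))
    return cipher

def verify_statements(cipher, pad_bits):
    # bob checks each statement against his pad and maps true/false back to bits
    return [1 if pad_bits[i] == v else 0 for i, v in cipher]

if __name__ == '__main__':
    n = 16
    pad = [secrets.randbits(1) for _ in range(n)]   # the shared private-object
    msg = [secrets.randbits(1) for _ in range(n)]
    statements = make_statements(msg, pad)
    assert verify_statements(statements, pad) == msg

since the pad bits are uniform and independent, the claimed values v are themselves uniform and independent of the message, so an eavesdropper who does not hold the pad learns nothing from the statements.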
for the otp ,every bit is its unique independent feature . for private - objects of the real physical world, the features could be the number of edges or the number of faces etc .determining the number of _ unique _ and _ independent _ features in a physical object might be difficult .this means that the _ entropy _ of the object is difficult to compute .the amount of information that can be securely transmitted by this method is upper bounded by the entropy of the object in bits .private - key or symmetric - key cryptography is a subset of private - object cryptography where the key happens to be a set of bits on which various mathematical operations are made . in effect, every private - key crypto - system is only making statements about the key which is the private - object .since the key of a private - key is usually much shorter than the message , the statements are not _ independent _ of each other .they formally map to complex statements about the key .we state the following theorem without proof : + * theorem 3 : * _ every symmetric - key crypto - system can encrypt exactly one binary message having a length equal to the entropy of the crypto - system with perfect theoretical secrecy . _+ one can always make a certain number of _ unique _ and _ independent _ statements about the crypto - system .we can treat the crypto - system with its unique parameters as a private - object having a certain entropy .these statements are _ finite _ in number and can be used to communicate a finite length binary message with perfect secrecy ( equivalent to an otp of the same entropy ) . the length of the message can be at most equal to the entropy of the crypto - system without sacrificing shannon security . finding the entropy of the crypto - system may not always be easy .another interesting off - shoot is the definition of the _ entropy of an object _ of the real world .we can invert our above observation to say that the entropy of an object is the number of bits of information that can be transmitted with perfect secrecy by making independent statements about the object .in other words , we claim that there exists a mapping from every object of the real world to an otp and the entropy of that otp is the entropy of the object. it may be hard in practice to determine the entropy of objects .let us now relax the _perfect secrecy _constraint since we need to send long keys ( if not as long as the message ) for achieving this .assume that we have a fixed bit - budget , say bits of secret .we wish to know what is the best private - object to invest these bits of secret so as to achieve a high encryption efficiency . here, we do not wish to achieve perfect secrecy , but breaking the system should be very hard . 
here we are being vague in our definition. it suffices to say that we wish to obtain a method which currently known methods of cryptanalysis would find hard, if not impossible, to break. we propose using a formal axiomatic system (fas) for investing these bits. this part of the paper is somewhat speculative in nature and is mainly a motivation towards potential future research. a formal axiomatic system, or fas for short, refers to a system of axioms and rules of inference which together define a set of theorems. an example of a fas is typographical number theory (tnt). hilbert's program was to completely formalize the whole of mathematics using tnt. this ambitious plan was derailed by gödel, who proved that all consistent and sufficiently powerful axiomatic systems contain _ undecidable _ propositions. because of this, formal axiomatic systems are fascinating objects. we can view a fas in another interesting way, which is the _ compression _ viewpoint: a fas is actually a _ compressed _ version of all the theorems which can be proven within the system. it is this viewpoint that motivates us to consider a fas as a private-object shared between alice and bob. if alice were given a bit-budget of bits, she could invest it in the construction of a fas which is _ consistent _ and _ sufficiently strong _. these are the only two requirements. she would have to define a set of axioms and rules of inference to completely specify the fas. she shares this as a _ private-object _ with bob over a secure channel. the way alice and bob can now exchange information is to make statements or strings in the fas. the receiver can _ verify _ whether a particular statement or string is true or false in the fas which they share. if it is true, then the string is a _ theorem _ and the bit conveyed is. if the string is false, then it is a _ non-theorem _ and conveys the bit. we basically use the private-object paradigm with theorems and non-theorems of the fas acting as binary representations for and respectively. we name such a system a formal axiomatic cryptographic system (facts). [ fig : figfas ] shows the string space of a fas. since the fas is sufficiently strong, it would contain gödelian statements which are undecidable (the system is incomplete). we believe that it may be possible to _ confuse _ and _ diffuse _ the cryptanalyst by a clever use of gödelian statements in the cipher-text. this is a speculation on our part, because we do not know of any procedure which would enable us to construct such statements in large numbers. one of the biggest advantages of such a set-up is the difficulty for eve of breaking the system using a brute-force attack. in conventional systems such as rsa and other public-key and private-key methods, a brute-force attack would involve trying out all possible keys in the _ key-space _.
for example, if the key length is 128 bits, it would mean trying out (a huge number of) guesses for the private key. a computer could mechanically try out this number of possibilities until it found the right key. this would probably take a long time, but with a number of computers in parallel, or by using quantum computers, this time could be substantially reduced. the important thing to realize in this scenario is that there is a _ mechanical procedure _ for trying out all the combinations, and with the exponential increase in computational power over time the system could eventually be broken (e.g., rsa-128 is already broken). in our system, the equivalent would be to try out all possible formal axiomatic systems of a given length. however, there would be several systems which are _ duds _, those that are inconsistent or meaningless. computers which are designed to try out different fass might have a difficult time finding such inconsistencies; they might have to deal with the turing halting problem. to summarize, the central contribution of this paper is a new result in the otp literature. we have shown that the compromise of the otp's length, which traditionally occurs in encryption, can be avoided. we proved that it is possible to reduce the key-length of the transmitted otp (which is otherwise perfectly random and uncompressible) while still retaining perfect secrecy. even though this reduction is small, it is nevertheless useful in saving bandwidth for crypto-systems which use otps on a regular basis (we showed that we never expand the otps in any case, unlike compression algorithms, which always expand some inputs). we have conceived a new paradigm called private-object cryptography, which makes use of statements about an object (private to the communicating parties) for secure message transmission, and showed how the otp can be re-interpreted in this new paradigm. we also claimed that all existing private-key crypto-systems are a form of private-object cryptography; further, they are in essence making statements about the secret key. we believe that these statements are not independent but are necessarily more complex. we then suggested the investment of bits of secret in a fas. the verification of strings or statements of the fas as theorems or non-theorems could convey a bit of information. it may be the case that the structure of the fas and the space of theorems and non-theorems could be designed so that it is _ sufficiently random _ for cryptographic purposes. more research needs to be done in these directions.
in 1949 , shannon proved the perfect secrecy of the vernam cryptographic system , also popularly known as the one - time pad ( otp ) . since then , it has been believed that the perfectly random and uncompressible otp which is transmitted needs to have a length equal to the message length for this result to be true . in this paper , we prove that the length of the transmitted otp which actually contains useful information need not be compromised and could be less than the message length without sacrificing perfect secrecy . we also provide a new interpretation for the otp encryption by treating the message bits as making true / false statements about the pad , which we define as a private - object . we introduce the paradigm of private - object cryptography where messages are transmitted by verifying statements about a secret - object . we conclude by suggesting the use of formal axiomatic systems for investing bits of secret . * keywords : * one time pad , private - key cryptography , symmetric - key cryptography , perfect secrecy , shannon security , formal axiomatic systems .
the axelrod model of cultural dissemination is an apparently simple model of cultural diffusion , in which `` culture '' is modeled as a discrete vector ( of length ) , a multivariate property possessed by an agent at each of the sites on a fully occupied finite square lattice .agents interact with their lattice neighbors , and the dynamics of the model are based on the two principles of homophily and social influence .the former means that agents prefer to interact with similar others , while the latter means that agents , when they interact , become more similar . despite this apparent simplicity ,in fact the model displays a rich dynamic behavior , and does not inevitably converge to a state in which all agents have the same culture .rather it will converge either to a monocultural state , or a multicultural state , depending on the model parameters .the axelrod model has come to be of great interest in statistical physics , with a number of variations and analyses conducted .a review from a statistical physics perspective can be found in , and more recent reviews from different perspectives in .one of the best - known features of the axelrod model is the nonequilibrium phase transition between the monocultural ( ordered ) and multicultural ( disordered ) states , controlled by the value of , the number of traits ( possible values of each vector element ) .a number of variations and extensions of the model have been proposed , including an external field ( modeling a `` mass media '' effect ) , noise , and interaction via complex networks rather than a lattice .external influence on culture vectors , in the form of a `` generalized other '' was first introduced by .further work on external influence on culture vectors , or mass media effect , considers an external field which acts to cause features to become more similar to the external culture vector with a certain probability , or variations such as nonuniform or local fields or fields with adaptive features .counterintuitively , these mass media effects were found to actually increase cultural diversity rather than result in further homogenization , an effect explained by local homogenizing interactions causing the absorbing state to be less fragmented than when interacting with the external field only , the latter case actually resulting in more , rather than less , diversity .the effect of noise , or `` cultural drift '' , foreshadowed by , in the form of random perturbations of cultural features , has been examined .a sufficiently small level of noise actually promotes monoculture , while too high a level of noise prevents stable cultural regions from forming ( an `` anomic '' state , as described by ) .in fact , there is another phase transition induced by the noise rate .another form of noise , in the form of random error in determining cultural similarity between agents , has also been investigated .noise is also incorporated in various other extensions of the axelrod model . 
rather than interacting with the neighbors on a lattice ,neighborhoods defined by complex networks have also been investigated , including both static and coevolving networks .the use of complex networks rather than a lattice results in the phase transition controlled by the value of still existing , albeit possibly with a different critical value .the effect of network topology on the phase transition driven by noise has also been investigated .another extension of the axelrod model is the incorporation of multilateral influence , that is , interaction between more than two agents .multilateral influence allows diversity to be sustained in the presence of noise , when with dyadic influence it would collapse to monoculture or anomie that is , it removes the phase transition controlled by the noise rate described by .although most investigations of the axelrod model and its extensions have been purely through computational experiments , a number of papers have used either mean - field analysis , or proved rigorous results mathematically . the original description of the phase transition controlled by used mean - field analysis , as have some other papers .a rigorous mathematical analysis is much more challenging , and has so far mostly been restricted to the one - dimensional case , with the exception of , who proves results for the usual two - dimensional model . the critical behavior of the order parameter has also been investigated quantitatively for the case of on the square lattice and small - world networks .computational experiments have also been used to investigate the relationship between the lattice area and the number of cultures and thermodynamic quantities such as temperature , energy , and entropy . for the one - dimensional case , propose a thermodynamic version of the axelrod model and demonstrate its equivalence to a coupled potts model , as well as analyzing its behavior with respect to noise and an external field .an axelrod - like model with on a two - dimensional lattice is analyzed in the asymptotic case of by .other extensions and variations of the axelrod model include bounded confidence and metric features , agent migration , extended conservativeness ( a preference for the last source of cultural information ) , surface tension , cultural repulsion , the presence of some agents with constant culture vectors , having one or more features constant on some or all agents , using empirical or simulated rather than uniform random initial culture vectors , comparing mass media model predictions to empirical data on a mass media campaign , coupling two axelrod models through global fields , combining the axelrod model with a spatial public goods game , modeling diffusion of innovations by adding a new trait on a feature , and even using it as a heuristic for an optimization problem .in addition to the earliest phase diagrams showing just and the order parameter or the noise rate and the order parameter , the following phase diagrams , derived from either simulation experiments , or mean - field analysis ( or both ) , have been drawn for the axelrod model and various extensions ( notation may be changed from the original papers for consistency ) : where is external field strength ; and where is noise rate , and is a parameter controlling the network clustering structure ; where is the degree of overlap between the layers of a multilayer network ; where is the `` bounded confidence '' threshold ( minimum cultural similarity required for interaction ) ; for the one - 
dimensional case ; where is the fraction of `` persistent agents '' or `` opinion leaders '' ( those with a constant culture vector ) . show a phase diagram where is the rewiring probability on small - world network , and also plot the relationship between the order parameter ( largest region size ) and where is maximum node degree in a structured scale - free network . in the small - world network ,the phase transition still exists and is shifted by the degree of disorder of the network . in random scale - free networks , the transition disappears in the thermodynamic limit , but in structured scale - free networks the phase transition still exists . examine the nature of the phase transition in the one- and two - dimensional cases , while investigates in addition three- and four - dimensional systems as well as triangular and hexagonal lattices . despite these extensive investigations into various aspects of the axelrod model and its variants ,there has been a surprising lack of systematic investigation of the effect of increasing the neighborhood size , or `` range of interaction '' on a simple axelrod model with dyadic interaction on a square lattice .this is despite axelrod himself discussing the issue briefly and conducting experiments with neighborhoods of size 8 and 12 , finding that these result in fewer stable regions than the original von neumann neighborhood ( size 4 ) . , in their model with multilateral influence , use a larger von neumann neighborhood size , justifying it as empirically more plausible and a more conservative test of the preservation of cultural diversity .their extended model makes use of the larger neighborhood as its multilateral social influence uses more than two agents in an interaction , however all their experiments , including those reproducing the dyadic ( interpersonal ) influence model with noise of , fix the radius at , a precedent followed in a subsequent paper , while another model using a larger neighborhood for multilateral interactions fixes the radius at . investigate , for the special case , the axelrod model on a regular random graph using a mean - field analysis , giving an analytic explanation for the non - monotonic time dependence of the number of active links . increasing the coordination numbermay be considered to be similar to increasing the neighborhood size on a lattice with fixed coordination number in both cases all agents have the same number of `` neighbors '' ( aside from edge effects in the case of finite lattices ) , which increases monotonically with the coordination number or von neumann radius respectively . find that larger coordination numbers give better agreement between their master equation and axelrod model simulations , but do not describe a phase transition controlled by the coordination number . 
herewe investigate the effect of varying the radius of the von neumann neighborhood in which agents can interact , and find another phase transition in the axelrod model at a critical value of the radius , as well as the well - known phase transition at a critical value of , and draw a phase diagram for the axelrod model on a square lattice .each of the agents on the fully occupied lattice ( ) has an -dimensional culture vector ( ) for all .each entry of the cultural vector represents a feature and takes a single value from to , so , more precisely , for all and .each of the elements is referred to as a `` feature '' , and is known as the number of `` traits '' .the cultural similarity of two agents is the number of features they have in common .if element of the culture vector belonging to agent is , then the cultural similarity of two agents and is a normalized hamming similarity where is the kronecker delta function .an agent can interact with its neighbors , traditionally ( as was originally used by , for example ) , defined as the von neumann neighborhood , that is , the four ( north , south , east , west ) surrounding cells on the lattice , so the number of potentially interacting agents is the lattice coordination number . herewe extend this to larger von neumann neighborhoods by increasing the radius , that is , extending the neighborhood to all cells within a given manhattan distance , as was done by .this is illustrated in figure [ fig : neighborhoods ] .hence the number of potentially interacting agents ( the focal agent and all its neighbors ) in the von neumann neighborhood with radius is now at most ( we do not use periodic boundary conditions ) .von neumann neighborhoods of radius , , and .the focal agent is shown in black and the von neumann neighborhood for that agent in gray.,scaledwidth=48.0% ] initially , the agents are assigned uniform random culture vectors .the dynamics of the model are as follows .a focal agent is chosen at random , and another agent from the radius von neumann neighborhood is also chosen at random .with probability proportional to their cultural similarity ( the number of features on which they have identical traits ) , the two agents and interact .this interaction results in a randomly chosen feature on whose value is different from that on being changed to s value .this process is repeated until an absorbing , or frozen , state is reached . in this state , no more change is possible , because all agents neighbors have either identical or completely distinct ( no features in common , so no interaction can occur ) culture vectors . in the absorbing state ,the agents form cultural regions , or clusters . within the cluster ,all agents have identical culture vectors .then the average size of the largest cluster , is used as the order parameter , separating the ordered and disordered phases . 
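as a concrete reference for the dynamics just described, here is a minimal python sketch (an illustration only, not the released c++/python/mpi implementation). it assumes an L x L lattice with open boundaries, F features, q traits and von neumann radius r; for brevity it runs a fixed number of updates rather than testing for the absorbing state, and it measures cultural regions using nearest-neighbour (r = 1) connectivity, which is an assumption on our part.

import random
from collections import deque

def neighbours(x, y, L, r):
    # all sites within manhattan distance r of (x, y), excluding (x, y) itself
    return [(x + dx, y + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if (dx or dy) and abs(dx) + abs(dy) <= r
            and 0 <= x + dx < L and 0 <= y + dy < L]

def axelrod(L=20, F=3, q=10, r=1, steps=200000, rng=random):
    culture = {(x, y): [rng.randrange(q) for _ in range(F)]
               for x in range(L) for y in range(L)}
    for _ in range(steps):
        i = (rng.randrange(L), rng.randrange(L))          # focal agent
        j = rng.choice(neighbours(*i, L, r))              # random neighbour within radius r
        same = [k for k in range(F) if culture[i][k] == culture[j][k]]
        diff = [k for k in range(F) if culture[i][k] != culture[j][k]]
        if diff and rng.random() < len(same) / F:         # interact with prob = similarity
            k = rng.choice(diff)
            culture[i][k] = culture[j][k]                 # social influence: copy one feature
    return culture

def largest_region(culture, L):
    # size of the largest connected domain of identical culture vectors
    seen, best = set(), 0
    for start in culture:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            site = queue.popleft()
            size += 1
            for nb in neighbours(*site, L, 1):
                if nb not in seen and culture[nb] == culture[site]:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

if __name__ == '__main__':
    L = 20
    c = axelrod(L=L, F=3, q=10, r=2, steps=200000)
    print('normalised largest region:', largest_region(c, L) / L ** 2)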
in a monocultural ( ordered )state , , a single cultural region covers almost the entire lattice ; in a multicultural ( disordered ) state , multiple cultural regions exist .other order parameters that have been used include the number of cultural domains , mean density of cultural domains , entropy , overlap between neighboring sites , and activity ( number of changes ) per agent .source code for the model ( implemented in c++ and python with mpi ) is available from https://sites.google.com / site / alexdstivala / home / axelrod_qrphase/.figure [ fig : q ] shows the order parameter ( largest region size ) plotted against for , on three different lattice sizes .it is apparent that , as the size of the von neumann neighborhood is increased , the critical value of also increases .that is , by allowing a larger range of interactions , a larger scope of cultural possibilities is required in order for a multicultural absorbing state to exist .increasing the lattice size has a similar effect , although , as we shall show in section [ sec : meanfield ] , there is still a finite critical value of in the limit of an infinite lattice .the order parameter ( largest region size ) plotted against the number of traits for the axelrod model for , four different values of the von neumann radius , and three lattice sizes .each data point is the average over 50 independent runs and error bars show the 95% confidence interval .vertical dashed lines show the critical value of , where the variance of the order parameter is largest.,scaledwidth=48.0% ] figure [ fig : radius ] shows the order parameter ( largest region size ) plotted against the von neumann radius for , various values of , and three different lattice sizes . in each case( apart from the smallest value of , in which a monocultural state always prevails ) , there is a phase transition visible between a multicultural state ( for less than a critical value ) and a monocultural state .note that when is sufficiently large relative to the lattice size , every agent has every other agent in its von neumann neighborhood , and hence the situation is equivalent to a complete graph or a well - mixed population ( or `` soup '' ) .in this situation , it has long been known that heterogeneity can not be sustained .[ fig : radius ] shows that there appears to be a phase transition controlled by , between the multicultural phase and the monocultural phase . 
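a sweep of the kind shown in these figures can be scripted directly on top of the axelrod() and largest_region() sketches given above (which are, again, illustrative stand-ins for the actual simulation code); results[q] then holds the normalised largest-region size averaged over independent seeded runs at a fixed radius r.

import random

def sweep_q(L=20, F=3, r=2, q_values=(2, 5, 10, 20, 50), n_runs=10, steps=200000):
    results = {}
    for q in q_values:
        sizes = []
        for run in range(n_runs):
            rng = random.Random(run)                     # independent seeded runs
            c = axelrod(L=L, F=F, q=q, r=r, steps=steps, rng=rng)
            sizes.append(largest_region(c, L) / L ** 2)
        results[q] = sum(sizes) / n_runs
    return results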
as the size of the neighborhood increases , so does the probability of an agent finding another agent with at least one feature in common with which to interact , and hence local convergence can happen in larger neighborhoods , resulting in larger cultural regions .however this does not result , at the absorbing state ( for a fixed value of ) , in a gradual increase in maximum cultural region size from a completely fragmented state to a monocultural state .rather , global polarization ( a multicultural absorbing state ) still occurs for sufficiently small , but at the critical value of the radius there is a phase transition so that for neighborhoods defined by a monocultural state prevails .the order parameter ( largest region size ) plotted against the von neumann radius for the axelrod model for , some different values of , and three lattice sizes .each data point is the average over 50 independent runs and error bars show the 95% confidence interval .vertical dashed lines show the critical value of , where the variance of the order parameter is largest.,scaledwidth=48.0% ] this phase transition is further apparent in figure [ fig : histogram ] , which shows histograms of the distribution of the order parameter ( largest region size ) at the critical radius for some different values of .that is , for each value of , the radius at which the variance of the order parameter is greatest .this shows the bistability of the order parameter at the critical radius , where the two extreme values are equally probable .distribution of the order parameter at the critical radius for some different values of , with , .each distribution is from 50 independent runs.,scaledwidth=48.0% ] figure [ fig : phase_diagram ] colors points on the plane according to the value of the order parameter , resulting in phase diagram. a multicultural state only results for sufficiently large values of and small values of .figure [ fig : phase_diagram_2color ] shows the phase transition more clearly , with the multicultural states in the upper left of the plane and the monocultural states in the bottom right . phase diagram showing the order parameter for the axelrod model for and three lattice sizes ( , , and ) .each data point is colored according to the size of the largest region averaged over 50 independent runs ., scaledwidth=48.0% ] phase diagram for the axelrod model for and three lattice sizes ( , , and ) . as in ,the arbitrary , but small , value of 0.1 is used as the value of the order parameter to plot the critical value of separating the monocultural and multicultural region for each value of the von neumann neighborhood radius .,scaledwidth=48.0% ]we detail the mean - field analysis carried out by who gave a differential equation . in the mean - field setting, we focus on the bonds between sites ( or agents ) located on an infinite lattice , so we can assume that each site and its von neumann neighborhood consists of exactly sites .the infinite lattice setting naturally implies that we do not consider edge effects . for a single ,randomly chosen bond between two sites , we let be the probability that the bond is of type at time , so both sites of the bond share common features , while features are different . if the randomly chosen bond is connected to sites and , then at time , we denote by the probability of a single feature of any two sites being common , so . 
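the two criteria used above to locate the transition can be written down in a few lines; in this sketch (ours, with an assumed data layout) runs[x] maps a parameter value x, either the number of traits or the radius, to the list of normalised largest-region sizes obtained from the independent runs at that value.

from statistics import mean, pvariance

def critical_by_variance(runs):
    # the critical value is taken where the run-to-run variance of the
    # order parameter is largest (the dashed lines in the figures)
    return max(runs, key=lambda x: pvariance(runs[x]))

def critical_by_threshold(runs, threshold=0.1):
    # the value at which the mean order parameter first drops below the
    # small threshold 0.1, scanning x in increasing order; for a parameter
    # (such as the radius) for which order increases with x, the scan
    # direction would be reversed
    for x in sorted(runs):
        if mean(runs[x]) < threshold:
            return x
    return None

if __name__ == '__main__':
    runs = {5: [0.99, 0.98, 1.00], 10: [0.90, 0.20, 0.95], 15: [0.05, 0.03, 0.04]}
    print(critical_by_variance(runs), critical_by_threshold(runs))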
if the features are distributed uniformly from to , then .it is sometimes assumed that the features have a poisson distribution with mean , so then application of the skellam distribution gives , where is a modified bessel function of the first kind .for the single bond , the number of common features is a binomial random variable , so derived a master equation , also known as a forward equation , given by ,\ ] ] where is the probability that an -type bond becomes an -type bond due to the updating of a -type neighbor bond .this equation is only defined for , but naturally the probabilities sum to one , giving for , we show that the master equation or , rather , the set of nonlinear differential equations ( [ master ] ) can be re - written as \nonumber \\ & + ( g-1)\left [ p_{m-1}(t)w_{m-1,m}^{(k)}(t ) \right .\nonumber \\ & - p_m(t ) w_{m , m-1}^{(k)}(t ) \nonumber \\ & + \left . p_{m+1}(t)w_{m+1,m}^{(k)}(t ) \right .& - \left .p_m(t ) w_{m , m+1}^{(k)}(t ) \right ] \sum_{k=1}^{f-1 } \frac{k}{f } p_k(t ) , \end{aligned}\ ] ] and zeroth differential equation is as in , we can investigate model dynamics within the mean - field treatment by studying the density of active bonds , that is a bond across which at least one feature is different and one the same .hence in an absorbing ( frozen ) state , . in the mean - field analysis , since an infinite lattice is assumed , only when a multicultural absorbing state is reached ; as noted by , the coarsening process by which a monocultural state is formed lasts indefinitely on an infinite lattice . phase diagram within the mean - field approximation for some different values of the von neumann radius .the value of ( shown at ) is obtained by numerical integration of ( [ mastersimple ] ) and ( [ mastersimple0]).,scaledwidth=48.0% ] figure [ fig : meanfield_q_radius ] plots the number of active bonds against the value of for some different values of within the mean - field approximation .it can be seen that the behavior is qualitatively the same as that shown in fig .[ fig : q ] for the simulations on finite lattices : the critical value of is higher for larger neighborhood sizes . on finite lattices, larger lattice sizes also increase the critical value of for a given neighborhood size , however on an infinite lattice , there is still a finite critical value of for a given neighborhood size .this suggests that , if the lattice size in the simulation could be increased further ( a very computationally demanding process ) , eventually the critical values would approach those obtained in the mean - field approximation .the original axelrod model had agents only interact with their immediate neighbors on a lattice , modeling the assumption of that geographic proximity largely determines the possibility of interaction .subsequent work has extended this to neighbors on complex networks , or allowed agent migration , or assumed a well - mixed population ( infinite - ranged social interactions ) on the assumption that online interactions are making this assumption more realistic . 
despite these, and other, increasingly sophisticated modifications of the axelrod model, however, an examination of the consequences of simply extending the lattice (von neumann) neighborhood had not been carried out. we have done so, and shown another phase transition in the model, controlled by the von neumann radius, in addition to the well-known phase transition at the critical value of, and drawn a phase diagram. we have also used a mean-field analysis to analyze the behavior on an infinite lattice. these results show that, just as the value of, the `` scope of cultural possibilities '', has a critical value above which a multicultural state prevails, there is also a critical value of the radius of interaction, above which a monocultural state prevails. this simply says that, rather unsurprisingly, a world in which people can only interact with their immediate neighbors is (for a fixed value of) more likely to remain multicultural than one in which people can interact with those further away. given this inevitability of a monocultural state for large enough `` neighborhoods '', it might be more useful to consider alternative measurements of cultural diversity, such as the `` long term cultural diversity '' measured using the curve plotting the number of final cultural domains against the initial number of connected cultural components as the bounded confidence threshold is varied, as described by (where a well-mixed population was assumed, and hence a monocultural state results for when the bounded confidence threshold is zero). an obvious extension of this work is to examine the behavior of the axelrod model on complex networks where the neighborhood is extended to all agents within paths of length on the network. work by a.s. was supported in part by the asian office of aerospace research and development (aoard) grant no. fa2386-15-1-4020 and the australian research council (arc) grant no. p.k. acknowledges the support of the leibniz program `` probabilistic methods for mobile ad hoc networks '' and the arc centre of excellence for mathematical and statistical frontiers (acems) grant no. ce140100049, and thanks prof. peter g. taylor for helpful discussion and for the invitation to visit melbourne. this research was supported by victorian life sciences computation initiative (vlsci) grant number vr0261 on its peak computing facility at the university of melbourne, an initiative of the victorian government, australia. we also used the university of melbourne ITS research services high performance computing facility and support services.
axelrod s model of cultural dissemination , despite its apparent simplicity , demonstrates complex behavior that has been of much interest in statistical physics . despite the many variations and extensions of the model that have been investigated , a systematic investigation of the effects of changing the size of the neighborhood on the lattice in which interactions can occur has not been made . here we investigate the effect of varying the radius of the von neumann neighborhood in which agents can interact . we show , in addition to the well - known phase transition at the critical value of , the number of traits , another phase transition at a critical value of , and draw a phase diagram for the axelrod model on a square lattice . in addition , we present a mean - field approximation of the model in which behavior on an infinite lattice can be analyzed .
cellular automata ( ca ) are spatial computations .they imitate the locality and uniformity of physical law in a stylized digital format .the finiteness of the information density and processing rate in a ca dynamics is also physically realistic . these connections with physicshave been exploited to construct ca models of spatial processes in nature and to explore artificial `` toy '' universes .the discrete and uniform spatial structure of ca computations also makes it possible to `` crystallize '' them into efficient hardware . herewe will focus on ca s as realistic spatial models of ordinary ( non - quantum - coherent ) computation . as fredkin and banks pointed out, we can demonstrate the computing capability of a ca dynamics by showing that certain patterns of bits act like logic gates , like signals , and like wires , and that we can put these pieces together into an initial state that , under the dynamics , exactly simulates the logic circuitry of an ordinary computer .such a ca dynamics is said to be _computation universal_. a ca may also be universal by being able to simulate the operation of a computer in a less efficient manner never reusing any logic gates for example . a universal ca that can perform long iterative computations within a fixed volume of spaceis said to be a _ spatially efficient _ model of computation .we would like our ca models of computation to be as realistic as possible .they should accurately reflect important constraints on physical information processing .for this reason , one of the basic properties that we incorporate into our models is the microscopic reversibility of physical dynamics : there is always enough information in the microscopic state of a physical system to determine not only what it will do next , but also exactly what state it was in a moment ago .this means , in particular , that in reversible ca s ( as in physics ) we can never truly erase any information .this constraint , combined with energy conservation , allows reversible ca systems to accurately model thermodynamic limits on computation .conversely , reversible ca s are particularly useful for modeling thermodynamic processes in physics .reversible ca `` toy universes '' also tend to have long and interesting evolutions .all of the ca s discussed in this paper fall into a class of ca s called lattice gas automata ( lga ) , or simply lattice gases . these ca s are particularly well suited to physical modeling .it is very easy to incorporate constraints such as reversibility , energy conservation and momentum conservation into a lattice gas .lattice gases are known which , in their large - scale average behavior , reproduce the normal continuum differential equations of hydrodynamics . in a lattice gas , particles hop around from lattice site to lattice site .these models are of particular interest here because one can imagine that the particles move continuously between lattice sites in between the discrete ca time - steps .using lga s allows us to add energy and momentum conservation to our computational models , and also to make a direct connection with continuous classical mechanics .our discussion begins with the most realistic classical mechanical model of digital computation , fredkin s billiard ball model .we then describe related classical mechanical models which , unlike the bbm , are isomorphic to simple lattice gases at integer times . 
in the bbm , computations are constructed out of the elastic collisions of very incompressible spheres .our new 2d and 3d models are based on elastically colliding spheres that are instead very compressible , and hence take an appreciable amount of time to bounce off each other .the universality of these soft sphere models ( ssm s ) depends on the finite extent in time of the interaction , rather than its finite extent in space ( as in the bbm ) .this difference allows us to interpret these models as simple lga s . using the ssm s, we discuss computation in perfectly momentum conserving physical systems ( cf . ) , and show that we can compute just as efficiently in the face of this added constraint .the main difficulty here turns out to be reusing signal - routing resources .we then provide an alternative physical interpretation of the ssm s ( and of all mass and momentum conserving lga s ) as relativistic systems , and discuss some alternative relativistic ssm models .finally , we discuss the use of these kinds of models as semi - classical systems which embody realistic quantum limits on classical computation .{bbm - coll } & \includegraphics[height=1.5in]{bbm - coll - half } & \includegraphics[height=1.5in]{bbm - mirrors } & \includegraphics[height=1.5in]{bbm - crossover } \\\mbox{\bf ( a ) } & \mbox{\bf ( b ) } & \mbox{\bf ( c ) } & \mbox{\bf ( d ) } \\ \end{array}\ ] ] in figure [ fig.bbm ] , we summarize edward fredkin s classical mechanical model of computation , the billiard ball model .his basic insight is that a location where balls may or may not collide acts like a logic gate : we get a ball coming out at certain places only if another ball did nt knock it away ! if the balls are used as signals , with the presence of a ball representing a logical `` 1 '' and the absence a logical `` 0 '' , then a place where signals intersect acts as a logic gate , with different logic functions of the inputs coming out at different places . figure[ fig.bbm]a illustrates the idea in more detail . for this to work right ,we need synchronized streams of data , with evenly spaced time - slots in which a 1 ( ball ) or 0 ( no ball ) may appear .when two 1 s impinge on the collision `` gate '' , they behave as shown in the figure , and they come out along the paths labeled .if a 1 comes in at but the corresponding slot at is empty , then that 1 makes it through to the path labeled ( and not ) . if sequences of such gates can be connected together with appropriate delays , the set of logic functions that appear at the outputs in figure [ fig.bbm]a is sufficient to build any computer . 
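the logic realized by such a collision can be tabulated directly. in the small python sketch below (the path names are placeholders, since the figure labels are not reproduced in the text), a 1 denotes a ball present in a time slot; the two deflected paths carry a ball only when both inputs do, while each straight-through path carries a ball only when its own input ball was not knocked away.

def interaction_gate(a, b):
    # boolean summary of one billiard-ball collision site
    return {
        'deflected_1': a & b,          # a collision happened
        'deflected_2': a & b,
        'straight_a':  a & (1 - b),    # a passed through undisturbed
        'straight_b':  b & (1 - a),    # b passed through undisturbed
    }

if __name__ == '__main__':
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, interaction_gate(a, b))

with routing, fan-out from the two duplicated and-outputs, and constant streams of balls, these functions give and together with not, which is the sense in which the collision suffices to build any computer.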
in order to guarantee composability of these logic gates , we constrain the initial state of the system .all balls are identical and are started at integer coordinates , with the unit of distance taken to be the diameter of the balls .this spacing is indicated in the figure by showing balls centered in the squares of a grid .all balls move at the same speed in one of four directions : up - right , up - left , down - right , or down - left .the unit of time is chosen so that at integer times , all freely moving balls are again found at integer coordinates .we arrange things so that balls always collide at right angles , as in figure [ fig.bbm]a .such a collision leaves the colliding balls on the grid at the next integer time .figure [ fig.bbm]b shows another allowed collision , in which the balls collide at half - integer times ( shown in gray ) but are still found on the grid at integer times .the signals leaving one collision - gate are routed to other gates using fixed mirrors , as shown in figure [ fig.bbm]c .the mirrors are strategically placed so that balls are always found on the grid at integer times .since zeros are represented by no balls ( i.e. , gaps in streams of balls ) , zeros are routed just as effectively by mirrors as the balls themselves are .finally , in figure [ fig.bbm]d , we show how two signal streams are made to cross without interacting this is needed to allow wires to cross in our logic diagrams . in the collision shown , if two balls come in , one each at and , then two balls come out on the same paths and with the same timing as they would have if they had simply passed straight through .needless to say , if one of the input paths has no ball , a ball on the other path just goes straight through . andif both inputs have no ball , we will certainly not get any balls at the outputs , so the zeros go straight through as well .clearly any computation that is done using the bbm is reversible , since if we were to simultaneously and exactly reverse the velocities of all balls , they would exactly retrace their paths , and either meet and collide or not at each intersection , exactly as they did going forward .even if we do nt actually reverse the velocities , we know that there is enough information in the present state to recover any earlier state , simply because we _ could _ reverse the dynamics .thus we have a classical mechanical system which , viewed at integer time steps , performs a discrete reversible digital process .the digital character of this model depends on more than just starting all balls at integer coordinates .we need to be careful , for example , not to wire two outputs together .this would result in head - on collisions which would not leave the balls on the grid at integer times !miswired logic circuits , in which we use a collision gate backward with the four inputs improperly correlated , would also spoil the digital character of the model . rather than depending on correct logic design to assure the applicability of the digital interpretation, we can imagine that our balls have an interaction potential that causes them to pass through each other without interacting in all cases that would cause problems .this is a bit strange , but it does conserve energy and momentum and is reversible . 
up to four balls , one traveling in each direction , can then occupy the same grid cell as they pass through each other .we can also associate the mirror information with the grid cells , thus completing the bbm as a ca model .unfortunately this is a rather complicated ca with a rather large neighborhood .{elastic - coll } & \includegraphics[height=1.3in]{ebmca - coll } & \includegraphics[height=1.3in]{ebmca - mirrors } & \includegraphics[height=1.3in]{ebmca - cross } \\\mbox{\bf ( a ) } & \mbox{\bf ( b ) } & \mbox{\bf ( c ) } & \mbox{\bf ( d ) } \\ \end{array}\ ] ] the complexity of the bbm as a ca rule can be attributed to the non - locality of the hard - sphere interaction .although the bbm interaction can be softened with the grid correspondingly adjusted this model depends fundamentally upon information interacting at a finite distance .a very simple ca model based on the bbm , the bbmca avoids this non - locality by modeling the front and back edges of each ball , and using a sequence of interactions between edge - particles to simulate a billiard ball collision .this results in a reversible ca with just a 4-bit neighborhood ( including all mirror information ! ) , but this model gives up exact momentum conservation , even in simulating the collision of two billiard balls .in addition to making the bbmca less physical , this loss of conservation makes bbmca logic circuits harder to synchronize than the original bbm . in the bbm ,if we start a column of signals out , all moving up - right or down - right , then they all have the same horizontal component of momentum .if all the mirrors they encounter are horizontal mirrors , this component remains invariant as we pass the signals through any desired sequence of collision `` gates . ''we do nt have to worry about synchronizing signals they all remain in a single column moving uniformly to the right . in the bbmca ,in contrast , simulated balls are delayed whenever they collide with anything . in a bbmca circuit with only horizontal mirrors ( or even without any mirrors ) , the horizontal component of momentum is not conserved , the center of mass does not move with constant horizontal velocity , and appropriate delays must be inserted in order to bring together signals that have gotten out of step .the bbmca has energy conservation , but not momentum conservation .it turns out that it is easy to make a model which is very similar to the bbm , which has the same kind of momentum conservation as the bbm , and which corresponds isomorphically to a simple ca rule .suppose we set things up exactly as we did for the bbm , with balls on a grid , moving so that they stay on the grid , but we change the collision , making the balls very compressible . in figure [ fig.ebm]a, we illustrate the elastic collision of two balls in the resulting soft sphere model ( ssm ) .if the springiness of the balls is just right ( i.e. , we choose an appropriate interaction potential ) , then the balls find themselves back on the grid after the collision . if only one or the other ball comes in , they go straight through .notice that the output paths are labeled exactly as in the bbm model , except that the paths are deflected inwards rather than outwards ( cf .appendix to ) .if we add bbm - style hard - collisions with mirrors , turns that we use in our ssm circuits can also be achieved by soft mirrors placed at slightly different locations . 
]then this model can compute in the same manner as the bbm , with the same kind of momentum conservation aiding synchronization . in figure [ fig.ebm]b ,we have drawn an arrow in each grid cell corresponding to the velocity of the center of a ball at an integer time .the pair of colliding balls is taken to be a single particle , and we also draw an arrow at its center .we ve colored the arrows alternately gray and black , corresponding to successive positions of an incoming pair of logic values .we can now interpret the arrows as describing the dynamics of a simple lattice gas , with the sites of the lattice taken to be the corners of the cells of the grid . in a lattice gas , we alternately move particles and let them interact .in this example , at each lattice site we have room for up to eight particles ( 1 s ) : we can have one particle moving up - right , one down - right , one up - left , one down - left , one right , one left , one up and one down . in the movement step , all up - right particles are simultaneously moved one site up and one site to the right , while all down - right particles are moved down and to the right , etc . after all particles have been moved , we let the particles that have landed at each lattice site interact the interaction at each lattice site is independent of all other lattice sites . in the lattice gaspictured in figure [ fig.ebm]b , we see on the left particles coming in on paths and that are entering two lattice sites ( black arrows ) and the resulting data that leaves those sites ( gray arrows ) .our inferred rule is that single diagonal particles that enter a lattice site come out in the same direction they came in . at the next step ,these gray arrows represent two particles entering a single lattice site .our inferred rule is that when two diagonal particles collide at right angles , they turn into a single particle moving in the direction of the net momentum . now a horizontal black particle enters the next lattice site , and our rule is that it turns back into two diagonal particles . if only one particle had come in , along either or , it would have followed our `` single diagonal particles go straight '' rule , and so single particles would follow the dotted path in the figure. thus our lattice gas exactly duplicates the behavior of the ssm at integer times .{eba - rule } & \includegraphics[height=1.2in]{eba - rule - mirror } & \includegraphics[height=1.2in]{hex - coll } \\\mbox{\bf ( a ) } & \mbox{\bf ( b ) } & \mbox{\bf ( c ) } \\ \end{array}\ ] ] from figure [ fig.ebm]c we can infer the rule with the addition of mirrors . along with particles at each lattice site , we allow the possibility of one of two kinds of mirrors horizontal mirrors and vertical mirrors . if a single particle enters a lattice site occupied only by a mirror , then it is deflected as shown in the diagram .signal crossover takes more mirrors than in the bbm ( figure [ fig.ebm]d ) .our lattice gas rule is summarized in figure [ fig.eba-rule]a . for each caseshown , rotations of the state shown on the left turn into the same rotation of the state shown on the right . 
this is a simple reversible rule , and ( except in the presence of mirrors ) it exactly conserves momentum . we will discuss a version of this model later without mirrors , in which momentum is always conserved . the relationship between the ssm of figure [ fig.ebm]a and a lattice gas can also be obtained by simply shrinking the size of the ssm balls without changing the grid spacing . with the right time - constant for the two - ball impact process , tiny particles would follow the paths indicated in figure [ fig.ebm]b , interacting at grid - corner lattice sites at integer times . the bbm can not be turned into a lattice gas in this manner , because the bbm depends upon the finite extent of the interaction in space , rather than in time . notice that in establishing an isomorphism between the integer - time dynamics of this ssm and a simple lattice gas , we have added the constraint to the ssm that we can not place mirrors at half - integer coordinates , as we did in order to route signals around in the bbm model in figure [ fig.bbm ] . this means , in particular , that we ca nt delay a signal by one time unit as the arrangement of mirrors in figure [ fig.ebm]c would if the spacing between all mirrors were halved . this does nt impair the universality of the model , however , since we can easily guarantee that all signal paths have an even length . to do this , we simply design our ssm circuits with mirrors at half - integer positions and then rescale the circuits by an even factor ( four is convenient ) . then all mirrors land at integer coordinates . the separation of outputs in the collision of figure [ fig.ebm]b can be rescaled by a factor of four by adding two mirrors to cause the two outputs to immediately collide a second time ( as in the bottom image of figure [ fig.ebm]d ) . we will revisit this issue when we discuss mirror - less models in section [ sec.mom ] .

[ figure omitted : ( a ) 3d - plane - coll , ( b ) 3d - diags - coll , ( c ) 3d - hex ]

in figure [ fig.eba-rule]b , we show a mass- and momentum - conserving ssm collision on a triangular lattice , which corresponds to a reversible lattice gas model of computation in exactly the same manner as discussed above . similarly , we can construct ssm s in 3d . in figure [ fig.3d]a , we see a mass and momentum conserving ssm collision using the face - diagonals of the cubes that make up our 3d grid . the resulting particle ( gray ) carries one bit of information about which of two possible planes the face - diagonals that created it resided in . in a corresponding diagram showing colliding spheres ( a 3d version of figure [ fig.ebm]a ) , we would see that this information is carried by the plane along which the spheres are compressed . this model is universal within a single plane of the 3d space , since it is just the 2d square - lattice ssm discussed above . to allow signals to get out of a single plane , mirrors can be applied to diagonal particles to deflect them onto cube - face diagonals outside of their original plane . a slightly simpler 3d scheme is shown in figure [ fig.3d]b . here we only use body and face diagonals , and body diagonals only collide when they are coplanar with a face diagonal . since each face diagonal can only come from one pair of body diagonals , no collision - plane information is carried by face - diagonal particles .
for mirrors , we can restrict ourselves to reflecting each body diagonal into one of the three directions that it could have been deflected into by a collision with another body diagonal . this is an interesting restriction , because it means that we can potentially make a momentum - conserving version of this model without mirrors , using only signals to deflect signals . finally , the scheme shown in figure [ fig.3d]c uses only face diagonals , with the heavier particle traveling half as fast as the particles that collide to produce it . as in figure [ fig.3d]a , the slower particle carries a bit of collision - plane information . to accommodate the slower particles , the lattice needs to be twice as fine as in figures [ fig.3d]a and [ fig.3d]b , but we ve only shown one intermediate lattice site for clarity . noting that three coplanar face - diagonals of a cube form an equilateral triangle , we see that this model , for particles restricted to a single plane , is exactly equivalent to the triangular - lattice model pictured in figure [ fig.eba-rule]b . as in the model pictured in figure [ fig.3d]b , the deflection directions that can be obtained from particle - particle collisions are sufficient for 3d routing , and so this model is also a candidate for mirrorless momentum - conserving computation in three dimensions .

a rather unphysical property of the bbm , as well as of the related soft sphere models we have constructed , is the use of immovable mirrors . if the mirrors moved even a little bit , they would spoil the digital nature of these models . to be perfectly immovable , as we demand , these mirrors must be infinitely massive , which is not very realistic . in this section , we will discuss ssm gases which compute without using mirrors , and hence are perfectly momentum conserving . the issue of computation universality in momentum - conserving lattice gases was discussed in , where it was shown that some 2d lga s of physical interest can compute any logical function . this paper did not , however , address the issue of whether such lga s can be spatially efficient models of computation , reusing spatial resources as ordinary computers do . there is also a new question about the generation of entropy ( undesired information ) which arises in the context of reversible momentum conserving computation models , and which we will address . with mirrors , any reversible function can be computed in the ssm ( or bbm ) without leaving any intermediate results in the computer s memory . is this still true without mirrors , where even the routing of signals requires an interaction with other signals ? we will demonstrate mirrorless momentum - conserving ssm s that are just as efficient spatially as an ssm _ with _ mirrors , and that do nt need to generate any more entropy than an ssm with mirrors . in the process we will illustrate some of the general physical issues involved in efficiently routing signals without mirrors .

[ figure omitted : ( a ) ebca - dirty - mirror , ( b ) ebca - clean - mirror ]

we begin our discussion by replacing a fixed mirror with a constant stream of particles ( ones ) , aimed at the position where we want a signal reflected . this is illustrated in figure [ fig.1mirrors]a . here we show the 2d square - lattice ssm of figure [ fig.ebm]a , with a signal being deflected by the constant stream .
along with the desired reflection of the signal , we also produce two undesired copies of it ( one of them complemented ) . this suggests that perhaps every bend in every signal path will continuously generate undesired information that will have to be removed from the computer . figure [ fig.1mirrors]b shows a more promising deflection . the only thing that has changed is that we have brought in the complement of the signal along with it , and so we now get a 1 coming out the bottom regardless of what the value of the signal was . thus signals that are represented in complementary form ( so - called `` dual - rail '' signals ) can be deflected cleanly . this makes sense , since each signal now carries one unit of momentum regardless of its value , and so the change of momentum in the deflecting mirror stream can now also be independent of the signal value .

[ figure omitted : ( a ) cross - rest , ( b ) cross - free ]

an important use of mirrors in the bbm and in ssm s is to allow signals to cross each other without interacting . while signals can also be made to cross by leaving regular gaps in signal streams and delaying one signal stream relative to the other , this technique requires the use of mirrors to insert compensating delays that resynchronize streams . if we re using streams of balls to act as mirrors , we have a problem when these mirror streams have to cross signals , or even each other . we can deal with this problem by extending the non - interacting portion of our dynamics . in order to make our ssm s unconditionally digital , we already require that balls pass through each other when too many try to pile up in one place . thus it seems natural to also use the presence of extra balls to force signals to cross . the simplest way to do this is to add a rest particle to the model : a particle that does nt move . at a site `` marked '' by a rest particle , signals will simply pass through each other . this is mass and momentum conserving , and is perfectly compatible with continuous classical mechanics . notice that we do nt actually have to change our ssm collision rule to include this extra non - interacting case , since we gave the rule in the form , `` these cases interact , and in all other cases particles go straight . '' figure [ fig.cross]a shows an example of two signal paths crossing over a rest particle ( indicated by a circle ) . figure [ fig.cross]b shows an example of a signal crossover that does nt require a rest particle in the lattice gas version of the ssm . since lga particles only interact at lattice sites , which are the corners of the grid , two signals that cross as in this figure can not interact . such a crossover occurs in figure [ fig.1mirrors]b , for example . without the lga lattice to indicate that no interaction can take place at this site , this crossover would also require a rest particle . to keep the lga and the continuous versions of the model equivalent , we will consider a rest particle to be present implicitly wherever signals cross between lattice sites . with the addition of rest particles to indicate signal crossover , we can use the messy deflection of figure [ fig.1mirrors]a to build reusable circuitry and so perform spatially - efficient computation . the paths of the incoming `` mirror streams '' can cross whatever signals are in their way to get to the point where they are needed , and then the extra undesired `` garbage '' output streams can be led away by allowing them to cross any signals that are in their way .
since every mirror stream ( which brings in energy but no information ) and every garbage stream ( which carries away both energy and entropy ) crosses a surface that encloses the circuit , the number of such streams that we can have is limited by the area of the enclosing surface . meanwhile , the number of circuit elements ( and hence also the demand for mirror and garbage streams ) grows as the volume of the circuit . this is the familiar surface to volume ratio problem that limits heat removal in ordinary heat - generating physical systems : the rate of heat generation is proportional to the volume , but the rate of heat removal is only proportional to the surface area . we have the same kind of problem if we try to bring free energy ( i.e. , energy without information ) into a volume . using dual - rail signalling , we ve seen that we have neat collisions available that do nt corrupt the deflecting mirror streams . we do not , however , avoid the surface to volume problem unless these clean mirror - streams can be reused : otherwise each reflection involves bringing in a mirror stream all the way from outside of the circuit , using it once , and then sending the reflected mirror stream all the way out of the circuit . thus if we ca nt reuse mirror streams , the maximum number of circuit elements we can put into a volume of space grows like the surface area rather than like the volume ! we will show that ( at least in 2d ) mirror streams can be reused , and consequently momentum conservation does nt impair the spatial efficiency of computations .

[ figure omitted : ( a ) ebca - mirror - cboard , ( b ) ebca - mirror2 , ( c ) ebca - mirror2 - shift ]

even though we can reflect dual - rail signals and make them cross , we still have a problem with routing signals ( actually two problems , but we ll discuss the second problem when we confront it ) . figure [ fig.mirror-shift]a illustrates a problem that stems from not being able to reflect signals at half - integer locations . every reflection leaves the top signal on the dark checkerboard we ve drawn : it ca nt connect to an input on the light checkerboard . we can fix this by rescaling the circuit , spreading all signals twice as far apart ( figure [ fig.mirror-shift]b ) . now the implicit crossover in the middle of figure [ fig.mirror-shift]a must be made explicit . notice also that the horizontal particle must be stretched : it too goes straight in the presence of a rest particle . now we can move the reflection to a position that was formerly a half - integer location ( figure [ fig.mirror-shift]c ) , and the signal is deflected onto the white checkerboard .

[ figure omitted : ( a ) ebca - switch - small , ( b ) ebca - switch - on , ( c ) ebca - switch - off ]

we ve seen that dual - rail signals can be cleanly routed . in order to use such signals for computation , we need to be able to build logic with dual - rail inputs and outputs . we will now see that if we let two dual - rail signals collide , we can form a switch - gate , as shown in figure [ fig.switch]a . the switch gate is a universal logic element that leaves the control input unchanged , and routes the controlled input to one of two places , depending on the value of the control .
since each dual rail signal contains a 1 , and since all collisions conserve the number of 1 s , all dual - rail logic gates need an equal number of inputs and outputs . thus our three output switch - gate needs an extra input which is a dual - rail constant of 0 . the switch gate ( figure [ fig.switch]a ) is based on a reflection of the type shown in figure [ fig.1mirrors]b . if the control is 1 ( figure [ fig.switch]b ) , the controlled pair is reflected downward ; if the control is 0 there is no reflection and the pair goes straight . the control signal reflects off the constant - one input as in figure [ fig.1mirrors]a , to regenerate the control and its complement as outputs . notice that if a rest particle were added in figure [ fig.switch]a at the intersection of the control and controlled signals , the switch would be stuck in the _ off _ position : the controlled pair would always go straight through , and the control pair would get reflected by the constant - one , and come out in its normal position . in order to see that momentum conservation does nt impair the spatial efficiency of ssm computation , we first illustrate the issues involved by showing how mirror streams can be reused in an array of fredkin gates .

[ figure omitted : ( a ) ebmca - fgate - symm , ( b ) farray - realistic ]

a fredkin gate has three inputs and three outputs . one input , called the control , appears unchanged as an output . the other two inputs either appear unchanged at corresponding outputs ( if the control is 1 ) , or appear interchanged at the corresponding outputs ( if the control is 0 ) . we construct a fredkin gate out of four switch gates , as shown in figure [ fig.fgate]a . the first two switch gates are used forward , the last two switch gates are used backward ( i.e. , flipped about a vertical axis ) . the control input is colored in solid gray , and we see it wend its way through the four switch gates . constant 1 s are shown using dotted gray arrows . in the case where the control is 0 , all four switch gates pass their controlled signals straight through , and so the two controlled inputs interchange positions in the output . in the case where the control is 1 , all four switch gates deflect their controlled signals , and so the two controlled inputs come out in the same positions they went in . now notice the bilateral symmetry of the fredkin gate implementation . we can make use of this symmetry in constructing an array of fredkin gates that reuse the constant 1 signals . if we add an extra stream of constant 1 s along the four paths drawn as arrowless dotted lines ( making these lie on the lattice involves rescaling the circuit ) , then the set of constant streams coming in or leaving along each of the four diagonal directions is symmetric about some axis . this means that we can make a regular array of fredkin gates and upside - down fredkin gates , as is indicated in figure [ fig.fgate]b , with the constants all lining up . these constants are passed back and forth between adjacent fredkin gates , and so do nt have to be supplied from outside of the array . ( a small code sketch below checks this four - switch - gate composition against the fredkin truth table . )
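as a concreteness check , here is a small python sketch of the switch - gate routing and the four - switch - gate fredkin composition just described , with plain booleans standing in for dual - rail signal pairs . modeling the two backward switch gates as simple `` merge '' functions , and the function names themselves , are illustrative simplifications of ours rather than details taken from the figures .

```python
# switch gate: the control passes through unchanged, and the controlled input
# is routed to one of two outputs depending on the control's value.
def switch_gate(c, x):
    return c, x and c, x and not c        # (control, x routed by c, x routed by not-c)

# backward (inverse) switch gate, used here as a merge of the two routed paths.
def merge_gate(c, p, q):
    return c, p or q

def fredkin(c, a, b):
    """fredkin gate built from two forward and two backward switch gates."""
    c, a1, a0 = switch_gate(c, a)         # a routed by the control
    c, b1, b0 = switch_gate(c, b)         # b routed by the control
    c, out1 = merge_gate(c, a1, b0)       # control 1 -> a ; control 0 -> b
    c, out2 = merge_gate(c, b1, a0)       # control 1 -> b ; control 0 -> a
    return c, out1, out2

# exhaustive check: control 1 passes (a, b) through unchanged, control 0
# interchanges them, and the control itself always appears unchanged.
for c in (False, True):
    for a in (False, True):
        for b in (False, True):
            assert fredkin(c, a, b) == ((c, a, b) if c else (c, b, a))
print("four-switch-gate composition matches the fredkin truth table")
```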
since an upside - down fredkin gate is still a fredkin gate , but with the sense of the control inverted , we have shown that constant streams of ones can be reused in a regular array of logic . we still have not routed the inputs and outputs to the fredkin gates , and so we have another set of associated mirror - streams that need to be reused . the obvious approach is to create a regular pattern of interconnection , thus allowing us to again solve the problem globally by solving it locally . but a regular pattern of interconnected logic elements that can implement universal computation is just a universal ca : we should simply implement a universal ca that does nt have momentum conservation !

[ figure omitted : ( a ) abcd - array , ( b ) f - bbmca - symm ]

the bbmca is a simple reversible ca based on the bbm , with fixed mirrors . it can be implemented as a regular array of identical logic blocks , each of which takes four bits of input , and produces four bits of output ( figure [ fig.bbmca-array]a ) . each logic block exchanges one bit of data with each of the four blocks that are diagonally adjacent . the four bits of input can be thought of as a pattern of data in a 2x2 region of the lattice , and the four outputs are the next state for this region . according to the bbmca rule , certain patterns are turned into each other , while other patterns are left unchanged . this rule can be implemented by a small number of switch gates , as is indicated schematically in figure [ fig.bbmca-array]b . we first implement a demultiplexer , which produces a value of 1 at a given output if and only if a corresponding 2x2 pattern appears in the inputs . patterns that do nt change under the bbmca dynamics only produce 1 s in the outputs labeled `` other . '' the demultiplexer is a combinational circuit ( i.e. , one without feedback ) . the inverse circuit is simply the mirror image of the demultiplexer , obtained by reflecting it about a vertical axis . in between the demultiplexer and its inverse , we wire together the cases that need to interchange . this gives us a bilaterally symmetric circuit which implements the bbmca logic block in the same manner that our circuit of figure [ fig.fgate]a implemented the fredkin gate . note that the overall circuit is its own inverse , as any bilaterally symmetric combinational ssm circuit must be .

[ figure omitted : ( a ) bbmca - block - f , ( b ) lift - symm , ( c ) symm - inputs - nof ]

now we would like to connect these logic blocks in a uniform array . we will first consider the issue of sharing the mirror streams associated with the individual logic blocks , and then the issue of sharing the mirror streams associated with interconnecting the four inputs and outputs . in figure [ fig.bbmca-symm]a we see a schematic representation of our bbmca block . it is a combinational circuit , with signals flowing from left to right . the number of signal streams flowing in along one diagonal direction is equal to the number flowing out along the same direction : this is true overall because it s true of every collision !
in particular , since the four inputs and outputs are already matched in the diagram , the mirror streams must also be matched : there are an equal number of streams of constant 1 s coming in and out along each direction . the input streams will not , however , in general be aligned with the output streams . if we can align these , then we can make a regular array of these blocks , with mirror - stream outputs of one connected to the mirror - stream inputs of the next . in figure [ fig.bbmca-symm]b we show how to align streams of ones . due to the bilateral symmetry of the bbmca circuit , every incoming stream that we would like to shift up or down on one side is matched by an outgoing stream that needs to be shifted identically on the other side . thus we will shift streams in pairs . to understand the diagram , suppose that the two labeled streams are constant streams of ones , one going into a circuit below the diagram , and the other coming out of it . now suppose that we would like to raise both of them to the higher positions labeled in the figure . if a constant stream of horizontal particles is provided midway in between the two vertical positions , then we can accomplish this as shown . the constant horizontal stream splits at the first position without a rest particle . it provides the shifted signal , and a matching stream of ones collides with the original signal . the resulting horizontal stream is routed straight across until it reaches the second shifted position , where an incoming stream of ones is needed . here it splits , with the extra stream of ones colliding with the incoming signal to restore the original horizontal stream of ones , which can be reused in the next block of the array of circuit blocks to perform the same function . the net effect is that the pair of mirror streams coming out of and into the circuit has been replaced with new streams that are shifted vertically . by reserving some fraction of the horizontal channels for horizontal constants that stream across the whole array , and reserving some channels for horizontal constants that connect pairs of streams being raised , we can adjust the positions of the mirror streams as needed . note that a mirror pair can be raised by several smaller shifts rather than just one large shift , in case there are conflicts in the use of horizontal constants . exactly the same arrangement can be used to lower a pair of streams going into and out of a circuit above the diagram . if we flip the diagram over , we see how to handle pairs of streams going in the opposite directions . now we note that the wiring of the four signal inputs and outputs in our bbmca array also has bilateral symmetry , about a horizontal axis ( figure [ fig.bbmca-symm]c ) . thus it seems that we should be able to apply the same technique to align the mirror streams associated with this routing , in order to complete our construction . but there is a problem .

[ figure omitted : ( a ) ebca - mirror - problem , ( b ) eba - delay - rule , ( c ) ebca - mirror - fix - delay ]

so far , we have only constructed circuits without feedback : all signal flow has been left to right . because of the rotational symmetry of the ssm , we might expect that feedback is nt a problem . when we decided to use dual - rail signalling , however , we broke this symmetry . the timing of the dual rail signal pairs is aligned vertically and not horizontally .
in figure[ fig.problem-delay]a , we see the problem that we encounter when we try to reflect a right - moving signal back to the left .a signal that passed the input position labeled at an even time - step collides with an unrelated signal that passed input at an odd time - step .these two signals need to be complements of each other in order to reconstitute the reflecting mirror stream .thus we only know how to reflect signals vertically , not horizontally !we will discuss two ways of fixing this problem .both involve using additional collisions in the ssm .the first method we describe is more complicated , since it adds additional particles and velocities to the model , but is more obvious .the idea is that we can resynchronize dual - rail pairs by delaying one signal .we do this by introducing an interacting rest particle ( distinct from our previously introduced non - interacting rest particle ) with the same mass as our diagonally - moving particles .the picture we have in mind is that if there is an interacting rest particle in the path of a diagonally - moving particle , then we can have a collision in which the moving particle ends up stationary , and the stationary particle ends up moving . during the finite intervalwhile the particles are colliding , the mass is doubled and so ( from momentum conservation ) the velocity is halved . by picking the head - on impact interval appropriately , the new stationary particle can be deposited on a lattice site , so that the model remains digital .this is illustrated in figure [ fig.problem-delay]b .here the square block indicates the interacting rest particle .this is picked up in a collision with a diagonal - moving particle to produce a half - speed double - mass particle indicated by the short arrow .note that adding this delay collision requires us to make our lattice twice as fine to accommodate the slower diagonal particles .it adds five new particle - states to our model ( four directions for the slow particle , and an interacting rest particle ) .the model remains , however , physically `` realistic '' and momentum conserving .figure [ fig.problem-delay]c illustrates the use of this delay to reflect a rightgoing signal leftwards .we insert a delay in the path both before the mirror - stream collision and afterward , in order to turn the plane of synchronization , turning it at a time .notice that we use a non - interacting rest particle ( round ) to extend the lifetime of the half - speed diagonal particle .in addition to complicating the model , this delay technique adds an extra complication to showing that momentum conservation does nt impair spatial efficiency .signals are delayed by picking up and later depositing a rest particle . in order to reuse circuitry, we must include a mechanism for putting the rest particle back where it started before the next signal comes through . since we can pick this particle up from any direction, this should be possible by using a succession of constant streams coming from various directions , but these streams must also be reused .we wo nt try to show that this can be done here we will pursue an easier course in the next section. 
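a tiny numeric sketch of the momentum bookkeeping for the delay collision just described follows ; the unit mass and unit speed are illustrative placeholders , not lattice constants from the model .

```python
# pickup / transit / deposit of an interacting rest particle: a moving particle
# of mass m hits an equal-mass rest particle, the merged pair travels at half
# speed, and finally one particle is left at rest while the other moves on.
m, v = 1.0, 1.0

p_before = m * v              # one particle moving, one at rest
p_during = (2 * m) * (v / 2)  # doubled mass, halved velocity while colliding
p_after  = m * v              # the formerly resting particle carries the motion

assert p_before == p_during == p_after
print("momentum is conserved through pickup, transit and deposit")
```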
it would be simpler if the moving particle was deposited at the same position that the particle it hit came from , so that no cleanup was needed . unfortunately , this necessarily results in no delay . since the velocity of the center of mass of the two particles is constant , if we end up with a stationary particle where we started , the other particle must exactly take the place of the one that stopped .

[ figure omitted : ( a ) ebma - rule - a , ( b ) ebma - rule - b , ( c ) ebma - rule - c ]

we can complete our construction without adding any extra particles or velocities to the model . instead , we simply add some cases to the ssm in which our normal soft - sphere collisions happen even when there are extra particles nearby . the cases we will need are shown in figure [ fig.extra-collisions ] . in this diagram , we show each forward collision in one particular orientation : collisions in other orientations and the corresponding inverse collisions also apply . the first case is the ssm collision with nothing else around . the second case is a forward and backward collision simultaneously : this will let us bounce signals back the way they came . the third case has at least two spectators , and possibly a third ( indicated by a dotted arrow ) . the collision proceeds normally , and all spectators pass straight through . this case will allow us to separate forward and backward moving signals . as usual , all other cases go straight . in particular , we will depend for the first time on head - on colliding particles going straight . we have not used any head - on collisions in our circuits thus far , and so we are free to define their behavior here .

[ figure omitted : ( a ) ebca - mirror - fix - reverse , ( b ) ebca - oneway - mirror , ( c ) ebca - otherway - mirror ]

figure [ fig.problem-reverse]a shows how two complementary sets of dual - rail signals can be reflected back the way they came . we show the signals up to the moment where they come to a point where they collide with the two constant streams . in the case where the signal value is 1 , we have four diagonal signals colliding at a point , and so everything goes straight through . in particular , the constant streams have particles going in both directions ( passing through each other ) , and the signal particles go back up the paths without interacting with oncoming signals . in the case where the signal value is 0 , we use our new `` both - directions '' collision , which again sends all particles back the way they came . thus we have succeeded in reversing the direction of a signal stream . figure [ fig.problem-reverse]b shows a mirror with all signals moving from left to right . we ve added in vertical constant - streams in two places , which do nt affect the operation of the `` mirror . '' these paths have a continual stream of particles in both the up and down directions , and so these particles all go straight ( head - on collisions ) .
in figure [ fig.problem-reverse]c , we ve just shown signals coming into this mirror backward ( with the forward paths drawn in lightly ) . this mirror does nt reflect these backward - going signals , and so they go straight through . the vertical constants were needed to break the symmetry , so that it s unambiguous which signals should interact . this separation uses the extra spectator - particle cases added to our rule in figure [ fig.extra-collisions]c . as we will discuss , in a triangular - lattice ssm the separation at mirrors does nt require any vertical constants at the mirrors ( see section [ sec.hex ] ) .

[ figure omitted : ( a ) circuit , ( b ) circuit - flipped , ( c ) ebca - switch - or , ( d ) plus - flip ]

[ figure omitted : ( a ) hex - min - rule , ( b ) hex - mirror , ( c ) hex - switch , ( d ) hex - reverse ]

finally , figure [ fig.upside-down ] shows how we can arrange to always have two complementary dual - rail pairs collide whenever we need to send a signal backward . figure [ fig.upside-down]a shows an ssm circuit with some number of dual - rail pairs . in each pair , the signals are synchronized vertically , with the uncomplemented signal on top . figure [ fig.upside-down]b shows the same gate flipped vertically . the collisions that implement the circuit work perfectly well upside - down , but both the inputs and the outputs are complemented by this inversion . for example , in figure [ fig.upside-down]c , we have turned a switch - gate upside down . if we relabel inputs and outputs in conventional order , then we see that this gate performs a logical or where the original gate performed an and . in figure [ fig.upside-down]d , we take our bbmca logic block of figure [ fig.bbmca-array]b and add a vertically reflected copy . this pair of circuits , taken together , has both vertical and horizontal symmetry . given quad - rail inputs ( dual rail inputs along with their dual - rail complements ) , it produces corresponding quad - rail outputs , which can be reflected backward using the collision of figure [ fig.problem-reverse]a , and separated at mirrors , as shown in figure [ fig.problem-reverse]c . now note that the constant - lifting technique of figure [ fig.bbmca-symm]b works equally well even if all of the constant streams have 1 s flowing in both directions simultaneously , by virtue of the bidirectional collision case of figure [ fig.extra-collisions]b . thus we are able to apply the constant - symmetrizing technique to mirror streams that connect the four signals between our bbmca logic blocks ( figure [ fig.bbmca-symm]c ) , and complete our construction . all of this works equally well for an ssm on the triangular lattice , and is even slightly simpler , since we do nt need to add extra constant streams at mirrors where forward and backward moving signals separate ( as we did in figure [ fig.problem-reverse]c ) . the complete rule is given in figure [ fig.hex-version]a : the dotted arrow indicates a position where an extra `` spectator '' particle may or may not come in . if present , it passes straight through and does nt interfere with the collision .
in figures [ fig.hex-version]b and [ fig.hex-version]c , we see how mirrors and switch - gates ( and similarly any other square - lattice ssm combinational circuit ) can simply be stretched vertically to fit onto the triangular lattice . a back - reflection , where signals are sent back the way they came , is shown in figure [ fig.hex-version]d . this of course also means that the corresponding 3d model ( figure [ fig.3d]c ) can perform efficient momentum - conserving computation , at least in a single plane . if we have a dual - rail pair in one plane of this lattice , and its dual - rail complement directly below it in a parallel plane , this combination can be deflected cleanly in either of two planes by a pair of constant mirror - streams . thus it seems plausible that this kind of discussion may be generalized to three dimensions , but we wo nt pursue that here .

we have presented examples of reversible lattice gases that support universal computation and that can be interpreted as a discrete - time sampling of the classical - mechanical dynamics of compressible balls . we would like to present here an alternative interpretation of the same models as a discrete - time sampling of relativistic classical mechanics , in which kinetic energy is converted by collisions into rest mass and then back into kinetic energy . for a relativistic collision of some set of particles , both relativistic energy and relativistic momentum are conserved , and so $\sum_i e_i = \sum_i e_i'$ and $\sum_i \vec{p}_i = \sum_i \vec{p}_i'$ , where the unprimed quantities are the values for each particle before the collision , and the primed quantities are after the collision . these equations are true regardless of whether the various particles involved in the collision are massive or massless . now we note that for _ any mass and momentum conserving lattice gas _ the corresponding sums of `` mass '' and of momentum are unchanged by every collision , and so we need only reinterpret what is normally called `` mass '' in these models as relativistic energy in order to interpret the collisions in such a lattice gas as being relativistic . if all collisions are relativistically conservative , then the overall dynamics exactly conserves relativistic energy and momentum , regardless of the frame of reference in which the system is analyzed . normal non - relativistic systems have separate conservations of mass and non - relativistic energy , a property that the collisions in most momentum - conserving lattice gases lack . thus we might argue that the relativistic interpretation is more natural in general . in the collision of figure [ fig.ebm]b , for example , we might call the incoming pair of particles `` photons , '' each with unit energy and unit speed . the two photons collide , and the vertical components of their momenta cancel , producing a slower moving ( speed $1/\sqrt{2}$ ) massive particle ( mass $\sqrt{2}$ ) with energy 2 and with the same horizontal component of momentum as the original pair . after one step , the massive particle decays back into two photons . at each step , relativistic energy and momentum are conserved .

as is discussed elsewhere , macroscopic relativistic invariance could be a key ingredient in constructing ca models with more of the macroscopic richness that nature has . if a ca had macroscopic relativistic invariance , then every complex macroscopic structure ( assuming there were any ! ) could be set in motion , since the macroscopic dynamical laws would be independent of the state of motion . thus complex macroscopic structures could move around and interact and recombine .
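as a quick sanity check of the relativistic bookkeeping for the collision of figure [ fig.ebm]b discussed above , the short python snippet below recomputes the merged particle 's mass and speed from the two unit - energy , unit - speed photons , in units where the speed of light is 1 ; the variable names are ours .

```python
import math

# incoming "photons": energy 1, momentum of magnitude 1, moving diagonally
# up-right and down-right, so their vertical momenta cancel.
p1 = (1 / math.sqrt(2),  1 / math.sqrt(2))
p2 = (1 / math.sqrt(2), -1 / math.sqrt(2))
energy_in   = 1.0 + 1.0
momentum_in = (p1[0] + p2[0], p1[1] + p2[1])

# the merged particle inherits the total energy and momentum.
energy   = energy_in
momentum = math.hypot(*momentum_in)
mass  = math.sqrt(energy**2 - momentum**2)   # sqrt(4 - 2) = sqrt(2)
speed = momentum / energy                    # sqrt(2)/2, slower than the photons

assert math.isclose(mass, math.sqrt(2)) and math.isclose(speed, 1 / math.sqrt(2))
print(f"merged particle: mass {mass:.3f}, speed {speed:.3f}, energy {energy}")
```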
any system with macroscopic relativistic symmetry is guaranteed to also have the relativistic conservations of energy and momentum that go along with it . as fredkin has pointed out , a natural approach to achieving macroscopic symmetries in ca s is to start by putting the associated microscopic conservations directly into the ca rule : we certainly ca nt put the continuous symmetries there ! momentum and mass conserving lga models effectively do this . of course , merely reinterpreting the microscopic dynamics of lattice gases relativistically does nt make their macroscopic dynamics any richer . one additional microscopic property that we can look for is the ability to perform computation , using space as efficiently as is possible : this enables a system to support the highest possible level of complexity in a finite region . microscopically , ssm gases have both a relativistic interpretation and spatial efficiency for computation . what we would really like is a dynamics in which both of these properties persist at the macroscopic scale .

[ figure omitted : ( a ) xrel - rule , ( b ) xrel - 01 , ( c ) xrel - 11 , ( d ) xrel - mirror ]

if we are trying to achieve macroscopic relativistic invariance along with efficient macroscopic computational capability , we can see that one potential problem in our `` bounce - back '' ssm gases ( figures [ fig.extra-collisions ] and [ fig.hex-version ] ) is a defect in their discrete rotational symmetry . dual - rail pairs of signals aligned in one orientation ca nt easily interact with dual - rail pairs that are aligned in a 60 degree ( triangular lattice ) or 90 degree ( square lattice ) rotated orientation . if this causes a problem macroscopically , we can always try adding individual signal delays to the model , as in figure [ fig.problem-delay]b . this may have macroscopic problems as well , however , since turning signals with the correct timing requires several correlated interactions . of course the reason we adopted dual - rail signalling to begin with was to avoid mixing logic - value information with signal momentum : every dual - rail signal has unit momentum and can be reflected without `` measuring '' the logic value . perhaps we should simply decouple these two quantities at the level of the individual particle , and use some other degree of freedom ( other than presence or absence of a particle ) to encode the logic state ( e.g. , angular momentum ) . an example of a model which decouples logic values and momentum is given in figure [ fig.rel-example ] . in figure [ fig.rel-example]a , we define a rule which involves three kinds of interacting particles . figures [ fig.rel-example]b and [ fig.rel-example]c show how an ssm style collision - gate can be realized , using one kind of particle to represent an intermediate state . single ones go straight , whereas pairs of ones are displaced inwards . both ones and zeros are deflected by the wavy `` mirror '' particles , which can play the role of the mirror - streams in our earlier constructions .
deflecting a binary signal conserves momentum without recourse to dual rail logic , and without contaminating the mirror - stream .adding rest particles to this model allows signals to cross ( since the rule is , `` in all other cases particles do nt interact '' ) .models similar to this `` proto - ssm '' would be interesting to investigate on other lattices , in both 2d and 3d .the use of rest particles to allow signals to cross in this and earlier rules raises another issue connected with the macroscopic limit .if we want to support complicated macroscopic moving structures that contain rest particles , we have to have the rest particles move along with them ! ( or perhaps use moving signals to indicate crossings . )if we want to make rest particles `` move , '' they ca nt be completely non - interacting .thus we might want to extend the dynamics so that rest particles can both be created and destroyed .this could be done by redefining some of the non - interacting collision cases that have not been used in our constructions we have actually used very few of these cases .these collisions would be different from the springy collision of figure [ fig.ebm]a .even a single - particle colliding with a rest particle can move it ( as in figure [ fig.problem-delay]b for example ) .these are all issues that can be approached both theoretically , and by studying large - scale simulations .the term _ semi - classical _ has been applied to analyses in which a classical physics model can be used to reproduce properties of a physical system that are fundamentally quantum mechanical .since the finite and extensive character of entropy ( information ) in a finite physical system is such a property , all ca models can in a sense be considered semi - classical .it is interesting to ask what other aspects of quantum dynamics can be captured in classical ca models .one such aspect is the relationship in quantum systems between energy and maximum rate of state change .a quantum system takes a finite amount of time to evolve from a given state to a different state ( i.e. , a state that is quantum mechanically orthogonal ) .there is a simple relationship between the energy of a quantum system in the classical limit and the maximum rate at which the system can pass through a succession of distinct ( mutually orthogonal ) quantum states .this rate depends only on how much energy the system has .suppose that the quantum mechanical average energy ( which is the energy that appears in the classical equations of motion ) is measured relative to the system s ground - state energy , and in units where planck s constant is one .then the maximum number of distinct changes that can occur in the system per unit of time is simply , and this bound is always achieved by some state .now suppose we have an energy - conserving lga started in a state with total energy , where is much less than the maximum possible energy that we can fit onto the lattice .suppose also that the smallest quantity of energy that moves around in the lga dynamics is a particle with energy `` one . '' then with the given energy , the maximum number of spots that can possibly change on the lattice in one time - step is ( just as in the quantum case ) : smallest energy particles can each leave one spot and move to another , each causing two changes if none of them lands on a spot that was just vacated by another particle . 
since the minimum value of is 1 in this dynamics , andthe minimum value of is 1 since this is our integer unit of time , it is consistent to think of this as a system in which the minimum value of is 1 ( which for a quantum system would mean ) .thus simple lga s such as the ssm gases reproduce the quantum limit in terms of their maximum rate of dynamical change .this kind of property is interesting in a physical model of computation , since simple models that accurately reflect real physical limits allow us to ask rather sharp questions about quantifying the physical resources required by various algorithms ( cf .we have described soft sphere models of computation , a class of reversible and computation - universal lattice gases which correspond to a discrete - time sampling of continuous classical mechanical systems .we have described models in both 2d and 3d that use immovable mirrors , and provided a technique for making related models without immovable mirrors that are exactly momentum - conserving while preserving their universality and spatial efficiency . in the context of the 2d momentum - conserving models, we have shown that it is possible to avoid entropy generation associated with routing signals .for all of the momentum conserving models we have provided both a non - relativistic and a relativistic interpretation of the microscopic dynamics .the same relativistic interpretation applies generally to mass and momentum conserving lattice gases .we have also provided a semi - classical interpretation under which these models give the correct physical bound on maximum computation rate .it is easy to show that reversible lga s can all be turned into quantum dynamics which reproduce the lga state at integer times .thus ssm gases can be interpreted not only as both relativistic and non - relativistic systems , but also as both classical and as quantum systems . in all cases ,the models are digital at integer times , and so provide a link between continuous physics and the dynamics of digital information in all of these domains , and perhaps also a bridge linking informational concepts between these domains .this work was supported by darpa under contract number dabt63 - 95-c-0130 .this work was stimulated by the sfi constructive ca workshop .
_ fredkin s billiard ball model ( bbm ) is a continuous classical mechanical model of computation based on the elastic collisions of identical finite - diameter hard spheres . when the bbm is initialized appropriately , the sequence of states that appear at successive integer time - steps is equivalent to a discrete digital dynamics . _ here we discuss some models of computation that are based on the elastic collisions of identical finite - diameter _ soft _ spheres : spheres which are very compressible and hence take an appreciable amount of time to bounce off each other . because of this extended impact period , these soft sphere models ( ssm s ) correspond directly to simple lattice gas automata unlike the fast - impact bbm . successive time - steps of an ssm lattice gas dynamics can be viewed as integer - time snapshots of a continuous physical dynamics with a finite - range soft - potential interaction . we present both 2d and 3d models of universal ca s of this type , and then discuss spatially - efficient computation using momentum conserving versions of these models ( i.e. , without fixed mirrors ) . finally , we discuss the interpretation of these models as relativistic and as semi - classical systems , and extensions of these models motivated by these interpretations .
as neural networks are applied to increasingly complex tasks , they are often trained to meet end - to - end objectives that go beyond simple functional specifications .these objectives include , for example , generating realistic images ( e.g. , ) and solving multiagent problems ( e.g. , ) . advancing these lines of work , we show that neural networks can learn to protect their communications in order to satisfy a policy specified in terms of an adversary .cryptography is broadly concerned with algorithms and protocols that ensure the secrecy and integrity of information .cryptographic mechanisms are typically described as programs or turing machines .attackers are also described in those terms , with bounds on their complexity ( e.g. , limited to polynomial time ) and on their chances of success ( e.g. , limited to a negligible probability ) .a mechanism is deemed secure if it achieves its goal against all attackers .for instance , an encryption algorithm is said to be secure if no attacker can extract information about plaintexts from ciphertexts .modern cryptography provides rigorous versions of such definitions . adversaries also play important roles in the design and training of neural networks .they arise , in particular , in work on adversarial examples and on generative adversarial networks ( gans ) . in this latter context , the adversaries are neural networks ( rather than turing machines ) that attempt to determine whether a sample value was generated by a model or drawn from a given data distribution . furthermore, in contrast with definitions in cryptography , practical approaches to training gans do not consider all possible adversaries in a class , but rather one or a small number of adversaries that are optimized by training .we build on these ideas in our work .neural networks are generally not meant to be great at cryptography .famously , the simplest neural networks can not even compute xor , which is basic to many cryptographic algorithms .nevertheless , as we demonstrate , neural networks can learn to protect the confidentiality of their data from other neural networks : they discover forms of encryption and decryption , without being taught specific algorithms for these purposes . knowing how to encryptis seldom enough for security and privacy .interestingly , neural networks can also learn _ what _ to encrypt in order to achieve a desired secrecy property while maximizing utility .thus , when we wish to prevent an adversary from seeing a fragment of a plaintext , or from estimating a function of the plaintext , encryption can be selective , hiding the plaintext only partly .the resulting cryptosystems are generated automatically . in this respect, our work resembles recent research on automatic synthesis of cryptosystems , with tools such as zoocrypt , and contrasts with most of the literature , where hand - crafted cryptosystems are the norm .zoocrypt relies on symbolic theorem - proving , rather than neural networks .classical cryptography , and tools such as zoocrypt , typically provide a higher level of transparency and assurance than we would expect by our methods .our model of the adversary , which avoids quantification , results in much weaker guarantees . 
on the other hand , it is refreshingly simple , and it may sometimes be appropriate .consider , for example , a neural network with several components , and suppose that we wish to guarantee that one of the components does not rely on some aspect of the input data , perhaps because of concerns about privacy or discrimination .neural networks are notoriously difficult to explain , so it may be hard to characterize how the component functions . a simple solution is to treat the component as an adversary , and to apply encryption so that it does not have access to the information that it should not use . in this respect ,the present work follows the recent research on fair representations , which can hide or remove sensitive information , but goes beyond that work by allowing for the possibility of decryption , which supports richer dataflow structures. classical cryptography may be able to support some applications along these lines .in particular , homomorphic encryption enables inference on encrypted data . on the other hand , classical cryptographic functionsare generally not differentiable , so they are at odds with training by stochastic gradient descent ( sgd ) , the main optimization technique for deep neural networks . therefore , we would have trouble learning _ what _ to encrypt , even if we know how to encrypt .integrating classical cryptographic functions and , more generally , integrating other known functions and relations ( e.g. , )into neural networks remains a fascinating problem .prior work at the intersection of machine learning and cryptography has focused on the generation and establishment of cryptographic keys , and on corresponding attacks . in contrast , our work takes these keys for granted , and focuses on their use ; a crucial , new element in our work is the reliance on adversarial goals and training .more broadly , from the perspective of machine learning , our work relates to the application of neural networks to multiagent tasks , mentioned above , and to the vibrant research on generative models and on adversarial training ( e.g. , ) . 
from the perspective of cryptography , it relates to big themes such as privacy and discrimination .while we embrace a playful , exploratory approach , we do so with the hope that it will provide insights useful for further work on these topics .section [ sec : sharedkey ] presents our approach to learning symmetric encryption ( that is , shared - key encryption , in which the same keys are used for encryption and for decryption ) and our corresponding results .appendix [ sec : publickey ] explains how the same concepts apply to asymmetric encryption ( that is , public - key encryption , in which different keys are used for encryption and for decryption ) .section [ sec : appexperiments ] considers selective protection .section [ sec : conclusion ] concludes and suggests avenues for further research .appendix [ sec : background ] is a brief review of background on neural networks .this section discusses how to protect the confidentiality of plaintexts using shared keys .it describes the organization of the system that we consider , and the objectives of the participants in this system .it also explains the training of these participants , defines their architecture , and presents experiments .a classic scenario in security involves three parties : alice , bob , and eve .typically , alice and bob wish to communicate securely , and eve wishes to eavesdrop on their communications .thus , the desired security property is secrecy ( not integrity ) , and the adversary is a `` passive attacker '' that can intercept communications but that is otherwise quite limited : it can not initiate sessions , inject messages , or modify messages in transit .we start with a particularly simple instance of this scenario , depicted in figure [ fig : symm ] , in which alice wishes to send a single confidential message to bob .the message is an input to alice .when alice processes this input , it produces an output .( `` '' stands for `` plaintext '' and `` '' stands for `` ciphertext '' . )both bob and eve receive , process it , and attempt to recover .we represent what they compute by and , respectively .alice and bob have an advantage over eve : they share a secret key .we treat as an additional input to alice and bob .we assume one fresh key per plaintext , but , at least at this abstract level , we do not impose that and have the same length . for us ,alice , bob , and eve are all neural networks .we describe their structures in sections [ sec : nnarchitecture ] and [ sec : sharedkeyexperiments ] .they each have parameters , which we write , , and , respectively .since and need not be equal , encryption and decryption need not be the same function even if alice and bob have the same structure . as is common for neural networks , alice , bob , and eve work over tuples of floating - point numbers , rather than sequences of bits .in other words , , , , , and are all tuples of floating - point numbers .note that , with this formulation , , , and may consist of arbitrary floating - point numbers even if and consist of 0s and 1s . 
in practice ,our implementations constrain these values to the range , but permit the intermediate values .we have explored alternatives ( based on williams reinforce algorithm or on foerster et al.s discretization technique ) , but omit them as they are not essential to our main points .this set - up , although rudimentary , suffices for basic schemes , in particular allowing for the possibility that alice and bob decide to rely on as a one - time pad , performing encryption and decryption simply by xoring the key with the plaintext and the ciphertext , respectively .however , we do not require that alice and bob function in this way and indeed , in our experiments in section [ sec : sharedkeyexperiments ] , they discover other schemes . for simplicity, we ignore the process of generating a key from a seed .we also omit the use of randomness for probabilistic encryption .such enhancements may be the subject of further work . informally , the objectives of the participants are as follows .eve s goal is simple : to reconstruct accurately ( in other words , to minimize the error between and ) .alice and bob want to communicate clearly ( to minimize the error between and ) , but also to hide their communication from eve .note that , in line with modern cryptographic definitions ( e.g. , ) , we do not require that the ciphertext `` look random '' to eve .a ciphertext may even contain obvious metadata that identifies it as such .therefore , it is not a goal for eve to distinguish from a random value drawn from some distribution . in this respect ,eve s objectives contrast with common ones for the adversaries of gans . on the other hand, one could try to reformulate eve s goal in terms of distinguishing the ciphertexts constructed from two different plaintexts . given these objectives , instead of training each of alice and bob separately to implement some known cryptosystem ,we train alice and bob jointly to communicate successfully and to defeat eve without a pre - specified notion of what cryptosystem they may discover for this purpose .much as in the definitions of gans , we would like alice and bob to defeat the best possible version of eve , rather than a fixed eve .of course , alice and bob may not win for every plaintext and every key , since knowledge of some particular plaintexts and keys may be hardwired into eve .( for instance , eve could always output the same plaintext , and be right at least once . )therefore , we assume a distribution on plaintexts and keys , and phrase our goals for alice and bob in terms of expected values . 
we write for alice s output on input , write for bob s output on input , and write for eve s output on input .we introduce a distance function on plaintexts .although the exact choice of this function is probably not crucial , for concreteness we take the l1 distance where is the length of plaintexts .we define a per - example loss function for eve : intuitively , represents how much eve is wrong when the plaintext is and the key is .we also define a loss function for eve over the distribution on plaintexts and keys by taking an expected value : we obtain the `` optimal eve '' by minimizing this loss : similarly , we define a per - example reconstruction error for bob , and extend it to the distribution on plaintexts and keys : l_b(\theta_a,\theta_b ) = { { \mathbb{e}}}_{p , k}(d(p , b(\theta_b , a(\theta_a , p , k),k ) ) ) \end{array}\ ] ] we define a loss function for alice and bob by combining and the optimal value of : this combination reflects that alice and bob want to minimize bob s reconstruction error and to maximize the reconstruction error of the `` optimal eve '' .the use of a simple subtraction is somewhat arbitrary ; below we describe useful variants .we obtain the `` optimal alice and bob '' by minimizing : we write `` optimal '' in quotes because there need be no single global minimum . in general , there are many equi - optimal solutions for alice and bob . as a simple example , assuming that the key is of the same size as the plaintext and the ciphertext , alice and bob may xor the plaintext and the ciphertext , respectively , with any permutation of the key , and all permutations are equally good as long as alice and bob use the same one ; moreover , with the way we architect our networks ( see section [ sec : nnarchitecture ] ) , all permutations are equally likely to arise .training begins with the alice and bob networks initialized randomly .the goal of training is to go from that state to , or close to .we explain the training process next. our training method is based upon sgd . in practice , much as in work on gans , our training method cuts a few corners and incorporates a few improvements with respect to the high - level description of objectives of section [ sec : objectives ] .we present these refinements next , and give further details in section [ sec : sharedkeyexperiments ] .first , the training relies on estimated values calculated over `` minibatches '' of hundreds or thousands of examples , rather than on expected values over a distribution .we do not compute the `` optimal eve '' for a given value of , but simply approximate it , alternating the training of eve with that of alice and bob . intuitively , the training may for example proceed roughly as follows .alice may initially produce ciphertexts that neither bob nor eve understand at all . by training fora few steps , alice and bob may discover a way to communicate that allows bob to decrypt alice s ciphertexts at least partly , but which is not understood by ( the present version of ) eve .in particular , alice and bob may discover some trivial transformations , akin to rot13 .after a bit of training , however , eve may start to break this code .with some more training , alice and bob may discover refinements , in particular codes that exploit the key material better .eve eventually finds it impossible to adjust to those codes .this kind of alternation is typical of games ; the theory of continuous games includes results about convergence to equilibria ( e.g. 
, ) which it might be possible to apply in our setting .furthermore , in the training of alice and bob , we do not attempt to maximize eve s reconstruction error . if we did , and made eve completely wrong , then eve could be completely right in the next iteration by simply flipping all output bits ! a more realistic and useful goal for alice and bobis , generally , to minimize the mutual information between eve s guess and the real plaintext . in the case of symmetric encryption, this goal equates to making eve produce answers indistinguishable from a random guess .this approach is somewhat analogous to methods that aim to prevent overtraining gans on the current adversary ( * ? ? ?* section 3.1 ) . additionally , we can tweak the loss functions so that they do not give much importance to eve being a little lucky or to bob making small errors that standard error - correction could easily address .finally , once we stop training alice and bob , and they have picked their cryptosystem , we validate that they work as intended by training many instances of eve that attempt to break the cryptosystem .some of these instances may be derived from earlier phases in the training .[ [ the - architecture - of - alice - bob - and - eve ] ] the architecture of alice , bob , and eve + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + because we wish to explore whether a general neural network can learn to communicate securely , rather than to engineer a particular method , we aimed to create a neural network architecture that was _ sufficient _ to learn mixing functions such as xor , but that did not strongly encode the form of any particular algorithm . to this end , we chose the following `` mix & transform '' architecture .it has a first fully - connected ( fc ) layer , where the number of outputs is equal to the number of inputs .the plaintext and key bits are fed into this fc layer . because each output bit can be a linear combination of all of the input bits, this layer enables but does not mandate mixing between the key and the plaintext bits .in particular , this layer can permute the bits .the fc layer is followed by a sequence of convolutional layers , the last of which produces an output of a size suitable for a plaintext or ciphertext .these convolutional layers learn to apply some function to groups of the bits mixed by the previous layer , without an a priori specification of what that function should be .notably , the opposite order ( convolutional followed by fc ) is much more common in image - processing applications .neural networks developed for those applications frequently use convolutions to take advantage of spatial locality . for neural cryptography , we specifically wanted locality i.e . , which bits to combine to be a _ learned _property , instead of a pre - specified one .while it would certainly work to manually pair each input plaintext bit with a corresponding key bit , we felt that doing so would be uninteresting .we refrain from imposing further constraints that would simplify the problem . for example, we do not tie the parameters and , as we would if we had in mind that alice and bob should both learn the same function , such as xor . 
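Before turning to the concrete experiments, the objectives described above can be written down compactly. The sketch below is a minimal NumPy rendering of the per-example quantities (the L1 distance, Eve's and Bob's reconstruction errors, and the combined Alice/Bob objective formed by subtracting Eve's loss); the stand-in networks at the end are placeholders added for illustration, not the trained models, and the quadratic variant of the Eve term used in practice is described later.

```python
import numpy as np

def l1_distance(p_true, p_guess):
    """Per-example L1 distance between a plaintext and a reconstruction."""
    return np.sum(np.abs(p_true - p_guess))

def eve_loss(p, c, eve_net):
    """How wrong Eve is on one (plaintext, ciphertext) example."""
    return l1_distance(p, eve_net(c))

def bob_loss(p, k, alice_net, bob_net):
    """Bob's reconstruction error on one (plaintext, key) example."""
    c = alice_net(p, k)
    return l1_distance(p, bob_net(c, k))

def alice_bob_loss(p, k, alice_net, bob_net, eve_net):
    """Combined objective: communicate with Bob, defeat (the current) Eve.

    The subtraction mirrors the L_B - L_E combination in the text; during
    training Eve is only approximately optimal, and Alice/Bob updates
    alternate with Eve updates over minibatches.
    """
    c = alice_net(p, k)
    return l1_distance(p, bob_net(c, k)) - l1_distance(p, eve_net(c))

# Stand-in "networks" for illustration only.
fake_alice = lambda p, k: (p + k) % 2       # placeholder cipher (XOR)
fake_bob = lambda c, k: (c + k) % 2         # placeholder decryption
fake_eve = lambda c: np.zeros_like(c)       # placeholder guess

p = np.array([0, 1, 1, 0]); k = np.array([1, 1, 0, 0])
print(bob_loss(p, k, fake_alice, fake_bob),
      alice_bob_loss(p, k, fake_alice, fake_bob, fake_eve))
```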
as a proof - of - concept , we implemented alice , bob , and eve networks that take -bit random plaintext and key values , and produce -entry floating - point ciphertexts , for , , and .both plaintext and key values are uniformly distributed .keys are not deliberately reused , but may reoccur because of random selection .( the experiments in section [ sec : appexperiments ] consider more interesting distributions and also allow plaintext and key values to have different sizes . )we implemented our experiments in tensorflow .we ran them on a workstation with one gpu ; the specific computation platform does not affect the outcome of the experiments .we plan to release the source code for the experiments .[ [ network - structure ] ] network structure + + + + + + + + + + + + + + + + + our networks follow the `` mix & transform '' pattern described in section [ sec : nnarchitecture ] . the alice network concatenates two -bit inputs ( the plaintext and the key ) into a -entry vector , using and to represent the bit values .this vector is processed through a fc layer , then sent through a succession of four 1-d convolutional layers .the convolutional layers are described in terms of their window size , input depth , and output depth .each has a `` stride''the amount by which the window is shifted at each step .the concrete layers are ] , ] , with strides of .intuitively , the first layer slides a size-4 window across the output elements from the fc layer , outputting two numbers ( output depth 2 ) , with stride .we use a sigmoid nonlinear unit after each layer except the final one . after the final layer , where the output is reduced to elements, we use a tanh nonlinear unit .( recall that tanh has an output domain between $ ] , bringing the values back to a range that can map to binary values . )the bob network is identical to the alice network the eve network takes only the ciphertext as input , and thus has a first fc layer . [ [ training ] ] training + + + + + + + + to train the networks , we use a `` minibatch '' size ranging from 256 to 4096 entries . 
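A minimal forward-pass sketch of this "mix & transform" structure is given below. The exact window sizes, depths, and strides of the four convolutional layers do not survive this extraction, so the shapes here are placeholders chosen only so that a 2N-entry input is reduced to an N-entry output; the sigmoid/tanh placement follows the description above, and no training is shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, w, stride):
    """Valid 1-D convolution. x: (length, in_depth); w: (window, in_depth, out_depth)."""
    window, _, out_depth = w.shape
    out_len = (x.shape[0] - window) // stride + 1
    out = np.empty((out_len, out_depth))
    for i in range(out_len):
        seg = x[i * stride : i * stride + window]            # (window, in_depth)
        out[i] = np.tensordot(seg, w, axes=([0, 1], [0, 1]))
    return out

def mix_and_transform(p_bits, k_bits, params):
    """FC 'mix' layer over plaintext+key, then 1-D conv 'transform' layers."""
    x = np.concatenate([p_bits, k_bits])                     # 2N entries in {-1, 1}
    h = sigmoid(params["W"] @ x + params["b"])               # fully connected mixing layer
    h = h[:, None]                                           # (length, depth=1) for the convs
    for w, stride in zip(params["conv_ws"], params["strides"]):
        h = sigmoid(conv1d(h, w, stride))                    # sigmoid after each layer ...
    h = np.tanh(conv1d(h, params["conv_out"], params["out_stride"]))  # ... tanh after the last
    return h.ravel()                                         # N floating-point outputs

N = 16
rng = np.random.default_rng(1)
# Placeholder layer shapes (window, in_depth, out_depth) and strides; the paper's
# concrete values are not recoverable from this text.
params = {
    "W": rng.normal(size=(2 * N, 2 * N)) * 0.1,
    "b": np.zeros(2 * N),
    "conv_ws": [rng.normal(size=(2, 1, 2)) * 0.1,
                rng.normal(size=(1, 2, 4)) * 0.1,
                rng.normal(size=(1, 4, 4)) * 0.1],
    "strides": [2, 1, 1],
    "conv_out": rng.normal(size=(1, 4, 1)) * 0.1,
    "out_stride": 1,
}

p = rng.choice([-1.0, 1.0], size=N)
k = rng.choice([-1.0, 1.0], size=N)
c = mix_and_transform(p, k, params)     # 16-entry floating-point "ciphertext"
```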
unless otherwise specified , we present the results from the training runs with 4096 entries .we use tensorflow s adam optimizer with a learning rate of .we do not reduce the learning rate over time because we wish alice , bob , and eve to be able to continue to respond strongly to changes in other components , until alice and bob hopefully achieve a solution that is robust to additional changes in eve .training alternates between alice / bob and eve , with alice / bob training for one minibatch , and then eve training for two minibatches .we chose this ratio in order to give a slight computational edge to the adversary eve without training it so much that it becomes excessively specific to the exact current parameters of alice and bob .empirically , this value has worked well .[ [ loss - functions ] ] loss functions + + + + + + + + + + + + + + eve s loss function is exactly as described above : the l1 distance between eve s guess and the input plaintext .the loss function for alice and bob is more complex , as indicated in sections [ sec : objectives ] and [ sec : sharedkeytraining ] .this function has two components , related to bob s reconstruction error and to the eavesdropper s success .the first component is simply the l1 distance between bob s output and the input plaintext .the latter component , on the other hand , is .this definition expresses the goal , described in section [ sec : sharedkeytraining ] , that eve should not do better than random guessing .accordingly , this component is minimized when half of the message bits are wrong and half are right .we choose a quadratic formula in order to place more emphasis on making eve have a large error , and to impose less of a penalty when eve guesses a few bits correctly , as should happen occasionally even if eve s guesses are effectively random . adopting this formulation allowed us to have a meaningful per - example loss function ( instead of looking at larger batch statistics ) , and improved the robustness of training .its cost is that our final , trained alice and bob typically allow eve to reconstruct slightly more bits than purely random guessing would achieve .we have not obtained satisfactory results for loss functions that depend linearly ( rather than quadratically ) on eve s reconstruction error .the best formulation remains an open question .[ [ post - training - evaluation ] ] post - training evaluation + + + + + + + + + + + + + + + + + + + + + + + + after successfully training the networks so that they cross an accuracy threshold ( e.g. , at most 0.05 bits of reconstruction error for alice and bob , with eve achieving only 1 - 2 bits more than random guessing would predict ) , we reset the eve network and train it from scratch 5 times , each for up to 250,000 steps , recording the best result achieved by any eve .an alice / bob combination that fails to achieve the target thresholds within 150,000 steps is a training failure . if the retrained eves obtain a substantial advantage , the solution is non - robust . otherwise , we consider it a successful training outcome . [[ results ] ] results + + + + + + + figure [ fig : bob_vs_eve_one ] shows , for one successful run , the evolution of bob s reconstruction error and eve s reconstruction error vs. 
the number of training steps for bit plaintext and key values , using a minibatch size of 4096 .each point in the graph is the mean error across 4096 examples .an ideal result would have bob s reconstruction error drop to zero and eve s reconstruction error reach 8 ( half the bits wrong ) . in this example , both reconstruction errors start high . after a period of time , alice andbob start to communicate quite effectively , but in a way that allows eve to improve its understanding as well , gradually .then , around step 10,000 , alice and bob counter eve s progress . by about step 15,000 ,the training goals are effectively achieved .the remaining steps merely increase eve s reconstruction error slightly .this training graph does _ not _ look like a typical result for neural networks , where monotonicity in the number of steps is generally expected .instead , the dynamics of this adversarial training appear somewhat more reminiscent of evolutionary processes .these dynamics appear to depend on somewhat random - seeming change to cause the bits to mix slightly , but once there is some mixing , the gradient descent can rapidly drive it farther . supportingthis interpretation is the observation that training is not always successful . with ,six of twenty initial runs were failures that never got bob s reconstruction error under the 0.05 threshold , or failed to drive eve s reconstruction error above 7.3 bits ( of 16 ) . in order to test the robustness of the other fourteen alice / bob combinations , we retrained eve five times , and obtained reconstruction errors for eve that ranged from 4.67 to 6.97 bits , with a mean of 6.1 .figure [ fig : bob_vs_eve_two ] shows the final reconstruction errors of bob and of the most effective retrained eve for those fourteen alice / bob combinations . if we somewhat arbitrarily define success as maintaining bob s reconstruction error at or under 0.05 bits , and requiring that eve get at least 6 bits wrong , on average , then training succeeded half of the time ( ten of twenty cases ) . although training with an adversary is often unstable , we suspect that some additional engineering of the neural network and its training may be able to increase this overall success rate . with a minibatch size of only 512 ,for example , we achieved a success rate of only ( vs. the that we achieved with a minibatch size of 4096 ) . in the future, it may be worth studying the impact of minibatch sizes , and also that of other parameters such as the learning rate .analogous results hold in general for and -bit keys and plaintexts ; training appears to be successful somewhat more often for . basically, the experiments for and indicate that there is nothing special about which , to a cryptographer , may look suspiciously tiny .we focus our presentation on the case of because , first , the experiments run more rapidly , and second , it is modestly easier to examine their behavior .for one successful training run , we studied the changes in the ciphertext induced by various plaintext / key pairs .although we did not perform an exhaustive analysis of the encryption method , we did make a few observations .first , it is key - dependent : changing the key and holding the plaintext constant results in different ciphertext output .it is also plaintext - dependent , as required for successful communication .however , it is not simply xor . 
in particular , the output values are often floating - point values other than 0 and 1 .moreover , the effect of a change to either a key bit or a plaintext bit is spread across multiple elements in the ciphertext , not constrained to a single bit as it would be with xor . a single - bit flip in the keytypically induces significant changes in three to six of the 16 elements in the ciphertext , and smaller changes in other elements .plaintext bits are similarly diffused across the ciphertext .building on the results of section [ sec : sharedkey ] , we study selective protection . in other words , we consider the question of whether neural networks can learn what information to protect , given confidentiality objectives described in terms of an adversary . in the simplest case, a plaintext may have several components ; if we wish to keep the adversary from seeing one of the components , it may suffice to encrypt it .more generally , we may wish to publish some values correlated with the plaintext but prevent the adversary from calculating other values .in such scenarios , protecting information selectively while maximizing utility is both challenging and interesting . to test these ideas and to demonstrate the viability of selective protection , we focus on a concrete experiment .we construct an artificial dataset consisting of tuples of four values , .we aim to build and train a system that outputs two predictions of d , given the first three values as inputs : a `` true '' prediction of d ( that is , the most accurate possible estimate of d given ) , and a `` privacy - preserving '' estimate of d , which we call d - public , defined as the best possible estimate of d that does not reveal any information about the value of c. the system organization is essentially that of figure [ fig : symm ] , with alice and bob sharing a key , but here alice receives and produces d - public in addition to a ciphertext ; both bob and eve have access to alice s outputs ; bob uses them for producing an improved estimate of d , while eve attempts to recover c. thus , our goal is to demonstrate that the adversarial training permits approximating d without revealing c , and that this approximation can be combined with encrypted information and with a key in order to obtain a better approximation of d. the distributions of a , b , c , and d are constructed so that there is substantial correlation between them . in particular , knowing d exactly would convey some information about the likely values of c. in our experiments , a , b , c , and d are normally distributed floating - point values with a correlation matrix imposed upon them .the specifics of how we generate a , b , c , and d are standard : we compute the cholesky decomposition of a 4 4 correlation matrix that specifies the pairwise correlations , we generate vectors of 4 normally distributed floating - point values , and then we multiply these vectors by in order to generate values of a , b , c , and d. although a , b , c , and d do not correspond to any real - world data , these experiments are loosely inspired by real - world tasks .for example , a , b , and c may represent attributes of a user , which may in general be correlated , and d may represent a decision about the user , a prediction of the user s behavior , or a recommendation to the user . 
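The data-generation recipe just described is standard, and a compact sketch is given below. The particular 4x4 correlation matrix is a placeholder (the correlations actually used in the experiments are not given here), chosen only to be symmetric positive definite so that the Cholesky factor exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder correlation matrix for (A, B, C, D); must be symmetric positive definite.
corr = np.array([
    [1.0, 0.4, 0.3, 0.5],
    [0.4, 1.0, 0.2, 0.4],
    [0.3, 0.2, 1.0, 0.6],
    [0.5, 0.4, 0.6, 1.0],
])

L = np.linalg.cholesky(corr)               # corr == L @ L.T

def sample_abcd(n_examples):
    """Draw correlated (A, B, C, D) tuples: multiply i.i.d. normals by L."""
    z = rng.normal(size=(n_examples, 4))   # independent standard normals
    return z @ L.T                         # rows now carry the desired correlations

batch = sample_abcd(4096)
a, b, c, d = batch.T
print(np.corrcoef(batch, rowvar=False).round(2))   # approximately `corr`
# Because C and D are correlated, a good estimate of D leaks information about C,
# which is exactly what the adversarial objective is meant to limit.
```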
in the experiments that follow , we use an augmented version of the neural network architecture of section [ sec : nnarchitecture ] .the inputs first go into a new fc layer ( 12 inputs eight key bits and four values and 12 outputs ) ; the outputs of that first layer are fed into a network with the architecture of section [ sec : nnarchitecture ] . intuitively , we chose this augmented architecture because a single fc layer should be capable of predicting d from a , b , and c , as well as making a prediction decorrelated with c ; and the architecture of section [ sec : nnarchitecture ] suffices to encrypt any of the output of the first layer under the key .we therefore believed this augmented architecture would be sufficient to accomplish its task , though it may be more than is necessary to do so .we train eve with the goal of minimizing the squared error of its estimate of c. we train alice and bob with the goal of minimizing a linear combination of three terms : the squared error of each of the two estimates of d , and the absolute value of the covariance of eve s estimate of c with respect to the actual value of c. we compute this covariance on a batch of training examples , and define the loss function for alice and bob batch - wise .a human engineer would naturally try to design the system somewhat differently likely first computing the desired quantities , and then encrypting the values to be hidden , relying on the encryption and decryption components developed in section [ sec : sharedkey ] as modules .we prefer instead an end - to - end approach , because our goal is to understand whether a neural network can learn to hide information selectively , given only a high - level specification ( `` do not leak information about c , but let people with the right key learn as much as possible about d '' ) . in this end - to - end approach ,the system does not have distinct , pre - assigned units for encryption and decryption , and all components are trained jointly . in particular , although we benefit from some of the insights from section [ sec : sharedkey ] , we do not rely on the components developed there .end - to - end approaches are sometimes preferred in machine learning because they can more efficiently use resources and information . in our case , this feature may mean , in particular , that the cryptographic operations are optimized for the given distribution of inputs .nevertheless , our choices have the primary goal of maximizing insight , not efficiency .to understand whether the system is learning to hide information properly , we train a separate evaluator that we call `` blind eve '' , which is aware of the distribution of c. blind eve tries to guess c relying only upon this baseline information , whereas the real eve also knows the intermediate output ( d - public ) and the ciphertext .if eve s reconstruction error becomes equal to that of blind eve , we know that eve is not successfully extracting information from the public estimate and the ciphertext .figure [ fig : hide_c ] shows the results of adversarial training to learn to hide c. the red `` public error '' line shows the average error of d - public vs. d. 
the green `` bob error '' line shows bob s average reconstruction error , which drops over time until bob is making the best possible prediction of d given the input .the public error becomes low , but is slightly higher than bob s .the blue dots show eve s average error _ improvement _ over blind eve .after a few steps of training , eve s error is substantially lower than that of blind eve ( by 0.10.2 ) .as time proceeds , eve s relative advantage becomes zero .eve is not able to reconstruct any more information about c than would be possible by simply knowing the distribution of values of c.in this paper , we demonstrate that neural networks can learn to protect communications . the learning does not require prescribing a particular set of cryptographic algorithms , nor indicating ways of applying these algorithms : it is based only on a secrecy specification represented by the training objectives . in thissetting , we model attackers by neural networks ; alternative models may perhaps be enabled by reinforcement learning . there is more to cryptography than encryption . in this spirit, further work may consider other tasks , for example steganography , pseudorandom - number generation , or integrity checks .finally , neural networks may be useful not only for cryptographic protections but also for attacks .while it seems improbable that neural networks would become great at cryptanalysis , they may be quite effective in making sense of metadata and in traffic analysis .we are grateful to samy bengio , laura downs , lfar erlingsson , jakob foerster , nando de freitas , ian goodfellow , geoff hinton , chris olah , ananth raghunathan , and luke vilnis for discussions on the matter of this paper .29 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 martn abadi , ashish agarwal , paul barham , eugene brevdo , zhifeng chen , craig citro , gregory s. corrado , andy davis , jeffrey dean , matthieu devin , sanjay ghemawat , ian j. goodfellow , andrew harp , geoffrey irving , michael isard , yangqing jia , rafal jzefowicz , lukasz kaiser , manjunath kudlur , josh levenberg , dan mane , rajat monga , sherry moore , derek gordon murray , chris olah , mike schuster , jonathon shlens , benoit steiner , ilya sutskever , kunal talwar , paul a. tucker , vincent vanhoucke , vijay vasudevan , fernanda b. vigas , oriol vinyals , pete warden , martin wattenberg , martin wicke , yuan yu , and xiaoqiang zheng .tensorflow : large - scale machine learning on heterogeneous distributed systems ._ corr _ , abs/1603.04467 , 2016 .url http://arxiv.org/abs/1603.04467 .martn abadi , paul barham , jianmin chen , zhifeng chen , andy davis , jeffrey dean , matthieu devin , sanjay ghemawat , geoffrey irving , michael isard , manjunath kudlur , josh levenberg , rajat monga , sherry moore , derek gordon murray , benoit steiner , paul a. tucker , vijay vasudevan , pete warden , martin wicke , yuan yu , and xiaoqiang zhang .tensorflow : a system for large - scale machine learning ._ corr _ , abs/1605.08695 , 2016 .url http://arxiv.org/abs/1605.08695 .boaz barak , oded goldreich , russell impagliazzo , steven rudich , amit sahai , salil vadhan , and ke yang . on the ( im)possibility of obfuscating programs ._ j. 
acm _ , 590 ( 2):0 6:16:48 , may 2012 .issn 0004 - 5411 .doi : 10.1145/2160158.2160159 .url http://doi.acm.org/10.1145/2160158.2160159 .gilles barthe , juan manuel crespo , benjamin grgoire , csar kunz , yassine lakhnech , benedikt schmidt , and santiago zanella - bguelin .fully automated analysis of padding - based encryption in the computational model . in _ proceedings of the 2013 acm sigsac conference on computer& # 38 ; communications security _ , ccs 13 , pp . 12471260 , new york , ny , usa , 2013 .isbn 978 - 1 - 4503 - 2477 - 9 .doi : 10.1145/2508859.2516663 .url http://doi.acm.org/10.1145/2508859.2516663 .xi chen , yan duan , rein houthooft , john schulman , ilya sutskever , and pieter abbeel .infogan : interpretable representation learning by information maximizing generative adversarial nets ._ corr _ , abs/1606.03657 , 2016 .url https://arxiv.org/abs/1606.03657 . emily l. denton , soumith chintala , arthur szlam , and robert fergus .deep generative image models using a laplacian pyramid of adversarial networks ._ corr _ , abs/1506.05751 , 2015 .url http://arxiv.org/abs/1506.05751 .sbastien dourlens .applied neuro - cryptography .mmoire , universit paris 8 , dpartement micro - informatique micro - electronique .harrison edwards and amos j. storkey . censoring representations with an adversary ._ corr _ , abs/1511.05897 , 2015 .url http://arxiv.org/abs/1511.05897 .jakob n. foerster , yannis m. assael , nando de freitas , and shimon whiteson .learning to communicate to solve riddles with deep distributed recurrent q - networks . _corr _ , abs/1602.02672 , 2016 .url http://arxiv.org/abs/1602.02672 .jakob n. foerster , yannis m. assael , nando de freitas , and shimon whiteson . learning to communicate with deep multi - agent reinforcement learning ._ corr _ , abs/1605.06676 , 2016 .url http://arxiv.org/abs/1605.06676 .yaroslav ganin , evgeniya ustinova , hana ajakan , pascal germain , hugo larochelle , franois laviolette , mario marchand , and victor s. lempitsky . domain - adversarial training of neural networks ._ corr _ , abs/1505.07818 , 2015 .. ran gilad - bachrach , nathan dowlin , kim laine , kristin e. lauter , michael naehrig , and john wernsing .cryptonets : applying neural networks to encrypted data with high throughput and accuracy . in maria - florina balcan and kilian q. weinberger ( eds . ) ,_ proceedings of the 33nd international conference on machine learning , icml 2016 , new york city , ny , usa , june 19 - 24 , 2016 _ , volume 48 of _ jmlr workshop and conference proceedings _ , pp .jmlr.org , 2016 .url http://jmlr.org/proceedings/papers/v48/gilad-bachrach16.html .shafi goldwasser and silvio micali .probabilistic encryption ._ j. comput ._ , 280 ( 2):0 270299 , 1984 .doi : 10.1016/0022 - 0000(84)90070 - 9. url http://dx.doi.org/10.1016/0022-0000(84)90070-9 .ian j. goodfellow , jean pouget - abadie , mehdi mirza , bing xu , david warde - farley , sherjil ozair , aaron c. courville , and yoshua bengio .generative adversarial nets . in zoubin ghahramani ,max welling , corinna cortes , neil d. lawrence , and kilian q. weinberger ( eds . ) , _ advances in neural information processing systems 27 : annual conference on neural information processing systems 2014 , december 8 - 13 2014 , montreal , quebec , canada _ , pp .26722680 , 2014 .url http://papers.nips.cc/paper/5423-generative-adversarial-nets .ian j. goodfellow , jonathon shlens , and christian szegedy . 
explaining and harnessing adversarial examples ._ corr _ , abs/1412.6572 , 2014 .url http://arxiv.org/abs/1412.6572 .diederik p. kingma and jimmy ba .adam : a method for stochastic optimization ._ corr _ , abs/1412.6980 , 2014 .url http://arxiv.org/abs/1412.6980 .wolfgang kinzel and ido kanter . neural cryptography ._ arxiv preprint cond - mat/0208453 _ , 2002 .alexander klimov , anton mityagin , and adi shamir .analysis of neural cryptography . in yuliang zheng ( ed . ) , _ advances in cryptology - asiacrypt 2002 , 8th international conference on the theory and application of cryptology and information security , queenstown , new zealand , december 1 - 5 , 2002 , proceedings _, volume 2501 of _ lecture notes in computer science _ , pp . 288298 .springer , 2002 .isbn 3 - 540 - 00171 - 9 .doi : 10.1007/3 - 540 - 36178 - 2_18 .url http://dx.doi.org/10.1007/3-540-36178-2_18 .yann lecun , yoshua bengio , and geoffrey hinton .deep learning ., 521:0 436444 , 2015 .christos louizos , kevin swersky , yujia li , max welling , and richard s. zemel .the variational fair autoencoder . _corr _ , abs/1511.00830 , 2015 .url http://arxiv.org/abs/1511.00830 .arvind neelakantan , quoc v. le , and ilya sutskever .neural programmer : inducing latent programs with gradient descent ._ corr _ , abs/1511.04834 , 2015 .url http://arxiv.org/abs/1511.04834 .sebastian nowozin , botond cseke , and ryota tomioka .f - gan : training generative neural samplers using variational divergence minimization ._ corr _ , abs/1606.00709 , 2016 .url https://arxiv.org/abs/1606.00709 .lillian j ratliff , samuel a burden , and s shankar sastry . characterization and computation of local nash equilibria in continuous games . in _communication , control , and computing ( allerton ) , 2013 51st annual allerton conference on _ , pp . 917924 .ieee , 2013 .andreas ruttor ._ neural synchronization and cryptography_. phd thesis , julius maximilian university of wrzburg , 2006 .url http://www.opus - bayern.de / uni - wuerzburg / volltexte/2007/2361/. tim salimans , ian goodfellow , wojciech zaremba , vicki cheung , alec radford , and xi chen. improved techniques for training gans ._ corr _ , abs/1606.03498 , 2016 .url https://arxiv.org/abs/1606.03498 .sainbayar sukhbaatar , arthur szlam , and rob fergus . learning multiagent communication with backpropagation ._ corr _ , abs/1605.07736 , 2016 .url http://arxiv.org/abs/1605.07736 .christian szegedy , wojciech zaremba , ilya sutskever , joan bruna , dumitru erhan , ian j. goodfellow , and rob fergus .intriguing properties of neural networks ._ corr _ , abs/1312.6199 , 2013 .url http://arxiv.org/abs/1312.6199 .ronald j. williams .simple statistical gradient - following algorithms for connectionist reinforcement learning . in _machine learning _ , pp . 229256 , 1992 .pengtao xie , misha bilenko , tom finley , ran gilad - bachrach , kristin e. lauter , and michael naehrig .crypto - nets : neural networks over encrypted data ._ corr _ , abs/1412.6181 , 2014 .url http://arxiv.org/abs/1412.6181 .paralleling section [ sec : sharedkey ] , this section examines asymmetric encryption ( also known as public - key encryption ) .it presents definitions and experimental results , but omits a detailed discussion of the objectives of asymmetric encryption , of the corresponding loss functions , and of the practical refinements that we develop for training , which are analogous to those for symmetric encryption . 
in asymmetric encryption ,a secret is associated with each principal .the secret may be seen as a seed for generating cryptographic keys , or directly as a secret key ; we adopt the latter view .a public key can be derived from the secret , in such a way that messages encrypted under the public key can be decrypted only with knowledge of the secret .we specify asymmetric encryption using a twist on our specification for symmetric encryption , shown in figure [ fig : asymm ] . instead of directly supplying the secret encryption key to alice , we supply the secret key to a public - key generator , the output of which is available to every node .only bob has access to the underlying secret key .much as in section [ sec : sharedkey ] , several variants are possible , for instance to support probabilistic encryption .the public - key generator is itself a neural network , with its own parameters .the loss functions treats these parameters much like those of alice and bob . in training, these parameters are adjusted at the same time as those of alice and bob . in our experiments on asymmetric encryption ,we rely on the same approach as in section [ sec : sharedkeyexperiments ] .in particular , we adopt the same network structure and the same approach to training .the results of these experiments are intriguing , but much harder to interpret than those for symmetric encryption . in most training runs ,the networks failed to achieve a robust outcome . often , although it appeared that alice and bob had learned to communicate secretly , upon resetting and retraining eve , the retrained adversary was able to decrypt messages nearly as well as bob was .however , figure [ fig : asymm - result ] shows the results of _ one _ training run , in which even after five reset / retrain cycles , eve was unable to decrypt messages between alice and bob .our chosen network structure is not sufficient to learn general implementations of many of the mathematical concepts underlying modern asymmetric cryptography , such as integer modular arithmetic .we therefore believe that the most likely explanation for this successful training run was that alice and bob accidentally obtained some `` security by obscurity '' ( cf . the derivation of asymmetric schemes from symmetric schemes by obfuscation ) .this belief is somewhat reinforced by the fact that the training result was fragile : upon further training of alice and bob , eve _ was _ able to decrypt the messages .however , we can not rule out that the networks trained into some set of hard - to - invert matrix operations resulting in `` public - key - like '' behavior .our results suggest that this issue deserves more exploration .further work might attempt to strengthen these results , perhaps relying on new designs of neural networks or new training procedures .a modest next step may consist in trying to learn particular asymmetric algorithms , such as lattice - based ciphers , in order to identify the required neural network structure and capacity .most of this paper assumes only a few basic notions in machine learning and neural networks , as provided by general introductions ( e.g. 
, ) .the following is a brief review .neural networks are specifications of parameterized functions .they are typically constructed out of a sequence of somewhat modular building blocks .for example , the input to alice is a vector of bits that represents the concatenation of the key and the plaintext .this vector ( ) is input into a `` fully - connected '' layer , which consists of a matrix multiply ( by ) and a vector addition ( with ) : . the result of that operationis then passed into a nonlinear function , sometimes termed an `` activation function '' , such as the sigmoid function , or the hyperbolic tangent function , tanh . in classical neural networks ,the activation function represents a threshold that determines whether a neuron would `` fire '' or not , based upon its inputs .this threshold , and matrices and vectors such as and , are typical neural network `` parameters '' .`` training '' a neural network is the process that finds values of its parameters that minimize the specified loss function over the training inputs .fully - connected layers are powerful but require substantial amounts of memory for a large network .an alternative to fully - connected layers are `` convolutional '' layers .convolutional layers operate much like their counterparts in computer graphics , by sliding a parameterized convolution window across their input .the number of parameters in this window is much smaller than in an equivalent fully - connected layer .convolutional layers are useful for applying the same function(s ) at every point in an input .a neural network architecture consists of a graph of these building blocks ( often , but not always , a dag ) , specifying what the individual layers are ( e.g. , fully - connected or convolutional ) , how they are parameterized ( number of inputs , number of outputs , etc . ) , and how they are wired .
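Since the layer equation itself does not survive this extraction, the following small example renders the standard form described above — a matrix multiply, a vector addition, and a nonlinear activation; the sizes and values are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fully_connected(x, W, b, activation=np.tanh):
    """One fully-connected layer: y = activation(W x + b)."""
    return activation(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=8)                 # layer input
W = rng.normal(size=(8, 8)) * 0.1      # weight matrix (a trainable parameter)
b = np.zeros(8)                        # bias vector (a trainable parameter)
y = fully_connected(x, W, b, activation=sigmoid)
```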
We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals.
Modern information systems, such as the ADS (Kurtz, et al. 1993), act as a nexus, linking together many densely interconnected systems of information. These systems can be viewed as systems of interconnecting graphs; an example of a bipartite graph is the interaction of the set of all papers with the set of all authors, which yields connections between papers and papers (papers are connected if they have the same author) and between authors and authors (co-authorship). Modern computational techniques permit these rich data sources to be used to solve practical problems. Some techniques use the graph representation to achieve orderings, such as the Girvan-Newman algorithm (Girvan and Newman, 2002) or the Rosvall-Bergstrom algorithm (Rosvall and Bergstrom, 2008). Others use eigenvector techniques on the interconnectivity or influence matrices, either with exact methods (e.g. Thurstone 1934, Ossorio 1965, Kurtz 1993) or with approximate methods suitable for huge systems, such as PageRank (Brin and Page 1998). Developing practical solutions to the problem "given my current state of knowledge, and what I am currently trying to do, what would be the best things for me to read" requires an in-depth understanding of the properties of the data and the nature of the many different reduction techniques. The data are quite complex; as an example, two papers (A and B) can be connected to each other because 1) A cites B; 2) B cites A; 3) A and B cite C; 4) author X wrote both A and B; 5) author X wrote a set of papers, at least one of which was cited by both A and B; 6) A and B were read by the same person; 7) A and B have the same keyword; 8) A and B refer to the same astronomical object; 9) and so on.
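The paper-author example is easy to make concrete. If M is the binary paper-by-author incidence matrix, then M M^T connects papers that share an author and M^T M connects co-authors; the toy incidence matrix below is made up for illustration.

```python
import numpy as np

# Rows = papers, columns = authors; M[i, j] = 1 if author j wrote paper i.
M = np.array([
    [1, 1, 0, 0],   # paper 0 by authors 0, 1
    [0, 1, 1, 0],   # paper 1 by authors 1, 2
    [0, 0, 1, 1],   # paper 2 by authors 2, 3
    [1, 0, 0, 1],   # paper 3 by authors 0, 3
])

paper_paper = M @ M.T    # papers connected if they share an author
author_author = M.T @ M  # authors connected if they co-wrote a paper

# Off-diagonal entries give the strength of each connection; the diagonals
# count authors per paper and papers per author, respectively.
print(paper_paper)
print(author_author)
```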
A practical example of combining data and techniques would be to build a faceted browse system for current awareness. One possible avenue is the following: take a set of qualified readers, say persons who read between 80 and 300 papers from the main astronomy journals within the last six months; for each reader, find the papers that reader read; for each of these papers, find the papers it references; for each of these papers, find the keywords assigned to that paper by the journal; next, for each reader, create an n-dimensional normalized interest vector, where each dimension is a keyword and the amplitude represents the normalized frequency of occurrence in the papers cited by the papers read. This yields a reader-keyword matrix; one way to view this is that the readers are points in a multidimensional keyword space. Several things can be done with this matrix. For example, if the readers are clustered, by k-means or some other algorithm, one obtains groups of readers with similar interests. These can be used as the basis of a collaborative filter, to find important recent literature of interest, and can be subdivided, to narrow the subject (as defined by people with similar interests). This creates a faceted browse of important recent papers in subjects of current interest. The ADS has sufficient numbers of users to support three levels of facets. Using similar techniques, the ADS is currently implementing a recommender system: given that you are reading a particular article, what other articles might be useful for you to read? The first step in developing a recommender system is to find a set of papers similar to the paper being read. A group of similar articles substantially enhances the signal to noise of the recommender system compared with a single article; the mean downloads per month for a ten-year-old astrophysical journal article, for example, is one. There are three basic steps to this process: first, create a system to find similar articles for an arbitrary article; then, given an article of interest, find those similar articles; and finally, use them to find recommended articles. The system must be designed so that the recommended articles can be chosen in real time, once the arbitrary article of interest is selected.
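A compressed sketch of this construction follows: normalized reader-keyword interest vectors, then a plain k-means step that groups readers with similar interests. The counts here are random stand-ins; in the actual system they would be accumulated from readership logs and the keywords of the papers cited by the papers each reader read.

```python
import numpy as np

rng = np.random.default_rng(0)

n_readers, n_keywords = 200, 50
# Stand-in for keyword occurrence counts accumulated per reader.
counts = rng.poisson(lam=1.0, size=(n_readers, n_keywords)).astype(float)

# Normalized interest vector per reader (rows sum to 1).
interest = counts / counts.sum(axis=1, keepdims=True)

def kmeans(x, k, n_iter=50):
    """Plain k-means: readers with similar interest vectors share a cluster."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(interest, k=5)
# Each cluster is a group of readers with similar interests, usable as the
# basis of a collaborative filter over recently read papers.
```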
One effective method for finding similar articles is: 1) take the reader-keyword matrix and reduce its dimensionality (to about 50) using SVD; 2) transform all the papers into the reduced-dimensionality system by fitting their keyword vectors to the significant SVD vectors; 3) cluster the (50-dimensional) article keyword vectors into many clusters (of about 1000 articles each) using hierarchical clustering techniques; 4) for each of the small clusters of papers, perform a new SVD decomposition on the 50-dimensional vectors, reducing the dimensionality further (to about 5); 5) for each small cluster of papers, transform each paper into the corresponding 5-dimensional subspace. These steps can be done in advance, as part of the indexing necessary for a text retrieval system. Now, finding suggested reading for a new article, say one just released on arXiv, is relatively simple. First the keyword vector must be created for the paper; note that this is a function of the articles the paper references, so the arXiv paper itself need not be keyworded. Next, perform the transformations and classifications to put the article into the proper small cluster of papers, with its five-dimensional subspace. Then find those n papers (40 is a reasonable number) from the 1000 or so in the cluster which are closest in the 5-dimensional space to the input arXiv article. These 40 articles become the basis for the recommender system. We look for recommended papers using second-order operators (Kurtz 1992, Kurtz, et al. 2005). For three possible recommendations we use the betweenness of the papers in the group of 40: we find the paper which was most read immediately following the reading of a member of the group, the paper most often read immediately before a member of the group, and the paper most often read either before or after (these can be, and often are, different). This inverts the concept of betweenness centrality: we do not find the papers which are most between a set of papers; we find the set of papers for which the group of 40 papers very similar to our input paper are between. Two additional recommendations can be obtained by finding all the people who read any of the group of 40 papers, and finding the paper they read the most in the last few months, and by finding the most recent paper in the top 100 of this most-also-read list. The citations can give two more recommendations: the paper which the group of 40 papers cite the most, and the paper which cites the largest number of the group of 40. Finally, a joint query of the ADS with SIMBAD (Wenger, et al. 2000) can find the paper which refers to the largest number of astronomical objects that are referred to by the papers in the group of 40 very similar papers to the input paper. Clearly this is not the only way to find recommended papers; as with architecture or civil engineering (there is no best building or bridge design), the problem is too complex to be fully optimized. There are very likely ways of doing this which are better than others, however, and this will be learned over time. We only used part of the available data here; from the citations we only used in-degree and out-degree with respect to the group of 40. We did not use the author-based relations at all. Instead of clustering the papers based on a hierarchical clustering of the reduced keyword vector (a subject-matter technique), we could have clustered them based on the co-citation network, using the Rosvall-Bergstrom algorithm (Kurtz, et al.).
We did not use any knowledge about the actual user, and so on. These methods are not restricted to scientific papers. We are entering an age of very densely interconnected "data" objects; building intelligent systems to guide users in their traversal of these new universes is clearly a branch of knowledge engineering whose time has come.

Brin, S. & Page, L. 1998, Computer Networks and ISDN Systems, 30, 107.
Girvan, M. & Newman, M.E.J. 2002, PNAS, 99, 7821.
Kurtz, M.J. 1992, ESO Conference Series, 43, 85.
Kurtz, M.J., et al. 1993, ASP Conference Series, 52, 132.
Kurtz, M.J. 1993, ASSL, 182, 21.
Kurtz, M.J. 2005, JASIST, 56, 36.
Ossorio, P.G. 1965, J. Multivariate Behavioral Research, 2, 479.
Rosvall, M. & Bergstrom, C. 2008, PNAS, 105, 1118.
Thurstone, L.L. 1934, Psychological Review, 41, 1.
Wenger, M., et al. 2000, A&AS, 143, 9.
The Smithsonian/NASA Astrophysics Data System exists at the nexus of a dense system of interacting and interlinked information networks. The syntactic and the semantic content of this multipartite graph structure can be combined to provide very specific research recommendations to the scientist/user.
consider the following regression model where is the response variable for the subject , is the covariate for the subject , is the corresponding regression coefficient , and is the error term following some specified distribution .variable selection in regression problems has long been considered as one of the most important issues in modern statistics .it involves choosing an appropriate subset of indices so that for , the covariates s and estimated coefficients s are scientifically meaningful in interpretation , and estimates have relative good properties in prediction .a general approach to tackling variable selection problems is via utilizing a criterion in which the objective function is subject to a constraint on the number of covariates .the well - known information - based criteria such as aic and bic are of this kind . for small ,these criteria can be calculated by fitting models with all possible combinations of the covariates . however , when the number of covariates increases , the number of candidate models increases exponentially . for large , it is virtually impossible to calculate these criteria for all candidate models .+ + several approaches have been developed by modern bayesian statistics to tackle this problem .one approach is to use stochastic search methods .the methods aim to relax the combinatorial difficulty by stochastically exploring the high posterior probability regions in the model space .a well - known example is the stochastic search variable selection ( ssvs ) proposed by george and mcculloch .the ssvs assigns a mixture prior weighted by bernoulli variables to each regression coefficient , and then applying a gibbs sampling procedure to sample the posterior probability on the bernoulli variable .the resulting monte carlo average over the bernoulli samples can be seen as an estimate for the posterior inclusion probability of the covariate , and variable selection can be done by either evaluating the posterior probability with a given threshold or evaluating point estimates of regression coefficients via hypothesis testing or creditable interval construction .a general review of ssvs can be found in .recent researches in bayesian variable selection using mcmc - based inference procedures focus on providing full bayesian solutions to variable selection problems formulated by frequentists .these include the bayesian lasso , and the bayesian elastic net .other relevant approaches include .a major advantage of mcmc - based inference procedures is that they provide a practical way to assessing posterior probabilities , and inference tasks such as variable selection or point estimation can be straightforwardly carried out based on posterior probability calculation . however , as the number of covariates becomes quite large , mcmc - based inference procedures may become time - consuming .in addition , convergence of mcmc - based sampling algorithms is not often guaranteed .other approaches developed in modern bayesian statistics to tackle variable selection problems include those based on shrinkage - thresholding procedures .empirical bayes methods ,, aim to estimate regression coefficients by first applying numerical procedures to obtain a crude , non - zero estimate for each coefficient , and then using post - processing procedures to discard those with small values or those with large variances . 
on the other hand ,the maximum a posteriori ( map ) approach aims to estimate regression coefficients by directly maximizing the joint posterior density function .the bayesian logistic regression model with laplace priors developed by genkin et al . is of this kind .when the number of covariates is large , the map approach relies on the use of efficient search algorithms for parameter estimation . in a bayesian variable selection setting , these algorithms often involve iteratively applying shrinkage - thresholding steps to obtain estimates that have sparse features , i.e. some of them have exact zero values . in this sense ,parameter estimation and variable selection can be achieved simultaneously under the map approach .in addition , with suitable algorithms , variable selection under the map approach can also be fast and efficient .it is particularly explicit in the situation in which the number of covariates is large but the number of covariates with non - zero coefficients is small .+ + recent frequentists approaches to variable selection focus on applying the idea of regularization estimation in the situation in which the number of variables is much larger than the number of samples .these include the scad , , the elastic net ,, , the adaptive lasso , the group lasso , , the dantzig selector , the relaxed lasso , and mc penalty .all these approaches can either been seen as alternatives or as extensions of the lasso estimation .in addition , these approaches appear to have corresponding bayesian interpretations .for example , the adaptive lasso can be interpreted as the one which assigns laplace priors on regression coefficients , with the scale parameter estimated by using prior knowledge of the ols estimates or the ordinary lasso estimates .theoretical results provided by knight and fu show that with regular conditions on the order magnitude of the tuning parameter , the lasso is consistent in parameter estimation .however , as shown by meinshausen and bhlmann , and zou , for the lasso estimation , consistency in parameter estimation does not imply consistency in variable selection .further conditions on the design matrix and tuning parameter should be imposed to ensure consistency in variable selection for lasso type estimations .zhao and yu established the irrepresentable condition and showed that the lasso can be asymptotically consistent in both variable selection and parameter estimation if the irrepresentable condition holds and some regular conditions on the tuning parameter are satisfied .the same condition was also established by zou and yuan and lin .+ + in this paper we develop a method to carrying out map estimation for a class of bayesian models in tackling variable selection problems .the use of map estimation in variable selection problems had previously been studied by genkin et al . in a logistic regression setting .the key difference between our approach and genkin et al.s is that our model assigns a mitchell - beauchamp prior , i.e. the gaussian - based spike and slab prior weighted by bernoulli variables , on each regression coefficient .conventionally , parameter estimation for this model relies on mcmc or other simulation - based methods .recent studies by ishwaran and rao , used a rather different approach in that regression coefficients are estimated via ols - based shrinkage methods . in our map estimation , an augmented version of the posterior joint density at logarithm scale is derived . 
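To make the prior concrete before the formal development: under a Mitchell-Beauchamp spike-and-slab prior, each coefficient is exactly zero when its Bernoulli indicator is off and is drawn from a Gaussian slab when it is on. The sketch below simply simulates coefficients and data from this model; the inclusion probability and slab variance are illustrative values, not those used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_slab_coefficients(p, w=0.1, slab_var=1.0):
    """Draw p coefficients from a Bernoulli-weighted spike-and-slab prior."""
    gamma = rng.binomial(1, w, size=p)                 # inclusion indicators
    beta = gamma * rng.normal(0.0, np.sqrt(slab_var), size=p)
    return gamma, beta

n, p = 100, 500
gamma, beta = spike_slab_coefficients(p)
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=1.0, size=n)           # the regression model
# Variable selection amounts to recovering {j : gamma_j = 1} from (X, y).
```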
from frequentists point of view , the map estimation is equivalent to the regularization estimation with a mixture penalty of squared and norms on regression coefficients . in practice , we apply a majorization - minimization technique to modify the penalty function so that convexity for the objective function can be achieved .we further construct a coordinate - descent algorithm based on a specified iteration scheme to obtain the map estimates .simulation studies show that using the map estimates can lead to better performances in variable selection than those based on other benchmark methods in various circumstances .moreover , theoretical results show that the map estimator is asymptotically consistent in variable selection even when frequentists irrepresentable condition is violated .+ + the paper is organized as follows .section 3 focuses methodological aspects of the proposed method .section 4 provides two simulation studies on performances of the proposed method .section 5 develops relevant asymptotic analysis for the method .section 6 extends the method to parameter estimation in the generalized linear models .real data examples are provided in section 7 .discussions and concluding remarks are given in section 8 .let be an design matrix . the entry of , , is denoted by , and the row of is denoted by .the transpose of is denoted by .let , and . here is a realization of random variable .denote the identity matrix . for a -dimensional vector ,define the norm by , the norm by , the norm by , and the norm by , where is an index variable such that if , and otherwise .the probability density of the variable conditional on is denoted by . for non -zero valued coefficients in , denote the corresponding index set .finally , we define the sign function for variable by start by formulating the regression model ( [ 1 ] ) under a bayesian framework .note that in a regression model a covariate can only be selected if its coefficient is estimated with a non - zero value . based on this result, we assign an index variable to each covariate so that if then , and , and if , then and . with , the regression model ( [ 1 ] )has an equivalent representation given by from a variable selection point of view , the index vector is an indicator for candidate models .different candidate models will have different values in . under a bayesian framework , we assume the prior on implies that conditional on , is equal to 0 with probability one , and conditional on , follows a normal distribution with mean and variance .the prior on is the same as the spike - slab prior proposed by mitchell and beauchamp .in addition , given fixed , the hyperparameter will play a crucial role in controlling the concentration of .the prior will gradually concentrate its mass on as , and will gradually disperse its mass if .it implies that information the prior can provide is dependent on , and as decreases , the prior will become less informative .moreover , since , the mixture form of the prior allows us to express it as normal . this representation will be used in deriving the joint posterior density of the parameters .the variance has prior mean and variance ] . note that as increases , will decrease .it implies that a strong belief in the presence of a variable will decrease the penalty value for the variable .in addition , by definition , and the term can be seen as an norm on . 
to see how it can be ,note that , which is the norm by definition .here we have used the assumption that .we can express the fourth term in ( [ method.1 ] ) as .now given all other parameters fixed , the map estimator of can be derived by first making a derivative of ( [ method.1 ] ) with respect to , then setting the derivative to zero , and solving the equation for .the estimation of is then carried out given is fixed . with fixed and the regularization interpretations given above , ( [ method.1 ] ) has an equivalent representation given by note that here we have multiplied ( [ method.1 ] ) with . now with ( [ 6 ] ) , we can construct an iteration scheme to obtain ( [ 5 ] ) . at the iteration ,the iteration scheme is defined by note that the objective function ( [ 6 ] ) involves an norm , which by definition , is not convex .therefore related optimization tasks in the second term of ( [ 7 ] ) require some refinements . herewe adopt a relaxation approach to tackling the optimization problem .we begins the approach by noting that , mathematically the norm on a -dimensional vector can be expressed as ) as a function of and using lhopital s rule .a more detailed discussion on the properties of the log - sum function in the right hand side of ( [ 9 ] ) will be given later .now we only focus on using it in solving optimization problems involving constraints . with representation ( [ 9 ] ) , the objective function ( [ 6 ] ) can be re - expressed as if is small enough , the log - sum function in the right hand side of ( [ 10 ] ) will give an approximate representation of .graphical representations for the log - sum function with different and their mixtures with the squared norm can be found in the top panel of figure [ logsum.figure1 ] . in addition , since the log - sum function in ( [ 10 ] ) is continuous on , the combinatorial nature of is relaxed .however , the term is neither convex nor concave on , and replacing with ( [ 9 ] ) in ( [ 6 ] ) will still make the whole objective function ( [ 10 ] ) remain non - convex . to tackle this problem ,a majorization - minimization algorithm is adopted .majorization - minimization ( mm ) algorithms , are a set of analytic procedures aiming to tackle difficult optimization problems by modifying their objective functions so that solution spaces of the modified ones are easier to explore . for an objective function ,the modification procedure relies on finding a function satisfying the following properties : in ( [ 10.1 ] ) , the objective function is said to be majorized by . in this sense, is called the majorization function .in addition , ( [ 10.1 ] ) implies that is tangent to at .moreover if is a minimizer of , then ( [ 10.1 ] ) further implies that which means that the iteration procedure pushes toward its minimum .+ + now we turn back to the function in the right hand side of ( [ 10 ] ) .note that , since is a concave function of for , therefore the inequality holds for all and .note that the right hand side of ( [ 10.2 ] ) is convex in .in addition , if we let , then ( [ 10.2 ] ) becomes an equality , which implies that the right hand side of ( [ 10.2 ] ) satisfies the properties stated in ( [ 10.1 ] ) , therefore is a valid function for majorizing .[ proposition1 ] define and let be the same as ( [ 6 ] ) but without the constant term. then can be majorized by the following function : where _ proof of proposition [ proposition1 ] . 
_assume minimizes given .then with ( [ 9 ] ) and the inequality ( [ 10.2 ] ) , the quantity can be bounded in a way such that which verifies the first condition stated in ( [ 10.1 ] ) . for , is equal to the log - sum function in ( [ 9 ] ) , which verifies the second condition stated in ( [ 10.1 ] ) and completes the proof . + + a graphical representation of using mm algorithms in approximating the log - sum function in ( [ 9 ] )can be found in the bottom - left panel of figure [ logsum.figure1 ] . from the argument given above, we can construct an iteration scheme to minimize , with the norm , or equivalently the log - sum function , replaced by defined in proposition [ proposition1 ] .for example , in ( [ 7 ] ) , can be obtained by carrying out the following iteration scheme : over index , where ^{-1} ] . for an arbitrary ] into equal spaces and then use the discretized values to calculate according to ( [ 20 ] ) .our approach is the same as the one using a fixed grid on the tuning parameter for parameter estimation .this fixed grid approach to tuning parameter selection has been adopted in , , and , and is advocated by and for fast and accurate parameter estimation . herewe provide a toy example to illustrate the bava - mio estimation .we let the number of samples and the number of covariates . for regression coefficients ,we let , , , , and for .we simulate each row of independently identically from mvn , and then calculate with mvn . for the hyperparameters , we let , and .further let .we use 100 equal spaced points to form a grid for .we perform two bava - mio estimations : one uses the bayes factor and the other uses ten fold cross validation for tuning parameter selection .we define the bayes factor between models and by where the term refers to the marginalized likelihood with and being integrated out with respect to their prior probability measures . for the bayesian model stated in ( [ 2 ] ), the marginalized likelihood has a closed form representation given by }.\nonumber\end{aligned}\ ] ] in subsequent sections we will use the measure ( [ bayes.factor ] ) for variable selection . in addition , for all variable selection tasks using ( [ bayes.factor ] ) , the baseline model will always refer to the null model .+ + the results are shown in figure [ toy.figure1 ] .the path plot in the top left panel of figure [ toy.figure1 ] shows that non - zero coefficients entered into the model earlier under the bava - mio estimation .in addition , the paths of estimated coefficients behave similar to those under the hard - thresholding estimation , that is , once a coefficient is estimated to be non - zero , the corresponding estimation path makes a sharp jump to the non - thresholded value .moreover , due to the presence of the squared norm in the objective function , the number of selected covariates can be larger than the number of samples . throughout the estimation procedure ,the maximum number of selected covariates is 831 , which is much larger than the number of samples .here we also provide the lasso estimation for regression fitting with the same data .the results are shown in the right panel of figure [ toy.figure1 ] . 
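Before turning to the comparison with the lasso fit below, it is worth making the majorization step used in the iteration scheme above concrete. The short sketch that follows numerically checks the tangent-line bound supplied by the concavity of the logarithm and prints the per-coordinate weights it implies for the next weighted step. The specific surrogate log(|beta_j| + eps) and the weight formula 1/(|beta_j| + eps) are illustrative assumptions, since the exact expressions did not survive extraction; they follow the standard reweighting construction rather than necessarily matching the paper's display.

```python
import numpy as np

# Assumed log-sum surrogate for the sparsity penalty (illustrative form only):
#   penalty(beta) = sum_j log(|beta_j| + eps)
# Because log is concave, its tangent at the current iterate majorizes it:
#   log(t + eps) <= log(t0 + eps) + (t - t0) / (t0 + eps),  with equality at t = t0,
# so each MM sweep reduces to a weighted-l1 step with weights 1 / (|beta_j^(t)| + eps).

def log_sum_penalty(beta, eps):
    return np.sum(np.log(np.abs(beta) + eps))

def majorizer(beta, beta_old, eps):
    t, t0 = np.abs(beta), np.abs(beta_old)
    return np.sum(np.log(t0 + eps) + (t - t0) / (t0 + eps))

rng = np.random.default_rng(0)
eps = 1e-2
beta_old = rng.normal(size=5)   # current iterate
beta_new = rng.normal(size=5)   # any candidate point

# The two MM properties: domination everywhere, tangency at the current iterate.
assert majorizer(beta_new, beta_old, eps) >= log_sum_penalty(beta_new, eps) - 1e-12
assert np.isclose(majorizer(beta_old, beta_old, eps), log_sum_penalty(beta_old, eps))

weights = 1.0 / (np.abs(beta_old) + eps)
print("MM weights for the next weighted step:", np.round(weights, 3))
```

Under these assumptions each sweep leaves a convex, reweighted subproblem, which is what makes the coordinate-wise shrinkage-thresholding updates tractable.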
as compared with the lasso estimation , in which 33 covariates are selected using ten fold cross validation, the bava - mio estimations using the bayes factor and ten fold cross validation correctly select covariates with non - zero coefficients .in addition , as shown in the bottom panel of figure [ toy.figure1 ] , values of the non - zero coefficients are also estimated more accurately under the bava - mio estimations .we briefly discuss properties of the log - sum function stated in ( [ 9 ] ) .first , note that . by multiplying to the sum andlet , one obtains the logarithm of the product of over . as pointed out by tipping , the term is an improper version of student s density .a rather different way is to see the log - sum function as a product of logarithm of the generalized pareto density , which has a parametric form given by for , , , and . by multiplying and adding a constant term to , it becomes , which is a logarithm of the product of generalized pareto densities with location parameter , scale parameter , and shape parameter .+ + the following two propositions discuss relationships between the log - sum function and the and norms .the first one states that the error rate between the log - sum function and the norm measured by an distance is of order as .proofs of the two propositions will be given in appendix b. [ proposition2 ] define and .then for , there exists a positive constant such that a graphical representation of proposition [ proposition2 ] can be found in the bottom - right panel of figure [ logsum.figure1 ] .the next proposition states that the log - sum function can do better in approximating the norm than the norm as . on the other hand , results in this proposition also implies that the log - sum function approaches to norm as .sriperumbudur et al . gave another heuristic argument for this property .[ proposition3 ] with the same notation used in proposition 2 , for and ] .the third approach is the adaptive lasso , which is defined by we use r package `` parcor '' to obtain the adaptive lasso estimates with the default setting of as the initial value for and tuning parameter selected via ten fold cross validation .+ + we collect several performance measures at each simulation run .the first one is the standardized distance between and , which is defined by the second one is the predictive mean squared error of for a test data set , which is defined by the test data set contains data points generated using a simulation scheme the same as the training data set .the third one is the number of coefficients with non - zero estimated values , where .the final one is the sign function - based false positive rates , which is defined by where the sign function sign is defined in ( [ notation.1 ] ) .+ + for each of the 45 simulation experiments , we generate 100 runs to collect the four performance measures .we then plot the average of each peformance measure against the ratio , i.e. the ratio between the number of samples and the number of true coefficients with non - zero values .these plots are shown in figures [ simui.fig1 ] , [ simui.fig2 ] , and [ simui.fig3 ] for snr and , respectively . from the three figures we can see none of the estimation approaches can dominate the others in all aspects of the performance measures . 
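Before going through the detailed findings, note that the four performance measures listed above are straightforward to compute; a minimal sketch is given below. The normalization of the coefficient distance and the sign-based false positive rate follow common conventions and are stand-ins for the paper's exact definitions, which were lost in extraction.

```python
import numpy as np

def performance_measures(beta_hat, beta_true, X_test, y_test):
    """Summaries used to compare variable-selection methods (assumed definitions)."""
    # Standardized distance between estimated and true coefficients.
    dist = np.linalg.norm(beta_hat - beta_true) / np.linalg.norm(beta_true)

    # Predictive mean squared error on an independent test set.
    pmse = np.mean((y_test - X_test @ beta_hat) ** 2)

    # Number of coefficients estimated with non-zero values.
    n_nonzero = int(np.count_nonzero(beta_hat))

    # Sign-based false positive rate: fraction of truly zero coefficients
    # whose estimate has a non-zero sign.
    truly_zero = (beta_true == 0)
    s_fpr = float(np.mean(np.sign(beta_hat[truly_zero]) != 0)) if truly_zero.any() else 0.0

    return {"dist": dist, "pmse": pmse, "n_nonzero": n_nonzero, "s_fpr": s_fpr}
```

Averaging these quantities over the 100 runs of each experiment and plotting them against the sample-size ratio gives comparisons of the kind reported in the figures.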
in most cases ,bava - mio based estimations have smaller sign function - based false positive rates , as shown in the second column of each figure .it implies that more accurate variable selection may be done using bava - mio estimations .in general , bava - mio estimations have fewer numbers of non - zero estimates , as shown in the fourth column of each figure .these findings become more significant as the number of samples increases . in addition , since the bava - mio estimation using the bayes factor has the fewest numbers of non - zero estimates than other estimation approaches , it is surprising that the pmse and -dis measures under the bmio - bf estimation are comparable to those under other estimation approaches , for example , in cases with snr and in some cases with snr .however , we also noticed that the bmio - bf estimation has higher values in the pmse and -dis in the cases with snr , particularly in those with small numbers of samples .figure [ simu.figure1.1 ] show heatmaps for the s - fpr and rankings of the s - fpr for the five estimation approaches under the 45 simulation scenarios .the heatmaps are generated by using the graphical software gap ( generalized associated plots ) , which was developed by wu , tien and chen as companion software to ( ` http://gap.stat.sinica.edu.tw/software/gap/index.htm ` ) .the gap - based heatmaps further suggest that using bava - mio estimation can lead to more accurate variable selection . in the second simulation studywe investigate the impact of the irrepresentable condition on the performance of bava - mio estimation in variable selection . before stating the irrepresentable condition, we give some notation definitions first .we assume the true model is parametrized by .define and .denote the coefficients with indices in , and the coefficients with indices in .similar definitions are also applied to and , respectively .an estimator is said to be sign consistent in estimating if the probability of the event approaches to as .given the sign consistency holds , the estimated index set will be the same as the true index set , therefore the sign consistency implies variable selection consistency , that is , asymptotically with probability one , non - zero valued coefficients will have non - zero estimated values , and zero - valued coefficients will be estimated with zero values .+ + zhao and yu showed that if the lasso estimation wants to achieve the sign consistency , then the design matrices and must satisfy the following condition : latexmath:[\ ] ] which completes the proof . 
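Since the display of the irrepresentable condition above did not survive extraction, the following sketch states and checks it in the form given by Zhao and Yu (2006): with the Gram matrix C = X'X/n partitioned according to the true active set, the strong irrepresentable condition requires every entry of |C_21 C_11^{-1} sign(beta_1)| to be bounded away from one. The helper name and the use of the largest entry as the "irrepresentable statistic" are our own illustrative choices.

```python
import numpy as np

def irrepresentable_statistic(X, beta_true):
    """Max entry of |C_21 C_11^{-1} sign(beta_1)|, following Zhao and Yu (2006)."""
    n = X.shape[0]
    active = np.flatnonzero(beta_true != 0)
    inactive = np.flatnonzero(beta_true == 0)
    C = X.T @ X / n
    C11 = C[np.ix_(active, active)]
    C21 = C[np.ix_(inactive, active)]
    return np.abs(C21 @ np.linalg.solve(C11, np.sign(beta_true[active]))).max()

# Toy usage: equicorrelated design, first three coefficients active (placeholder values).
rng = np.random.default_rng(1)
n, p, rho = 200, 10, 0.5
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
beta = np.zeros(p)
beta[:3] = [3.0, 1.5, 2.0]

s = irrepresentable_statistic(X, beta)
print(f"irrepresentable statistic = {s:.3f} "
      f"({'satisfied' if s < 1 else 'violated'} for lasso sign consistency)")
```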
Figures [simui.fig1], [simui.fig2], and [simui.fig3] (one per signal-to-noise ratio). Top: model 1 (covariance matrix with constant off-diagonal terms); middle: model 2 (covariance matrix with constant off-diagonal terms); bottom: model 3 (covariance matrix with off-diagonal terms following a Toeplitz structure). First column: standardized distance between estimated and true values; second column: sign function-adjusted false positive rate; third column: prediction mean squared error; fourth column: number of non-zero estimates.

Figure: sign probability against the irrepresentable statistic under different signal-to-noise ratios.

Figure: left, center, and right panels. Top: the CV error and test error along the fitting path; middle: the number of selected genes along the fitting path; bottom: scatter plot of estimated label probabilities. The vertical dashed line in each plot in the top two panels indicates where the tuning parameter is selected.

Table: the sign probability under different signal-to-noise ratios. Each value is calculated by averaging over 100 simulation runs, and the corresponding standard error is given in brackets. The term "corr." in the second line of each panel is the squared correlation between the sign probability and the irrepresentable statistic; Kendall's tau is used in the correlation calculation.
We develop a method for carrying out MAP estimation for a class of Bayesian regression models in which the coefficients are assigned Gaussian-based spike-and-slab priors weighted by Bernoulli variables. Unlike simulation-based inference methods, the proposed method directly optimizes the logarithm of the joint posterior density for parameter estimation. The corresponding optimization problem has an objective function in Lagrangian form, in which the regression coefficients are regularized by a mixture penalty of squared and norms. A tight approximation to the norm based on majorization-minimization techniques is derived, and a coordinate descent algorithm combined with a soft-thresholding scheme is used to search for the optimizer of the approximated objective. Simulation studies show that the proposed method can lead to more accurate variable selection than other benchmark methods; they also show that the irrepresentable condition (Zhao and Yu, 2006) appears to have less impact on its performance. Theoretical results further show that, under some regularity conditions, sign consistency can be established even when the irrepresentable condition is violated. Results on posterior model consistency and estimation consistency, and an extension to parameter estimation in generalized linear models, are also provided.

*Keywords:* MAP estimation; norm; majorization-minimization algorithms; irrepresentable condition.
there is an explosion of data , generated , measured , and stored at very fast rates in many disciplines , from finance and social media to geology and biology .much of this _ big data _ takes the form of simultaneous , long running time series . examples , among many others , include protein - to - protein interactions in organisms , patients records in health care , customers consumptions in ( power , water , natural gas ) utility companies , cell phone usage for wireless service providers , companies financial data , social interactions among individuals in a population .the internet - of - things ( iot ) is an imminent source of ever increasing large collection of time series . the diversity and _ _ un__structured nature of big data challenges our ability to derive models from first principles ; in alternative , because data is abundant , it is of great significance to develop methodologies that, in collaboration with domain experts , assist extracting low - dimensional representations for the data . networks or graphs are becoming prevalent as models to describe data relationships .these low - dimensional graph data representations are then used for further analytics , for example , to compute statistics , make inferences , perform signal processing tasks , or quantify how topology influences diffusion in networks of agents . in many problems ,the first issue to address is inferring the unknown relations between entities from the data .early work include dimensionality reduction approaches such as . this paper focus on this problem for time - series data , estimating the network structure in the form of a possibly directed , weighted adjacency matrix .current work on estimating network structure largely associates graph structure with assuming that the process supported by the graph is markov .our work instead associates the graph with causal network effects , drawing inspiration from the discrete signal processing on graphs ( ) framework .we first provide a brief overview of the concepts and notations underlying the theory in section [ sec : dspg ] .then we introduce related prior work in section [ sec : prior ] and our new network process in section [ sec : gp ] .next , we present algorithms to infer the network structure from data generated by such processes in section [ sec : estim ] .finally , we show simulation results in section [ sec : exp ] and conclude the paper in section [ sec : concl ] .provides a framework with which to analyze data with elements for which relational information between elements is known .we follow in this brief review .consider a graph where the vertex set and is the weighted adjacency matrix of the graph .each data element corresponds to a node and weight is assigned to a directed edge from to .a graph signal is defined as a map since signals are isomorphic to complex vectors with elements , we can write graph signals as length vectors supported on , a graph filter is a system that takes a graph signal as input and outputs another graph signal . a basic nontrivial graph filter on graph called the graph shift is a local operation given by the product of the input signal with the adjacency matrix . assuming shift invariance , graph filters in are matrix polynomials of the form the output of the filter is . note that graph filters are linear shift - invariant filters . 
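A minimal sketch of the graph shift and of a polynomial graph filter, as reviewed above, is given below; the adjacency matrix, signals, and filter taps are random placeholders and the helper name is ours. The linearity and commutation properties noted next can be verified numerically with it.

```python
import numpy as np

def graph_filter(A, taps, x):
    """Apply h(A) x = taps[0] x + taps[1] A x + taps[2] A^2 x + ... (polynomial graph filter)."""
    out = np.zeros_like(x, dtype=float)
    power = np.eye(A.shape[0])
    for c in taps:
        out += c * (power @ x)
        power = power @ A                    # next power of the graph shift A
    return out

rng = np.random.default_rng(0)
N = 6
A = rng.random((N, N)) * (rng.random((N, N)) < 0.4)   # placeholder weighted adjacency matrix
x, y = rng.normal(size=N), rng.normal(size=N)         # placeholder graph signals
h, g = [1.0, 0.5, 0.25], [0.3, -0.2]                  # placeholder filter taps

# The graph shift itself is the basic non-trivial filter x -> A x.
print("shifted signal A x:", np.round(A @ x, 3))

# Linearity: a combination of inputs maps to the same combination of outputs.
assert np.allclose(graph_filter(A, h, 2 * x - 3 * y),
                   2 * graph_filter(A, h, x) - 3 * graph_filter(A, h, y))

# Commutativity: polynomials of the same shift can be applied in either order.
assert np.allclose(graph_filter(A, h, graph_filter(A, g, x)),
                   graph_filter(A, g, graph_filter(A, h, x)))
```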
for a linear combination of graph signal inputs they produce the same linear combination of graph signal outputs , and consecutive application of multiple graph filters does not depend on the order of application ( i.e., graph filters commute ) .graph filters also have at most taps , where is the degree of the minimal polynomial of .here we describe previous models and methods used to estimate graph structure , noting the similarities and distinctions with the method presented in this paper .in particular , in section [ subsec : graphicalmodel ] we consider sparse gaussian graphical model selection , and in section [ subsec : spvar ] we consider sparse vector autoregression .sparse inverse covariance estimation combines the markov property with the assumption of gaussianity to learn a graph structure describing symmetric relations between the variables .a typical formulation of sparse inverse covariance estimation is graphical lasso .suppose the data matrix representing all the observations is given , &\x[1]&\ldots&\x[k-1]\big)\end{array}\in \mathbb{r}^{n\times k}\ ] ] in this problem , the data is assumed to be gaussian , i.e. each \sim \mathcal{n}(\0,\sigma) ] is a random noise process that is generated independently from ] is a random noise process that is generated independently from ] , a discrete time series on node in graph , where indexes the nodes of the graph and indexes the time samples .let be the total number of nodes and be the total number of time samples , and =\begin{array}{cccc } \big ( x_0[k ] & x_1[k ] & \ldots & x_{n-1}[k ] \big)^t\end{array } \in \mathbb{c}^n\ ] ] represents the graph signal at time sample .we consider a causal graph process ( cgp ) to be a discrete time series ] is statistical noise , are scalar polynomial coefficients , and is a vector collecting all the s . note that this model does _ not _ assume markovianity on nodes and their neighbors .it instead asserts that the signal on a node at the current time is affected through network effects by signals on other nodes at past times .matrix polynomial is at most of order , reflecting that ]. grouping terms containing ] by the cgp model in section [ subsec : cgp ] , the regularizing term promotes sparsity of the estimated adjacency matrix , and the term also promotes sparsity in the matrix polynomial coefficients .unfortunately , the matrix polynomial in the first term makes this problem highly nonconvex . that is , using a convex optimization based approach to solve ( [ eq : optnonconvex2 ] )directly may result in finding a matrix and coefficients minimizing the objective function locally , finding a solution that is not near to the true globally minimizing matrix and coefficients . 
instead , our approach here is to break this estimation down into three separate , more tractable steps : solve for recover structure of estimate as previously stated , the graph filters are polynomials of a and are thus shift - invariant and must mutually commute .then their commutator = p_i ( \a ) p_j ( \a ) - p_j ( \a ) p_i ( \a ) = 0 \ ; \ ; \forall i , j\ ] ] let ; is the estimate of .this leads to the optimization problem , - \sum\limits_{i=1}^{m}\r_i \x[k - i ] \right\|_2 ^ 2 \\ + & \lambda_1 \|\vc(\r_1 ) \|_1 + \lambda_2 \sum\limits_{i\ne j}\|[\r_i,\r_j]\|_f^2 \end{aligned}\ ] ] while this is still a non - convex problem , it is multi - convex .that is , when ( all except for ) are held constant , the optimization is convex in .this naturally leads to block coordinate descent as a solution , - \sum\limits_{i=1}^{m}\r_i \x[k - i ] \right\|_2 ^ 2 \\ + & \lambda_1 \|\vc(\r_1 ) \|_1 + \lambda_2\sum\limits_{j\ne i}\left\|[\r_i,\r_j]\right\|_f^2 \end{aligned}\ ] ] each of these sub - problems for estimating in a single sweep of estimating is formulated as an -regularized least - squares problem that can be solved using standard methods .after obtaining estimates , we find an estimate for .one approach is to take .this appears to ignore the information from the remaining .however , the information has already been incorporated during the iterations when solving for , especially if we begin one new sweep to estimate using ( [ eq : optri ] ) with .a second approach is also possible , explicitly using all the together to find , \right\|_f^2 \end{aligned}\ ] ] this can be seen as similar to running one additional step further in the block coordinate descent to find except that this approach does not explicitly use the data .we can estimate in one of two ways : we can estimate either from and or from and the data . to estimate from and , we set up the optimization , where alternatively , to estimate from and the data , we can use the optimization , where , & \x[m+1 ] & ... 
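To make the last step concrete, the sketch below estimates the polynomial coefficients from a given adjacency-matrix estimate and the data by stacking regressors of the form A^j x[k-i] and handing the resulting l1-regularized least-squares problem to an off-the-shelf solver. The degree pattern (degree i at lag i) and the use of scikit-learn's Lasso are assumptions made for illustration; any standard l1 solver would serve.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_c(A, X, M, lam):
    """Estimate polynomial coefficients c_{ij} given A and data X (N x K).

    Assumes P_i(A) = sum_{j=0..i} c_{ij} A^j for lags i = 1..M (illustrative choice).
    """
    N, K = X.shape
    powers = [np.linalg.matrix_power(A, j) for j in range(M + 1)]

    blocks, targets = [], []
    for k in range(M, K):
        # One block of N regression rows per time sample k >= M.
        feats = [powers[j] @ X[:, k - i] for i in range(1, M + 1) for j in range(i + 1)]
        blocks.append(np.column_stack(feats))
        targets.append(X[:, k])
    Phi = np.vstack(blocks)                  # stacked regressors
    y = np.concatenate(targets)              # stacked targets x[k]

    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(Phi, y)
    return model.coef_                       # flattened c_{ij}, ordered by (i, j)
```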
& \x[m+k - m-1]\end{array}\!\right),\end{aligned}\ ] ] which can also be solved using standard -regularized least squares methods .the methods discussed so far can be interpreted as assuming that 1 ) the process is a linear autoregressive process driven by white gaussian noise and 2 ) the parameters and a priori follow laplace distributions .the objective function in ( [ eq : optnonconvex2 ] ) approximately corresponds to the log posterior density and its solution to an approximate maximum a posteriori ( map ) estimate .this framework can be extended to estimate more general autoregressive processes , such as those with a non - gaussian noise model and certain forms of nonlinear dependence of the current state on past values of the state .we formulate the general optimization where is a loss function that can correspond to a log - likelihood function dictated by the noise model , and and are regularization functions ( usually convex norms ) that can correspond to log - prior distributions imposed on the parameters and are dictated by modeling assumptions .again , the matrix polynomials introduce nonconvexity , so similarly as before , we can separate the estimation into three steps to reduce complexity .we next generalize equation ( [ eq : optri ] ) used to find as estimates of with the optimization ,\ldots,[\r_i,\widehat{\r}_m ] ) \end{aligned}\ ] ] where is the part of the objective function that depends on when the other are fixed , the term regularizes the estimated matrix polynomial , and the term promotes commmutativity of the matrix polynomials .next , we can again take , or we can reformulate equation ( [ eq : finda2 ] ) where is some loss function , regularizes the estimated adjacency matrix , and enforces commutativity of the adjacency matrix with the other matrix polynomials. we can generalize ( [ eq : findcspec ] ) as where is the objective function , and is a regularizing function on the matrix polynomial coefficients , and is defined as in section [ subsec : estimcij ] ; and lastly generalize ( [ eq : findc2spec ] ) as where and are the same functions as above in ( [ eq : findc ] ) , and and are defined as in section [ subsec : estimcij ] . in sections [ subsec : solvepi]-[subsec : estimcij ] ,we have outlined a 3-step algorithm to obtain estimates and for the adjacency matrix and filter coefficients as a more efficient and well - behaved alternative to directly using ( [ eq : optnonconvex ] ) .initialize , , find with fixed , using ( [ eq : optpi ] ) . set or estimate from using ( [ eq : finda ] ) .solve for from , using ( [ eq : findc ] ) or from , using ( [ eq : findc2 ] ) .we call this 3-step procedure the basic algorithm , which is outlined in algorithm [ alg : algbasic ] .superscripts denote the iteration number , denotes and likewise denotes , and is the final iteration performed before convergence of is determined or a preset maximum iteration count is exceeded . as an extension of the basic algorithm, we can also choose the estimated matrix and filter coefficients to initialize the direct approach of using ( [ eq : optnonconvex ] ) .starting from these initial points , we may find better local minima than with initializations at and or at random points .we call this procedure the extended algorithm , summarized in algorithm [ alg : algext ]. estimate , using basic algorithm .find , using initialization , from ( [ eq : optnonconvex ] ) using convex methods .in this section , we discuss the convergence of both the basic and extended algorithms described above . 
in estimating and , the forms of the optimization problems are well studied when choosing and norms as loss and regularization functions , as seen in equations ( [ eq : finda2 ] ) and ( [ eq : findc2spec ] ) .however , using these same norms , step 1 of the algorithm is a nonconvex optimization .hence we would like to ensure that step 1 converges . when using block coordinate descent for general functions , neither the solution nor the objective function values are guaranteed to converge to a global or even a local minimum . however , under some mild assumptions , using block coordinate descent to estimate will converge . in equation ( [ eq : optnonconvex ] ) ,if we assume the objective function to be continuous and to have convex , compact level sets in each coordinate block ( for example , if the functions for and are the and norms as in equation ( [ eq : optnonconvex2 ] ) ) , then the block coordinate descent will converge .now we discuss the convergence of the extended method described in section [ subsec : extest ] assuming that the basic algorithm has converged to an initial point for the extended algorithm .we assume that the function in equation ( [ eq : optnonconvex ] ) has compact level sets and is bounded below .then an iterative convex method that produces updates of such that ( e.g. , generalized gradient descent with appropriately chosen step size ) converges , possibly to a local optimum if the problem is nonconvex . if the functions are the and norms as in equation ( [ eq : optnonconvex2 ] ) , these conditions are satisfied as well .the algorithm was tested on randomly generated examples ( with varying and ) and a real temperature sensor network dataset ( with and ) , sampled once per day over a year at locations in the continental united states . to solve the regularized least squares iterations for estimating the cgp matrices , we used gradient projection for sparse reconstruction . to estimate the mrf matrices, we implemented a proximal gradient descent algorithm to estimate the svar matrix coefficients from [ eq : svar ] since the code used in is not tested for larger graphs .[ fig : toy ] the random graph dataset was generated by first creating a random sparse matrix for a graph with nodes and coefficients that corresponded to a stable system .the matrix in our simulations had each off - diagonal element independently drawn from a unit normal distribution and made sparse by thresholding and then scaled to ensure stability where is the largest eigenvalue of .this results in a directed , weighted erds - rnyi graph topology with a constant probability of having an edge from node to node and with edge weights bounded away from .the diagonal elements were generated from a uniform distribution , also to ensure stability .then the adjacency matrix was formed .finally , the polynomial coefficients for a process of order were arbitrarily chosen to result in a stable process .the data matrix was formed by generating random initial states and zero - mean unit - covariance additive white gaussian noise ] according to ( [ eq : cgpmodel ] ) . in figure[ fig : toy_a ] , we see the structure of the matrix with nodes used in the simulated graph to generate with samples according to a cgp .we also see the structure of the matrices estimated using the basic and extended methods , and also using the gradient method in the extended method but initialized at the origin ( , ) . 
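For reference, the data-generating setup just described can be reproduced with a short sketch: draw the off-diagonal entries from a unit normal, sparsify by thresholding, rescale by the spectral radius so the process stays stable, and simulate the CGP forward in time. The threshold, diagonal range, and filter coefficients below are placeholder values, since the ones used in the experiments are not stated in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 50, 1000, 2                          # nodes, time samples, model order (placeholders)

# Sparse random adjacency matrix, rescaled so the process stays stable.
A = rng.normal(size=(N, N))
A[np.abs(A) < 1.5] = 0.0                       # threshold to sparsify (placeholder cutoff)
np.fill_diagonal(A, rng.uniform(0.1, 0.3, N))  # small positive diagonal (placeholder range)
A /= 1.5 * np.max(np.abs(np.linalg.eigvals(A)))  # scale down by the spectral radius

# Placeholder polynomial filters P_1(A), P_2(A), chosen small enough to keep the process stable.
P = [0.5 * A, 0.2 * A + 0.1 * (A @ A)]

# Simulate x[k] = sum_i P_i x[k-i] + w[k] with white Gaussian noise.
X = np.zeros((N, K))
X[:, :M] = rng.normal(size=(N, M))             # random initial states
for k in range(M, K):
    X[:, k] = sum(P[i] @ X[:, k - 1 - i] for i in range(M)) + rng.normal(size=N)
```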
while the actual graphs are directed , it is difficult to depict the direction of edges in larger graphs , so for presentation , they are shown as undirected in figure [ fig : toy_a ] . in figure[ fig : toy_b ] , we see the individual values of the matrices estimated from the data .now we note the directed nature of the graph , as the matrix is asymmetric .we see that the estimated matrices all have almost the same support as the true matrix visually .qualitatively , the basic method produces a less sparse solution , and the gradient method produces a lower magnitude solution , while the solution produced by the extended method is both sparse and closer in magnitude solution to the true solution . thus we see that the extended method performs better than either the basic step or the gradient step alone .the mean squared errors ( mse s ) of entries of the matrix are computed as the mse s for the estimates shown in figure [ fig : toy ] are : for the basic , for the extended , and for the gradient . in figure[ fig : toy_kn_mse ] , we see mse for across different numbers of time samples on the same graph , as well as across different numbers of nodes corresponding to distinct graphs with the same level of sparsity . the mse is averaged over monte - carlo simulations of the data matrix for each problem size .as expected , the mse for each problem size decreases as the number of samples increases .while the estimate produced by the proposed method is biased for finite , the plot suggests that asymptotically in the number of samples , the graph produced may still be consistent .in addition , the mse s computed decrease with increasing , suggesting that the total error in estimating the matrix .the dependence of these error rates on and are of interest for further analysis .the temperature dataset is a collection of daily average temperature measurements taken over days at locations around the continental united states .the time series is linearly detrended at each measurement station to form . then the seasonally detrended time series are obtained by applying an ideal high - pass filter with cutoff period of days to each .finally , the data matrix is formed from [ fig : temp_pred ] we compare the sparsity and prediction errors of using cgp and mrf models as well as an undirected distance graph as described in , all on the same set of temperature data .the distance graph model uses an adjacency matrix to model the process =\w[k]+\sum\limits_{i=1}^{m } h_i ( \a^{\textrm{dist } } ) \x[k - i]\ ] ] where are polynomials of the distance matrix with elements chosen as with representing the neighborhood of cities nearest to city . in this model, is taken to be fixed and the polynomial coefficients are to be estimated . in our experiments , we assumed .we separated the data into two segments , one consisting of the even time indices and one consisting of the odd time indices .one set was used as training data and the other set was left as testing data .this way , the test and training data were both generated from almost the same process but with different daily fluctuations . 
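A sketch of the preprocessing and the even/odd train/test split described above follows. Linear detrending per station uses scipy.signal.detrend, and a simple FFT mask stands in for the ideal high-pass filter; the cutoff period is a placeholder because the exact value is not given here.

```python
import numpy as np
from scipy.signal import detrend

def preprocess(T, cutoff_days=31):
    """Linearly detrend each station's series, then high-pass filter it.

    T is stations x days; cutoff_days is a placeholder for the paper's cutoff period.
    """
    X = detrend(T, axis=1, type="linear")
    F = np.fft.rfft(X, axis=1)
    freqs = np.fft.rfftfreq(X.shape[1], d=1.0)     # cycles per day
    F[:, freqs < 1.0 / cutoff_days] = 0.0          # ideal high-pass: drop slow (seasonal) cycles
    return np.fft.irfft(F, n=X.shape[1], axis=1)

# Even time indices for training, odd for testing, as described above.
T = np.random.default_rng(0).normal(size=(150, 365))   # placeholder temperature data
X = preprocess(T)
X_train, X_test = X[:, 0::2], X[:, 1::2]
```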
in this experiment, we compute the prediction mse as -\widehat{\x}[i ] \right\|^2\ ] ] here , since we do not have the ground truth graph for this data , the experiments can be seen as corresponding to the two related tasks of compression and prediction .the training error indicates how well the estimated graph can represent or compress the entire series , while the test error indicates how well the estimated graph can predict the following time instance from past observations on new series . in figure[ fig : temp_pred_a ] , we see that using a directed graph ( either cgp or mrf ) performs better for prediction than using an undirected distance graph in both training and testing phases .in addition , for high sparsity or low nonzeros ( towards the left side of the graph with proportion of nonzeros ) , the cgp fits the data with better accuracy than the mrf model . at lower sparsity levels ( towards the right side of the graph ) , the mrf model performs better than the cgp model .figure [ fig : temp_pred_b ] shows the average of test and training error labeled as total error .the same trends are present here as well , in which the cgp model outperforms the mrf model for . however ,when the proportion of nonzeros in the adjacency matrix is above , one can not claim that the model is truly `` sparse '' .thus , when using sparse models , the cgp captures the true dynamics of the process using higher sparsity than the mrf model . in figure[ fig : temp ] , we compare the temperature networks estimated on the entire time series using cgp and mrf models that both have sparsity level . for the mrf estimate , we used the first matrix coefficient to represent the network , although as mentioned previously there is not the interpretation of a single weighted graph being estimated using this model .the -axis corresponds to longitude while the -axis corresponds to latitude .we note that the network produced by the mrf at the same sparsity level has similar support to that produced by the cgp , but the magnitudes of the edges are lower .we see that the cgp model clearly picks out the predominant west - to - east direction of wind in the portion of the country , as single points in this region are seen to predict multiple eastward points .it also shows the influence of the roughly north - northwest - to - south - southeast rocky mountain chain at .this yields easy interpretation consistent with knowledge of geographic and meteorological features .we have shown through experiments that the estimation algorithms presented are able to accurately estimate a sparse graph topology from data generated when the true model is a cgp .we have also demonstrated empirically that the cgp model can use a higher sparsity level than the mrf model to describe processes at the same levels of accuracy .for this reason , we believe that the cgp model reflects the dynamics of the process underlying the temperature sensor data more faithfully .we have presented a new type of graph - based network process and a computationally tractable algorithm for estimating such causal networks . the algorithm was demonstrated on several random graphs of varying size and a real temperature sensor network dataset .the estimated adjacency matrices in the random graph examples were shown to be close to the true graph . in the real dataset ,the adjacency matrices estimated using our method were consistent with prior physical knowledge and achieved lower prediction error compared to previous methods at the same sparsity level .
many _big data_ applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the us. a first task in the analytics of these data is to derive a low dimensional representation, a graph or discrete manifold, that describes well the _inter_relations among the time series and their _intra_relations across time. this paper presents a computationally tractable algorithm for estimating this graph structure from the available data. this graph is directed and weighted, possibly representing _causation_ relations, not just correlations as in most existing approaches in the literature. the algorithm is demonstrated on random graph and real network time series datasets, and its performance is compared to that of related methods. the adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested. * keywords: * graph signal processing, graph structure, adjacency matrix, network, time series, big data, causal
an increasing number of geo - located data are generated everyday through mobile devices .this information allows for a better characterization of social interactions and human mobility patterns .indeed , several data sets coming from different sources have been analyzed during the last few years .some examples include cell phone records , credit card use information , gps data from devices installed in cars , geolocated tweets or foursquare data .this information led to notable insights in human mobility at individual level , but it makes also possible to introduce new methods to extract origin - destination tables at a more aggregated scale , to study the structure of cities and even to determine land use patterns . in this work ,we analyze a twitter database containing over million geo - located tweets from european countries with the aim of exploring the use of twitter in transport networks .two types of transportation systems are considered across the continent : highways and trains .tweets on the road and on the rail between september 2012 and november 2013 have been identified and the coverage of the total transportation system is analyzed country by country .differences between countries rise due to the different adoption or penetration rates of geo - located twitter technology .however , our results show that the penetration rate is not able to explain the full picture regarding differences across counties that may be related to the cultural diversity at play .the paper is structured as follows . in the first section ,the datasets are described and the method used to identify tweets on highways and railways is outlined . in the second section , we present the results starting by general features about the twitter database and then comparing different european countries by their percentage of highway and railway covered by the tweets .finally , the number of tweets on the road is compared with the average annual daily traffic ( aadt ) in france and in the united kingdom to assess its capacity as a proxy to measure traffic loads .the dataset comprehends geo - located tweets across europe emitted by twitter users in the period going from september to november .the data was gathered through the general data streaming with the twitter api .it is worth noting that the tweets are not uniformly distributed , see figure [ map]a .countries of western europe seem to be well represented , whereas countries of eastern europe are clearly under - represented ( except for turkey and russia ) . the highway ( both directions ) andthe railway european networks were extracted from openstreetmap ( see figure [ map]b and [ map]c for maps of roads and railways , respectively ) .a close look at the three maps reveals that while tweets concentrate in cities , there is a number of tweets following the main roads and train lines . in this sense ,even roads that go through relatively low population areas can be clearly discerned such as those on russia connecting the main cities , the area of monegros in spain , north of zaragoza or the main roads in the center of france ( see the maps country by country in appendix ) . 
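as a purely illustrative sketch of the preprocessing implied above (not the authors' pipeline), the collected records can be restricted to the study window and to a rough european bounding box before the map-matching step; the column names and the bounding box coordinates are assumptions.

```python
import pandas as pd

def filter_tweets(df, t_start="2012-09-01", t_end="2013-11-30",
                  lat_range=(34.0, 72.0), lon_range=(-11.0, 45.0)):
    """Keep geo-located tweets inside the study period and a rough European
    bounding box. Expects columns 'timestamp', 'lat', 'lon' (assumed names)."""
    ts = pd.to_datetime(df["timestamp"])
    keep = (
        (ts >= t_start) & (ts <= t_end)
        & df["lat"].between(*lat_range)
        & df["lon"].between(*lon_range)
    )
    return df.loc[keep]
```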
herewe analyze in detail the statistics of the tweets posted on the roads and railways and discuss the possibility that they are a proxy for traffic and cultural differences .it is important to stress that we considered only the main highways ( motorways and international primary roads ) , not rural roads , while for railways we considered all the main lines ( standard gauge of the considered country ) .the european highways and railways that we consider have a total length of kilometers and kilometers , respectively , which have been divided into segments of kilometers each .the histograms of total lengths by country of highways ( panel ( a ) ) and railways ( panel ( b ) ) are plotted in figure [ rankroad ] .russia , spain , germany , france and turkey represent of the highways total length in europe . while , for the railway , russia and germany represent of the total length .figure [ rankroad]c shows that most of european countries have a railway network larger than the highway network except for turkey , norway , greece , spain , portugal and finland . in particular , turkey has a highway system three times larger than the railway network . to identify the tweets on the road / rail , we have considered all the tweets geo - located less than meters away from a highway ( both directions ) or a railway .then , each tweet on the road / rail is associated with the closest segment of road / rail . using this information, we can compute the percentage of road and rail segments covered by the tweets ( hereafter called highway coverage and railway coverage ) .a segment of road or rail is covered by tweets if there is at least one tweet associated with this segment .the data analyzed are publicly available as they come from public online sites ( twitter and openstreetmap ) .furthermore , the twitter data have been anonymized and aggregated before the analysis that has been performed in accordance with all local data protection laws .to evaluate the representativeness of the sample across european countries , the twitter penetration rate , defined as the ratio between the number of twitter users and the number of inhabitants of each country , is plotted in figure [ pr]a .this ratio is not distributed uniformly across european countries .the penetration rate is lower in countries of central europe .it has been shown in previous studies that the gross domestic product ( gdp ) per capita ( an indicator of the economic performance of a country ) is positively correlated with the penetration rate at a world - wide scale .figure [ pr]b shows the penetration rate as a function of the gdp per capita in european countries . 
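to make the coverage computation concrete, here is a hedged sketch of the matching step: each tweet is attached to the nearest segment if it lies within the distance threshold, and the coverage is the fraction of segments with at least one attached tweet. the threshold and segment length were stripped from the extracted text, so they appear as parameters; the use of the haversine distance and of segment centroids (rather than the full segment geometry) are simplifying assumptions of this sketch, not a description of the original pipeline.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between (lat, lon) points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def network_coverage(tweets, segment_centroids, max_dist_m=1_000.0):
    """Fraction of road/rail segments with at least one tweet closer than max_dist_m.
    tweets and segment_centroids are arrays of (lat, lon) rows; the real analysis
    would use the full segment geometry rather than centroids."""
    covered = np.zeros(len(segment_centroids), dtype=bool)
    for lat, lon in tweets:
        d = haversine(lat, lon, segment_centroids[:, 0], segment_centroids[:, 1])
        j = np.argmin(d)                      # nearest segment to this tweet
        if d[j] <= max_dist_m:
            covered[j] = True
    return covered.mean()
```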
no clear correlation is observed in this case .this fact does not conflict with the previous results since our analysis is restricted to europe and as shown in , in this relationship , countries from different continents cluster together .this means that a global positive correlation appears if countries from all continents are considered but it is not necessarily significant when the focus is set instead on a particular area of the world .the penetration rate of geo - located tweets is different across european countries and does not show a clear relation to the gdp per capita of each country .there are several factors that can contribute to this diversity such as the facility of access or prices of the mobile data providers .in addition , generic cultural differences when facing a delicate issue from the privacy perspective such as declaring the precise location in posted messages can be also present .one can then naturally wonder whether these differences extend to other aspects of the use of twitter or are constraint to geographical issues .one obvious question to explore is the structure of the social network formed by the interactions between users .we extract interaction networks by establishing the users as nodes and connecting a pair of them when they have interchanged a reply .replies are specific messages in twitter designed to answer the tweets of a particular user .it can be seen as a direct conversation between two users and as shown in ( and references therein ) can be related to more intense social relations .a network per country was obtained by assigning to each user the country from which most of his or her geo - located tweets are posted .figure [ degree]a shows the distribution of the social network s degree ( number of connections per user ) of countries ( belgium , croatia , estonia , hungary and the uk ) drawn at random among the considered .the slope of these distributions are very similar and can be fitted using a power - law distribution .more systematically , in figure [ degree]b and figure [ degree]c we have respectively plotted the box plot of the fitted exponent values obtained for the countries and the box plot of the r associated with these fits .all the networks have very similar degree distributions , although they show a different maximum degree as a result of the diverse network sizes .these networks are sparse due to the fact that we are keeping only users if they post geo - located tweets and connections only if a reply between two users have taken place .still and beyond the degree distribution , other topological features such as the average node clustering seems to be quite similar across europe laying between and for the most populated countries ( where we have more data for the network ) .the percentage of segments ( i.e. , km ) covered by the tweets in europe is for the highway and the railway . 
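the power-law fit of the degree distributions is described only qualitatively (an exponent and an r^2 per country are reported), so the sketch below assumes the simplest procedure consistent with that description: build an undirected reply network per country and fit a straight line to the log-log degree histogram by least squares. maximum-likelihood estimators in the style of clauset et al. would be a more robust alternative; the function names and the toy edge list are illustrative.

```python
import numpy as np
import networkx as nx

def reply_network(replies):
    """Build an undirected interaction network from (user_a, user_b) reply pairs."""
    g = nx.Graph()
    g.add_edges_from(replies)
    return g

def powerlaw_exponent(degrees, k_min=1):
    """Least-squares slope of the log-log degree histogram (simplest fit consistent
    with the text); returns (exponent, r_squared)."""
    degrees = np.asarray([k for k in degrees if k >= k_min])
    ks, counts = np.unique(degrees, return_counts=True)
    x, y = np.log(ks), np.log(counts / counts.sum())
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return -slope, 1 - ss_res / ss_tot

# usage sketch with a toy reply list
g = reply_network([("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")])
gamma, r2 = powerlaw_exponent([d for _, d in g.degree()])
```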
the highway coverage is better than the railway coverage probably because the number of passenger - kilometers per year , which is the number of passengers transported per year times kilometers traveled , on the rail network is lower .however , the coverage is very different according to the country .indeed , in figure [ tweeton ] we can observe that western european countries have a better coverage than countries of eastern europe except turkey and , to a lesser extent , russia .figure [ rank ] shows the top european countries ranked by highway coverage ( figure [ rank]a ) and railway coverage ( figure [ rank]b ) .the two countries with the best highway and railway coverages are the united kingdom and the netherlands .the tweets cover of the highway system in uk and in netherlands . on the other hand ,the tweets cover up to of the railway network in the uk and in netherlands .inversely , the country with the lowest coverage is moldavia with a highway coverage of and a railway coverage of .the first factor to take into account to understand such differences is the penetration rate .in fact , as it can be observed in figure [ prhrhr]a and figure [ prhrhr]b , as a general trend , the coverage of both highway and railway networks is positively correlated with the penetration rate . and , as a consequence , a positive correlation can also be observed between the highway coverage and the railway coverage ( figure [ prhrhr]c ) .however , these relationships are characterized by a high dispersion around the regression curve .note that the dispersion is higher than what it can look in a first impression because the scales of the plots of figure [ prhrhr ] are logarithmic .for the two first relationships the mean absolute error is around and for the third one the mean absolute error is around .this implies that divergences on the geo - located twitter penetration does not fully explain the coverage differences between the european countries .disparity in coverage between countries can neither be satisfactorily explained by differences in fares or accessibility to mobile data technology .for example , two countries as france and spain are similar in terms of highway infrastructure , mobile phone data fares and accessibility , but the geo - located twitter penetration rates are very different as also are their highway coverage in spain and in france . besides penetration rates , divergences in coverage might be the product of cultural differences among european countries when using twitter in transportation .as it can be observed in figure [ ttotr]a , the proportion of tweets geo - located on the highway or railway networks is very different from country to country . in the following , we focus on three examples of countries with similar characteristics in the sense of penetration rates but displaying significant differences in transport network coverage . + * ireland and united kingdom * the most explicit example of the impact of cultural differences on the way people tweet in transports could be given by the ireland and united kingdom case studies .indeed , these two countries have very similar penetration rates but uk has a proportion of tweets in transports more than two times higher than ireland .moreover , both highway and railway coverages are one and a half times higher in uk than in ireland .+ * turkey and netherlands * turkey and netherlands , which have similar penetration rate , are also an interesting example illustrating how cultural and economical differences may influence coverage . 
despite the fact that they both have a high highway coverage, netherlands has a railway coverage three times higher than turkey. different economic levels of train and car travelers in turkey could be, for instance, an explanation for this.

* belgium - norway * for countries having a similar penetration rate, the higher the proportion of tweets in transports, the better the coverage. however, some exceptions exist: for example, norway has a proportion of tweets in transports higher than belgium but, inversely, belgium has a highway coverage three times higher than norway. given the very extensive highway system of norway, some of the segments, especially in the north, can have very low traffic, which could be the origin of this difference.

pair of countries | difference between the percentages of rail passenger-kilometers | difference between the railway coverages
belgium - turkey | 0.13 | 4.8
ireland - belgium | 0.19 | -4.3
ireland - turkey | 0.32 | 0.5
france - latvia | 0.17 | 5.2
switzerland - sweden | 0.18 | 8.1
sweden - estonia | 0.17 | 7.5
switzerland - estonia | 0.36 | 15.6
norway - germany | 0.09 | -3.6
denmark - norway | 0.05 | 4.5
norway - portugal | 0.07 | 0.2
denmark - germany | 0.15 | 0.9
portugal - germany | 0.02 | -3.8
denmark - portugal | 0.12 | 4.7

in general, the distribution of tweets according to the transport network is also very different from country to country (figure [ttotr]b) but also region by region. for example, countries from north and central europe have a higher proportion of tweets on the road relative to tweets on the rail than other european countries. this is probably due to differences regarding the transport mode preference among european countries. to check this assumption, we studied the distribution of rail passenger-kilometers in according to the proportion of tweets on the rail. figure [rpk] shows box plots of the distribution of rail passenger-kilometers, expressed as a percentage of total inland passenger-kilometers, according to the proportion of tweets on the rail among the tweets on the road and rail. globally, the number of rail passenger-kilometers is lower for countries having a low proportion of tweets on the rail, which confirms our assumption. in the same way, the distribution of rail passenger-kilometers in can be used to understand why two countries having the same highway coverage might have very different railway coverages. for example, switzerland and estonia have the same highway coverage, with about of road segments covered by the tweets, but the railway coverage is very different, with about of rail segments covered in switzerland and in estonia. this can be explained by the fact that in switzerland trains accounted for of all inland passenger-kilometers in (which was the highest value among european countries in that year) and, inversely, in estonia trains accounted for of all inland passenger-kilometers (one of the lowest in europe). more systematically, for each pair of countries having similar highway coverages, we compared the difference between railway coverages and the difference between the percentage of rail passenger-kilometers.
first , pair of countries having a highway coverage higher than and an absolute different between their highway coverages lower than are selected .thus , we have selected pairs of countries with a similar highway coverage .table [ table ] displays the difference between the percentage of rail passenger - kilometers and the difference between the railway coverages for these pairs of countries . in out of cases ,the differences have the same sign .this fact points towards a possible correlation between traffic levels and tweet coverage . to assess more quantitatively this hypothetical relation between the number of vehicles and the number of tweets on the road , we compared the number of tweets and the average annual daily traffic ( aadt ) on the highways in united kingdom in and in france in .the aadt is the total number of vehicle traffic of a highway divided by days .the number of highway segments for which the aadt was gathered is in uk and in france .the average length of these segments is kilometers in uk and in france . as in the previous analysis ,the number of tweets associated with a segment was computed by identifying all the tweets geo - located at less than meters away from the segment .figure [ fruk]a and [ fruk]c shows a comparison between the aadt and the number of tweets on the road for both case studies .there is a positive correlation between the aadt and the number of tweets on the road but the pearson correlation coefficient values are low , around for the france case study and around for the uk case study .this can be explained by the large number of highway segments having a high aadt but a very low number of tweets .to understand the origin of such disagreement between tweets and traffic , we have divided the segments into two groups : those having a high aadt and a very low number of tweets ( red points ) and the rest ( blue points ) .these two types of segments have been separated using the black lines in figure [ fruk]a and [ fruk]c . figure [ fruk]b and [ fruk]d show the box plots of the highway segment length in kilometer according to the segment type for both case studies .it is interesting to note that the segments having a high aadt and a low number of tweets are globally shorter than the ones of the other group . indeed , according to the welch two sample t - test the average segment length of the first group ( km in france and in uk ) is significantly lower than the one of the second group ( km in france and in uk ) . given a similar speed one can assume that the shorter the road segment is , the lower time people have to post a tweet .other factors that may influence this result is the nature of the segments , rural vs urban , and the congestion levels that can significantly alter the time spent by travelers in the different segments .in this work , we have investigated the use of twitter in transport networks in europe .to do so , we have extracted from a twitter database containing more than million geo - located tweets posted from the highway and the railway networks of european countries .first , we show that the countries have different penetration rates for geo - located tweets with no clear dependence on the economic performance of the country .our results show , as well , no clear difference between countries in terms of the topological features of the twitter social network . 
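both statistics named above are standard: the pearson correlation between per-segment aadt and tweet counts, and the welch two-sample t-test comparing segment lengths between the "high aadt, few tweets" group and the rest. a short sketch using scipy follows; the two thresholds stand in for the separating line drawn in the figure, since the actual cut is not recoverable from the extraction, and the inputs are assumed to be numpy arrays aligned by segment.

```python
import numpy as np
from scipy import stats

def aadt_tweet_analysis(aadt, tweet_counts, seg_length_km, high_aadt_thr, low_tweet_thr):
    """Pearson correlation between AADT and tweet counts, plus a Welch t-test on
    segment lengths between 'high AADT / few tweets' segments and the rest."""
    r, p_corr = stats.pearsonr(aadt, tweet_counts)
    group1 = (aadt >= high_aadt_thr) & (tweet_counts <= low_tweet_thr)
    t, p_welch = stats.ttest_ind(seg_length_km[group1], seg_length_km[~group1],
                                 equal_var=False)    # Welch's two-sample t-test
    return r, p_corr, t, p_welch
```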
dividing the highway and railway systems in segments ,we have also studied the coverage of the territory with geo - located tweets .european countries can be ranked according to the highway and railway coverage .the coverages are very different from country to country .although some of this disparity can be explained by differences in penetration rate or by the use of different transport modalities , a large dispersion in the data still persist .part of it could be due to cultural differences among european countries regarding the use of geo - located tools .finally , we explore whether twitter can be used as a proxy to measure of traffic on highways by comparing the number of tweets and the average annual daily traffic ( aadt ) on the highways in united kingdom and france .we observe a positive correlation between the number of tweets and the aadt .however , the quality of this relationship is reduced due to the short character of some aadt highway segments .we conclude that the number of tweets on the road ( train ) can be used as a valuable proxy to analyze the preferred transport modality as well as to study traffic congestion provided that the segment length is enough to obtain significant statistics .partial financial support has been received from the spanish ministry of economy ( mineco ) and feder ( eu ) under projects modass ( fis2011 - 24785 ) and intense ( fis2012 - 30634 ) , and from the eu commission through projects eunoia , lasagne and insight .ml acknowledges funding from the conselleria deducaci , cultura i universitats of the government of the balearic islands and jjr from the ramn y cajal program of mineco .45 watts dj ( 2007 ) a twenty - first century science .nature 445 : 489 . vespignani a ( 2009 ) predicting the behavior of techno - social systems .science 325 : 425428 .onnela j , saramaki j , hyvonen j , szabo g , lazer d , et al .( 2007 ) structure and tie strengths in mobile communication networks .proc natl acad sci usa 104 : 73327336 .eagle n , pentland as , lazer d ( 2009 ) from the cover : inferring friendship network structure by using mobile phone data .proceedings of the national academy of sciences 106 : 1527415278 .gonzalez mc , hidalgo ca , barabasi al ( 2008 ) understanding individual human mobility patterns .nature 453 : 779782 .song c , qu z , blumm n , barabsi al ( 2010 ) limits of predictability in human mobility . science 327 : 10181021 .phithakkitnukoon s , smoreda z , olivier p ( 2012 ) socio - geography of human mobility : a study using longitudinal mobile phone data .plos one 7 : e39253 .wang p , gonzlez m , hidalgo c , barabsi al ( 2009 ) understanding the spreading patterns of mobile phone viruses .science 324 : 10711076 .ratti c , pulselli rm , williams s , frenchman d ( 2006 ) mobile landscapes : using location data from cell phones for urban analysis . environment and planning b : planning and design 33 : 727 - 748 .reades j , calabrese f , sevtsuk a , ratti c ( 2007 ) cellular census : explorations in urban data collection .pervasive computing , ieee 6 : 30 - 38 .soto v , fras - martnez e ( 2011 ) automated land use identification using cell - phone records . in : proceedings of the 3rd acm international workshop on mobiarch .new york , ny , usa : acm , hotplanet 11 , pp . 1722 . doi : 10.1145/2000172.2000179 .http://doi.acm.org/10.1145/2000172.2000179 .fras - martnez v , soto v , hohwald h , fras - martnez e ( 2012 ) characterizing urban landscapes using geolocated tweets . 
in : socialcom / passat .ieee , pp .239 - 248 .isaacman s , becker r , cceres r , martonosi m , rowland j , et al .( 2012 ) human mobility modeling at metropolitan scales . in : proceedings of the international conference on mobile systems , applications , and services ( mobisys ) .doi : 10.1145/2307636.2307659 .http://dx.doi.org/10.1145/2307636.2307659 .toole j , ulm m , gonzlez m , bauer d ( 2014 ) inferring land use from mobile phone activity .proceedings of the acm sigkdd international workshop on urban computing pp 18 .pei t , sobolevsky s , ratti c , shaw sl , zhou c ( 2013 ) a new insight into land use classification based on aggregated mobile phone data .arxiv e - print arxiv:1310.6129 .louail t , lenormand m , garcia cant o , picornell m , herranz r , et al .( 2014 ) from mobile phone data to the spatial structure of cities .arxiv e - print arxiv:140:4540 .hasan s , schneider cm , ukkusuri sv , gonzlez mc ( 2012 ) spatiotemporal patterns of urban human mobility .journal of statistical physics 151 : 115 .gallotti r , bazzani a , rambaldi s ( 2012 ) towards a statistical physics of human mobility .international journal of modern physics 23 : 1250061 .furletti b , cintia p , renso c , spinsanti l ( 2013 ) inferring human activities from gps tracks . in : proceedings of the 2nd acm sigkdd international workshop on urban computing .new york , ny , usa : acm , urbcomp 13 , pp . 5:15:8 .mocanu d , baronchelli a , perra n , gonalves b , zhang q , et al .( 2013 ) the twitter of babel : mapping world languages through microblogging platforms .plos one 8 : e61981 .gonzlez - bailn s , borge - holthoefer j , rivero a , moreno y ( 2011 ) the dynamics of protest recruitment through an online network .scientific reports 1 : 197 .hawelka b , sitko i , beinat e , sobolevsky s , kazakopoulos p , et al .( 2013 ) geo - located twitter as a proxy for global mobility patterns .arxiv e - print arxiv:1311.0680 .lenormand m , picornell m , garcia cant o , tugores a , louail t , et al .( 2014 ) cross - checking different source of mobility information .arxiv e - print arxiv:1404.0333 .noulas a , scellato s , lambiotte r , pontil m , mascolo c ( 2012 ) a tale of many cities : universal patterns in human urban mobility .plos one 7 : e37027 .twitter api , section for developers of twitter web page , https://dev.twitter.com .open street map ( http://www.openstreetmap.org ) .gdp per capita in by the international monetary fund ( http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/weoselgr.aspx ) .grabowicz pa , ramasco jj , moro e , pujol jm , eguiluz vm ( 2012 ) social features of online networks : the strength of intermediary ties in online social media .plos one 7 : e29358 .source : eurostat ( http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/passenger_transport_statistics ) .http://www.dft.gov.uk / traffic - counts/. hese data are available upon request at service studies on transport , roads and facilities ( setra ) ( http://dtrf.setra.fr/ ) .welch bl ( 1951 ) on the comparison of several mean values : an alternative approach .biometrika 38 : pp .
the pervasiveness of mobile devices , which is increasing daily , is generating a vast amount of geo - located data allowing us to gain further insights into human behaviors . in particular , this new technology enables users to communicate through mobile social media applications , such as twitter , anytime and anywhere . thus , geo - located tweets offer the possibility to carry out in - depth studies on human mobility . in this paper , we study the use of twitter in transportation by identifying tweets posted from roads and rails in europe between september 2012 and november 2013 . we compute the percentage of highway and railway segments covered by tweets in countries . the coverages are very different from country to country and their variability can be partially explained by differences in twitter penetration rates . still , some of these differences might be related to cultural factors regarding mobility habits and interacting socially online . analyzing particular road sectors , our results show a positive correlation between the number of tweets on the road and the average annual daily traffic on highways in france and in the uk . transport modality can be studied with these data as well , for which we discover very heterogeneous usage patterns across the continent .
younger readers of this journal may not be fully aware of the passionate battles over bayesian inference among statisticians in the last half of the twentieth century . during this period ,the missionary zeal of many bayesians was matched , in the other direction , by a view among some theoreticians that bayesian methods are absurd not merely misguided but obviously wrong in principle .such anti - bayesianism could hardly be maintained in the present era , given the many recent practical successes of bayesian methods .but by examining the historical background of these beliefs , we may gain some insight into the statistical debates of today .we begin with a _note on bayes rule _ that appeared in william feller s classic probability text : `` unfortunately bayes rule has been somewhat discredited by metaphysical applications of the type described above . in routine practice , this kind of argument can be dangerous . a quality control engineer is concerned with one particular machine and not with an infinite population of machines from which one was chosen at random .he has been advised to use bayes rule on the grounds that it is logically acceptable and corresponds to our way of thinking .plato used this type of argument to prove the existence of atlantis , and philosophers used it to prove the absurdity of newton s mechanics . in our caseit overlooks the circumstance that the engineer desires success and that he will do better by estimating and minimizing the sources of various types of errors in predicting and guessing .the modern method of statistical tests and estimation is less intuitive but more realistic .it may be not only defended but also applied . '' w. feller , ( pp . 124 - 125 of the 1970 edition ) .feller believed that bayesian inference could be _ defended _ ( that is , supported via theoretical argument ) but not _ applied _ to give reliable answers to problems in science or engineering , a claim that seems quaint in the modern context of bayesian methods being used in problems from genetics , toxicology , and astronomy to economic forecasting and political science . as we discuss below , what struck us about feller s statement was not so much his stance as his apparent certainty .one might argue that , whatever the merits of feller s statement today , it might have been true back in 1950 . such a claim , however , would have to ignore , for example , the success of bayesian methods by turing and others in codebreaking during the second world war , followed up by expositions such as , as well as jeffreys s _ theory of probability _ , which came out in 1939 . consider this recollection from physicist and bayesian e. t. jaynes : `` when , as a student in 1946 , i decided that i ought to learn some probability theory , it was pure chance which led me to take the book _ theory of probability _ by , from the library shelf .in reading it , i was puzzled by something which , i am afraid , will also puzzle many who read the present book .why was he so much on the defensive ?it seemed to me that jeffreys viewpoint and most of his statements were the most obvious common sense , i could not imagine any sane person disputing them .why , then , did he feel it necessary to insert so many interludes of argumentation vigorously defending his viewpoint ?was nt he belaboring a straw man ?this suspicion disappeared quickly a few years later when i consulted another well - known book on probability ( feller , 1950 ) and began to realize what a fantastic situation exists in this field . 
_the whole approach of jeffreys was summarily rejected as metaphysical nonsense _[ emphasis added ] , without even a description .the author assured us that jeffreys methods of estimation , which seemed to me so simple and satisfactory , were completely erroneous , and wrote in glowing terms about the success of a ` modern theory , ' which had abolished all these mistakes .naturally , i was eager to learn what was wrong with jeffreys methods , why such glaring errors had escaped me , and what the new , improved methods were .but when i tried to find the new methods for handling estimation problems ( which jeffreys could formulate in two or three lines of the most elementary mathematics ) , i found that the new book did not contain them . '' e. t. jaynes ( ) . to return to feller s perceptions in 1950, it would be accurate , we believe , to refer to bayesian inference as being an undeveloped subfield in statistics at that time , with feller being one of many academics who were aware of some of the weaker bayesian ideas but not of the good stuff .this goes even without mentioning wald s complete class results of the 1940s .( wald s _ statistical decision functions _ got published in . )it is in that spirit that we consider s notorious dismissal of bayesian statistics , which is exceptional not in its recommendation after all , as of 1950 ( when the first edition of his wonderful book came out ) or even 1970 ( the year of his death ) , bayesian methods were indeed out of the mainstream of american statistics , both in theory and in application but rather in its intensity .feller combined a perhaps - understandable skepticism of the wilder claims of bayesians with a nave ( in retrospect ) faith in the classical neyman - pearson theory to solve practical problems in statistics .to say this again : feller s real error was not his anti - bayesianism ( an excusable position , given that many researchers at that time were apparently unaware of modern applied bayesian work ) but rather his casual , implicit , unthinking belief that classical methods could solve whatever statistical problems might come up . in short , feller was defining bayesian statistics by its limitations while crediting the neyman - pearson theory with the 1950 equivalent of vaporware : the unstated conviction that , having solved problems such as inference from the gaussian , poisson , binomial , etc . , distributions , that it would be no problem to solve all sorts of applied problems in the future .in retrospect , was wildly optimistic that the principle of estimating and minimizing the sources of various types of errors " would continue to be the best approach to solving engineering problems .( feller s appreciation of what a statistical problem is seems rather moderate : the two examples feller concedes to the bayesian team are ( i ) finding the probability a family has one child given that it has no girl and ( ii ) urn models for stratification / spurious contagion , problems that are purely probabilistic , no statistics being involved . ) or , to put it another way , even within the context of prediction and minimizing errors , why be so sure that bayesian methods can not apply ?feller perhaps leapt from the existence of philosophical justification of bayesian inference , to an assumption that philosophical arguments were the _ only _ justification of bayesian methods . 
where was this coming from , historically ?with stephen stigler out of the room , we are reduced to speculation ( or , maybe we should say , we are free to speculate ) .we doubt that feller came to his own considered judgment about the relevance of bayesian inference to the goals of quality control engineers .rather , we suspect that it was from discussions with one or more statistician colleague(s ) that he drew his strong opinions about the relative merits of different statistical philosophies . in that sense, feller is an interesting case in that he was a leading mathematician of his area , a person who one might have expected would be well informed about statistics , and the quotation reveals the unexamined assumptions of his colleagues .it is doubtful that even the most rabid anti - bayesian of 2010 would claim that bayesian inference can not applied .( we would further argue that the modern methods of statistics " feller refers to have to be understood in an historical context as eliminating older approaches by bayes , laplace and other 19th century authors , in a spirit akin to keynes ( 1921 ) .modernity starts with the great anti - bayesian ronald fisher who , along with richard von mises , is mentioned on page 6 by feller as the originator of `` the statistical attitude towards probability . '' )non - bayesians still occasionally dredge up feller s quotation as a pithy reminder of the perils of philosophy unchained by empiricism ( see , for example , , and ) . in a recent probability text , reviews some familiar probability paradoxes ( e.g. , the monty hall problem ) and draws the following lesson : `` in any experiment , the procedures and rules that define the sample space and all the probabilities must be explicit and fixed before you begin .this predetermined structure is called a protocol . embarking on experiments without a complete protocolhas proved to be an extremely convenient method of faking results over the years . and will no doubt continue to be so . ''strirzaker follows up with a portion of the feller quote and writes , despite all this experience , the popular press and even , sometimes , learned journals continue to print a variety of these bogus arguments in one form or another ." we are not quite sure why he attributes these problems to bayes , rather than , say , to kolmogorov after all , these error - ridden arguments can be viewed as misapplications of probability theory that might never have been made if people were to work with absolute frequencies rather than fractional probabilities . in any case, no serious scientist can be interested in bogus arguments ( except , perhaps , as a teaching tool or as a way to understand how intelligent and well - informed people can make evident mistakes , as discussed in chapter 3 of ) .what is perhaps more interesting is the presumed association between bayes and bogosity .we suspect that it is bayesians openness to making assumptions that makes their work a particular target , along with ( some ) bayesians intemperate rhetoric about optimality .somehow classical terms such as uniformly most powerful test " do not seem so upsetting .perhaps what has bothered mathematicians such as feller and stirzaker is that bayesians actually seem to believe their assumptions rather than merely treating them as counters in a mathematical game . 
in the first quote , the interpretation of the prior distribution as a reasoning based on an infinite population of machines " certainly indicates that feller takes the prior at face value !as shown by the recent foray of into the philosophy of bayesian foundations and in particular of definetti s , this interpretation may be common among probabilists , whereas we see applied statisticians as considering both prior and data models as assumptions to be valued for their use in the construction of effective statistical inferences . in applied bayesian inference, it is not necessary for us to believe our assumptions , any more than biostatisticians believe in the truth of their logistic regressions and proportional hazards models .rather , we make strong assumptions and use subjective knowledge in order to make inferences and predictions that can be tested by comparing to observed and new data ( see , or for a similar attitude coming from a non - bayesian direction ) .unfortunately , we doubt stirzaker was aware of this perspective when writing his book nor was feller , working years before either of the present authors were born .recall the following principle , to which we ( admitted bayesians ) subscribe : everyone uses bayesian inference when it is clearly appropriate .a bayesian is someone who uses bayesian inference even when it might seem inappropriate .what does this mean ?mathematical modelers from r. a. fisher on down have used and will use probability to model physical or algorithmic processes that seem well - approximated by randomness , from rolling of dice to scattering of atomic particles to mixing of genes in a cell to random - digit dialing . to be honest , most statisticians are pretty comfortable with probability models even for processes that are not so clearly probabilistic , for example fitting logistic regressions to purchasing decisions or survey responses or connections in a social network .( as discussed in , keynes _ treatise on probability _ is an exception in that even questions the sampling models . )bayesians will go the next step and assign a probability distribution to a parameter that one could not possibly imagine to have been generated by a random process , parameters such as the coefficient of party identification in a regression on vote choice , or the overdispersion in a network model , or hubble s constant in cosmology . as noted above, it is our impression that the assumptions of the likelihood are generally more crucial and often less carefully examined than the assumptions in the prior .still , we recognize that bayesians take this extra step of mathematical modeling . in some ways ,the role of bayesians compared to other statisticians is similar to the position of economists compared to other social scientists , in both cases making additional assumptions that are clearly wrong ( in the economists case , models of rational behavior ) in order to get stronger predictions . with great powercomes great responsibility , and bayesians and economists alike have the corresponding duty to check their predictions and abandon or extend their models as necessary . to return briefly to stirzaker s quote, we believe he is wrong or , at least , does not give any good evidence in his claim that in any experiment , the procedures and rules that define the sample space and all the probabilities must be explicit and fixed before you begin . 
" setting a protocol is fine if it is practical , but as discussed by rubin ( 1976 ) , what is really important from a statistical perspective is that all the information used in the procedure be based on known and measured variables .this is similar to the idea in survey sampling that clean inference can be obtained from probability sampling that is , rules under which all items have nonzero probabilities of being selected , with these probabilities being known ( or , realistically , modeled in a reasonable way ) .it is unfortunate that certain bayesians have published misleading and oversimplified expositions of the monty hall problem ( even when fully explicated , the puzzle is not trivial , as the resolution requires a full specification of a probability distribution for monty s possible actions under various states of nature , see e.g. ) ; nonetheless , this should not be a reason for statisticians to abandon decades of successful theory and practice on adaptive designs of experiments and surveys , not to mention the use of probability models for non - experimental data ( for which there is no protocol " at all ) .the prequel to feller s quotation above is the notorious argument , attributed to laplace , that uses a flat prior distribution on a binomial probability to estimate the probability the sun will rise tomorrow .the idea is that the sun has risen out of successive days in the past , implying a posterior mean of of the probability of the sun rising on any future day .( gives a recent coverage of the many criticisms that ridiculed laplace s `` mistake . '' ) to his credit , feller immediately recognized the silliness of that argument .for one thing , we do nt have direct information on the sun having risen on any particular day , thousands of years ago .so the analysis is conditioning on data that do nt exist .more than that , though , the big , big problem with the pr(sunrise tomorrow sunrise in the past ) argument is not in the prior but in the likelihood , which assumes a constant probability and independent events .why should anyone believe that ?why does it make sense to model a series of astronomical events as though they were spins of a roulette wheel in vegas ? why does stationarity apply to this series? that s not frequentist , it is nt bayesian , it s just dumb . or , to put it more charitably , it s a plain vanilla default model that we should use only if we are ready to abandon it on the slightest pretext .it is no surprise that when this model fails , it is the likelihood rather than the prior that is causing the problem . in the binomial model under consideration here ,the prior comes into the posterior distribution only once , and the likelihood comes in times .it is perhaps merely an accident of history that skeptics and subjectivists alike strain on the gnat of the prior distribution while swallowing the camel that is the likelihood . 
in any case, it is instructive that feller saw this example as an indictment of bayes ( or at least of the uniform prior as a prior for `` no advance knowledge '' ) rather than of the binomial distribution .bayesian inference has such a hegemonic position in philosophical discussions that , at this point , statistical arguments get interpreted as bayesian even when they are not .an example is the so - called doomsday argument ( carter , 1983 ) , which holds that there is a high probability that humanity will be extinct ( or drastically reduce in population ) soon , because if this were not true if , for example , humanity were to continue with 10 billion people or so for the next few thousand years then each of us would be among the first people to exist , and that s highly unlikely . to put it slightly more formally ,the `` data '' here is the number of people , , who have lived on earth up to this point , and the `` hypotheses '' correspond to the total number of people , , who will ever live .the statistical argument is that is almost certainly within two orders of magnitude of , otherwise the observed would be highly improbable . andif can not be much more than , this implies that civilization can not exist in its current form for millenia to come . for our purposes here, the ( sociologically ) interesting thing about this argument is that it s been presented as bayesian ( see , for example , ) but it is nt a bayesian analysis at all !the `` doomsday argument '' is actually a classical frequentist confidence interval .averaging over all members of the group under consideration , 95% of these confidence intervals will contain the true value .thus , if we go back and apply the doomsday argument to thousands of past data sets , its 95% intervals should indeed have 95% coverage .in 95% of populations examined at a randomly - observed rank , will be between and .this is the essence of neyman - pearson theory , that it makes claims about averages , not about particular cases .however , this does not mean that there is a 95% chance that any particular interval will contain the true value . especially not in this situation , where we have additional subject - matter knowledge .that s where bayesian statistics ( or , short of that , some humility about applying classical confidence statements to particular cases ) comes in .the doomsday argument seems silly to us , and we see it as fundamentally not bayesian .the doomsday argument sounds bayesian , though , having three familiar features that are ( unfortunately ) sometimes associated with traditional bayesian reasoning : * it sounds more like philosophy than science .* it s a probabilistic statement about a particular future event . *it s wacky , in an overconfident , you got ta believe this counterintuitive finding , it s supported by airtight logical reasoning , " sort of way . really , though , it s a classical confidence interval , tricked up with enough philosophical mystery and invocation of bayes that people think that the 95% interval applies to every individual case . 
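the confidence-interval reading can be made explicit; this is one standard formalisation, not necessarily the exact bounds that were stripped from the text above. write $r$ for the observed birth rank and $N$ for the total number of humans who will ever live; if $r$ is (approximately) uniformly distributed on $\{1,\dots,N\}$, then
\[
\Pr\!\left(0.025 \le \frac{r}{N} \le 0.975\right) = 0.95
\quad\Longleftrightarrow\quad
\Pr\!\left(\frac{r}{0.975} \le N \le \frac{r}{0.025}\right) = 0.95 ,
\]
so the interval $[\,r/0.975,\; 40\,r\,]$ covers the true $N$ for 95% of randomly observed ranks. this is a coverage statement about the procedure averaged over ranks, not a posterior probability for any particular $N$, which is exactly the distinction drawn above.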
or , to put it another way ,the doomsday argument is the ultimate triumph of the idea , beloved among bayesian educators , that our students and clients do nt really understand neyman - pearson confidence intervals and inevitably give them the intuitive bayesian interpretation .misunderstandings of the unconditional nature of frequentist probability statements are hardly new .consider feller s statement , `` a quality control engineer is concerned with one particular machine and not with an infinite population of machines from which one was chosen at random . ''it sounds as if feller is objecting to the prior distribution or `` infinite population , '' , and saying that he only wants inference for a particular value of .this misunderstanding is rather surprising when issued by a probabilist but it shows a confusion between data and parameter : as mentioned above , the engineer wants to condition upon the data at hand ( with obviously a specific if unknown value of lurking in the background ) .it does not help that many bayesians over the years have muddied the waters by describing parameters as random rather than fixed . actually , for bayesians as much as any other statistician , parameters are fixed but unknown .it is the knowledge about these unknowns that bayesians model as random . in any case, we suspect that many quality control engineers do take measurements on multiple machines , maybe even populations of machines , but to us feller s sentence noted above has the interesting feature that it is actually the opposite of the usual demarcation : typically it is the bayesian who makes the claim for inference in a particular instance and the frequentist who restricts claims to infinite populations of replications .why write an article picking on sixty years of confusion ? we are not seeking to malign the reputation of feller , a brilliant mathematician and author of arguably the most innovative and intellectually stimulating book ever written on probability theory .rather , it is feller s brilliance and eminence that makes the quotation that much more interesting : that this centrally - located figure in probability theory could make a statement that could seem so silly in retrospect ( and even not so long in retrospect , as indicated by the memoir of jaynes quoted above ) .misunderstandings of bayesian statistics can have practical consequences in the present era as well .we could well imagine a reader of stirzaker s generally excellent probability text and taking from it the message that all probabilities `` must be explicit and fixed before you begin , '' thus missing out on some of the most exciting and important work being done in statistics today . in the last half of the twentieth century, bayesians had the reputation ( perhaps deserved ) as philosophers who are all too willing to make broad claims about rationality , with optimality theorems that were ultimately built upon questionable assumptions of subjective probability , in a denial of the garbage - in - garbage - out principle that defies all common sense . in place of this , feller ( and others of his time ) placed the rigorous neyman - pearson theory , which `` may be not only defended but also applied . '' and , indeed , if the classical theory of hypothesis testing had lived up to the promise it seemed to have in 1950 ( fresh after solving important operations - research problems in the second world war ) , then indeed maybe we could have stopped right there . 
but , as the recent history of statistics makes so clear , no single paradigm bayesian or otherwise comes close to solving all our statistical problems ( see the recent reflections of ) and there are huge limitations to the type-1 , type-2 error framework which seemed so definitive to feller s colleagues at the time . at the very least , we hope feller s example will make us wary of relying on the advice of colleagues to criticize ideas we do not fully understand .new ideas by their nature are often expressed awkwardly and with mistakes but finding such mistakes can be an occasion for modifying and improving these ideas rather than rejecting them .we thank david aldous , ronald christensen , and two reviewers for helpful comments . in addition , the first author ( ag ) thanks the institute of education sciences , department of energy , national science foundation , and national security agency for partial support of this work .he remembers reading with pleasure much of feller s first volume in college , after taking probability but before taking any statistics courses .the second author s ( cpr ) research is partly supported by the agence nationale de la recherche ( anr , 212 , rue de bercy 75012 paris ) through the 20072010 grant anr-07-blan-0237 `` spbayes . ''he remembers buying feller s first volume in a bookstore in ann arbor during a bayesian econometrics conference where he was kindly supported by jim berger .
the missionary zeal of many bayesians of old has been matched, in the other direction, by an attitude among some theoreticians that bayesian methods are absurd: not merely misguided but obviously wrong in principle. we consider several examples, beginning with feller's classic text on probability theory and continuing with more recent cases such as the perceived bayesian nature of the so-called doomsday argument. we analyze in this note the intellectual background behind various misconceptions about bayesian statistics, without aiming at a complete historical coverage of the reasons for this dismissal.

andrew gelman
_department of statistics and department of political science, columbia university_
gelman.columbia.edu

christian p. robert
_université paris-dauphine, ceremade, and crest, paris_
xian.dauphine.fr

10 apr 2012

* keywords: * foundations, frequentist, bayesian, laplace law of succession, doomsday argument, bogosity
biological data sets are growing enormously , leading to an information - driven science and allowing previously impossible breakthroughs .however , there is now an increasing constraint in identifying relevant characteristics among these large data sets .for example , in medicine , the identification of features that characterize control and disease subjects is key for the development of diagnostic procedures , prognosis and therapy . among several exploratory methods ,the study of clustering structures is a very appealing candidate method , mainly because several biological questions can be formalized in the form : are the features of populations a and b equally clustered ?one typical example occurs in neuroscience .it is believed that the brain is organized in clusters of neurons with different major functionalities , and deviations from the typical clustering pattern can lead to a pathological condition .another example is in molecular biology , where the gene expression clustering structures depend on the analyzed population ( control or tumor , for instance ) .therefore , in order to better understand diseases , it is necessary to differentiate the clustering structures among different populations .this leads to the problem of how to statistically test the equality of clustering structures of two or more populations followed by the identification of features that are clustered in a different manner .the traditional approach is to compare some descriptive statistics of the clustering structure ( number of clusters , common elements in the clusters , etc ) , but to the best of our knowledge , little or nothing is known regarding formal statistical methods to test the equality of clustering structures among populations . with this motivation , we introduce a new statistical test called anocva - analysis of cluster structure variability - in order to statistically compare the clustering structures of two or more populations .our method is an extension of two well established ideas : the silhouette statistic and anova .essentially , we use the silhouette statistic to measure the variability " of the clustering structure in each population .next , we compare the silhouette among populations . 
the intuitive idea behindthis approach is that we assume that populations with the same clustering structures also have the same `` variability '' .this simple idea allows us to obtain a powerful statistic test for equality of clustering structures , which ( 1 ) can be applied to a large variety of clustering algorithms ; ( 2 ) allows us to compare the clustering structure of multiple groups simultaneously ; ( 3 ) is fast and easy to implement ; and ( 4 ) identifies features that significantly contribute to the differential clustering .we illustrate the performance of anocva through simulation studies under different realistic scenarios and demonstrate the power of the test in identifying small differences in clustering among populations .we also applied our method to study the whole brain functional magnetic resonance imaging ( fmri ) recordings of 759 children with typical development ( td ) , attention deficit hyperactivity disorder ( adhd ) with hyperactivity / impulsivity and inattentiveness , and adhd with hyperactivity / impulsivity without inattentiveness .adhd is a psychiatric disorder that usually begins in childhood and often persists into adulthood , affecting at least 5 - 10% of children in the us and non - us populations .given its prevalence , impacts on the children s social life , and the difficult diagnosis , a better understanding of its pathology is fundamental .the statistical analysis using anocva on this large fmri data set composed of adhd and subjects with td identified brain regions that are consistent with already known literature of this physiopathology .moreover , we have also identified some brain regions previously not described as associated with this disorder , generating new hypothesis to be tested empirically .we can describe our problem in the following way . given populations where each population ( ) , is composed of subjects , and each subject has items that are clustered in some manner, we would like to verify whether the cluster structures of the populations are equal and , if not , which items are differently clustered . to further formalize our method , we must define what we mean by cluster structure .the silhouette statistic is used in our proposal to identify the cluster structure .we briefly describe it in the next section .the silhouette method was proposed in 1987 by with the purpose of verifying whether a specific item was assigned to an appropriate cluster . in other words ,the silhouette statistic is a measure of goodness - of - fit of the clustering procedure .let be the items of one subject that are clustered into clusters by a clustering algorithm according to an optimal criterion .note that .denote by the dissimilarity ( e.g. euclidian , manhattan , etc ) between items and and define as the average dissimilarity of to all items of cluster ( or ) , where is the number of items of .denote by the cluster to which has been assigned by the clustering algorithm and by any other cluster different of , for all .all quantities involved in the silhouette statistic are given by where is the `` within '' dissimilarity and is the smallest `` between '' dissimilarity for the sample unit .then a natural proposal to measure how well item has been clustered is given by the silhouette statistic the choice of the silhouette statistic is interesting due to its interpretations .notice that , if , this implies that the `` within '' dissimilarity is much smaller than the smallest `` between '' dissimilarity ( ) . 
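the displayed formulas for the quantities above were lost in extraction; in rousseeuw's definition, $a(i)$ is the average dissimilarity of item $i$ to the other members of its own cluster, $b(i)$ is the smallest average dissimilarity of $i$ to any other cluster, and $s(i) = (b(i)-a(i))/\max(a(i),b(i))$. a direct python sketch computing $s(i)$ from a dissimilarity matrix and a label vector (singleton clusters are set to zero by convention; the function name is ours):

```python
import numpy as np

def silhouette(D, labels):
    """Silhouette s(i) = (b_i - a_i) / max(a_i, b_i) for every item, from an (N, N)
    dissimilarity matrix D and a vector of cluster labels (one label per item)."""
    D = np.asarray(D, dtype=float)
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    s = np.zeros(len(labels))
    for i in range(len(labels)):
        own = labels[i]
        same = (labels == own)
        same[i] = False                              # exclude the item itself
        if not same.any():                           # singleton cluster: set s(i) = 0
            continue
        a_i = D[i, same].mean()                      # average "within" dissimilarity
        b_i = min(D[i, labels == c].mean()           # smallest average "between" dissimilarity
                  for c in clusters if c != own)
        denom = max(a_i, b_i)
        s[i] = 0.0 if denom == 0 else (b_i - a_i) / denom
    return s
```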
If s(i) is close to 1, the "within" dissimilarity is much smaller than the smallest "between" dissimilarity (a(i) much smaller than b(i)); in other words, item i has been assigned to an appropriate cluster, since the second-best choice cluster is not nearly as close as the actual cluster. If s(i) is close to 0, then a(i) and b(i) are approximately equal, and hence it is not clear whether the item should have been assigned to the actual cluster or to the second-best choice cluster, because it lies equally far from both. If s(i) is close to -1, then a(i) is much larger than b(i), so the item lies much closer to the second-best choice cluster than to the actual cluster; it would then be more natural to assign the item to the second-best choice cluster, because it has been "misclassified". To conclude, s(i) measures how well item i has been labeled. Let D be the matrix of dissimilarities; it is symmetric and has zero diagonal elements. Let the labels obtained by a clustering algorithm applied to D represent the cluster to which each item belongs. It is easily verified that the dissimilarity matrix and the vector of labels are sufficient to compute the quantities a(i), b(i) and s(i). In the preceding paragraphs we introduced the notation for the items of a single subject; we now extend the approach to several populations with several subjects each. Suppose there are several types of populations and that, for each population, a number of subjects is collected. The items taken from each subject of each population are represented by a matrix in which each item is a vector. First we define the matrix of dissimilarities among the items of each subject; each such matrix is symmetric with diagonal elements equal to zero. We also define the average dissimilarity matrix of each population and the overall average dissimilarity matrix across populations; these average matrices are the only quantities required to proceed with our proposal. Based on the overall average dissimilarity matrix, a clustering algorithm is used to find the clustering labels. Then we compute two silhouette statistics: the former based on the overall average dissimilarity matrix and the latter based on each population's average dissimilarity matrix, both obtained using the clustering labels computed from the overall average matrix. We expect that, if the items from all populations are equally clustered, these quantities will be close for every population and every item. Collecting these silhouettes into vectors, we want to test whether all populations are clustered in the same manner, i.e., H0: "given the clustering algorithm, the data from all populations are equally clustered" versus H1: "at least one population is clustered in a different manner", where the test statistic measures the overall discrepancy between the population-level silhouette vectors and the silhouette vector of the overall average. In other words, given the clustering structure of the items of each subject, we would like to test whether the items are equally clustered among populations.
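To make the construction concrete, the sketch below assembles ANOCVA-style statistics from per-subject dissimilarity matrices grouped by population. The clustering of the overall average matrix uses complete linkage, as in the simulations described later; the exact form of the discrepancy measure (here a population-size-weighted sum of squared differences between population-level and overall silhouette vectors) is our reading of the definition and should be treated as an assumption. It reuses the silhouette_values helper from the previous sketch.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def anocva_statistics(D_populations, n_clusters):
    """D_populations: one list of symmetric per-subject dissimilarity matrices
    (zero diagonal) per population. Returns a global discrepancy statistic and
    one statistic per item."""
    pop_means = [np.mean(pop, axis=0) for pop in D_populations]   # population averages
    sizes = np.array([len(pop) for pop in D_populations])
    A_bar = np.average(np.array(pop_means), axis=0, weights=sizes)  # overall average
    # cluster the overall average matrix (complete linkage, as in the simulations)
    Z = linkage(squareform(A_bar, checks=False), method='complete')
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    s_bar = silhouette_values(A_bar, labels)
    diffs = np.array([silhouette_values(A_j, labels) - s_bar for A_j in pop_means])
    delta_S = float(np.sum(sizes[:, None] * diffs ** 2))          # global statistic
    delta_S_q = np.sum(sizes[:, None] * diffs ** 2, axis=0)       # per-item statistic
    return delta_S, delta_S_q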
Now suppose that the null hypothesis is rejected by the previous test. A natural next step is to identify which items are clustered differently among populations. This question can be answered by testing each item with the following hypotheses: H0: "given the clustering algorithm, the item is equally clustered among populations" versus H1: "the item is not equally clustered among populations", where the test statistic is the contribution of that item to the overall discrepancy between the population-level and overall silhouettes. The exact or asymptotic distributions of both statistics are not trivial; therefore, we use a computational procedure based on the bootstrap to construct the empirical null distributions. The bootstrap implementation of both tests is as follows: 1. Resample subjects with replacement from the entire (pooled) data set in order to construct a bootstrap sample for each population. 2. Calculate the dissimilarity matrices, clustering labels and silhouette statistics using the bootstrap samples. 3. Calculate the global and per-item test statistics on the bootstrap samples. 4. Repeat steps 1 to 3 until the desired number of bootstrap replications is obtained. The p-values of the bootstrap tests are the fractions of bootstrap replicates of the global and per-item statistics, respectively, that are at least as large as the statistics observed on the original data set (a minimal sketch of this bootstrap loop is given below). The data analysis pipeline is summarized in fig. [fig:test-schema]: the global statistic tests whether the ROIs are equally clustered between populations and, if they are not equally clustered, i.e., the null hypothesis is rejected, the ROIs that most contribute to this differential clustering can be identified using the per-item statistic.
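A minimal sketch of the bootstrap loop, reusing the hypothetical anocva_statistics helper above, is shown next; pooling all subjects under the null and resampling groups of the original sizes is our interpretation of the resampling step.

import numpy as np

def anocva_bootstrap(D_populations, n_clusters, n_boot=1000, seed=0):
    """Empirical p-values for the global and per-item ANOCVA-style tests."""
    rng = np.random.default_rng(seed)
    pooled = [D for pop in D_populations for D in pop]
    sizes = [len(pop) for pop in D_populations]
    obs_global, obs_item = anocva_statistics(D_populations, n_clusters)
    ge_global, ge_item = 0, np.zeros_like(obs_item)
    for _ in range(n_boot):
        # resample subjects with replacement from the pooled data set
        boot = [[pooled[i] for i in rng.integers(len(pooled), size=m)] for m in sizes]
        b_global, b_item = anocva_statistics(boot, n_clusters)
        ge_global += (b_global >= obs_global)
        ge_item += (b_item >= obs_item)
    return ge_global / n_boot, ge_item / n_boot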
Four scenarios were designed to validate features of ANOCVA, such as the size and power of the proposed tests. The first scenario evaluates the size of the test, while the second, third and fourth evaluate the power under different settings: (i) all items are equally clustered across three populations (this is our null hypothesis; figure [fig:simulacao]a); (ii) one single item of cluster A in population 1 is labeled as cluster B in population 2 (in this alternative hypothesis, the number of items inside the clusters changes; figure [fig:simulacao]b); (iii) one single item of cluster A in population 1 is labeled as cluster B in population 2, and another item from cluster B in population 1 is labeled as cluster A in population 2 (in this alternative hypothesis, the number of items inside the clusters does not change; figure [fig:simulacao]c); and (iv) two clusters in population 1 are grouped into one single cluster in population 2 (in this alternative hypothesis, the number of clusters changes between populations; figure [fig:simulacao]d). Each population is composed of 20 subjects and each subject is composed of 100 items. The items are generated by normal distributions with unit variance, and we assume that items generated by the same distribution belong to the same cluster. The four scenarios were constructed in the following manner: 1. Scenario (i): for every subject of every population, five groups of 20 items each are centered at positions (0, 0), (2, 2), (4, 4), (6, 6) and (8, 8), respectively. This represents three populations, each composed of 20 subjects, where each subject has 100 items clustered in five groups; the items of the subjects of the three populations are equally clustered, i.e., they are under the null hypothesis. 2. Scenario (ii): items are generated in the same manner as in scenario (i), except for one item, which is centered at position (2, 2) in one of the two populations. This represents two populations of 20 subjects each, with 100 items per subject clustered in five groups. In this scenario, an item that belongs to the cluster centered at (0, 0) in one population belongs to the cluster centered at (2, 2) in the other; therefore, the clusters centered at (0, 0) and (2, 2) in the modified population have 19 and 21 items, respectively (the clusters centered at (4, 4), (6, 6) and (8, 8) have 20 items each), while in the unmodified population each cluster is composed of 20 items (figure [fig:simulacao]b). 3. Scenario (iii): items are generated in the same manner as in scenario (i), except for two items, which are centered at positions (2, 2) and (0, 0), respectively, in one of the two populations. This again represents two populations of 20 subjects each, with 100 items per subject clustered in five groups.
In this scenario, one item that belongs to the cluster centered at (0, 0) in one population belongs to the cluster centered at (2, 2) in the other, and another item that belongs to the cluster centered at (2, 2) in the first population belongs to the cluster centered at (0, 0) in the second. Notice that, differently from scenario (ii), there is no change in the number of items in each cluster between populations, i.e., each cluster is composed of 20 items (figure [fig:simulacao]c). 4. Scenario (iv): items are generated in the same manner as in scenario (i), except that the items of the cluster originally centered at (8, 8) are instead centered at position (6, 6). This scenario represents a change in the number of clusters between populations: one population is composed of five clusters while the other is composed of four clusters, because the relocated items now belong to the same cluster as the items centered at (6, 6). Moreover, in the following we also assume that experimental data are usually mixtures of subjects of different populations, i.e., one population is contaminated with subjects of another population and vice versa. In order to verify the power of ANOCVA under this condition, subjects are mixed at different proportions, from 0% (no mixture) to 50% (half of the subjects come from one population and half from the other, i.e., totally mixed data sets). In order to construct a mixed data set, 100 subjects are generated for each population. Then a fraction of subjects is randomly (uniformly) sampled from the 100 subjects of the first population and the complementary fraction is sampled from the 100 subjects of the second population; the second mixed data set is constructed by sampling in the reverse proportions (figure [fig:simulacao-mix]). The mixture parameter varies from 0 to 0.5, where 0 means no mixture of populations and 0.5 means totally mixed data sets. ANOCVA is applied to these two mixed data sets. The clustering algorithm and the dissimilarity measure used in the aforementioned simulations are the complete linkage hierarchical clustering procedure and the Euclidean distance, respectively; a minimal data-generation sketch for these simulations is given below.
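The following sketch generates one subject of scenario (ii) and clusters its 100 items with complete linkage on Euclidean distances, as described above; the index of the item that "jumps" and the helper name are illustrative choices.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def simulate_subject(modified=False, rng=None):
    """100 two-dimensional items in five unit-variance Gaussian clusters of 20
    items each, centered at (0,0), (2,2), (4,4), (6,6), (8,8). If `modified`
    is True, one item of the (0,0) cluster is generated at (2,2) instead,
    mimicking scenario (ii)."""
    rng = rng or np.random.default_rng()
    centers = np.repeat([[0, 0], [2, 2], [4, 4], [6, 6], [8, 8]], 20, axis=0).astype(float)
    if modified:
        centers[19] = [2, 2]      # the 20th item "jumps" to the neighboring cluster
    return centers + rng.normal(size=(100, 2))

# dissimilarity matrix and complete-linkage labels for one subject
items = simulate_subject(modified=True)
D = squareform(pdist(items, metric='euclidean'))
labels = fcluster(linkage(pdist(items), method='complete'), t=5, criterion='maxclust')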
(Figure [fig:simulacao-mix] illustrates the construction of the two mixed data sets, obtained by sampling complementary proportions of subjects from the two populations, to which the statistical test is then applied.) ANOCVA was applied to a functional magnetic resonance imaging (fMRI) data set composed of children with typical development (TD) and with ADHD, both under a resting-state protocol, totaling 759 subjects. This data set is available at the ADHD-200 consortium website (http://fcon_1000.projects.nitrc.org/indi/adhd200/). The fMRI data were collected at the eight sites that compose the ADHD-200 consortium, with local internal review board approval and in accordance with local internal review board protocols; further details can be obtained at the ADHD-200 consortium website. The data set is composed of 479 controls, i.e., children with TD (253 males, mean age ± standard deviation of 12.23 ± 3.26 years), and three subgroups of ADHD patients: (i) combined, hyperactive/impulsive and inattentive (159 children, 130 males, 11.24 ± 3.05 years); (ii) hyperactive/impulsive (11 subjects, 9 males, 13.40 ± 4.51 years); and (iii) inattentive (110 subjects, 85 males, 12.06 ± 2.55 years). The pre-processing of the fMRI data was performed by applying the Athena pipeline (http://www.nitrc.org/plugins/mwiki/index.php/neurobureau:athenapipeline), and the pre-processed data are publicly available at the NeuroBureau website (http://neurobureau.projects.nitrc.org/adhd200). Briefly, the steps of the pipeline are as follows: exclusion of the first four scans; slice-timing correction; deobliquing of the data set; correction for head motion; masking of the volumes to discard non-brain voxels; co-registration of the mean image to the respective anatomical image of each child; spatial normalization to MNI space (resampling to 4 mm x 4 mm x 4 mm resolution); removal of the effects of WM, CSF, head motion (6 parameters) and linear trend using linear multiple regression; temporal band-pass filtering (0.009 < f < 0.08 Hz); and spatial smoothing using a Gaussian filter (FWHM = 6 mm). The CC400 atlas (based on the approach described in Craddock et al. (2012)) provided by the NeuroBureau was used to define the 351 regions of interest (ROIs) used in this study. The average signal of each ROI was calculated and used as representative of the region.
For each child, a correlation matrix was constructed by calculating the Spearman correlation coefficient (which is robust against outliers and suitable for identifying monotonic non-linear relationships) among the 351 ROIs (items), in order to identify monotonically dependent ROIs. The correlation matrices were then corrected for site effects by using a general linear model: site effects were modeled as a GLM (site as a categorical variable) and removed by taking the residuals of this model. P-values corresponding to the Spearman correlation of each pair of ROIs were calculated and then corrected by the false discovery rate (FDR). The resulting dissimilarity matrices are symmetric, with diagonal elements equal to zero and off-diagonal elements ranging from zero to one: the higher the correlation, the lower the p-value and, consequently, the lower the dissimilarity between two ROIs. Notice that the p-value associated with each Spearman correlation is not used as a statistical test, but only as a measure of dissimilarity normalized by the variance of the ROIs. The choice of this dissimilarity measure instead of the standard one minus the correlation coefficient is due to the fact that we are interested in ROIs that are highly correlated, independently of whether the correlation is positive or negative; here we are interested in quantifying how strongly ROIs depend on each other (the dissimilarity among them) rather than in how they are correlated. For each scenario and each value of the mixture parameter, Monte Carlo realizations were constructed and tested by our approach. The results obtained by the simulations are illustrated in figure [fig:roc], which describes the proportion of rejected null hypotheses for each p-value threshold (significance level). Under the null hypothesis (scenario (i)), figure [fig:roc]a shows that the test indeed controls the rate of type I error, i.e., the proportion of falsely rejected null hypotheses matches the p-value threshold. Since a uniform distribution of p-values implies that the distribution of the statistic is correctly specified under the null hypothesis, the Kolmogorov-Smirnov test was applied to compare the distribution of the p-values with a uniform distribution. Under the null hypothesis this test yielded a p-value of 0.14, meaning that there is no statistical evidence to affirm that the Monte Carlo p-value distribution is not uniform.
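The uniformity check mentioned above amounts to a single call to the Kolmogorov-Smirnov test; the array of Monte Carlo p-values below is a stand-in, not the actual simulation output.

import numpy as np
from scipy import stats

pvals = np.random.uniform(size=1000)        # stand-in for the Monte Carlo p-values
ks_stat, ks_p = stats.kstest(pvals, 'uniform')
# a large ks_p (0.14 in the paper) gives no evidence against uniformity,
# i.e. the test is correctly sized under the null hypothesis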
In the absence of mixture between populations and under the alternative hypotheses, i.e., scenarios (ii), (iii) and (iv), the test identified the differences in the clustering structures 100% of the time at a p-value threshold of 0.05 (figure [fig:roc], panels (b), (c) and (d)). As the mixture level increases, the power of the test decreases and, as expected, when the mixture level is 50% the method is not able to identify any difference between populations (at a p-value threshold of 0.05). Moreover, by comparing the different scenarios at the same mixture level, it is possible to verify that the power of the test is highest in scenario (iv), where the number of clusters changes (figure [fig:roc]d), followed by scenario (iii), which represents a "swap" of items between clusters (figure [fig:roc]c), and finally by scenario (ii), in which one item "jumps" from one cluster to another (figure [fig:roc]b). These results are in accordance with the intuitive notion that the power of the test is proportional to the number of items that are clustered differently between populations: the greater the number of such items, the higher the power of the test to discriminate them. (Figure [fig:roc] caption: for low rates of mixture, up to about 30%, the number of truly rejected null hypotheses is high, while for higher rates of mixture the statistical test does not reject the null hypothesis.) Figure [fig:z-score] depicts an illustrative example of one realization of each scenario, in order to show that the method indeed identifies the items that contribute to the differential clustering among populations. The x-axis represents the items from 1 to 100, and the y-axis represents the z-scores of the p-values corrected for multiple comparisons by the false discovery rate (FDR) method. Panels (a), (b), (c) and (d) of figure [fig:z-score] show the items, and the respective z-scores, that contribute significantly to the differential clustering in scenarios (i), (ii), (iii) and (iv), respectively; items with z-scores higher than 1.96 are statistically significant at a p-value threshold of 0.05. Notice that, as expected, figure [fig:z-score]a does not present any statistically significant items, because scenario (i) was constructed under the null hypothesis. Figure [fig:z-score]b highlights the 20th item as statistically significant, which is exactly the item that "jumped" from one cluster to another. Figure [fig:z-score]c shows that items 1 and 21 are statistically significant; these are the items that were "switched" in our simulations. Figure [fig:z-score]d shows a concentration of high z-scores between items 60 and 100, representing the items that were merged into one cluster in scenario (iv). Therefore, by analyzing the per-item statistic, it is also possible to identify which items contribute to the differential clustering among populations, i.e., which items are clustered differently among populations.
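The conversion from per-item bootstrap p-values to the FDR-corrected z-scores plotted in figure [fig:z-score] can be sketched as follows; the two-sided convention (z > 1.96 corresponding to a corrected p-value below 0.05) and the flooring of zero-valued bootstrap p-values are our assumptions.

import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

def fdr_zscores(p_item, n_boot=1000):
    """Benjamini-Hochberg FDR correction of the per-item bootstrap p-values,
    followed by a two-sided z-score conversion (z > 1.96 <=> corrected p < 0.05)."""
    p = np.clip(np.asarray(p_item, float), 1.0 / n_boot, 1.0)  # avoid p = 0 from the bootstrap
    _, p_fdr, _, _ = multipletests(p, alpha=0.05, method='fdr_bh')
    return norm.ppf(1.0 - p_fdr / 2.0)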
Another point to be analyzed is that, in practice, the number of clusters must be estimated, and there is no consensus in the literature on how to do this: different methods may estimate different numbers of clusters for the same data set, owing to the lack of a formal definition of a cluster (a cluster is usually whatever a clustering algorithm returns). We therefore also analyzed how sensitive the power of ANOCVA is to the estimated number of clusters. To illustrate this, two further simulations were carried out. First, the control of the rate of false positives in scenario (i) was verified by varying the number of clusters from three to seven, with the number of repetitions set to 1000; recall that the correct number of clusters is five. Figure [fig:k-estimation]a shows that the rate of rejected null hypotheses matches the p-value threshold for numbers of clusters varying from three to seven. This result suggests that, for numbers of clusters close to the "correct" one, the test is able to control the type I error under the null hypothesis. Next, it is necessary to verify the power of the test under the alternative hypothesis with different numbers of clusters. The simulation designed to evaluate this is composed of two populations, each composed of a number of subjects whose items are generated by two-dimensional normal distributions with unit variance in the following manner: for the first population, four groups of items are centered at positions (2, 0), (0, -2), (-2, 0) and (0, 2), respectively; for the second population, the corresponding groups are centered at positions (4, 0), (0, -4), (-4, 0) and (0, 4). The number of repetitions and the mixture parameter are set to 1000 and 0.3, respectively. This simulation describes a scenario in which the number of clusters of the average dissimilarity matrix is clearly four. Figure [fig:k-estimation]b shows that the power of the test is highest when the correct number of clusters is used. These results suggest that the method is still able to reject the null hypothesis with considerable power under the alternative hypothesis for numbers of clusters close to the "correct" one; therefore, the choice of an objective criterion to determine the number of clusters does not significantly change the results. We do not discuss the estimation of the number of clusters further because it is beyond the scope of this work; for a good review, refer to Milligan and Cooper (1985). Sometimes populations are not balanced in their sizes; consequently, the largest population may dominate the average dissimilarity matrix and bias the cluster assignment. In order to study the performance of ANOCVA on unbalanced populations, we performed the simulation described in scenario (ii) with populations in proportions of 1:9, 2:8, 3:7 and 4:6, in a total of 40 subjects. Power curves are shown in figure [fig:balance]; one thousand repetitions were done for each analyzed proportion.
By analyzing figure [fig:balance], one may notice that the power of the test is high when the populations are close to the 5:5 proportion and lower when they are unbalanced; in other words, the more balanced the populations, the higher the power of the test. Even when the data are poorly balanced (1:9), however, the area under the curve is greater than 0.5. Another point is the definition of what a cluster is. Since this definition depends on the problem of interest (clusters are usually determined as the result of applying a clustering algorithm, for example k-means, hierarchical clustering or spectral clustering), it is natural that the results obtained by ANOCVA may change as a function of the clustering procedure and of the chosen metric (for example, Euclidean or Manhattan). Thus, the selection of both the clustering algorithm and the metric depends essentially on the type of data, on what is being clustered, and on the hypothesis to be tested. ANOCVA was applied to the ADHD data set in order to identify ROIs associated with the disorder. Since we are interested in identifying ROIs that are differentially clustered in terms of their connectivity, the clustering algorithm used to determine the labels from the dissimilarity matrix was the unnormalized spectral clustering algorithm; for details of the implemented spectral clustering algorithm, refer to the appendix. The number of clusters was determined by using the silhouette method, and the number of bootstrap samples was set to 1000. The group of children with hyperactive/impulsive ADHD was excluded from our analysis due to the low number of subjects (11 children). The tests performed and the respective p-values, corrected for multiple comparisons by the Bonferroni method, are listed in table [table:adhd]. First, the test was applied to the entire data set (excluding the group with hyperactive/impulsive ADHD) in order to verify whether at least one population differs from the others. The test indicated a significant difference (p-value = 0.020), suggesting that at least one population presents a different clustering structure. In order to identify which populations present different clustering structures, pairwise comparisons among the groups were carried out. From table [table:adhd], no significant differences are observed between TD and inattentive ADHD (p-value = 0.700) or between combined ADHD and inattentive ADHD (p-value = 0.615), but there are significant differences between TD and the pooled combined and inattentive ADHD groups (p-value < 0.001) and between TD and combined ADHD (p-value < 0.001). These results indicate that the significant difference obtained when comparing TD, combined ADHD and inattentive ADHD together was probably due to the differences between TD and combined ADHD. (Table [table:adhd]: ANOCVA applied to the ADHD data set; the number of bootstrap samples is set to 1000 and p-values are corrected by the Bonferroni method for multiple comparisons.) Thus, the analysis of the fMRI data set was focused on identifying the differences between children with TD and children with combined ADHD. The ROIs of both populations were clustered by the spectral clustering algorithm, and the number of clusters for each group was estimated by the silhouette method, which selects the number of clusters associated with the highest silhouette width.
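A minimal sketch of this model-selection step is given below, reusing the hypothetical silhouette_values helper; note that scikit-learn's SpectralClustering, used here as a convenient stand-in, is based on a normalized graph Laplacian and therefore differs in detail from the unnormalized algorithm described in the appendix.

import numpy as np
from sklearn.cluster import SpectralClustering

def choose_n_clusters(D, k_range=range(2, 9), seed=0):
    """Pick the number of clusters maximizing the average silhouette width,
    clustering the similarity matrix S = 1 - D (D: dissimilarities in [0, 1])."""
    S = 1.0 - D
    best_k, best_width = None, -np.inf
    for k in k_range:                      # the silhouette is undefined for k = 1
        labels = SpectralClustering(n_clusters=k, affinity='precomputed',
                                    random_state=seed).fit_predict(S)
        width = silhouette_values(D, labels).mean()   # silhouette needs dissimilarities
        if width > best_width:
            best_k, best_width = k, width
    return best_k, best_width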
By analyzing figure [fig:silhouette], it is possible to see that the estimated number of clusters is four for both data sets. (Figure [fig:silhouette] caption: (a) estimation of the number of clusters in the average dissimilarity matrix of the fMRI data set composed of 479 children with TD; (b) estimation of the number of clusters in the average dissimilarity matrix of the fMRI data set composed of 159 children with combined ADHD. The silhouette width is not defined for a single cluster; the maximum silhouette width is obtained with four clusters in both cases, suggesting that the number of clusters in both data sets is four.) The results of the clustering procedure are visualized in figure [fig:brain], where panels (a) and (b) represent children with TD and children with combined ADHD, respectively. (Figure [fig:brain] caption: the number of clusters was estimated by the silhouette method; panel (c) highlights the ROIs that are clustered differently between children with TD and with ADHD, with regions in white representing high z-scores and regions in red representing lower z-scores; the z-scores were calculated from the p-values obtained by ANOCVA after FDR correction for multiple comparisons.) Interestingly, the clusters were composed of anatomically contiguous and almost symmetric areas of the brain, although these constraints were not included a priori in our analysis. This is consistent with the hypothesis that the spectral clustering method groups areas with similar brain activity in the same cluster. Each ROI was then tested in order to identify those that significantly contribute to the difference in clustering between children with TD and with combined ADHD. P-values were corrected for multiple comparisons by the FDR method and then converted to z-scores. Figure [fig:brain]c illustrates the ROIs that are statistically significant at a p-value threshold of 0.05 after FDR correction; the regions highlighted in white are the ROIs with the highest z-scores, while the regions highlighted in red represent ROIs with lower, but still statistically significant, z-scores. By comparing panels (a) and (b) of figure [fig:brain], it is possible to verify that the highlighted regions in figure [fig:brain]c correspond to ROIs that are clustered differently between children with TD and with combined ADHD. The cluster analysis suggested a very similar network organization in children with TD and in combined ADHD patients: sensory-motor systems, frontoparietal control networks, visual processing and fronto-temporal systems are similarly distributed between the two groups. However, the application of ANOCVA unveiled that the anterior portion of the inferior, middle and superior frontal gyri, the inferior temporal gyrus, the angular gyrus, and some regions of the cerebellum, lateral parietal, medial occipital and somato-motor cortices have a distinct clustering organization in the two populations. Motor system alterations (pre- and postcentral gyri and cerebellum) in ADHD are associated with hyperactivity symptoms, a finding that has already been extensively described and reviewed in the literature. In addition, Dickstein et al.
carried out a meta-analysis of fMRI studies comparing controls and ADHD patients and identified portions of the parietal cortex, inferior prefrontal cortex and primary motor cortex as regions with activation differences between the groups. The inferior frontal cortex highlighted by ANOCVA is described in the literature as a key region for response inhibition. In this sense, the impulsivity symptoms present in combined ADHD may be related to an abnormal participation of this region in the global brain network organization when compared to healthy controls. This finding is reinforced by recent studies: Schulz et al. investigated the role of this area in the therapeutic mechanisms of treatments for ADHD, while Vasic et al. and Whelan et al. explored, respectively, neural error signaling in this region in adults with ADHD and the impulsivity of adolescents with ADHD. The inferior frontal cortex is also implicated in language production, comprehension and learning; our finding is therefore consistent with the language impairment reported in ADHD subjects. An interesting finding from the application of the proposed method to the resting-state fMRI data set was the identification of the angular gyrus as a region with functional abnormalities in ADHD in the context of brain networks. Although the angular gyrus contributes to the integration of information and plays an important role in many cognitive processes, to the best of our knowledge there are very few studies in the literature suggesting activation differences in this region between ADHD patients and healthy controls. Tamm et al. found that this region exhibited less activation in adolescents with ADHD during a target detection task, and Simos et al. showed that the angular and supramarginal gyri play a role in the brain mechanisms for reading and its correlates in ADHD. We note that, despite the crucial role of the angular gyrus in the integration of information, neither study explored the relevance of this region from a network connectivity perspective. Using the proposed method, we properly carried out this analysis, and the findings indicate that the role of this region in spontaneous brain activity differs between the two groups. The existence of temporal and spatial correlation is inherent to fMRI data, and ignoring this intrinsic correlation may lead to misleading or erroneous conclusions; this dependence structure makes clustering analysis more challenging and should be accounted for. Notice that the proposed bootstrap incorporates the spatial correlations in the clustering process and also preserves the temporal structure. In order to verify that the bootstrap-based statistical test works correctly on actual data, and that the results obtained in this study are not due to numerical fluctuations or to some other source of error not taken into account, we checked the control of the rate of false positives on biological data. The set of 479 children with TD was split randomly into two subsets and the clustering test was applied between them; this procedure was repeated 700 times. The proportions of falsely rejected null hypotheses for p-value thresholds of 1, 5 and 10% were 2.14%, 5.70% and 9.83%, respectively, confirming that the type I error is effectively controlled in this biological data set.
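The split-half check of the false-positive rate is straightforward to script; the sketch below assumes the hypothetical anocva_bootstrap helper from earlier and a list D_td of per-subject dissimilarity matrices for the TD group.

import numpy as np

def split_half_fpr(D_td, n_clusters, n_splits=700, alpha=0.05, seed=0):
    """Empirical false-positive rate: randomly split one homogeneous group in
    two and test the halves against each other, repeating n_splits times."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_splits):
        perm = rng.permutation(len(D_td))
        half = len(D_td) // 2
        groups = [[D_td[i] for i in perm[:half]], [D_td[i] for i in perm[half:]]]
        p_global, _ = anocva_bootstrap(groups, n_clusters)
        rejections += (p_global < alpha)
    return rejections / n_splits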
Moreover, the Kolmogorov-Smirnov test was applied to compare the distribution of the p-values obtained in the 700 repetitions with the uniform distribution. The test indicated a p-value of 0.664, meaning that there is no statistical evidence to reject the null hypothesis that the p-value distribution obtained in the 700 repetitions is uniform. Furthermore, we verified in the same manner whether the highlighted ROIs are indeed statistically significant. The proposed per-item test was applied to each ROI, i.e., 351 p-values were calculated in each repetition, yielding 351 p-value distributions, one per ROI. Each of these distributions was compared with the uniform distribution by the Kolmogorov-Smirnov test. After correcting the resulting 351 p-values by FDR, only two null hypotheses were rejected at a p-value threshold of 0.05, confirming that the type I error is also controlled at the ROI level. These results suggest that the differences in clustering between children with TD and with combined ADHD are indeed statistically significant. To the best of our knowledge, the method proposed here is the first that statistically identifies differences in the clustering structure of two or more populations of subjects simultaneously. However, it is important to discuss some limitations of ANOCVA. First, the method is only defined if the estimated number of clusters of the average dissimilarity matrix is greater than one, because the silhouette statistic is only defined when the number of clusters is greater than one. In practice, one may test whether each dissimilarity matrix and the average dissimilarity matrix are composed of one or more clusters by using the gap statistic proposed by Tibshirani et al. (2002); if the average matrix is composed of one cluster while one of the individual dissimilarity matrices is composed of more than one cluster, the clustering structures among populations are clearly different. Another limitation is that ANOCVA does not identify changes consisting of rotations and/or translations of the data, i.e., alterations that preserve the relative dissimilarities among items. If one is interested in identifying this kind of difference, one simple solution is to test the joint mean with Hotelling's T-squared test. It is important to point out, however, that ANOCVA is sensitive to different clustering assignments that have the same goodness of fit (silhouette). Notice that the original silhouette proposal uses the average of the s(i) values as a measure of goodness of fit; here we do not use the average value but the distance between the entire silhouette vectors. In other words, we take into account the labels of the items that are clustered.
Therefore, if one or more items are clustered differently among populations, our statistic is able to capture this difference, whereas the average value of s(i) is not. Moreover, ANOCVA requires a considerable number of subjects in each group in order to be able to reject the null hypothesis when the clustering structures are in fact different. It is difficult to define a minimum number of subjects, because it depends on the variance of the data, but based on our simulations we expect that on the order of dozens of subjects per group are necessary. There are other measures of similarity between clustering structures that might be used to develop a statistical test; however, these similarity measures cannot be extended in a straightforward manner to test more than two populations simultaneously. The method of Alexander-Bloch et al. (2012) was successfully applied in neuroscience to statistically test differences in network community structures but, again, it cannot test more than two populations simultaneously; in our case we are interested in comparing controls and several subgroups of ADHD simultaneously, so that method is not applicable. It is also worth pointing out that the ability of ANOCVA to test whether the cluster structures of several populations are all equal, without being limited to pairwise comparisons, avoids the inflation of the type I error that multiple tests would entail. Furthermore, ANOCVA can be used to test any clustering structure, and is not limited to network community structures such as those considered in the aforementioned work. Another advantage is the use of a bootstrap approach to construct the empirical null distribution from the original data, which allows the test to be applied to data sets whose underlying probability distribution is unknown; it is also known that, for actual data sets, the bootstrap procedure provides better control of the rate of false positives than asymptotic tests. One remaining question is how ANOCVA differs from a test of equality of dissimilarities, for example a t-test for each distance. The main difference is that a t-test only takes into account the mean and variance of the tested measure to determine whether two items are "far" or "close", while ANOCVA uses the clustering structure of the data to determine how far items are from one another. We also remark that testing whether the data are equally distributed is not the same as testing whether the data are equally clustered, since items may come from very different distributions and yet be clustered in a quite similar way, depending on the clustering algorithm. One may also ask how ANOCVA differs from an F-test of the clustering coefficient: the clustering coefficient measures the degree to which the nodes of a graph tend to cluster together, whereas ANOCVA tests whether the cluster structures of several populations are all equal. Here the purpose of the test was to identify ROIs associated with ADHD, but the same analysis can be extended to other large data sets.
specifically in neuroscience , the recent generation of huge amounts of data in collaborative projects , such as the autism brain image data exchange ( abide ) project + ( http://fcon_1000.projects.nitrc.org / indi / abide / index.html ) , which generated fmri data of more than 1000 individuals with autism , the adhd-200 project previously described here that provides fmri data of children with adhd , the fmri data center , which is a public repository for fmri + ( http://www.fmridc.org/f/fmridc ) , the alzheimer s disease neuroimaging initiative , which collected magnetic resonance imaging of subjects , and many others that will certainly be produced due to the decreasing costs in data acquisition , makes cluster analysis techniques indispensable to mine information useful to diagnosis , prognosis and therapy .the flexibility of our approach that allows the application of the test on several populations simultaneously ( instead of limiting to pairwise comparisons ) , along with its performance demonstrated in both simulations and actual biological data , will make it applicable to many areas where clustering is a source of concern .spectral clustering refers to a class of techniques which rely on the eigen - structure of a similarity matrix to partition points into clusters with points in the same cluster having high similarity and points in different clusters having low similarity .the similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of attributes in the data set . in our case ,the similarity of two rois is given by one minus the p - value obtained by the spearman s correlation between rois .the spectral clustering algorithm is described as follows : * input * : the similarity matrix , where ( consider of the spearman s correlation ) , and the number of clusters . 1 .compute the laplacian matrix , where is the diagonal matrix with on the diagonal ( ) .2 . compute the first eigenvectors of .3 . let be the matrix containing the vectors as columns .4 . for ,let be the vector corresponding to the row of .5 . cluster the points with the -means algorithm into clusters .* output * : clusters .af was partially supported by fapesp ( 11/07762 - 8 ) and cnpq ( 306319/2010 - 1 ) .dyt was partially supported by pew latin american fellowship .agp was partially supported by fapesp ( 10/50014 - 0 ) .alexander - block , a. , lambiotte , r. , roberts , b. , giedd , j. , gogtay , n. , bullmore , e. ( 2012 ) .the discovery of population differences in network community structure : a new methods and application to brain functional networks in schizophrenia ._ neuroimage _ , * 59 * , 38893900 .bae , e. , bailey , j. , dong , g. ( 2006 ) .clustering similarity comparison using density profiles ._ proceedings of the 19th australian joint conference on artificial intelligence : advances in artificial intelligence _ , 342 - 351 .bullmore , e.t ., suckling , j. , overmeyer , s. , rabe - hesketh , s. , taylor , e. , brammer , m.j .global , voxel , and cluster tests , by theory and permutation , for a difference between two groups of structural mr images of the brain . _ ieee transactions on medical imaging _ , * 18 * , 3242 .dickstein , s.g . ,bannon , k. , castellanos , f.x ., milham , m.p .the neural correlates of attention deficit hyperactivity disorder : an ale meta - analysis . _ j child psychol psychiatry _ , * 47 * , 105162 .fair , d.a ., dosenbach , n.u . ,church , j.a ., cohen , a.l . , brahmbhatt , s. 
( 2007 ) .development of distinct control networks through segregation and integration ._ proc natl acad sci u s a _ , * 104 * , 13507 - 12 .furlan , d. , carnevali , i.w . ,bernasconi , b. , sahnane , n. , milani , k. , cerutti , r. , bertolini , v. , chiaravalli , a.m. , bertoni , f. , kwee , i. , pastorino , r. , carlo , c. ( 2011 ) .hierarchical clustering analysis of pathologic and molecular data identifies prognostically and biologically distinct groups of colorectal carcinomas ._ modern pathology _ , 24 , 126 - 37 .kang , h. , ombao , h. , linkletter , c. , long , n. , badre , d. ( 2012 ) .spatio - spectral mixed - effects model for functional magnetic resonance imaging data ._ journal of the american statistical association _ , * 107 * , 568577 .lashkari , d. , sridharan , r. , vul , e. , hsieh , p - j . ,kanwisher , n. , golland , p. ( 2012 ) .search for patterns of functional specificity in the brain : a nonparametric hierarchical bayesian model for group fmri data ._ neuroimage _ , * 59 * , 13481368 .ng , a. , jordan , m , weiss , y. on spectral clustering : analysis and an algorithm . in t. dietterich , s. becker and z. ghahramani ( eds . ) , _ advances in neural information processing systems _ , * 14 * , 849 - 856 . mit press , 2002 .schulz , k.p . ,fan , j. , bdard , a.c . ,clerkin , s.m . ,ivanov , i. , tang , c.y . ,halperin , j.m ., newcorn , j.h .common and unique therapeutic mechanisms of stimulant and nonstimulant treatments forattention - deficit / hyperactivity disorder . _ arch gen psychiatry _ , * 1 * , 95261 .simos , p.g . , rezaie , r. , fletcher , j.m . ,juranek , j. , passaro , a.d . , li , z. , cirino , p.t . ,papanicolaou , a.c .functional disruption of the brain mechanism for reading : effects of comorbidity and task difficulty among children with developmental learning problems ._ neuropsychology _ , * 25 * , 52034 .springel , v. , white , s.d.m ., jenkins , a. , frenk , c.s ., yoshida , n. , gao , l. , navarro , j. , thacker , r. , croton , d. , helly , j. , peacock , j.a . , cole , s. , thomas , p. , couchman , h. , evrard , a. , colberg , j. , pearce , f. ( 2005 ) .simulations of the formation , evolution and clustering of galaxies and quasars ._ nature _ , * 435 * , 629636 .tamm , l. , menon , v. , reiss , a.l .parietal attentional system aberrations during target detection in adolescents with attention deficithyperactivity disorder : event - related fmri evidence ._ am j psychiatry _ , * 163 * , 103343 .tibshirani , r. , walther , g. , hastie , t. ( 2002 ) .estimating the number of clusters in a data set via the gap statistic ._ journal of the royal statistical society : series b ( statistical methodology ) _ , * 63 * , 411-423 .torres , g.j ., basnet , r.b . ,sung , a.h . , mukkamala , s. , ribeiro , b.m .a similarity measure for clustering and its applications ._ proceedings of world academy of science , engineering and technology _ , * 31 * , 490 - 496 .van horn , j.d . ,grethe , j.s . ,kostelec , p. , woodward , j.b . , aslam , j.a . ,rus , d. , rockmore , d. , gazzaniga , m.s .( 2001 ) . the functional magnetic resonance imaging data center ( fmridc ) : the challenges and rewards of large - scale databasing of neuroimaging studies ._ philosophical transactions of the royal society b - biological sciences _ , * 356 * , 13231339 .vasic , n. , plichta , m.m . , wolf , r.c ., fallgatter , a.j ., sosic - vasic , z. , gron , g. reduced neural error signaling in left inferior prefrontal cortex in young adults with adhd . 
_ journal of attention disorders _ , ( in press ) .wang , y.k . , print , c.g . ,crampin , e.j .biclustering reveals breast cancer tumour subgroups with common clinical features and improves prediction of disease recurrence ._ bmc genomics _ , * 14 * , 102 .whelan , r. , conrod , p.j . ,poline , j.b . ,lourdusamy , a. , banaschewski , t. , barker , g.j . ,bellgrove , m.a . ,bchel , c. , byrne , m. , cummins , t.d ., fauth - bhler , m. , flor , h. , gallinat , j. , heinz , a. , ittermann , b. , mann , k. , martinot , j.l . ,lalor , e.c . ,lathrop , m. , loth , e. , nees , f. , paus , t. , rietschel , m. , smolka , m.n ., spanagel , r. , stephens , d.n . ,struve , m. , thyreau , b. , vollstaedt - klein , s. , robbins , t.w . ,schumann , g. , garavan , h. , imagen consortium .adolescent impulsivity phenotypes characterized by distinct brain networks ._ nat neurosci _ , * 15 * , 9205 .
Statistical inference on functional magnetic resonance imaging (fMRI) data is an important task in brain imaging. One major hypothesis is that the presence or absence of a psychiatric disorder can be explained by the differential clustering of neurons in the brain. In view of this, it is clearly of interest to ask whether the properties of the clusters have changed between groups of patients and controls. The usual way of approaching group differences in brain imaging is to carry out a voxel-wise univariate analysis for a difference between the mean group responses using an appropriate test (e.g. a t-test) and to assemble the resulting "significantly different voxels" into clusters, testing again at the cluster level. In this approach, of course, the primary voxel-level test is blind to any cluster structure. Direct assessments of differences between groups (or of reproducibility within groups) at the cluster level have been rare in brain imaging. For this reason, we introduce a novel statistical test called ANOCVA (analysis of cluster structure variability), which statistically tests whether two or more populations are equally clustered using specific features. The proposed method allows us to compare the clustering structure of multiple groups simultaneously, and also to identify features that contribute to the differential clustering. We illustrate the performance of ANOCVA through simulations and an application to an fMRI data set composed of children with ADHD and controls. The results show several differences in the brain's clustering structure between the two groups, corroborating hypotheses in the literature. Furthermore, we identified some brain regions not previously described, generating new hypotheses to be tested empirically. Keywords: clustering; silhouette method; statistical test.
the application of the radiation pressure force for the trapping of atoms and neutral particles was pioneered by arthur ashkin .this was followed by a plethora of seminal experiments utilizing the radiation pressure force , for example in the displacement and levitation in air and water of micron - sized particles , and together with steve chu , for the development of a stable three - dimensional atom cooling and trapping experiment using frequency - detuned counter - propagating laser beams . in particular , the demonstration of _ optical tweezers _ , based largely on the transverse gradient force of a single focused gaussian optical beam was a significant contribution to optical trapping in biology . in biological systems ,optical tweezers were first used to trap and manipulate viruses and bacteria .this was followed by a burgeoning number of experiments using optical tweezers for measurements of dna / rna stretching and unfolding , intracellular probing , manipulation of gamete cells , trapping of vesicles , membranes and colloids and dna sequencing using rna polymerase . in particular , for the first time , quantitative biophysical studies of the kinetics of molecular motors ( e.g. myosin and kinesin ) at the single molecule level was made possible with the use of optical tweezers . coupled with conventional positionsensitive detectors ( i.e. using quadrant photodetectors ) , the position of , and force on , a bead tethered to a molecular motor can be measured at the single molecule level .the sensitivities attainable for force and position measurements of particles in optical tweezers are in the sub - piconewton and sub - nanometer regimes , respectively .the application of the optical tweezers technology has led to a more complete biophysical understanding of the kinetics of molecular motors - a quintessential demonstration of new physical techniques yielding new insights into biology .beam position and momentum sensing is particularly crucial for particle sensing in optical tweezers enabling high - precision particle position and force measurements .therefore it is important that such measurements are performed optimally to achieve the highest measurement efficacy .recently , hsu _ showed that the conventional quadrant detection scheme is non - optimal for measurements of the position and momentum of optical beams , even in the absence of classical noise sources .an alternative scheme for the optimal detection of the position and momentum of an optical beam was proposed , based on a spatial homodyne detection scheme .this scheme has also been proven to perform at the quantum limit of light based on cramer - rao informational bounds .therefore , it has become apparent that the use of quadrant detection for particle sensing in optical tweezers systems is non - optimal ; and the introduction of spatial homodyne detection could offer the possibility for greater particle tracking sensitivities . in this paper , we address the pertinent questions for particle sensing technology in optical tweezers systems - have we reached the limit of particle tracking sensitivity and can this limit be surpassed using quantum resources ?we believe that in answering this technique related question , naturally arises a biophysical question - i.e. 
with significantly enhanced sensitivities , are we able to detect molecular kinetics , at the single molecule level , that were previously unresolvable ?this biophysical question has wide implications as there are many vital protein conformational changes that occur in the angstrom regime , and within millisecond timescales .for example , molecular motors move along nucleic acids in steps of a single - base pair scale ( e.g. 3.4 on dsdna ) and the bacterial dna translocase ftsk moves at speeds of 5 kilobases per second .therefore , enhanced particle sensing could elucidate these finer features with greater sensitivity than conventional particle sensing techniques in optical tweezers systems .this paper begins by formalizing an optimal parameter estimation procedure for particle sensing based on the analysis of the spatial properties of the field scattered by a particle in an optical tweezers .we show that split detection is non - optimal and consequently propose an optimal measurement scheme based on spatial homodyne detection .the efficacy of particle sensing is evaluated using the signal - to - noise ratio ( snr ) and sensitivity measures ; and the efficacy of spatial homodyne detection and split detection systems are compared .an optical field can be formalized and described using a range of parameters - e.g. the polarization , the amplitude - phase quadratures , and the transverse spatial profile .these parameters can be measured using a range of detection techniques ( e.g. polarimetry , direct detection , interferometry and beam profiling ) and an estimate of their values in the presence of classical and quantum noise , and detection inefficiency is obtained . herewe develop a formalism to quantify an arbitrary spatial modification of the field parameterized by a parameter ( e.g. could quantify the displacement of a spatial mode along a transverse axis ) . in principle , an arbitrary field can be treated and the field properties can be modeled using maxwell s equations .however , for spherical fields such as those produced by scattering processes from small particles , after optical imaging of the field , the paraxial approximation is valid and the propagating field can be described using two - dimensional spatial modes in a convenient basis .the sensitivity of measurements on optical fields is ultimately limited by quantum noise on the fields , exhibited typically as shot noise .to understand such limits it is important to use a full quantum mechanical description of the field .the spatial quantum states of an optical field exist within an infinite dimensional hilbert space . 
Depending on the spatial symmetry of an imaged optical field, the spatial states of the field may be conveniently expanded in the basis of the rectangularly symmetric TEM modes or of the circularly symmetric LG modes. A field of frequency omega can be represented by the positive frequency part of the electric field operator. We are interested in the transverse information of the field, which is fully described by the slowly varying field envelope operator, written as a sum over modes labelled by the transverse indices m, n and the polarization j, where rho is a co-ordinate in the transverse plane of the field. In this paper we adopt the TEM mode basis for convenience, such that u^j_{mn}(rho) and the corresponding annihilation operator are, respectively, the transverse beam amplitude function and the photon annihilation operator of the TEM_{mn} mode with polarization j. The mode functions are normalized such that their self-overlap integrals are unity, so that the inner product satisfies

\langle {\bf u}^j_{mn}, {\bf u}^{j'}_{m'n'} \rangle = \iint [{\bf u}^j_{mn}(\rho)]^* \cdot {\bf u}^{j'}_{m'n'}(\rho) \, d\rho = \delta_{mm'}\,\delta_{nn'}\,\delta_{jj'}.

We now apply an arbitrary spatial perturbation, described by the parameter p, to the field. Eq. ([spatialfield1]) can then be rewritten as a sum of coherent amplitude components (one per mode, each carrying a unit polarization vector) and quantum noise operators; the coherent amplitudes are obtained by projecting the mean field onto the corresponding modes, with normalization constant

\left[ \iint [\overline{\bf E}^{+}(\rho, p)]^* \cdot \overline{\bf E}^{+}(\rho, p) \, d\rho \right]^{-1/2}.

Note that the squared total coherent amplitude gives the mean number of photons passing through the transverse plane of the field per second, and in this paper we take the coherent amplitude to be real, without loss of generality; the quantum noise operator associated with each mode is the fluctuation part of its annihilation operator. In the limit of a small estimate parameter p, the Taylor expansion of the mean field mode shape reads, to first order, as the unperturbed profile plus p times its derivative with respect to p evaluated at p = 0. The first term indicates that the majority of the power of the field remains in the unperturbed TEM_{00} mode, while the second term defines the spatial mode associated with small changes in the parameter,

{\bf w}(\rho) \propto \frac{\partial {\bf u}_{00}(\rho, p)}{\partial p} \bigg|_{p=0},

so that the amplitude of the mode w is directly proportional to the magnitude of the spatial perturbation of the field. For optical beam position and momentum measurements, the conventional detection scheme is split detection (a one-dimensional quadrant detector). In split detection, the optical beam under interrogation is incident centrally on a split detector, as shown in fig. [schematic](c). The difference between the photocurrents from the two halves of the split detector contains partial information about the position/momentum of the beam,

\hat{n}_-(t) = \alpha(p)\, \tilde{x}^{+}_{f},

where \tilde{x}^{+}_{f} is the amplitude quadrature operator of the flipped mode f(rho), whose transverse amplitude function equals that of the TEM_{00} mode with its sign reversed on one half of the detector. The amplitude quadrature operator can be written in terms of a coherent amplitude, wherein resides the signal due to the parameter p, and a quantum noise operator, which is ultimately responsible for placing a quantum limit on the measurement sensitivity; the coherent amplitude is proportional to the overlap integral of eq. ([overlapsd]) between the flipped mode and the displaced mode.
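To make the mode-overlap picture concrete, the sketch below numerically decomposes a slightly displaced one-dimensional TEM00 beam onto the flipped mode and onto the TEM10 mode (the mode associated with a pure transverse displacement). The expected outcome is that the flipped-mode overlap is smaller by a factor of sqrt(2/pi), roughly 0.8, which is the origin of the non-optimality of split detection discussed below; the beam waist, displacement, and grid are arbitrary illustrative choices.

import numpy as np

w0, d = 1.0, 1e-3                      # beam waist and small displacement (arbitrary units)
x = np.linspace(-8, 8, 20001)
dx = x[1] - x[0]

u00 = (2 / np.pi) ** 0.25 / np.sqrt(w0) * np.exp(-x**2 / w0**2)        # 1-D TEM00
u10 = 2 * x / w0 * u00                                                  # 1-D TEM10 (normalized)
flip = np.sign(x) * u00                                                 # flipped mode
u00_d = (2 / np.pi) ** 0.25 / np.sqrt(w0) * np.exp(-(x - d)**2 / w0**2) # displaced beam

overlap_flip = np.sum(flip * u00_d) * dx   # -> (d/w0) * sqrt(2/pi) for small d
overlap_u10 = np.sum(u10 * u00_d) * dx     # -> d/w0 for small d
print(overlap_flip / overlap_u10)          # ~ sqrt(2/pi) = 0.80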
_ proposed a new displacement measurement scheme that is optimal for detecting beam position and momentum .the spatial homodyne scheme utilizes a homodyne detection setup that has a local oscillator mode optimized for the displacement measurement of the input beam , as shown in fig .[ schematic ] ( d ) .the local oscillator ( lo ) beam interferes with the input beam on a 50/50 beam - splitter .the outputs of the beam - splitter are then detected using a pair of balanced single - element photodetectors , with the difference in photocurrents providing the measurement signal .the spatial homodyne scheme was also proven to perform at the cramer - rao bound , therefore extending the capabilities of the spatial homodyne scheme for the optimal measurement of any spatial parameter ( e.g. the measurement of the orbital angular momentum of light ) .we now proceed to derive the photocurrent for the spatial homodyne detection scheme . the input beam ( as described in eq .( [ spatialfield ] ) ) is interfered with the bright lo beam with mode - shape .the positive frequency part of the electric field operator for the lo given by e^{i \phi},\nonumber\\\end{aligned}\ ] ] where is the phase difference between the local oscillator and the input beam .the photocurrent at each photodetector ( distinguished by the subscripts + and - , respectively ) , assuming detectors of infinite extent , is given by whereby one output of the spatial homodyne attains a phase shift with respect to the other output due to the hard - reflection from the beam - splitter . substituting eqs .( [ spatialfield1 ] ) and ( [ lospatialfield ] ) into eq .( [ temp1 ] ) and taking the subtraction of the photocurrent from the two detectors gives ^ * \cdot \sum_{j , m , n } \tilde{a}_{mn}^j { \bf u}_{mn}^j ( { { \mbox{\boldmath}}},p ) \nonumber\\ & & + e^{i \phi } { \bf w } ( { { \mbox{\boldmath } } } ) \cdot \left ( \sum_{j , m , n } \tilde{a}_{mn}^{j } { \bf u}_{mn}^j ( { { \mbox{\boldmath}}},p ) \right ) ^\dagger \big ] d { { \mbox{\boldmath } } } \nonumber \\ % & = & \alpha_{\rm lo } \left [ e^{-i \phi_{\rm lo } } \sum_{j , m , n } \tilde{a}_{mn}^j \iint_{\infty}^{-\infty } { \bf w}^ * \cdot { \bf u}_{mn}^j d { { \mbox{\boldmath } } } + e^{i \phi_{\rm lo } } \left ( \sum_{j , m , n } \tilde{a}_{mn}^{j } \iint_{\infty}^{-\infty } { \bf w}^ * \cdot { \bf u}_{mn}^j d { { \mbox{\boldmath } } } \right ) ^\dagger \right ] \nonumber \\ & = & \alpha_{\rm lo } \big [ e^{-i \phi } \sum_{j , m , n } \tilde{a}_{mn}^j \left \langle { \bf w } ( { { \mbox{\boldmath } } } ) , { \bf u}_{mn}^j ( { { \mbox{\boldmath}}},p ) \right \rangle \nonumber\\ & & + e^{i \phi } \left ( \sum_{j , m , n } \tilde{a}_{mn}^{j } \left \langle { \bf w } ( { { \mbox{\boldmath } } } ) , { \bf u}_{mn}^j ( { { \mbox{\boldmath}}},p ) \right \rangle \right ) ^\dagger \big ] \nonumber \\ & = & \alpha_{\rm lo } \left [ e^{-i \phi } \tilde{a}_w + e^{i \phi } \tilde{a}_w^\dagger \right ]\nonumber \\ & = & \alpha_{\rm lo } \tilde x^{\phi}_{w } % & = & \alpha_{\rm lo } \left [ \alpha_w \left ( e^{i \phi } + e^{- i \phi } \right ) + \delta \tilde x^{\phi}_{w } \right ] \end{aligned}\ ] ] where is an annihilation operator describing the component of the input field in mode , and by definition the is the quadrature operator of that component at phase angle . 
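as an illustration of the difference between the split detection scheme and the spatial homodyne scheme just described, the following short numerical sketch (not taken from the paper, and deliberately one-dimensional and scalar) expands a slightly displaced tem00 beam on (i) the flipped mode seen by a split detector and (ii) a tem10-shaped local oscillator mode, and compares the resulting signal amplitudes; the waist w0 and the displacement d are arbitrary illustrative values.

```python
import numpy as np

# 1-d scalar sketch: signal amplitude extracted from a displaced tem00 beam by
# (i) a split detector (flipped-mode overlap) and (ii) a spatial homodyne with a
# tem10-shaped local oscillator.  w0 and d are illustrative values.
w0 = 1.0                        # beam waist (arbitrary units)
d  = 1e-3 * w0                  # small transverse displacement, p = d
x  = np.linspace(-8 * w0, 8 * w0, 20001)

u00  = (2 / (np.pi * w0**2))**0.25 * np.exp(-x**2 / w0**2)                 # tem00
u10  = (2 / (np.pi * w0**2))**0.25 * (2 * x / w0) * np.exp(-x**2 / w0**2)  # tem10
flip = np.sign(x) * u00                                                    # flipped mode

beam = (2 / (np.pi * w0**2))**0.25 * np.exp(-(x - d)**2 / w0**2)           # displaced beam
overlap = lambda a, b: np.trapz(a * b, x)

sig_homodyne = overlap(u10,  beam)     # amplitude in the optimal (tem10) mode
sig_split    = overlap(flip, beam)     # amplitude seen by the split detector

print(sig_homodyne / d)                # ~ 1/w0
print(sig_split / d)                   # ~ sqrt(2/pi)/w0
print(sig_split / sig_homodyne)        # ~ 0.80: split detection is non-optimal
```

in this simple displacement case the split detector retrieves only a fraction sqrt(2/pi) of the signal amplitude available to a mode-matched homodyne which, with the sensitivity definitions introduced below, translates into a roughly 25% larger minimum detectable displacement.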
in the above ,we have taken the condition and invoked the linearization approximation , thereby removing terms that do not involve .the orthonormality property of modes given in eq .( [ ortho ] ) has also been used .an optimal estimate of the parameter is obtained when the local oscillator mode is chosen to match the associated input mode , as shown in eq .( [ photohomo ] ) .the spatial homodyne detection scheme then extracts from the signal field a quadrature variable associated with the local oscillator field mode , with quadrature phase angle given by .it should be noted that delaubert __ have shown that optimal parameter estimation can be achieved using a photodetector array for the cases where the signal field is shot noise limited or single mode squeezed , so long as the array resolution is sufficiently small .array detection is restricted to amplitude quadrature detection and is not polarization resolving , however in situations where these restrictions are satisfied is formally identical to spatial homodyne detection .we now introduce the snr and sensitivity measures for the spatial homodyne and split detection schemes . for the spatial homodyne detection scheme , the measured signal is the mean signal component of the difference photocurrent in eq .( [ photohomo ] ) , given by where .for matched local oscillator and signal phases such that , the maximal signal is obtained , given by the corresponding noise component is given by where is the variance of the signal field mode .the resulting snr is given by if the optical field is in a coherent state , as is typical of a low noise laser and the snr for the spatial homodyne detection scheme is given by clearly , although experimentally challenging , squeezing the signal mode such that has the capacity to further enhance the snr .alternatively , we introduce the sensitivity measure , which is defined as a change to parameter required to provide for a signal field in a coherent state , given by ^{-1 } = \frac{1}{2 } \left [ \frac{\partial \alpha_w ( p)}{\partial p } \bigg |_{p=0 } \right ] ^{-1}.\ ] ] for comparison , the corresponding snr for the split detection scheme in the coherent state limit is given by with a sensitivity given by ^{-1}.\ ] ]fig . [ schematic ] ( a ) shows a typical optical tweezers setup .a trapping beam in the tem mode is focused onto a scattering particle . in this instancewe assume that the particle is spherical , with a permittivity greater than that of the medium , .if the particle has a diameter larger than the wavelength of the trapping beam , light rays are refracted as they pass through the particle , as shown in fig .2 . this refracted light results in an equal and opposite change of momentum imparted on the particle . due to the intensity profile of the beam , the outer ray is less intense than the inner ray . consequently , the resulting force acts to return the particle to the center of the trapping beam focus . trapping beam impinging on a spherical scattering particle . rays 1 and 2are refracted in the spherical particle , thereby undergoing a change in momentum .a corresponding equal and opposite change in momentum is imparted on the particle resulting in the particle being attracted to the center of the trapping beam . and are the gradient and scattering forces , respectively . 
and are the respective permittivity of the medium and the sample.,width=245 ] the effective restoring / trapping force is due to two force components - ( i ) the _ gradient force _ resulting from the intensity gradient of the tem trapping beam , that acts transversely toward the high intensity region and ( ii ) the _ scattering force _ resulting from the forward - direction radiation pressure of the trapping beam incident on the particle . in the focal region of the optical tweezers trapthe gradient force is typically dominant .it is important to note that in some optical tweezers experiments the trapped particle has radius less than the wavelength of the trapping laser . in this regime ,the trapping force on the particle is generated due to an induced dipole moment .the dipole moment induced will be along the direction of trapping beam polarization .the assumption that the particle is spherical is no longer important , since the particle has no structural deviations greater than the wavelength of the trapping beam .this allows the particle to be treated as a normal dipole , hence the particle experiences a force due to interaction of its induced dipole moment with the transverse electromagnetic fields of the impinging light .this force is proportional to the intensity of the beam and has the same net result as before ; it acts to return the particle to the center of the trapping beam focus .the position and force sensing of the trapped particle can then be obtained by imaging the scattered field from the particle on a position sensitive detector such as the commonly utilized quadrant photo - detector , or a spatial homodyne detector .the collection efficiency of the light field is given by the numerical aperture ( na ) of the objective lens ( as shown in fig .[ schematic ] ( b ) ) , given by where and are the refractive index and the collection half - angle of the lens , respectively . is related to the lens diameter ( assuming the object is at the focus , with focal length ) by we now formalize all the relevant fields that propagate through the optical tweezers system , as shown in the schematic of the optical tweezers arrangement of fig .[ schematic ] ( a ) .[ schematic ] ( b ) illustrates the wave - front of the trapping and scattered fields .the trapping field is incident from the left of the diagram and is then focused onto a spot , from the focusing lens .the particle is trapped near the center of this focal spot and scatters the incident trapping field , with the forward scattered and residual trapping field being collected by the objective lens .this is followed by imaging into the far - field onto a position sensitive detector .assuming that the trapping field is gaussian and hence in a tem mode , the positive frequency part of the electric field for the trapping beam at the waist of the trap ( denoted by the superscript ) , is given by with the mode - shape function given by where , is the waist size of the trapping beam , and is a unit vector representing the polarization of the trapping field . using the paraxial approximation , the positive frequency part of the electric field of the trapping beam after propagation of a distance from the focus to the objective lens ( of focal length ) is given by where is the wave - vector of the trapping field . 
with the exception of the replacement , is defined identically to , with the radius of the spot at the objective being given by aperturing due to the finite radius of the objective lens is taken into account via the aperture function given by where can be related to the numerical aperture ( na ) of the imaging system and refractive index of the trapping medium by . in principle, there could be multiple inhomogeneous particles within the optical tweezers focus , scattering the input trapping field .for this scenario , several numerical methods exist to calculate the scattered field - e.g. the finite difference frequency domain and t - matrix hybrid method and the discrete - dipole approximation and point matching method .however , for simplicity we consider the scattering from a single spherical , homogeneous particle with diameter much smaller than the wavelength . the resulting scattered field can be modeled as dipole radiation , having a positive frequency electric field given by where is the coordinate of the field with respect to the center of the optical tweezers , is the coordinate of the field with respect to the displaced particle , , and .the radius of the spherical scattering particle is given by .the scattered field is then collected by the objective lens ( as shown in fig .[ schematic ]( b ) ) , with the corresponding positive frequency part of the electric field given by \nonumber\\ & & \cdot \sqrt{\frac{f_{\rm o}}{r_{\rm o } ' } } e^{i k ( r_{\rm o } - f_{\rm o } ) } \pi_{r}({\mbox{\boldmath}})\\ & = & -i k \sqrt{\frac{\hbar \omega}{\epsilon_{0 } c } } \sqrt{\frac{f_{\rm o}}{r_{\rm o } ' } } \frac{e^{-ik ( r_{\rm o}-r_{\rm o}'-f_{\rm o})}}{r_{\rm o } ' } \pi_{r}({\mbox{\boldmath } } ) \nonumber\\ & & \cdot \big [ \left ( \hat { \bf r_{\rm o } } ' \times \hat { \bf r_{\rm o } } ' \times { \bf u}^{\rm o}_{00 } \right ) \hat { \bf l } + \left ( \hat { \bf r_{\rm o } } ' \times \hat { \bf r_{\rm o } } ' \times { \bf u}^{\rm o}_{00 } \right ) \hat { \bf n } \big ] , \nonumber\\\end{aligned}\ ] ] where and .the unit vectors , , and are used to include the effect of the objective lens on the polarization of the scattered field , where and .the term describes the compression of the intensity of the scattered field due to the change in propagation direction induced by the objective lens . to simplify the equation ,we have defined the constant given by since the total field after the objective lens consists of both the scattered field and the residual trapping field , we now include both fields to describe the total field after the objective lens , given by after the objective lens , the beam is focused onto a detector in the far - field image plane , via the use of an imaging lens . assuming the lens is thin and ideal , the field in the image plane is obtained by taking the fourier transform of eq .( [ etot ] ) , given by where are the transverse co - ordinates in the image plane .it is important to note that the analysis presented here is independent of the absolute scaling of the image plane co - ordinates . in an experimental situationa scaling factor is introduced that depends on the choice of magnification lenses used .the critical parameters for assessing sensitivity of particle monitoring are , and .these parameters can now be calculated using eqs .( [ alphap ] ) , ( [ v ] ) and ( [ w ] ) . 
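before turning to those calculations, the imaging chain described above can be prototyped with a deliberately simplified scalar model: the collimated residual trap field is taken as a flat-phase gaussian at the objective plane, the field scattered by a particle displaced by dx in the focal plane is approximated by a weak tilted plane wave after the collimating lens, the circular aperture is applied, and a fourier transform gives the image-plane intensity. polarization, the dipole radiation pattern and the exact lens phase factors of the full treatment are ignored, and the numerical values (wavelength, focal length, refractive index, beam radius, scattered amplitude) are assumptions rather than parameters taken from the paper, apart from the na value quoted in the simulations below.

```python
import numpy as np

# simplified scalar sketch of the imaging step (all values are assumptions,
# except na = 0.99 which is the objective used in the simulations below)
lam, f_o = 1064e-9, 3e-3              # wavelength and objective focal length [m]
k        = 2 * np.pi / lam
n_med    = 1.33                       # refractive index of the medium (assumed water)
na       = 0.99
r_o      = f_o * np.tan(np.arcsin(na / n_med))   # aperture radius from na = n*sin(theta)
w_obj    = 1.5e-3                     # trap beam radius at the objective [m]
eps_s    = 1e-2                       # relative scattered amplitude (~1e-4 in power)
dx       = 0.1e-6                     # particle displacement along x [m]

npix = 512
x = np.linspace(-1.5 * r_o, 1.5 * r_o, npix)
xg, yg = np.meshgrid(x, x)
rho = np.hypot(xg, yg)
aperture = (rho <= r_o).astype(float)

trap = np.exp(-rho**2 / w_obj**2)                  # residual trap field
scat = eps_s * np.exp(-1j * k * xg * dx / f_o)     # displaced emitter -> tilt after lens

def image_intensity(field):
    e = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(e)**2

i_tot  = image_intensity(aperture * (trap + scat))
i_trap = image_intensity(aperture * trap)
fringes = i_tot - i_trap                           # interference term shown in the figures

# crude split-detector signal: difference of the two half-plane powers
i_split = i_tot[:, npix // 2:].sum() - i_tot[:, :npix // 2].sum()
print(np.abs(fringes).max() / i_trap.max(), i_split)
```

reversing the sign of dx reverses the sign of the split-detector signal, and for small dx the signal scales linearly with the displacement, in line with the behaviour discussed below.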
using eq .( [ alphap ] ) we now find where we have assumed that the trap power is greater than the scattered power , as is the case for scattering from a small particle ; and for simplicity that only the scattered field is apertured by the objective lens .the latter assumption is reasonable for optical tweezers systems with a sufficiently large trap waist size and numerical aperture . in this paper , we restrict our analysis to the realistic scenario of and choose a trapping field waist size of 4 m . with these parameters ,trap field clipping due to the aperture causes only 15 ppm loss and is therefore negligible .using eq .( [ v ] ) we obtain where we have used the relations for and given in eqs .( [ ortho ] ) and ( [ atrap ] ) , respectively . now using eq .( [ w ] ) we obtain the functional form for the mode that contains information about the particle position , given by note that this mode is only dependent on the scattered field .we now calculate the snr of the spatial homodyne and split detection schemes for particle sensing in an optical tweezers arrangement . substituting the expressions obtained in eqs .( [ atrap ] ) - ( [ wgamma ] ) into eq .( [ optimalsnr ] ) , the snr for the spatial homodyne detection scheme is given by where the effective aperture function in the image plane co - ordinates is given by \pi_{r}({\mbox{\boldmath } } ) \big ) .\end{aligned}\ ] ] in a similar manner using eq .( [ snrsd ] ) , the snr for the split detection scheme is given by correspondingly , the sensitivities for the spatial homodyne and split detection schemes can be conveniently calculated using eqs .( [ senssh ] ) and ( [ senssd ] ) , respectively .a formal description for the trapping and scattered fields in an optical tweezers configuration was presented in section [ sec : opttweezer ] .we now numerically solve for the scattered field from a particle trapped in the optical tweezers .we utilize the field imaging system shown in fig .[ schematic ] ( b ) to image the scattered field into a propagating optical beam that is subsequently detected .we compare the snr and sensitivity of both split and spatial homodyne detection schemes ( described in section [ sec : estimate ] ) . as mentioned in the preceding section , the origin of the co - ordinate systemis defined to be at the focal point of the optical tweezers focusing lens system .the optical fields propagate in the direction and the scattering particle was assumed to be spherical and homogeneous .we model particle displacement in the - plane , to illustrate the effect on the scattered field in the transverse plane .the far - field intensity distribution arriving at the detector is given by the interference between the trapping and forward scattered fields calculated from eq .( [ farfieldintensity ] ) and shown in fig .[ fig : sdintf ] . as the trapping field is far more intense than the scattered field, we have subtracted its intensity from the images shown in this figure as well as subsequent figures , to make visible the interference fringes between scattered and trapping fields .nm , particle radius m , permittivity of the medium , permittivity of the particle , and objectives with na = 0.99 and focal spot size of m . we assume absorptive losses in the sample are negligible . 
figures ( a)-(c ) and ( d)-(e ) assume the trapping field is linearly and -polarized , respectively .the color bar shows scale of the intensity distribution .the particle displacements are given by ( a ) , ( d ) : m ; ( b ) , ( e ) : m ; and ( c ) , ( f ) : .,width=321 ] note that the terms due to just the scattered field have been ignored to reduce numerical error , justified since the total scattered power is four orders of magnitude smaller than the trapping beam power .the detection area was chosen to be larger than the area of the calculated image field to avoid inaccuracies due to clipping of the image .notice that as the particle moves in one direction , the intensity distribution shifts in the opposite direction , due to the lensing effect of the objective .note also the difference in intensity distribution between the and trapping beam polarization directions - i.e. the interference pattern appears `` compressed '' along the polarization axis due to the dipole scattering distribution of the particle .the snr for the split and spatial homodyne detection schemes were calculated , the results of which are shown in figs .[ snrplot ] ( a)-(c ) . and -polarized trapping fields , respectively .the lo spatial modes for the small displacement measurements are ( d ) : and ( e ) -polarized trapping fields , whilst for large displacement measurements are ( f ) : and ( g ) -polarized trapping fields.,width=283 ] the snr of the split detection scheme was evaluated by applying eq .( [ eqn : snrsd ] ) to the calculated interference signal , shown in fig . [ snrplot ] ( a ) . to calculate the snr for the spatial homodyne detection , the optimal lo mode first had to be determined .improved snr is possible with spatial homodyne detection when compared with split detection for all particle displacement regimes .however , the optimal lo mode depends on the position of the particle , so to achieve this a dynamical mode optimization routine would need to be implemented . herewe present results with detection optimized for two specific cases : ( i ) for a particle located close to the origin ( ) as modeled in the theory section ; and ( ii ) for a particle displaced from the origin by a factor of order .for the small displacement limit , the lo field was determined from the first order term in the taylor expansion of eq .( [ w ] ) for the scattered field .the resulting snr is shown in fig .[ snrplot ] ( b ) ; with the corresponding lo spatial modes assuming and linearly polarized trapping fields shown in figs .[ snrplot ] ( d ) and ( e ) , respectively .one observes that for displacements significantly less than the trapping beam waist size the snr is linear , with the optimum sensitivity - corresponding to maximum slope in the snr - occurring at zero displacement and significantly surpassing that achievable with split detection .particle tracking with optimum sensitivity is possible in this linear regime . 
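the construction of the optimized lo mode described above lends itself to a simple numerical recipe: the mode is the normalized derivative of the imaged field with respect to the particle coordinate, evaluated at the chosen operating point and approximated here by a central finite difference. `image_field(dx)` stands for any routine that returns the complex image-plane field for particle displacement dx (for instance the scalar model sketched earlier); it is an assumed helper, not a function from the paper.

```python
import numpy as np

def lo_mode(image_field, x0=0.0, h=1e-9):
    """normalized derivative mode of the imaged field at operating point x0."""
    de = (image_field(x0 + h) - image_field(x0 - h)) / (2 * h)
    return de / np.sqrt(np.sum(np.abs(de)**2))

# w_small = lo_mode(image_field, x0=0.0)     # optimal near the trap centre
# w_large = lo_mode(image_field, x0=4e-7)    # re-optimized at a displaced operating
#                                            # point (illustrative value)
```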
at particle displacements of around m ,however , the snr peaks .small displacements of a particle around these points leave the snr unchanged .hence the signal read out from the spatial homodyne detector also remains unchanged , with the result that particle tracking becomes ineffective .as the particle position increases further , it moves out of the trapping field , causing a drop in the total scattered power and consequential exponential decay in the snr .it is possible to recalculate the lo field mode to optimize the sensitivity for particles fluctuating around any arbitrary position by performing a taylor expansion in of the scattered field about that position , and retaining only the first order term .[ snrplot ] ( c ) shows the resulting snr and corresponding lo mode shapes when the lo mode is optimized for particles fluctuating around 0.4 m .notice that now the maximum snr slope , and hence optimum sensitivity , is shifted from zero displacement to displacements of around 0.4 m .hence , we see that as the tracked particle moves , it is possible to dynamically adjust the lo field shape to optimize the measurement sensitivity and hence the particle tracking . .the solid and dashed lines are for linearly - and -polarized trapping fields , respectively .the axis on the right shows the minimum detectable displacement assuming 200 mw trapping power , nm , particle radius m , permittivity of the medium , permittivity of the particle , and objectives with focal spot size of m . we assume absorptive losses in the sample are negligible . the _ split detection non - optimality _ shaded area shows the particle sensing sensitivity loss due to incomplete information detection from split detection .the _ quantum resources _ shaded area indicates the region where quantum resources such as squeezed light can be used to further enhance the sensitivity of particle sensing measurements.,width=321 ] we now numerically evaluate the sensitivities of the split and spatial homodyne schemes in the small displacement limit , given by eqs .( [ senssd ] ) and ( [ senssh ] ) respectively .the sensitivity is the minimum detectable displacement , defined as the displacement required to change the snr by .the respective sensitivity curves for ( i ) split and ( ii ) spatial homodyne detection versus the numerical aperture of the objective lens are shown in fig .[ sensitivityplot ] .the minimum detectable displacement for both the split and homodyne detection schemes decrease with increasing na of the collection lens .as the na increases , more of the scattered field is collected , therefore providing more information about the scattering particle .the spatial homodyne outperforms the split detection scheme for all na values .this is due to the spatial homodyne scheme providing optimal information extraction of the detected field whereas the split detection scheme only measures partial information of the detected field , as derived in eq .( [ overlapsd ] ) . therefore curve ( ii ) is the quantum limit for particle sensing in optical tweezers systems . 
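in practice the sensitivity defined above can be read off directly from a computed snr curve: with the usual criterion of a unit change in snr (the exact threshold used in the text is assumed here), the minimum detectable displacement at an operating point is the inverse of the local slope of the snr. `snr(dx)` stands for any routine returning the snr at displacement dx (assumed helper).

```python
import numpy as np

def min_detectable_displacement(snr, x0, h=1e-9):
    """inverse local snr slope = smallest displacement changing the snr by one."""
    slope = (snr(x0 + h) - snr(x0 - h)) / (2 * h)
    return np.inf if slope == 0 else 1.0 / abs(slope)
```

at the displacement where the snr curve peaks the slope vanishes and the returned value diverges, which is the regime where tracking becomes ineffective; re-optimizing the lo mode at that operating point, as described above, restores a finite value.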
in order to perform measurements below this quantum limit, non-classical resources have to be used. for example, squeezed light in the spatial mode corresponding to the displacement signal mode can be injected into the optical tweezers system to reduce the quantum noise floor and therefore enhance position sensing. we have developed a formalism for particle sensing in optical tweezers via the analysis of the transverse spatial modes imaged from a scattered field. the conventional quadrant detection scheme, used ubiquitously in optical tweezers experiments, was shown to detect only partial information from the scattered light field. we propose instead the use of spatial homodyne detection, whereby optimal information about the scattering particle is obtained via appropriate transverse spatial mode-shaping of the lo field. a numerical simulation of the snr and sensitivity of both split and spatial homodyne detection was presented, demonstrating that up to an order of magnitude improvement in the sensitivity of spatial homodyne over split detection can be achieved. this work was supported by the royal society of new zealand marsden fund, and by the australian research council discovery project dp0985078.
particle sensing in optical tweezers systems provides information on the position , velocity and force of the specimen particles . the conventional quadrant detection scheme is applied ubiquitously in optical tweezers experiments to quantify these parameters . in this paper we show that quadrant detection is non - optimal for particle sensing in optical tweezers and propose an alternative optimal particle sensing scheme based on spatial homodyne detection . a formalism for particle sensing in terms of transverse spatial modes is developed and numerical simulations of the efficacy of both quadrant and spatial homodyne detection are shown . we demonstrate that an order of magnitude improvement in particle sensing sensitivity can be achieved using spatial homodyne over quadrant detection .
various wireless communication setups can be modeled as interference channels consisting of multiple coexisting transmitter - receiver pairs . to reduce the interference in such systems , there are mainly two categories of receiver structures [ 1]-[2 ] .the first category are maximum likelihood ( ml)-based receivers achieving the highest possible rates [ 1 ] . however , the ml - based estimation may be practically infeasible , as the size of the search space grows exponentially with the codeword length , the number of antennas , and the number of transmitters [ 1 ] .the second category are linear receivers ( lr ) which have low complexity in filtering the received signals through a linear structure for decoding .lrs are often proposed based on the criteria of zero - forcing ( zf ) and minimum mean - square error ( mmse ) [ 1]-[4 ] .recently , a novel linear receiver referred to as integer - forcing linear receiver ( iflr ) has been designed to simultaneously recover the transmitted messages in point - to - point multiple - input multiple - output ( mimo ) systems [ 5 ] .this idea was derived from the compute - and - forward scheme [ 6 ] .based on noisy linear combinations of the transmitted messages , iflr recovers independent equations of messages through a linear receiver structure . in this way ,in contrast to mmse and zf schemes , instead of combating , iflr exploits the interference for a higher throughput .application of the iflr scheme in mimo multi - pair two - way relaying is proposed in [ 7 ] .it is shown in [ 8 ] and [ 9 ] that precoding in iflr can achieve the full diversity and the capacity of gaussian mimo channels up to a gap , respectively .also , [ 10 ] applies successive decoding in iflr and proves its sum rate optimality .iflr recovers all desirable and undesirable transmitted messages by decoding sufficient number of the best independent equations in terms of achievable rate .hence , considering iflr in interference networks , the complexity of the lattice decoding and also the best equation selection process grows considerably with the number of transmitters and data streams .the combination of iflr and interference alignment [ 11 ] , referred to as integer - forcing interference alignment ( ifia ) , is proposed in [ 12 ] to decode sufficient equations to recover the desirable messages .however , ifia requires channel state information at the transmitter ( csit ) .this is the motivation for our paper in which we design an efficient low - complexity receiver for interference channels with no need for csit . here, we propose a linear receiver scheme , referred to as integer - forcing message recovering ( ifmr ) , for interference networks . benefiting from a special equation structure of iflr, we propose a novel receiver model in which the required number of decodings is limited to twice the number of desirable messages . in our ifmr , independent integer combinations of the desirable messages are recovered in each receiver .each integer combination , referred to as desirable combined message ( dcm ) , is recovered by decoding two independent equations . here , with a new formulation , the equations can be optimized such that a dcm is recovered with maximum achievable rate . despite of its much less complexity , we prove that our sequential approach in optimizing dcms achieves the same rate as the optimal approach when we can jointly optimize dcms ( theorem 1 ) . 
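the recovery idea can be made concrete with a toy numerical example (the message values and coefficients below are arbitrary, and the exact recovery step used in the paper may differ in detail): receiver k decodes two equations whose coefficient vectors share the same dcm part a over its own messages and the same ucm part c over the interfering messages, but with different integer factors; eliminating the ucm between the two decoded values leaves an integer multiple of the desired combination.

```python
import numpy as np

# toy illustration: two decoded integer equations -> one dcm (values arbitrary)
x_des = np.array([3, -1])        # desirable messages of receiver k
x_int = np.array([2, 5, -4])     # messages of the interfering transmitters

a = np.array([1, 2])             # dcm coefficient vector
c = np.array([1, -1, 3])         # ucm coefficient vector
d1, e1 = 1, 1                    # integer factors of the first equation
d2, e2 = 2, 3                    # integer factors of the second (independent) equation

t1 = d1 * (a @ x_des) + e1 * (c @ x_int)   # value decoded from equation 1
t2 = d2 * (a @ x_des) + e2 * (c @ x_int)   # value decoded from equation 2

recovered = e2 * t1 - e1 * t2              # the ucm cancels
print(recovered, (e2 * d1 - e1 * d2) * (a @ x_des))   # equal
```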
instead of np hard exhaustive search in optimizing the equations of ifmr , we present a practical and efficient suboptimal algorithm to maximize the achievable rate in polynomial time. the proposed algorithm iterates in three steps , one for the coefficient factors of the two equations and the others for the coefficient vectors of an undesirable combined message ( ucm ) and dcm .the associated problem with each step is solved in polynomial time .the convergence of the proposed algorithm is also proved ( theorem 3 ) .hence , our ifmr scheme provides a low - complexity scheme in recovering the desirable messages through a few decodings of near - optimal integer combinations in interference channels .our scheme is different and much less complex compared to the iflr scheme that uses a large number of equations for message recovery .particularly , the complexity of ifmr does not depend on the number of transmitters and the data streams of the interfering transmitters .also , as opposed to ifia , our scheme requires no csit .we evaluate the performance of our scheme and compare the results with the minimum mean - square error ( mmse ) and zero - forcing ( zf ) , as well as the iflr schemes .the results indicate that , in all signal - to - noise ratios ( snrs ) , our ifmr scheme outperforms the mmse and zf schemes , in terms of achievable rate , substantially .also , the ifmr scheme achieves slightly less rates in moderate snrs , compared to iflr , with significantly less implementation complexity .in addition , our proposed algorithm provides a tight lower bound for the results obtained via the np hard exhaustive search .for instance , consider a three - pair interference channel with single antenna at the transmitters / receivers .then , the achievable rate of the exhaustive search is only 1 db better than our proposed algorithm in 1 bit / channel use .the remainder of this paper is organized as follows . in sectionii , the system model and iflr are briefly described .section iii presents the ifmr scheme .numerical results are given in section iv .finally , section v concludes this paper .* notations : * the operators , , , , and stand for conjugate transpose , determinant , trace , frobenius norm , and the space spanned by the column vectors of matrix , respectively .the and are the dimensional integer field and dimensional real field , respectively .moreover , denotes .the operator refers to the generalized inequality associated with the positive semidefinite cone .also , represents the partial derivative of function with respect to vector .finally , and stand for the identity matrix and the vector with all elements equal to one , respectively .we consider -pair interference channels where transmitters are transmitting independent data streams to receivers simultaneously , as shown in fig 1 . it is assumed that there is no coordination among the transmitters and receivers .we assume no csit and , as a result , we do not use beamforming .this is an acceptable assumption in simple setups with no coordinations and central processing units in which channel state information ( csi ) feedback and beamforming is infeasible . incorporating partial csitis left for future work . in this system ,the -th transmitter and receiver are equipped with and antennas , respectively .the matrix denotes the channel matrix from transmitter to receiver , with dimension .the elements of are assumed to be independent identically distributed ( iid ) gaussian variables with variance .we focus on real - valued channels. 
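for concreteness, one realization of this channel model can be drawn as follows; the number of pairs, the antenna counts and the variance are illustrative, and the stacking of the desired and interfering channels into one matrix per receiver (desired block first) is a convention assumed here that mirrors the aggregate form used below.

```python
import numpy as np

# one realization of the k-pair real-valued interference channel (illustrative sizes)
rng = np.random.default_rng(0)
K, n_t, n_r, sigma2 = 3, 2, 2, 1.0           # pairs, tx/rx antennas, channel variance

H = {(l, k): np.sqrt(sigma2) * rng.standard_normal((n_r, n_t))
     for l in range(K) for k in range(K)}     # H[l, k]: channel from transmitter k to receiver l

# aggregate channel seen by receiver l: desired block first, then the interferers
H_agg = {l: np.hstack([H[l, l]] + [H[l, k] for k in range(K) if k != l])
         for l in range(K)}
print(H_agg[0].shape)                         # (n_r, K * n_t)
```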
however , our scheme and results are directly applicable to complex - valued channels via a real - valued decomposition , as in [ 5]-[6 ] .transmitter exploits a lattice encoder with power constraint to map message streams to a real - valued codeword matrix with dimension , where is the codeword length . according to fig .1 , the received signal at receiver is given by where is iid additive white gaussian noise with the variance .since the objective of our proposed approach is to limit the complexity of the iflr scheme [ 5 ] for interference channels , it is interesting to briefly review this scheme as follows . the readers familiar with the iflr scheme can skip this part .let us rewrite ( 1 ) as where ] .since is of size , the iflr scheme recovers independent equations from . the independent equations with equation coefficient vectors ( ecvs ) , , totally shown by matrix ^* ] , where ^*} ] and ] by solving ( 13 ) with the + assumption of given and .step ii : update by solving ( 21 ) with the assumption of + given and ] .+ } \right),r\left ( { \left [ { \begin{array}{*{20}{c } } { d_2^k{\mathbf{a}^k}}\\ { e_2^k{\mathbf{c}^k } } \end{array } } \right ] } \right ) } \right\} ] , we can not achieve a tight bound with an approach similar to step ii .therefore , we propose a search over integer space which can obtain an efficient suboptimal solution of ( 22 ) as follows .first , we optimize ( 22 ) with a relaxation on the constraint as .then , we search over a -dimensional quantization sphere which has the obtained real valued solution as its center , and find the best candidate according to ( 22 ) .since is a convex quadratic function , the proposed search can achieve a tight suboptimal solution of ( 22 ) when the quantization sphere has sufficiently large radius .the quantization scheme will be further discussed in the sequel . here, the problem ( 22 ) is relaxed as to obtain the solution of ( 23 ) in closed form , we use the same procedure as in [ 16 , subsection iii.a ] to convert ( 23 ) to an equivalent problem as where , and is an auxiliary parameter . as details are given in appendix iv ,the solution of ( 24 ) is where is obtained according to the considered three cases in appendix iv .also , functions and are defined as follows as a polynomial - time approach to search over the quantization sphere , we can consider slowest descent lines with directions of the eignevectors of the hessien of the cost function in ( 12 ) , i.e. , , which crosses the center in ( 25 ) .then , the closest integer points to the lines and independent of are checked to find the best candidate .this approach is based on the slowest descent method which can efficiently search over discrete points [ 17 ] .assume that the quantization radius is and the number of the slowest descent lines is .it is straightforward to show that our approach needs to search over at most integer points . through the following lemma, we can exclude those from the quantization sphere for which the rate ( 7 ) are zero .it also determines the maximum required radius for the quantization sphere , which guarantees to include the optimal solution of ( 22 ) .the lemma is of interest because it reduces the complexity for searching in the quantization sphere .assume , , , , and are given .the search space with the following norm leads to rate in ( 7 ) . where is the maximum singular value of .see appendix v. algorithm 1 , summarized in table 1 , is iterated until a convergence threshold , considered by the algorithm designer , is reached . 
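two of the ingredients above can be prototyped compactly. the first routine evaluates the achievable rate of a single equation with integer coefficient vector a; the expression used is the standard integer-forcing computation rate, max(0, -1/2 log2(a^T (I + snr H^T H)^{-1} a)), quoted from the integer-forcing literature rather than transcribed from equation (5), so it should be checked against the paper's normalization before reuse. the second routine is a generic relax-and-round search in the spirit of step iii: a convex quadratic is minimized over nonzero integer vectors by first solving the real relaxation and then testing integer points along lines through the relaxed minimizer in the directions of the hessian eigenvectors (the slowest-descent idea).

```python
import numpy as np

def equation_rate(a, H, snr):
    """standard integer-forcing computation rate of coefficient vector a (bits/use)."""
    M = np.linalg.inv(np.eye(H.shape[1]) + snr * H.T @ H)
    return max(0.0, -0.5 * np.log2(a @ M @ a))

def slowest_descent_search(Q, b, radius=3, steps=61):
    """approximate argmin over nonzero integer v of v^T Q v + b^T v.
    candidates are rounded points on lines through the real minimizer along the
    eigenvectors of the hessian Q (a generic sketch of the step-iii search)."""
    v_star = np.linalg.solve(2 * Q, -b)              # unconstrained real minimizer
    _, eigvecs = np.linalg.eigh(Q)
    best, best_val = None, np.inf
    for i in range(Q.shape[0]):
        for t in np.linspace(-radius, radius, steps):
            cand = np.rint(v_star + t * eigvecs[:, i]).astype(int)
            if not cand.any():
                continue                             # exclude the all-zero vector
            val = cand @ Q @ cand + b @ cand
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

# usage with the aggregate channel drawn earlier (snr in linear scale):
# snr = 10 ** (20 / 10)
# a   = np.array([1, 0, 1, -1, 0, 2])
# print(equation_rate(a, H_agg[0], snr))
```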
in the simulation results , we will show the performance of our polynomial time suboptimal algorithm in comparison with the np - hard optimal exhaustive search of the equations and ucm and dcm coefficients over the cost function of ( 10 ) .the following theorem proves the convergence of algorithm 1 .algorithm 1 is convergent .see appendix vi .in this section , we provide simulation results that demonstrate the performance of the proposed ifmr scheme .consider a three pair interference channel in which each node is equipped with antennas , unless otherwise stated .the elements of the channel matrices are assumed to have gaussian distribution with variance 1 , i.e. , , .the additive white gaussian noise has .the convergence threshold parameter in algorithm 1 is set to .we average over 10000 randomly generated channel realizations . in figs . 3 and 4, we evaluate the achievable rates of our proposed ifmr scheme and compare the results with the state - of - the - art works , i.e. , mmse and zf [ 2 ] , and iflr scheme [ 5 ] , for and , respectively . as observed , algorithm 1 can achieve almost the same performance as in the optimal exhaustive search - based scheme .for instance , in the cases with and , the performance degradation , compared to the optimal exhaustive search - based approach , is less than 1 db in 1 bit / channel use and 1 db in 2 bit / channel use , respectively .it is also observed that the ifmr scheme outperforms the conventional mmse and zf receivers at all snrs , and the performance gap increases with snr which is because of the increase in interference .also , the ifmr scheme achieves slightly higher rates compared to the iflr scheme at low snrs .it is due to the fact that the optimal equations recovered from ( 6 ) may have zero elements with high probability at low snrs [ 18 ] , whereby a subset of the equations would be enough for recovering the desirable messages .note that the iflr scheme leads to better achievable rates compared to the ifmr scheme at high snrs , at the expense of much higher complexity .for example , the iflr scheme has 2 db improvement compared to the ifmr scheme at 1.15 bit / channel use in the one antenna case ( fig .3 ) and 2.5 db improvement at 2.5 bit / channel use in two antennas case ( fig .4 ) . that is because , in comparison with ifmr , the iflr scheme has more flexibility in decoding the interference as equations . in fig . 5, we investigate the average number of required iterations as a function of snr for the cases with .it is observed that for all considered snrs less than 5 iterations are required for the algorithm convergence .thus , our algorithm can be effectively applied in delay - constrained applications .6 shows the throughput versus the target rate for the case with .the throughput is defined as , e.g. , [ 19 , eq .( 4 ) ] as observed , for small values of , the throughput increases with the rate almost linearly , because with high probability the data is correctly decoded . on the other hand , the outage probability increases and the throughput goes to zero for large values of .moreover , depending on the snr , there may be a finite optimum for the target rate in terms of throughput . in figs .7 and 8 , the effect of the number of receiving antennas is assessed on the achievable rate and the outage probability of the proposed algorithm when each transmitter has one antenna .the outage probability is defined as . here , = 1 bit / channel use is considered . 
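the two figures of merit used in this section, the outage probability and the throughput r(1 - p_out), can be estimated with a short monte carlo loop. `achievable_rate(H, snr)` stands for whatever routine returns the scheme's rate for one channel realization (for example the output of algorithm 1); it is an assumed placeholder, not code from the paper, and the channel dimensions are illustrative.

```python
import numpy as np

def outage_and_throughput(achievable_rate, r_target, snr, n_rx, n_tx, trials=10000):
    """monte carlo estimate of p_out = pr{rate < r_target} and of r_target*(1 - p_out)."""
    rng = np.random.default_rng(1)
    rates = np.array([achievable_rate(rng.standard_normal((n_rx, n_tx)), snr)
                      for _ in range(trials)])
    p_out = float(np.mean(rates < r_target))
    return p_out, r_target * (1.0 - p_out)
```

sweeping r_target reproduces the qualitative shape discussed above: an almost linear rise of the throughput at small target rates and a collapse to zero once outages dominate.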
as can be observed from fig .7 , the achievable rate increases with the number of antennas .for example , in sum rate of 2 bit / channel use , the system with improves the power efficiency by 4 db and 10 db compared to the cases with and , respectively . also , from fig .8 , the ifmr scheme results in diversity , i.e. , the slope of the outage probability curves at high snrs , approximately equal to .in this paper , we proposed a low - complexity linear receiver scheme , referred to as ifmr , for interference channels . in ifmr , an integer combination of the desirable messages of each receiver can be recovered with the help of only two equations independently of the number of transmitters and data streams .we first proved that the sequential selection of the integer combinations can achieve the same rate as in the optimally joint selection .then , we proposed a suboptimal algorithm to optimize the required equations and integer combinations in polynomial time and proved its convergence . despite of its much less complexity for ifmr , our proposed algorithm can achieve almost the same performance as in the exhaustive search scheme .the ifmr scheme also shows a significantly better performance , in terms of the achievable rate , in comparison with the mmse and zf schemes .let the independent dcm coefficient vectors be selected by the sequential method in ( 10 ) . according to the constraint in ( 10 ) , we have .hence , the achievable rate of the sequential technique is .suppose also that the independent set , i.e. , , are the optimum solution of ( 9 ) . without loss of generality , assume that .thus , the achievable rate of the optimal technique is . using contradiction , assume .hence , . from ( 10 ) , is obtained from two equations which have the maximum achievable rate among all set of two equations whose associated dcm coefficient vectors are linearly independent of .this implies that every dcm coefficient vector with a rate higher than is linearly dependent to the set .thus , we conclude exists in the . as a result , for all , we have which indicates that .however , this contradicts the assumption of linear - independency of these equations . 
hence , .for every vector in , we can write then , from the cauchy - schwarz inequality , we conclude .thus , is semi - definite .from the definition of in ( 14 ) and adding then subtracting a term , we can write - \left [ { \begin{array}{*{20}{c } } { { \mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { { \mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } } \right]{\left ( { \frac{1}{{\text{snr}}}{\mathbf{i } } + { \mathbf{h}_{kk}}\frac{{{\mathbf{a}^k}{\mathbf{a}^k}^*}}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\frac{{{\mathbf{c}^k}{\mathbf{c}^k}^*}}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}\mathbf{h}_k^ * } \right)^ { - 1}}\\ \times \left [ { \begin{array}{*{20}{c } } { { \mathbf{h}_{kk}}{\mathbf{a}^k}}&{{\mathbf{h}_k}{\mathbf{c}^k } } \end{array } } \right]+ \left [ { \begin{array}{*{20}{c } } { { \mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { { \mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } } \right]{\left ( { \frac{1}{{\text{snr}}}{\mathbf{i } } + { \mathbf{h}_{kk}}\frac{{{\mathbf{a}^k}{\mathbf{a}^k}^*}}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\frac{{{\mathbf{c}^k}{\mathbf{c}^k}^*}}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}\mathbf{h}_k^ * } \right)^ { - 1 } } \left [ { \begin{array}{*{20}{c } } { { \mathbf{h}_{kk}}{\mathbf{a}^k}}&{{\mathbf{h}_k}{\mathbf{c}^k } } \end{array } } \right ]\\- \left [ { \begin{array}{*{20}{c } } { { \mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { { \mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } } \right]{\left ( { \frac{1}{{\text{snr}}}{\mathbf{i } } + { \mathbf{h}_{kk}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\mathbf{h}_k^ * } \right)^ { - 1}}\left [ { \begin{array}{*{20}{c } } { { \mathbf{h}_{kk}}{\mathbf{a}^k}}&{{\mathbf{h}_k}{\mathbf{c}^k } } \end{array } } \right].\end{gathered}\ ] ] according to matrix inverses identities in [ 20 , eqs . 
( 159 ) and ( 165 ) ] , we can rewrite ( 30 ) as + \text{snr}\left [ { \begin{array}{*{20}{c } } { \frac{1}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}{\mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { \frac{1}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}{\mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } } \right ] \left [ { \begin{array}{*{20}{c } } { \frac{1}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}{\mathbf{h}_{kk}}{\mathbf{a}^k}}&{\frac{1}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}{\mathbf{h}_k}{\mathbf{c}_k } } \end{array } } \right ] \biggr)^ { - 1 } \\+ \left [ { \begin{array}{*{20}{c } } { { \mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { { \mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } } \right ] \biggl\{\left ( { \frac{1}{{\text{snr}}}{\mathbf{i } } + { \mathbf{h}_{kk}}\frac{{{\mathbf{a}_k}{\mathbf{a}_k}^*}}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\frac{{{\mathbf{c}_k}{\mathbf{c}_k}^*}}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}\mathbf{h}_k^ * } \right)^ { - 1}\\ \times \left ( { { \mathbf{h}_{kk}}\mathbf{h}_{kk}^ * - { \mathbf{h}_{kk}}\frac{{{\mathbf{a}_k}{\mathbf{a}_k}^*}}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\mathbf{h}_k^ * - { \mathbf{h}_k}\frac{{{\mathbf{c}_k}{\mathbf{c}_k}^*}}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}\mathbf{h}_k^ * } \right ) \left ( { \frac{1}{{\text{snr}}}{\mathbf{i } } + { \mathbf{h}_{kk}}\mathbf{h}_{kk}^ * + { \mathbf{h}_k}\mathbf{h}_k^ * } \right)^ { - 1}\biggr\ } \\ \times \left[ { \begin{array}{*{20}{c } } { { \mathbf{h}_{kk}}{\mathbf{a}^k}}&{{\mathbf{h}_k}{\mathbf{c}_k } } \end{array } } \right].\end{gathered}\ ] ] it is straightforward to show that matrices , , and with + \text{snr}\left [ \begin{array}{*{20}{c } } { \frac{1}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}{\mathbf{a}^k}^*\mathbf{h}_{kk}^*}\\ { \frac{1}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}{\mathbf{c}^k}^*\mathbf{h}_k^ * } \end{array } \right ] \left [ { \begin{array}{*{20}{c } } { \frac{1}{{{\mathbf{a}^k}^*{\mathbf{a}^k}}}{\mathbf{h}_{kk}}{\mathbf{a}^k}}&{\frac{1}{{{\mathbf{c}^k}^*{\mathbf{c}^k}}}{\mathbf{h}_k}{\mathbf{c}^k } } \end{array } } \right ] \biggr)^ { - 1},\nonumber\end{aligned}\ ] ] are positive definite . according to lemma 1 , since and are positive semi - definite matrices , the matrix with is also semi - definite .hence , the overall matrix , which is sum of a positive definite matrix and a semi - definite matrix , is positive definite .for ( 24 ) , we further define a function where minimizes for given .let denote the solution of .there are three cases according to the relationship of and , one of which includes the solution of ( 24 ) . :if , we have hence , ( 24 ) is changed to which can be effectively solved by setting the derivative of with respect to equal to zero . hence , according to ( 12 ) , the optimal is given by which respectively leads to : if , then we have thus , similar to case 1 , we can find the as : if , then we have in which can be found by the bisection method . in this case ,( 24 ) is rephrased as which can be solved by setting the derivative of with respect to equal to zero . with the same arguments and using some manipulations , is obtained by where equation with ecv has rate ( 5 ) equal to zero if [ 5 ] .thus , the rate ( 7 ) is zero if or accordingly , we should have + or , which completes the proof .for each , assume , where corresponds to the -th iteration .for the iteration of step i , we have , in step ii , , and in step iii , . according to ,the latter is guaranteed when we assume the quantization sphere has sufficiently large radius to find a suitable . 
even for a small quantization sphere with no candidate, we can update as which in the worst case of leads to . hence , at the end of iteration . in this way , in each iteration , the function either decreases or remains unchanged , and is lower bounded by zero .thus , the proposed algorithm is convergent. 1 d. tse and p. viswanath , _ fundamentals of wireless communication_. cambridge univ .press , 2005 .s. verdu , _ multiuser detection_. cambridge univ . press , 1998 .r. lupas and s. verdu , `` linear multiuser detectors for synchronous code - division multiple - access channels , '' _ ieee trans .inf . theory _ ,123 - 136 , jan . 1989 .o. oyman and a. paulraj , `` design and analysis of linear distributed mimo relaying algorithms , '' _ iee proc .565 - 572 , aug . 2006 .j. zhan , b. nazer , u. erez , and m. gastpar , `` integer - forcing linear receivers , '' _ ieee trans .inf . theory _ ,60 , no . 12 , pp . 7661 - 7685 , dec .b. nazer and m. gastpar , `` compute - and - forward : harnessing interference through structured codes , '' _ ieee trans .inf . theory _ ,6463 - 6486 , oct .azimi - abarghouyi , m. nasiri - kenari , b. maham , and m. hejazi , `` integer forcing - and - forward transceiver design for mimo multi - pair two - way relaying , '' _ ieee trans . veh .8865 - 8877 , nov .a. sakzad and e. viterbo , `` full diversity unitary precoded integer - forcing , '' _ ieee trans .wireless commun . _ , vol .14 , no . 8 , pp . 4316 - 4327 , aug . 2015 .o. ordentlich and u. erez , `` precoded integer - forcing universally achieves the mimo capacity to within a constant gap , '' _ ieee trans .inf . theory _ ,323 - 340 , jan . 2015 .o. ordentlich , u. erez , and b. nazer , `` successive integer - forcing and its sum - rate optimality , '' in _ proc .ieee 51st annu .allerton conf .commun . , control , comput ._ , monticello , il , usa , oct .2013 , pp .282 - 292 .v. r. cadambe and s. a. jafar , `` interference alignment and degrees of freedom of the k -user interference channel , '' _ ieee trans .inf . theory _ ,54 , no . 8 , pp .3425 - 3441 , aug .v. ntranos , v. r. cadambe , b. nazer , and g. caire , `` integer - forcing interference alignment , '' in _ proc .ieee isit2013 _ , istanbul , turkey , july 2013 , pp .574 - 578 .a. k. lenstra , h. w. lenstra , l. lovasz , `` factoring polynomials with rational coefficients , '' _ math ._ , v. 261 , 1982 , pp .515 - 534 .j. park and s. boyd , `` a semidefinite programming method for integer convex quadratic minimization '' , submitted to _siam journal on optimization_. accessed on april 2015 .[ online ] .available : https://arxiv.org/abs/1504.07672v5 m. grant , s. boyd , and y. ye , `` cvx : matlab software for disciplined convex programming , '' 2006 .[ online ] .available : http://www.stanford.edu/ boyd / cvx y. liang , vv .veeravalli , hv .poor , `` resource allocation for wireless fading relay channels : max - min solution , '' _ ieee trans .inf . theory _ ,3432 - 3453 , sep . 2007p. spasojevic and c. n. georghiades , `` the slowest descent method and its application to sequence estimation , '' _ ieee trans .9 , pp . 1592 - 1604 , sep .m. hejazi and m. nasiri - kenari , `` simplified compute - and - forward and its performance analysis , '' _ iet commun ._ , vol . 7 , no . 18 , pp . 2054 - 2063 , dec . 2013. b. makki , t. svensson , t. eriksson and m. debbah , `` on feedback resource allocation in multiple - input - single - output systems using partial csi feedback , '' _ ieee trans .816 - 825 , march 2015 .k.b . 
petersen and m. s. pedersen, _the matrix cookbook_. technical university of denmark, 2006.
in this paper , we propose a scheme referred to as integer - forcing message recovering ( ifmr ) to enable receivers to recover their desirable messages in interference channels . compared to the state - of - the - art integer - forcing linear receiver ( iflr ) , our proposed ifmr approach needs to decode considerably less number of messages . in our method , each receiver recovers independent linear integer combinations of the desirable messages each from two independent equations . we propose an efficient algorithm to sequentially find the equations and integer combinations with maximum rates . we evaluate the performance of our scheme and compare the results with the minimum mean - square error ( mmse ) and zero - forcing ( zf ) , as well as the iflr schemes . the results indicate that our ifmr scheme outperforms the mmse and zf schemes , in terms of achievable rate , considerably . also , compared to iflr , the ifmr scheme achieves slightly less rates in moderate signal - to - noise ratios , with significantly less implementation complexity .
nowadays graphics processing units ( gpu ) play a rather important role in high - performance computing ( hpc ) .the proportion of computing systems equipped with the special accelerators ( gpus , mic , etc . ) is growing among supercomputers , which is reflected in the well - known top500 list of the most powerful supercomputing systems of the world . the most popular programming languages for general - purpose computing on gpu ( gpgpu )are cuda ( for nvidia gpus only ) and open computing language ( opencl ) .currently more than 80% of all scientific researches are using gpu accelerators performed with cuda .one of the main approaches to study high - energy physics ( hep ) phenomena is the lattice monte carlo simulations . in 1974kenneth g. wilson proposed a formulation of quantum chromodynamics ( qcd ) on a space - time lattice a lattice gauge theory ( lgt ) , which allows to calculate infinite - dimensional path integral with the procedure of computation of finite sums .lgt has many important features , in particular , lgt makes it possible to study low - energy limit of qcd , which is not achievable by analytic methods . in the limit of an infinite number of lattice sites and zero lattice spacing ,lgt becomes an ordinary quantum field theory ( qft ) .numerical results , obtained by means of lattice approximation , depend on number of lattice sites , the using of lattices with large size is preferable . moreover , some phenomena could be studied on big lattices only , because small lattices are not sensitive to such effects .the use of large lattices puts special demands on computer systems on which the investigation is performed .thus , the need for high computing performance in addition to the existence of well parallelized algorithms makes the application of gpus for lattice simulations particularly important .now hep is one of the main consumers of supercomputing facilities .major hep research collaborations develop software environments to achieve their scientific goals ( see below ) .the standard practice is to incorporate into packages special utilities for data exchange among collaborations . while software development programmers optimize their code according to the hardware available for collaboration .so , due to severe competition among hardware manufacturers it is necessary to take into consideration the cross - platform principles while constructing a new hep package . of all scientific researches using gpu accelerators are performed with opencl , scaledwidth=60.0% ] obviously , each software package has limited scope of scientific tasks , which could be solved with it .current hep development leads to emerging the problems beyond this scope .the spontaneous vacuum magnetization at high temperature , phase transition behavior in external fields , dependence of the phase transition order in o(n ) models on coupling constant value and so on are some of such tasks . in 2008we created the ids package , which allows to research quantum effects in external chromomagnetic field .it was written in ati il for amd / ati gpus and provided fast derivation of a huge statistic , desired for solving applied tasks .after the deprecation to maintain ati il by amd , the demand to port the package to a modern gpgpu language has been appeared .currently of all scientific researches using gpu accelerators are performed with opencl ( see figure [ fig : openclstat ] ) .so , opencl was chosen to provide a multi - platform usage . 
in this paper we present the qcdgpu package and describe its program architecture and capabilities. the general aim of the package is the production of lattice gauge field configurations for prescribed models, followed by statistical processing of different measurable quantities. the current version of qcdgpu allows the study of su(2) and su(3) gauge theories as well as o(n) models. the set of available groups can be extended by linking a file with the appropriate algebra and the core of the monte carlo procedure for the particular group. the number of space-time dimensions is one of the run-time parameters in qcdgpu; space-time is assumed by default. the list of measurable quantities may be changed according to the problem investigated. all lattice simulations as well as measurements can be performed with either single or double precision. the package is easily scalable to the number of available devices. the paper is organized as follows. an overview of existing packages is given in sect.[sect : relworks ]. the structure of the qcdgpu package is shown in sect.[sect : architecture ]. the description of the multi-gpu mode and the capability of distributed simulations are provided in sect.[sect : multigpu ]. some performance results are shown in sect.[sect : performance ]. the last sect.[sect : discussion ] is devoted to a discussion of the scope of the package and summarizes the results. as a great amount of new experimental data has appeared, modern high-energy physics requires extremely resource-intensive computations, in particular monte carlo lattice simulations. therefore, only big scientific collaborations which have enough computation time on supercomputers can run such simulations. as a rule, such collaborations (ukqcd, usqcd, twqcd, ptqcd, etc.) have their own software packages for simulations. the most well-known among them are: * * fermiqcd * : an open c++ library for the development of parallel lattice quantum field theory computations, * * milc * : an open code of high-performance research software written in c for doing lattice gauge theory simulations on several different (mimd) parallel computers, * * qdp++/chroma * : a package supporting data-parallel programming constructs for lattice field theory and in particular lattice qcd. the first paper on the application of gpus in hep lattice simulations was published in 2007. the authors of this work used opengl as the programming language and for the first time pointed out the need to store lattice data in the form of four-component vectors. shortly after the publication, nvidia unveiled the new cuda architecture, and opengl ceased to be used as a gpgpu computation language in further works. recently some open-source software packages targeted at gpus have been developed: * * quda * : a library for performing calculations in lattice qcd on cuda-ready gpus, * * ptqcd * : a collection of lattice su(n) gauge production programs for cuda-ready gpus, * * culgt * : code for gauge fixing in lattice gauge field theories with cuda-ready gpus, and several other packages with closed-access source codes ( , , , , , ). some hep collaborations link special gpu libraries to their projects to enable gpgpu computing without code refactoring. in particular, the milc collaboration uses the quda package, but only in single-device mode for now. the usqcd collaboration has also powered its qdp++/chroma software with cuda. most of the gpu-targeted packages are based on the nvidia cuda environment.
However, the development of the HPC market makes cross-platform heterogeneous computing increasingly important. In spite of the unchallenged leadership of CUDA, software packages adapted for CUDA-ready devices cannot be executed on other (even hardware-compatible) accelerators because of its closed standard. The QCDGPU package architecture is based on the principle of full platform independence. All modules of the package are written in C++ and OpenCL with minimal dependence on external libraries. QCDGPU runs equally well on the Windows operating system (OS) and on Linux without any code modifications. The OS independence is implemented through the inclusion of a special file, platform.h, which adapts the OS-dependent commands by means of precompiler directives (a minimal sketch of this approach is given below, after the overview of the package blocks). The package has been tested on different OSs, on OpenCL SDKs of different vendors, and on several devices: * *OS:* Windows XP, Windows 7, openSUSE 11.4, openSUSE 12.2; * *SDK:* NVIDIA CUDA SDK 5.5, AMD APP SDK 2.8.1, Intel SDK for OpenCL Applications 2013; * *Devices:* AMD Radeon HD7970, HD6970, HD5870, HD5850, HD4870, HD4850 (single-precision mode), NVIDIA GeForce GTX 560 Ti, NVIDIA GeForce GTX 560M, Intel Core i7-2600, Intel Core i7-2630QM, AMD Phenom II X6. The package can be executed on all versions of OpenCL (1.0, 1.1, 1.2) without any code changes. The schematic diagram of QCDGPU is shown in figure [fig:structure]. The core of the package is the block clinterface, which provides the interaction of the host code with the computing devices: it performs all the services for the preparation of devices, the launching of kernels, and the release of host memory and devices. The next important block contains the physical model description (suncl or oncl, depending on the model under investigation). Memory organization on the host in accordance with the physical model (gauge group, space-time dimension, etc.) is done in this block, as well as the preparation and configuration of kernels in accordance with the simulation conditions. Algorithms based on pseudo-random number generators form the basis of the Monte Carlo procedure; the library PRNGCL generates the pseudo-random numbers with the required generator selection. The block big lattice is designed to allow simulations on large lattices and the use of multiple devices on a single host. The package performs all the necessary calculations on the computing devices and delivers the final simulation results to the host. Statistical analysis of the results over the run is performed by the block data analysis. Validation of the data is performed by the block cpu results checking, which produces control measurements of the required quantities on the last gauge configuration by CPU means. This block is basically intended for debugging (for example, if a device has non-ECC memory); it can independently produce the corresponding lattice gauge configurations from the same pseudo-random numbers as the device.
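To illustrate the platform.h mechanism mentioned above, the following fragment shows one possible way of hiding OS-dependent calls behind precompiler directives. It is only a minimal sketch; the macro and function choices are our own illustrative assumptions, not the actual contents of the platform.h file shipped with QCDGPU.

// platform_sketch.h - illustrative only; names and structure are assumptions.
#ifndef PLATFORM_SKETCH_H
#define PLATFORM_SKETCH_H

#if defined(_WIN32) || defined(_WIN64)
    #include <windows.h>
    // Windows: Sleep() takes milliseconds, paths use backslashes
    #define SLEEP_MS(ms)   Sleep(ms)
    #define PATH_SEPARATOR "\\"
#else
    #include <unistd.h>
    // POSIX (Linux): usleep() takes microseconds
    #define SLEEP_MS(ms)   usleep((ms) * 1000)
    #define PATH_SEPARATOR "/"
#endif

// Host code can now call SLEEP_MS() and build file paths with PATH_SEPARATOR
// without scattering OS-specific #ifdef blocks through the program.
#endif // PLATFORM_SKETCH_H

In this way all OS-specific details are concentrated in a single header, which is what allows the rest of the host code to compile unchanged on Windows and Linux.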
The cpu results checking block can be turned off to save host resources. The interaction of several copies of the main program on different hosts is handled by a separate program, qcdgpu-dispatcher. This program performs the task scheduling for the available copies of the main program according to the parameter space of the problem under investigation. The results of the simulations are written to separate text files that are sent for further processing by external means. Each file contains the startup parameters needed to reproduce the run, as well as the run averages and a table of averaged quantities for the individual configurations. In addition, to allow an interrupted simulation to be resumed, the package regularly saves its computation state; this makes it possible to interrupt the simulation at any time and provides basic fault tolerance (against power or hardware failures). The saving frequency is set at the beginning of the simulation by the corresponding parameter. The saved file with the computation state is portable, so the calculation can be continued on another device. A detailed description of the package blocks follows. The block clinterface is designed to hide all the preparatory work for OpenCL kernel startup from the main program, while fine adjustment of all programming units remains available. Every memory object and kernel obtains its own ID number, and the subsequent kernel execution, binding of memory objects to kernels, output of results, and so on are carried out through this ID number. A similar principle is already present in the OpenCL standard, but there the numbering of objects serves mainly for memory usage monitoring, and users are not given access to these internal ID numbers. Since a lattice simulation implies multiple launches of the same kernels, the block clinterface allows the parameters of each kernel launch to be set once as defaults; afterwards only the kernel ID appears among the launch command arguments, which makes the program code shorter. There is also a separate control of the workgroup size in the case of the Intel OpenCL SDK, as it returns overestimated workgroup sizes for some devices. The unit also monitors compute device errors: all errors are recorded in a .log file, and noncritical errors do not stop the program. The block furthermore performs caching of previously compiled programs to reduce their launch time. A built-in compute cache is implemented only in the NVIDIA CUDA SDK; the AMD APP SDK stores only the last compiled program, while the Intel SDK for OpenCL Applications recompiles at each startup. Even with the compute cache of the NVIDIA CUDA SDK there is a problem with the recompilation of dependent files (included in the device-side OpenCL program by the directive #include): SDK versions up to 5.5 do not monitor changes of such files. All of the above made it necessary to create our own compute cache. This compute cache is realized by creating .bin files containing the compiled code for a particular device with distinct compilation parameters. Alongside these files, .inf files are created, in which the compilation-specific and additional parameters (program number, platform, device, program options, MD5 hash of the program source, compilation timestamp) are recorded. For each source code an MD5 hash is calculated, which serves as a de facto ID of the source code version. New .bin and .inf files are created if the startup parameters change.
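The cache lookup can be pictured roughly as follows. This is a minimal sketch under assumed names (the .inf field layout and the helper functions are illustrative and not taken from the actual QCDGPU sources); the MD5 hash of the source is assumed to be computed elsewhere.

// cache_sketch.cpp - illustrative compute-cache lookup, not the actual QCDGPU code.
#include <fstream>
#include <string>

struct InfRecord {                 // metadata stored next to each .bin file
    std::string device, options, md5;
};

static InfRecord read_inf(const std::string& path) {
    InfRecord r; std::ifstream f(path);
    std::getline(f, r.device); std::getline(f, r.options); std::getline(f, r.md5);
    return r;                      // fields stay empty if the file does not exist
}

// Returns the path of a reusable binary, or an empty string if a rebuild is needed.
std::string find_cached_binary(const std::string& inf_path,
                               const std::string& bin_path,
                               const std::string& device,
                               const std::string& options,
                               const std::string& source_md5) {
    InfRecord rec = read_inf(inf_path);
    bool hit = (rec.device == device) && (rec.options == options)
            && (rec.md5 == source_md5);      // source unchanged since last build
    return hit ? bin_path : std::string();   // on a miss the caller recompiles and
                                             // rewrites both the .bin and .inf files
}

On a miss the caller recompiles the OpenCL source and overwrites the stored binary and metadata, which matches the overwrite-on-hash-change behaviour described next.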
When the MD5 hash of the source code changes, the old .bin and .inf files are overwritten. This internal compute cache can be turned off in the startup parameters. The unit can also perform run-time profiling of kernels and memory objects, which helps to optimize new kernels while they are being designed; by default, profiling is turned off. Startup parameters are passed to kernels in three ways: 1. through parameters defined by the internal precompiler; 2. through constant buffers; 3. directly, by binding values as kernel arguments. From the performance point of view the first way is undoubtedly the preferred one, but it requires recompilation of the kernel; therefore only rarely changed parameters are passed this way (lattice geometry, gauge group, precision, and so on). Frequently changed parameters that are common to all kernels (coupling constant values, magnetic field flux, and so on) are passed mainly by the second way. Parameters specific to a particular kernel (reduction size, memory offsets, etc.) are passed by the third way. This division allows efficient use of the total computational time, which consists of both the program execution time and its compilation time. The part of QCDGPU that is responsible for the physical model description consists of the modules suncl and oncl, which provide the simulation of SU(N) gluodynamics and of O(N) models, respectively. As is well known, the gauge fields of the group SU(N) are represented by complex matrices, which are associated with the lattice links; in the case of O(N) models the fields are given by N-component vectors, which are associated with the lattice sites. This makes it possible to unify the storage of lattice data in memory in the following way (a minimal indexing sketch is given at the end of this subsection). The fastest index is the number of the lattice site; the next index is the spatial direction of the lattice link (in the case of an O(N) model this index is not used); the slowest index is connected with the gauge group. Group matrices are represented as structures of a defined number of 4-vectors, each of which contains a part of the corresponding matrix; this choice is dictated by the memory architecture of GPU devices. In order to carry out a lattice update, the lattice is traditionally divided into even and odd sites (checkerboard scheme) and, if necessary, into separate parts, which makes it possible to study big lattices (see below). A (pseudo-)heat-bath algorithm is used for the SU(N) model update, and an improved Metropolis algorithm is used for the O(N) update. Thanks to the equalization of array lengths, as well as of offsets in accordance with the workgroup size, coalesced memory access is achieved, which has a positive effect on the overall performance. All pseudo-random numbers (PRNs) needed for kernel operation are produced by our own library PRNGCL. The library is a port and further development of the library PRNGCAL, written for AMD/ATI GPUs in ATI CAL. The most popular pseudo-random number generators (PRNGs) used in HEP lattice simulations (RANMAR, RANECU, XOR128, XOR7, MRG32K3A, RANLUX with different luxury levels) are implemented in it. Implementations of the Park-Miller PRNG and of a "constant generator", which produces a given constant, are included for testing and debugging purposes. The generator to be used is selected by a single external parameter, which makes it possible to check the stability of the obtained results with respect to the PRNG used.
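Returning to the memory layout described above, the following fragment illustrates one possible linear indexing with the site index running fastest. The function and parameter names are our own illustrative choices, not the actual QCDGPU kernel code.

// layout_sketch.h - illustrative indexing for the unified lattice storage.
#include <cstddef>

// Each SU(N) link matrix is stored as several 4-component vectors; for every
// such 4-vector the site index runs fastest, then the link direction, then
// the group component, so that neighbouring work-items touch neighbouring
// memory addresses (coalesced access).
inline std::size_t link_index(std::size_t site, std::size_t dir, std::size_t comp,
                              std::size_t n_sites, std::size_t n_dirs) {
    return site + n_sites * (dir + n_dirs * comp);
}

// For an O(N) model the direction index is absent and the fields live on sites:
inline std::size_t site_index(std::size_t site, std::size_t comp, std::size_t n_sites) {
    return site + n_sites * comp;
}

With this ordering, consecutive work-items that process consecutive sites read consecutive memory addresses for each 4-vector, which is exactly the coalescing condition mentioned above.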
By default, the package generates PRNs using a number of threads equal to the value of the parameter CL_DEVICE_IMAGE3D_MAX_WIDTH (the maximal width of a 3D image) returned by the device. In practice this parameter is connected with the GPU device architecture and does not depend on the vendor. Our observations show that using this parameter as the number of launched threads gives the best performance; users can also choose the number of threads for PRN production manually. Since almost all of the implemented generators lack a general scheme for multi-stream execution (apart from MRG32K3A), PRNG parallelization is achieved through different initializations of the seed tables: every PRNG thread keeps its own seed table with its own initial values. The generator initialization is controlled by the value of a single external parameter (randseries), from which the PRNG seed tables are initialized; if this parameter equals zero, the system time is used instead. The value of randseries uniquely reproduces the simulation results, so this value, together with the name of the PRNG used, is written to the output files. One of the main features of the QCDGPU package is the possibility of launching it in multi-GPU mode, in which several devices on the host system are used for one simulation. Evidently, the lattice must then be divided into parts; in the general case the lattice is divided into unequal parts, which partly hides the difference in the performance of the devices. Since the package is mainly designed for the study of finite-temperature effects, the lattice is divided along the first spatial coordinate direction. This is because the temperature is associated with the time direction, and the high-temperature case corresponds to a short lattice extent in that direction. A simulation on a divided lattice differs from a full-lattice simulation only in the need to exchange the values of the boundary sites. To divide the lattice into parts, a so-called second-level checkerboard scheme is used: the odd and even parts of the lattice are updated alternately. In this case the exchange of boundary information is performed in asynchronous mode: while the even sites are being updated, the transfer of the boundary-site information takes place, and vice versa. Hence, when dividing the lattice it is preferable to use an even number of parts. Lattice division is also used when the memory of a computing device is not sufficient for the simulation. The package also allows simulations to be run on several hosts at the same time. Each copy of the main computational program is launched on the corresponding host, while parameter passing and launch control are carried out by the external program qcdgpu-dispatcher. A very simple scheme is used for the interaction of the control and computational programs. When the computational program is launched in this mode, it waits for the special file finish.txt to be deleted before the simulation begins; the presence of finish.txt in the folder shared by qcdgpu and qcdgpu-dispatcher means "the previous run is completed, the results can be collected; waiting for the next run". After collecting the files with the simulation results, the control program creates a special file init.dat, in which the parameters of the next run are written, and deletes finish.txt. As soon as the computational program no longer finds the file finish.txt, it reads the new launch parameters from the file init.dat, and the cycle repeats.
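From the computational program's side the file-based handshake can be sketched as follows; the polling interval and helper names are our own assumptions, and the real qcdgpu-dispatcher protocol may differ in details.

// handshake_sketch.cpp - illustrative worker-side loop for the finish.txt /
// init.dat protocol described above; not the actual QCDGPU implementation.
#include <chrono>
#include <cstdio>
#include <string>
#include <thread>

static bool file_exists(const std::string& path) {
    if (std::FILE* f = std::fopen(path.c_str(), "rb")) { std::fclose(f); return true; }
    return false;
}

void worker_loop(const std::string& shared_dir) {
    const std::string finish = shared_dir + "/finish.txt";
    const std::string init   = shared_dir + "/init.dat";
    for (;;) {
        // finish.txt present: previous run is done, dispatcher has not issued a new task yet
        while (file_exists(finish))
            std::this_thread::sleep_for(std::chrono::seconds(5));
        // finish.txt deleted and init.dat written: read the new parameters and simulate
        // read_parameters(init); run_simulation();   (placeholders for the real work)
        // signal completion so the dispatcher can collect the result files
        if (std::FILE* f = std::fopen(finish.c_str(), "w")) std::fclose(f);
    }
}

The dispatcher side mirrors this loop: it waits for finish.txt to appear, collects the results, writes a new init.dat and deletes finish.txt.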
When several copies of the computational program are used, qcdgpu-dispatcher sequentially looks through their catalogues and assigns a new task to the first free host. The names of the result files contain the simulation finish time and a unique prefix of the computational program copy, which makes each file name unique. To demonstrate some benchmarks, we performed MC simulations for the O(4), SU(2) and SU(3) models on various lattices on the following GPUs: NVIDIA GeForce GTX 560M (Windows 7), NVIDIA GeForce GTX 560 Ti, AMD Radeon HD 7970 (openSUSE 12.2), HD 6970 and HD 5870 (openSUSE 11.4). For all MC simulations the "hot" lattice initialization and the RANLUX pseudo-random number generator with luxury level 3 were used. For the O(4) model several tries were used to update each lattice site (this provides an acceptance rate of up to 50%). For the SU(2) and SU(3) models reunitarization was used. One bulk sweep was performed to decorrelate the configurations to be measured. There are two types of sweeps: thermalization sweeps and working sweeps. During a thermalization sweep each lattice element (site or link) is updated; in a working sweep the lattice is updated and, in addition, some quantities are measured, so working sweeps are somewhat longer than thermalization sweeps. Here we present the timings for working sweeps only. (Table [tab:tab1]: the timings of one working sweep for the O(4), SU(2) and SU(3) models in single- and double-precision mode, in seconds.) The performance results are collected in table 1. The gauge model and the computing devices used in the MC simulations are shown in the first and second columns, respectively. Due to the memory limit of a particular computing device, the whole lattice may have to be divided into several parts to perform the MC simulation; the number of parts is given in the third column. Obviously, a bigger number of lattice parts means a bigger number of boundary-element transmissions between host and device (or between devices in multi-GPU mode), which reduces the overall performance. The last two columns contain the timings for the single- and double-precision simulations, respectively. Among the data in the table, the case of a cooperative simulation of one lattice on two different OpenCL platforms is also shown (AMD Radeon HD 7970 on AMD APP SDK 2.8 and NVIDIA GeForce GTX 560 Ti on NVIDIA CUDA 5.5). In this case the timings are only 10-25% better than for the simulation on the single best device; nevertheless, simultaneous multi-platform simulations might be interesting for very big lattices. Cardoso and Bicudo reported timings per single sweep (in double precision) for the SU(2) and SU(3) models, obtained on an NVIDIA GeForce GTX 580 with NVIDIA CUDA. We performed the same performance measurements with QCDGPU (a lattice sweep with reunitarization, without any measurements), in double precision, for SU(2) and SU(3) on the NVIDIA GeForce GTX 560 Ti and on the AMD Radeon HD 7970. In actual MC simulations we often use the trivial parallelization scheme, in which each computing device receives from the dispatcher a unique parameter set for the simulation of the whole lattice; the best performance results are obtained in this case.
Undoubtedly, owing to the many tuning parameters (such as the number of lattice parts, the sizes of the parts for different computing devices, the part sequence, the workgroup sizes, etc.), the performance of the QCDGPU package in multi-GPU mode is the subject of a separate study. In the present work a new package, QCDGPU, is introduced. This package is designed for Monte Carlo simulations of SU(N) gluodynamics in an external field and of the O(N) model on OpenCL-compatible devices. The package allows the simulations to be carried out for very big lattices in single- or multi-GPU mode, with single or double precision. The package makes low demands on the host CPU and practically does not load it, which makes it possible to use the package alongside other traditional computational programs. If the size of the lattice under investigation allows it to be placed in device memory, all necessary operations are carried out in device memory and the result is returned to the host program after the simulation finishes. If the lattice is too big to fit in device memory, it is divided into parts and the separate parts are handled by all the computing devices available on the host. The QCDGPU package allows the simultaneous run of several instances of the computational program on all tested OSs. The current version of the program uses the trivial parallelization scheme to distribute the computing: every computing node gets a separate simulation task. The OS type on each host used is not important; the main requirement is to create a folder with shared access on the host. At present this is done within the local network and, for remote nodes, by means of a virtual private network (VPN). Since only small pieces of text information are exchanged between the task scheduling module qcdgpu-dispatcher and the hosts, no noticeable load is imposed on the network. The built-in mechanism for saving the computational state when certain conditions are met (every n sweeps or every m seconds) allows long calculations to be interrupted without the threat of data loss and continued on another available device; it is also very useful in the case of frequent power failures. Our plans for the further development of the package include: * using the built-in profiling mechanism, as well as additional micro-benchmarks, to make automatic adjustments of the package startup parameters; * the realization of the RHMC algorithm for the inclusion of fermionic fields on the lattice; * running one simulation on several hosts at the same time on the basis of MPI, which is very important when fermionic fields are taken into account. A mixed-precision mode will be realized to reduce the amount of exchanged information. In this work we did not dwell on the details of the implemented algorithms or on a detailed description of the package parameters. The package is being constantly developed; in the first place, the methods and algorithms needed for the actual research of the quantum chromoplasma laboratory of Dnepropetrovsk National University are realized. Physical results obtained with the package have been published elsewhere.
The multi-GPU open-source package QCDGPU for lattice Monte Carlo simulations of pure SU(N) gluodynamics in an external magnetic field at finite temperature and of the O(N) model is developed. The code is implemented in OpenCL, tested on AMD and NVIDIA GPUs and on AMD and Intel CPUs, and may run on other OpenCL-compatible devices. The package has minimal external library dependencies and is OS platform-independent. It is optimized for heterogeneous computing through the possibility of dividing the lattice into non-equivalent parts, which hides the difference in performance of the devices used. QCDGPU has a client-server part for distributed simulations. The package is designed to produce lattice gauge configurations as well as to analyze previously generated ones. QCDGPU may be executed in a fault-tolerant mode. The core of the Monte Carlo procedure is based on the PRNGCL library for pseudo-random number generation on OpenCL-compatible devices, which contains several of the most popular pseudo-random number generators. _Keywords:_ lattice gauge theory, Monte Carlo simulations, GPGPU, OpenCL
In the preceding work, referred to as paper I, we presented a method of evaluating thermonuclear reaction rates that is based on the Monte Carlo technique. The method allows statistically meaningful values to be calculated: the recommended reaction rate is derived from the median of the total reaction rate probability density function, while the 0.16 and 0.84 quantiles of the cumulative distribution provide values for the _low rate_ and the _high rate_, respectively. We refer to such rates as "Monte Carlo reaction rates" in order to distinguish them from results obtained using previous techniques (which we call "classical reaction rates"). As explained in paper I, we will strictly avoid using the statistically meaningless expressions "lower limit" (or "minimum") and "upper limit" (or "maximum") of the total reaction rate. For detailed information on our method, see paper I. In the present work, referred to in the following as paper II, we present our numerical results of charged-particle thermonuclear reaction rates for A=14 to 40 nuclei on a grid of temperatures ranging from T=0.01 GK to 10 GK. These reaction rates are assumed to involve _bare nuclei in the laboratory_. The rates of reactions induced on lighter target nuclei are not easily analyzed in terms of the present techniques and require different procedures (see, for example, Descouvemont et al. for an evaluation of big bang nuclear reaction rates using R-matrix theory). The higher target mass cutoff, A=40, was entirely dictated by limitations in resources and time. For use in stellar model calculations, the results presented here must be corrected, if appropriate, for (i) electron screening at elevated densities, and (ii) thermal excitations of the target nucleus at elevated temperatures. Details will be provided below. We emphasize that the present reaction rates are overwhelmingly based on _experimental nuclear physics information_. Only in exceptional situations, for example, when a certain nuclear property had not been measured yet, did we resort to nuclear theory. In the subsequent work (paper III) we will provide the complete nuclear physics data input used to derive our new Monte Carlo reaction rates. In the fourth paper of this series (paper IV) we compare our new reaction rates to previous results. Paper II is organized as follows. In Sec. 2 we summarize briefly our Monte Carlo technique. An overview of the literature sources for the nuclear physics input data used to derive our results is provided in Sec. 3. Detailed examples of how to interpret the new Monte Carlo reaction rates are given in Sec. 4. The extrapolation of the laboratory reaction rate to high temperatures is described in Sec. 5, while modifications of the reaction rate that are necessary for use in stellar model calculations are discussed in Sec. 6. The calculation of reverse rates is described in Sec. 7. A summary is given in Sec. 8. Appendix A contains information regarding statistical hypothesis tests. Our Monte Carlo reaction rates are presented in tabular and graphical format in App. B. The expressions used for calculating thermonuclear reaction rates are given in paper I.
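For orientation we quote here the familiar narrow-resonance form of the reaction rate, which is representative of the expressions referred to above; the full set of formulas, including broad-resonance and nonresonant contributions, is that of paper I:

$$ N_A\langle\sigma v\rangle \;=\; \frac{1.5399\times 10^{11}}{(\mu T_9)^{3/2}}\,\sum_i (\omega\gamma)_i\, e^{-11.605\,E_i/T_9} \quad {\rm cm^3\,mol^{-1}\,s^{-1}}, $$

where $\mu$ is the reduced mass in amu, $T_9$ the temperature in GK, and the resonance energies $E_i$ and strengths $(\omega\gamma)_i$ are in MeV. The exponential dependence on $E_i$ is the reason why the Q-value uncertainties discussed below are important.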
Each nuclear input quantity entering in the calculation of the reaction rates is associated with a specific probability density function: a Gaussian distribution for resonance energies; a lognormal distribution for measured resonance strengths, nonresonant S-factors and partial widths; a Porter-Thomas distribution for measured upper limits of partial widths; and so on. Once a probability density function is chosen for each input quantity, the total reaction rate and the associated uncertainty can be estimated using a Monte Carlo calculation. In particular, a random value is drawn for each input quantity according to the corresponding probability density function, and the total reaction rate computed from these sample values is recorded. The procedure is repeated many times until enough samples of the reaction rate have been generated to estimate the properties of the output reaction rate probability density function with the required statistical precision (a schematic illustration of this sampling procedure is given below). Correlations between quantities have been considered carefully: for example, if the strength of a narrow resonance is estimated from a reduced width or a spectroscopic factor, then the uncertainty in the resonance energy enters both in the Boltzmann factor and in the penetration factor. Thus the same random value of the resonance energy, drawn in this case from a Gaussian probability density function, must be used in both expressions. Our main goal is to calculate the probability density function for the _total_ reaction rate and to characterize this distribution in terms of certain parameters: for the central (recommended) value of the reaction rate we chose the _median_, which is equal to the 0.50 quantile of the cumulative distribution, while the low and high values of the reaction rate are chosen to coincide with the 0.16 and 0.84 quantiles, respectively. With this choice the confidence level (or the coverage probability) amounts to 68%. A reliable estimation of more detailed information, such as the uncertainty contribution of each nuclear input quantity to the total rate, does not seem feasible at this time because of limitations in present-day computing power. See paper I for details. A computer code, `RatesMC`, has been written in order to calculate total reaction rates from resonant and nonresonant input using the Monte Carlo technique. For resonances the code computes reaction rates either from analytical expressions or, if required, by numerical integration. The latter procedure is computationally slow, since one integration has to be performed for each randomly sampled set of input quantities, but gives the most accurate results if the partial widths of a resonance are known. Upper limits of nuclear input quantities and interferences between levels are also taken into account in the random sampling. The user controls the total number of random samples and hence the precision of the Monte Carlo results. For more information, see paper I. The reaction rate output of the code is discussed in Sec. [output]. A detailed description of the input file to `RatesMC` can be found in paper III. An overview of the reaction rates evaluated in the present work is provided in Tab. [tab:master]. For each of the reactions listed there we also give the Q-value and some literature sources for the nuclear data input. The list of literature sources provided here is not meant to be comprehensive.
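Before turning to the table, the sampling loop itself can be sketched for a single narrow resonance. This is a minimal illustration of the technique, not the `RatesMC` code; the numerical inputs are hypothetical and the rate expression is the standard narrow-resonance form quoted earlier.

// mc_rate_sketch.cpp - schematic Monte Carlo sampling of a narrow-resonance rate.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Example inputs (hypothetical values): resonance energy and strength in MeV.
    const double T9 = 0.1, redmass = 0.967;     // temperature (GK), reduced mass (amu)
    const double Er = 0.150,  dEr = 0.002;      // Gaussian: mean and standard deviation
    const double wg = 1.0e-9, fu  = 1.3;        // lognormal: median and factor uncertainty
    const double mu_wg = std::log(wg), sig_wg = std::log(fu);

    std::mt19937 gen(2013);
    std::normal_distribution<double>    sample_E(Er, dEr);
    std::lognormal_distribution<double> sample_wg(mu_wg, sig_wg);

    std::vector<double> rate;
    for (int i = 0; i < 10000; ++i) {
        double E = sample_E(gen), w = sample_wg(gen);
        // N_A<sigma v> = 1.5399e11 (mu T9)^{-3/2} * wg * exp(-11.605 E / T9)
        rate.push_back(1.5399e11 * std::pow(redmass * T9, -1.5) * w
                       * std::exp(-11.605 * E / T9));
    }
    std::sort(rate.begin(), rate.end());
    // The 0.16, 0.50 and 0.84 quantiles give the low, recommended and high rates.
    auto q = [&](double p) { return rate[static_cast<std::size_t>(p * (rate.size() - 1))]; };
    std::printf("low %.3e  median %.3e  high %.3e\n", q(0.16), q(0.50), q(0.84));
    return 0;
}

With many contributing resonances the same scheme applies; correlated quantities simply reuse the same drawn value, as described in the text above.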
for more information on the nuclear physics input ,see the captions to the reaction rate tables in app .b. the complete nuclear physics input used in the present work is provided in paper iii .@ lll + & & + + & & + (p,) & 10207.42.00 & grres et al . + (,) & 6226.3.6 & gai et al . , lugaro et al . + (,) & 4414.6.5 & tilley et al . , grres et al . + (,) & 4013.74.07 & de oliveira et al . , wilmes et al. + (,) & 3529.1.6 & mao et al . , fisker et al . + (p,) & 600.27.25 & iliadis et al . + (,) & 4729.85.00 & tilley et al . , mohr + (p,) & 5606.5.5 & fox et al . , chafa et al . + (p,) & 1191.82.11 & chafa et al . , newton et al . + (p,) & 7994.8.6 & wiescher et al . , becker et al . + (p,) & 3981.09.62 & lorentz - wirzba et al . , la cognata et al . + (,) & 9668.1.6 & giesen et al . , dababneh et al . + (p,) & 3923.5.4 & bardayan et al . + (p,) & 6411.2.6 & adekola , bardayan et al . + (p,) & 2882.15.73 & adekola , bardayan et al . + (p,) & 2193 & vancraeynest et al . , couder et al . + (p,) & 2431.69.14 & rolfs et al . + (,) mg & 9316.55.01 & schmalbrock et al . + (p,) & 6739.6.4 & iliadis et al . + (p,) & 8794.11.02 & grres et al . , hale et al . + (,) mg & 10614.78.03 & wolke et al . , ugalde et al . + (,n) mg & .29.04 & jaeger et al . , koehler + (p,) mg & 5504.18.34 & dauria et al . , ruiz et al . + (p,) mg & 7580.3.4 & stegmller et al . , jenkins et al . + (p,) mg & 11692.68.01 & hale et al . , rowland et al . + (p,) & 2376.13.00 & hale et al . , rowland et al . + (p,) & 122 & caggiano et al . , he et al . + (p,) & 1872 & visser et al . , lotay et al . + (p,) & 2271.6.5 & powell et al . , engel et al . + (,) & 9984.14.01 & strandberg et al . + (p,) & 6306.45.05 & iliadis et al . , iliadis et al . + (p,) & 6306.45.05 & iliadis et al . , iliadis et al . + (p,) & 6078.15.05 & iliadis et al . , iliadis et al . + (p,) & 8271.05.12 & iliadis et al . , iliadis et al . + (p,) & 3304 & herndl et al . , schatz et al . + (p,) & 3408 & herndl et al . + (p,) & 5513.7.5 & parpottas et al . , wrede + (p,) & 7462.96.16 & vogelaar et al . , lotay et al . + (p,) & 11585.11.12 & iliadis et al . , harissopulos et al . + (p,) mg & 1600.96.12 & endt , iliadis et al . + (p,) & 861 & caggiano et al . , gade et al . + (p,) & 2063 & iliadis et al . + (p,) & 2748.8.6 & endt , graff et al . + (p,) & 5594.5.3 & iliadis et al . + (p,) & 7296.93.19 & iliadis et al . + (p,) & 2460 & herndl et al. + (p,) & 4399 & iliadis et al . , bardayan et al . + (p,) & 8863.78.21 & iliadis et al . + (p,) & 1915.97.18 & iliadis et al . + (p,) & 290 & iliadis et al . , wrede et al . + (p,) & 1574 & iliadis et al . + (p,) & 2276.7.4 & iliadis et al . + (p,) & 2420 & herndl et al . + (p,) & 3343 & herndl et al . , schatz et al . + (p,) & 8506.97.05 & iliadis et al . , rpke et al . + (p,) & 1866.21.13 & iliadis et al . , rpke et al . + (p,) & 84.5.7 & herndl et al . , trinder et al . + (p,) & 1668 & iliadis et al . + (p,) & 1857.63.09 & iliadis et al . + (p,) & 2556 & herndl et al . , doornenbal et al . + (p,) & 538 & iliadis et al . , hansper et al . + (p,) & 1085.09.08 & iliadis et al . 
Notes to Tab. [tab:master]: The superscripts refer to the total, ground state and isomeric state rate, respectively. Reaction Q-values are from Audi, Wapstra and Thibault, unless noted otherwise; the quoted uncertainty represents one standard deviation, and an entry ".00" implies that the experimental uncertainty amounts to a few eV only, which is entirely negligible in the present astrophysical context. See paper III for a complete bibliography. Individual Q-values marked in the table are taken from Eronen et al.; from Mukherjee et al.; obtained using results from Yazidjian et al.; from Yoneda et al.; or calculated using the new mass value from Mukherjee et al. We comment briefly on the Q-values which, except for a few updates, are adopted from Audi, Wapstra and Thibault. It is gratifying to see that for many reactions the uncertainties in the Q-value are small. In some cases, however, the uncertainties are rather large. Notice that resonance energies are frequently calculated by subtracting the Q-value from a measured excitation energy (see paper I). Thus, even if the excitation energy has been measured precisely, the total uncertainty in the resonance energy may be dominated by a large uncertainty in the Q-value. It is certainly worthwhile to improve such Q-values with large uncertainties in future measurements, considering that the resonance energy enters exponentially in the expressions for resonant reaction rates (see Eqs. (1) and (10) of paper I). A number of charged-particle thermonuclear reactions in the A=14 to 40 range have been excluded from the present evaluation. Below we provide a few examples and the reasons for disregarding them, so that the reader obtains an impression of the scope of the present work. Two proton-induced reactions are excluded here since their rates are strongly influenced by interfering resonance tails and nonresonant contributions; such cases cannot be analyzed easily using the present procedure and require a different approach, such as R-matrix theory. Similar comments apply to a third proton-induced reaction. We disregarded one proton-capture reaction despite the fact that new information became available recently: (i) the spin-parities of all expected low-energy resonances are tentative; (ii) the mirror state assignments are uncertain; (iii) the spectroscopic factors are unknown; and (iv) there may be missing levels close to the proton threshold in the Mg compound nucleus. These sources of error prohibit a reliable estimation of reaction rates based on Monte Carlo techniques. A recent study explored the nuclear structure of the relevant levels in order to derive a rate for another proton-induced reaction. Unfortunately, the proton decay of the expected low-energy resonances could not be measured because of the detection threshold; in addition, the spin-parities as well as the γ-ray partial widths of most resonances are unknown. We felt that too much information is missing at this time for calculating reliable reaction rates. Finally, we attempted to calculate Monte Carlo rates for one further proton-induced reaction. However, the effort proved futile although several recent studies addressed the nuclear structure of important levels in the compound nucleus. At present, the mirror state assignments are uncertain and most of the spectroscopic factors and γ-ray partial widths are unknown.
For these reasons, Monte Carlo rates cannot be calculated reliably. Numerical values of the Monte Carlo thermonuclear reaction rates are given in columns 2-4 of the tables presented in App. [tabgraph] for a grid of temperatures between T=0.01 GK and 10 GK. Each reaction rate table is accompanied by two figures. The first of these displays the ratios of the high and low reaction rates to the recommended (median) rate; a visual inspection immediately reveals the reaction rate uncertainty at a given temperature. The second figure shows the Monte Carlo reaction rate probability density functions (in red) for six selected temperatures, T=0.03, 0.06, 0.1, 0.3, 0.6 and 1.0 GK; they span conditions encountered in red giants, AGB stars, classical novae, massive stars and type I X-ray bursts. The complete nuclear physics input used to compute our reaction rates is presented in paper III. The new reaction rates are compared with previous results in paper IV. The parameters of the lognormal approximations to the reaction rate probability density functions are given in columns 5-6 of the reaction rate tables in App. [tabgraph]. The lognormal distributions are also displayed (in black) in the second figure following a given rate table. Note that the black line does _not_ represent a fit to the data; its parameters are directly derived from the distribution of randomly sampled reaction rates. That is, the lognormal parameters are computed from the expectation value and the variance of the logarithm of the sampled rates (since $\mu = E[\ln x]$ and $\sigma^2 = V[\ln x]$; see Sec. 4.2 of paper I). A measure of the quality of the lognormal approximation, the Anderson-Darling (A-D) test statistic, is presented in column 6 of the rate tables and is also given in each panel of the following figures (denoted by "A-D" in both tables and figures). Details regarding the Anderson-Darling test are provided in App. [stattest]. In brief, a small value of the test statistic indicates that the Monte Carlo reaction rate probability density function is consistent with a lognormal distribution. For somewhat larger values the lognormal hypothesis is rejected by the Anderson-Darling test; nevertheless, the lognormal approximation still holds reasonably well in this range, as can be seen by inspecting the graphs following the rate tables, and thus seems adequate for use in reaction network calculations. For values well in excess of these the lognormal approximation is not only rejected by the Anderson-Darling test but, in addition, deviates _visually_ from the actual Monte Carlo reaction rate probability density function. Below we give some examples, ordered according to increasing complexity, in order to explain how to interpret and use our numerical results. The examples are for illustrative purposes only and have been simplified by disregarding minor rate contributions. For the full results, the reader should consult the rate tables. Consider first the reaction rate for proton capture on $^{28}$Si at a given temperature.
From Tab. [tab:si28pg] we find the recommended rate. This value represents the median Monte Carlo rate and is obtained from the 0.50 quantile of the cumulative reaction rate distribution. The 0.16 and 0.84 quantiles provide the low and high Monte Carlo reaction rates, respectively. The corresponding Monte Carlo reaction rate probability density function is shown in Fig. [fig:pdfsi28pg]a as a red histogram. The black solid line represents a lognormal approximation; the values of the lognormal parameters, $\mu$ and $\sigma$, are given in Tab. [tab:si28pg]. In addition, the Anderson-Darling test statistic is listed in the table. Its small value indicates that in this case the Monte Carlo reaction rate probability density function follows a lognormal distribution, as is also apparent from a visual inspection of Fig. [fig:pdfsi28pg]a. Consequently, we can point out a few interesting observations. First, from the relatively large value of $\sigma$, corresponding to a sizable factor uncertainty, it follows immediately that the reaction rate distribution is skewed (see Sec. 4.2 of paper I). Second, the median Monte Carlo reaction rate is related to the lognormal location parameter by $x_{med} = e^{\mu}$. Third, the values of the low and high Monte Carlo reaction rates are related to the lognormal parameters by $\sigma = \ln[(x_{high}/x_{low})^{1/2}]$ and $\mu = \ln[(x_{low}\,x_{high})^{1/2}]$ (Sec. 4.2 of paper I).

The columns of the reaction rate tables in App. [tabgraph] are to be read as follows. When the lognormal approximation is in agreement with the Monte Carlo reaction rate probability density function, which applies in the majority of cases, the parameters $\mu$ and $\sigma$ are related to the low, median and high Monte Carlo reaction rates by $\mu = \ln[(x_{low}\,x_{high})^{1/2}]$ and $\sigma = \ln[(x_{high}/x_{low})^{1/2}]$ (see Eq. (39) of paper I). Alternatively, the low, median and high rates can be obtained from the lognormal parameters by using $x_{low} = e^{\mu-\sigma}$, $x_{med} = e^{\mu}$ and $x_{high} = e^{\mu+\sigma}$ (see Eq. (40) of paper I). These relationships apply to a coverage probability of 68%. Also, the lognormal parameter $\sigma$ directly indicates the factor uncertainty, $f.u. = e^{\sigma}$, and the skewness of the reaction rate distribution; a small value of $\sigma$ corresponds to a nearly symmetric (that is, Gaussian) distribution, while for larger values the distribution is noticeably skewed (Sec. 4.2 of paper I).

A-D: Anderson-Darling test statistic, indicating how well the Monte Carlo reaction rates are approximated by a lognormal distribution. A large value indicates that the reaction rate probability density function is _not_ lognormal; for still larger values the lognormal approximation starts to deviate _visually_ from the reaction rate probability density function, as can be seen by inspecting the graphs following the reaction rate tables. Note that, regardless of the magnitude of the test statistic, the values of $\mu$ and $\sigma$ listed in the tables define a lognormal distribution of the same expectation value and variance as the actual Monte Carlo probability density function. If no value of A-D is provided, then the rates either have been found from extrapolation to high temperature (see below) or are determined by entirely different means (see comments).

( ): Values given in parentheses are usually _not_ obtained from the Monte Carlo method, but are found from extrapolation to elevated temperatures. In this case, the _recommended_ rates of column 3 are calculated by normalizing Hauser-Feshbach results to the Monte Carlo rate at the matching temperature (see Sec. [extrap]).
Above the matching temperature the value of the lognormal location parameter is found from the recommended (extrapolated) rate via $\mu = \ln(x_{med})$, while the lognormal width parameter is approximated by the value of $\sigma$ at the matching temperature and is held constant. The low and high rates are then obtained from Eqs. (39) and (40) of paper I. No value of A-D is provided in this case. In exceptional cases the extrapolated Hauser-Feshbach rates become smaller than the Monte Carlo rates; since this result is unphysical, the Hauser-Feshbach results are disregarded and the Monte Carlo rates are placed in parentheses above the matching temperature.

Each reaction rate table is accompanied by two figures. The first of these displays the ratios of the high and low reaction rates to the recommended rate at temperatures below the matching temperature; a visual inspection immediately reveals the reaction rate uncertainty at a given temperature. The second figure shows, for selected temperatures (T=0.03, 0.06, 0.1, 0.3, 0.6 and 1.0 GK), the Monte Carlo reaction rate probability density functions (in red), together with their lognormal approximations (in black). The latter curves are calculated with the lognormal parameters $\mu$ and $\sigma$ that are listed in columns 5 and 6 of the reaction rate tables. For a given reaction, each panel in the second figure displays the temperature, T, and the Anderson-Darling test statistic, A-D. Note that for each reaction the value of the matching temperature can easily be obtained in two ways: it is equal to the lowest temperature for which no value of A-D is listed; it is also equal to the highest temperature shown in the figure displaying the reaction rate uncertainties.

Example: for the $^{23}$Na(p,$\alpha$)$^{20}$Ne reaction at a selected temperature, we obtain from Tab. [tab:na23pa] the low, median and high reaction rates, the lognormal parameters $\mu$ and $\sigma$, and the A-D statistic. The reaction rate probability density function, assumed to follow a lognormal distribution, is determined by the lognormal parameters $\mu$ and $\sigma$; this function is plotted as a black line in the second figure following Tab. [tab:na23pa]. The Anderson-Darling statistic is large enough in this case that the A-D test rejects the hypothesis that the reaction rate probability density function is given by a lognormal distribution. Nevertheless, the deviation between the actual Monte Carlo reaction rate probability density function, shown as a red histogram, and the lognormal distribution is barely visible in the figure. Although the lognormal approximation is rejected by the A-D test, it is obvious that the black line is in close agreement with the red histogram. Consequently, the lognormal function given above represents a useful approximation for stellar model studies. The reader may also verify that the numerical results in the table agree with Eqs. (39) and (40) of paper I.
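For completeness, the Anderson-Darling statistic quoted in the tables can be computed from the Monte Carlo rate samples roughly as follows. This is a generic sketch of the standard A-squared formula applied to the log-rates, not the actual implementation used for the present tables; critical values and any small-sample corrections should be taken from App. [stattest].

// ad_sketch.cpp - generic Anderson-Darling statistic for testing whether the
// log-rates are normally distributed (i.e. the rates lognormal); illustrative only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

double anderson_darling_lognormal(std::vector<double> rates) {
    const std::size_t n = rates.size();
    // work with x = ln(rate); estimate mu and sigma from the sample itself
    std::vector<double> x(n);
    for (std::size_t i = 0; i < n; ++i) x[i] = std::log(rates[i]);
    double mu = 0.0, var = 0.0;
    for (double v : x) mu += v;
    mu /= n;
    for (double v : x) var += (v - mu) * (v - mu);
    const double sigma = std::sqrt(var / (n - 1));

    std::sort(x.begin(), x.end());
    // standard normal cumulative distribution via the complementary error function
    auto Phi = [](double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); };

    double A2 = -static_cast<double>(n);
    for (std::size_t i = 0; i < n; ++i) {
        const double Fi = Phi((x[i] - mu) / sigma);
        const double Fj = Phi((x[n - 1 - i] - mu) / sigma);
        A2 -= (2.0 * (i + 1) - 1.0) / n * (std::log(Fi) + std::log(1.0 - Fj));
    }
    return A2;   // larger values signal a stronger deviation from lognormality
}

The resulting value is then compared with the thresholds discussed above and in App. [stattest].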
+ & & & & & & + + & & & & & & + 0.010 & 2.66 & 3.61 & 5.06 & -4.245 & 3.24 & 1.96 + 0.011 & 1.81 & 2.49 & 3.46 & -4.053 & 3.27 & 1.33 + 0.012 & 9.98 & 1.38 & 1.91 & -3.882 & 3.24 & 1.73 + 0.013 & 4.60 & 6.30 & 8.84 & -3.729 & 3.26 & 1.64 + 0.014 & 1.81 & 2.49 & 3.57 & -3.591 & 3.29 & 3.62 + 0.015 & 6.35 & 8.67 & 1.22 & -3.467 & 3.25 & 2.04 + 0.016 & 1.97 & 2.73 & 3.83 & -3.353 & 3.30 & 1.29 + 0.018 & 1.51 & 2.07 & 2.92 & -3.150 & 3.30 & 1.27 + 0.020 & 8.54 & 1.17 & 1.64 & -2.977 & 3.25 & 1.93 + 0.025 & 2.80 & 3.80 & 5.35 & -2.628 & 3.25 & 2.72 + 0.030 & 3.94 & 5.37 & 7.56 & -2.363 & 3.26 & 2.87 + 0.040 & 1.86 & 2.53 & 3.52 & -1.978 & 3.19 & 2.82 + 0.050 & 2.83 & 3.90 & 5.43 & -1.705 & 3.28 & 1.97 + 0.060 & 2.28 & 3.09 & 4.39 & -1.497 & 3.28 & 3.81 + 0.070 & 1.19 & 1.62 & 2.24 & -1.332 & 3.22 & 2.42 + 0.080 & 4.68 & 6.38 & 8.85 & -1.195 & 3.20 & 2.46 + 0.090 & 1.48 & 2.03 & 2.83 & -1.080 & 3.19 & 1.22 + 0.100 & 3.90 & 5.33 & 7.56 & -9.823 & 3.26 & 3.59 + 0.110 & 9.39 & 1.27 & 1.75 & -8.959 & 3.12 & 1.85 + 0.120 & 2.02 & 2.75 & 3.80 & -8.189 & 3.18 & 1.66 + 0.130 & 4.02 & 5.46 & 7.58 & -7.503 & 3.23 & 3.05 + 0.140 & 7.61 & 1.02 & 1.40 & -6.875 & 3.11 & 2.55 + 0.150 & 1.36 & 1.80 & 2.50 & -6.302 & 3.10 & 4.34 + 0.160 & 2.34 & 3.10 & 4.26 & -5.760 & 2.98 & 4.78 + 0.180 & 6.51 & 8.37 & 1.12 & -4.765 & 2.72 & 8.20 + 0.200 & 1.67 & 2.08 & 2.66 & -3.859 & 2.37 & 5.92 + 0.250 & 1.24 & 1.45 & 1.74 & -1.920 & 1.73 & 8.47 + 0.300 & 5.81 & 6.58 & 7.55 & -4.119 & 1.35 & 6.31 + 0.350 & 1.92 & 2.14 & 2.42 & 7.670 & 1.20 & 3.93 + 0.400 & 4.97 & 5.53 & 6.15 & 1.711 & 1.07 & 1.23 + 0.450 & 1.09 & 1.20 & 1.34 & 2.493 & 1.02 & 1.77 + 0.500 & 2.14 & 2.34 & 2.58 & 3.155 & 9.57 & 6.17 + 0.600 & 6.25 & 6.84 & 7.50 & 4.227 & 9.28 & 1.28 + 0.700 & 1.43 & 1.57 & 1.72 & 5.057 & 9.39 & 5.18 + 0.800 & 2.74 & 3.01 & 3.31 & 5.707 & 9.55 & 3.00 + 0.900 & 4.59 & 5.05 & 5.58 & 6.226 & 9.95 & 6.04 + 1.000 & 6.95 & 7.72 & 8.52 & 6.647 & 1.03 & 3.46 + 1.250 & 1.47 & 1.63 & 1.81 & 7.401 & 1.05 & 3.00 + 1.500 & 2.42 & 2.69 & 2.99 & 7.898 & 1.06 & 2.43 + 1.750 & 3.46 & 3.83 & 4.24 & 8.252 & 1.02 & 6.91 + 2.000 & 4.54 & 5.01 & 5.52 & 8.519 & 9.87 & 3.11 + 2.500 & 6.81 & 7.40 & 8.08 & 8.911 & 8.80 & 8.33 + 3.000 & 9.08 & 9.81 & 1.06 & 9.193 & 7.82 & 2.15 + 3.500 & 1.13 & 1.21 & 1.29 & 9.402 & 6.94 & 2.05 + 4.000 & 1.34 & 1.41 & 1.50 & 9.559 & 5.87 & 1.86 + 5.000 & 1.68 & 1.75 & 1.84 & 9.773 & 4.77 & 9.99 + 6.000 & 1.92 & 2.00 & 2.08 & 9.902 & 4.08 & 2.73 + 7.000 & 2.09 & 2.17 & 2.25 & 9.983 & 3.82 & 2.62 + 8.000 & 2.20 & 2.28 & 2.36 & 1.003 & 3.69 & 2.31 + 9.000 & 2.26 & 2.35 & 2.43 & 1.006 & 3.63 & 1.69 + 10.000 & 2.29 & 2.38 & 2.46 & 1.008 & 3.60 & 1.50 + comments : resonance energies are deduced from excitation energies and the reaction q - value ( tab . [tab : master ] ) .for the first five resonances , between e and 597 kev , we use the strengths measured by grres et al . , their total widths being unknown or small .the next five resonances are broad and their partial widths are known .it must be pointed out that only the -ray transition to the ground state has been measured for the e and 1230 kev resonances , so that their contribution should be considered as a lower limit .following grres et al . , we calculate the direct capture contribution using experimental spectroscopic factors from bommer et al .only five transitions contribute significantly to the total s - factor , which can be parametrized as kev b ( with in kev ) .( note that the nonresonant s - factor reported by ref . 
does not represent the direct capture s - factor , but includes contributions from resonance tails . )+ & & & & & & + + & & & & & & + 0.010 & 1.29 & 2.66 & 6.09 & -1.233 & 8.05 & 9.05 + 0.011 & 1.30 & 2.74 & 6.59 & -1.187 & 8.08 & 6.51 + 0.012 & 7.71 & 1.63 & 3.99 & -1.146 & 8.29 & 1.11 + 0.013 & 2.92 & 6.42 & 1.55 & -1.109 & 8.31 & 8.00 + 0.014 & 8.14 & 1.75 & 4.30 & -1.076 & 8.41 & 1.00 + 0.015 & 1.59 & 3.53 & 8.48 & -1.046 & 8.45 & 8.01 + 0.016 & 2.55 & 5.43 & 1.35 & -1.019 & 8.49 & 9.89 + 0.018 & 3.24 & 7.18 & 1.74 & -9.699 & 8.59 & 8.47 + 0.020 & 2.04 & 4.58 & 1.16 & -9.282 & 8.78 & 9.42 + 0.025 & 8.77 & 2.08 & 5.30 & -8.442 & 9.02 & 5.63 + 0.030 & 5.39 & 1.25 & 3.30 & -7.801 & 9.19 & 7.40 + 0.040 & 8.24 & 1.69 & 3.98 & -6.849 & 8.27 & 1.09 + 0.050 & 1.72 & 6.25 & 1.44 & -6.053 & 1.01 & 6.69 + 0.060 & 6.04 & 3.52 & 9.48 & -5.431 & 1.30 & 1.05 + 0.070 & 5.29 & 3.66 & 1.02 & -4.973 & 1.42 & 1.23 + 0.080 & 1.62 & 1.18 & 3.33 & -4.628 & 1.46 & 1.31 + 0.090 & 2.50 & 1.76 & 4.96 & -4.355 & 1.42 & 1.25 + 0.100 & 2.39 & 1.53 & 4.23 & -4.137 & 1.36 & 1.16 + 0.110 & 1.54 & 8.86 & 2.43 & -3.956 & 1.29 & 1.05 + 0.120 & 8.24 & 4.02 & 1.05 & -3.802 & 1.21 & 9.64 + 0.130 & 3.41 & 1.41 & 3.59 & -3.670 & 1.11 & 8.17 + 0.140 & 1.25 & 4.57 & 1.07 & -3.552 & 1.03 & 7.49 + 0.150 & 3.93 & 1.28 & 2.78 & -3.446 & 9.47 & 6.19 + 0.160 & 1.17 & 3.38 & 6.76 & -3.347 & 8.95 & 5.00 + 0.180 & 7.81 & 1.91 & 3.79 & -3.165 & 8.44 & 1.45 + 0.200 & 4.02 & 9.11 & 2.17 & -3.000 & 8.84 & 4.12 + 0.250 & 1.31 & 3.45 & 9.64 & -2.635 & 9.87 & 5.28 + 0.300 & 1.02 & 1.48 & 2.61 & -2.254 & 5.20 & 8.26 + 0.350 & 8.03 & 9.89 & 1.24 & -1.842 & 2.26 & 5.96 + 0.400 & 2.47 & 3.00 & 3.68 & -1.502 & 2.02 & 3.67 + 0.450 & 3.59 & 4.36 & 5.36 & -1.234 & 2.01 & 7.57 + 0.500 & 3.03 & 3.68 & 4.53 & -1.021 & 2.01 & 5.87 + 0.600 & 7.22 & 8.77 & 1.08 & -7.036 & 2.01 & 5.91 + 0.700 & 6.72 & 8.15 & 1.00 & -4.806 & 2.00 & 5.75 + 0.800 & 3.49 & 4.23 & 5.19 & -3.160 & 2.00 & 5.65 + 0.900 & 1.23 & 1.49 & 1.83 & -1.899 & 1.99 & 5.79 + 1.000 & 3.34 & 4.03 & 4.95 & -9.037 & 1.98 & 5.83 + 1.250 & 1.94 & 2.33 & 2.85 & 8.511 & 1.94 & 6.57 + 1.500 & 6.14 & 7.34 & 8.91 & 1.998 & 1.86 & 8.07 + 1.750 & 1.40 & 1.66 & 2.00 & 2.816 & 1.75 & 1.06 + 2.000 & 2.65 & 3.10 & 3.68 & 3.441 & 1.62 & 1.67 + 2.500 & ( 6.86 ) & ( 7.80 ) & ( 8.96 ) & ( 4.361 ) & ( 1.36 ) & + 3.000 & ( 1.36 ) & ( 1.52 ) & ( 1.71 ) & ( 5.026 ) & ( 1.15 ) & + 3.500 & ( 2.28 ) & ( 2.50 ) & ( 2.78 ) & ( 5.529 ) & ( 1.04 ) & + 4.000 & ( 3.39 ) & ( 3.70 ) & ( 4.08 ) & ( 5.919 ) & ( 9.61 ) & + 5.000 & ( 5.83 ) & ( 6.35 ) & ( 6.94 ) & ( 6.455 ) & ( 8.86 ) & + 6.000 & ( 8.18 ) & ( 8.89 ) & ( 9.69 ) & ( 6.791 ) & ( 8.57 ) & + 7.000 & ( 1.02 ) & ( 1.11 ) & ( 1.21 ) & ( 7.011 ) & ( 8.63 ) & + 8.000 & ( 1.18 ) & ( 1.28 ) & ( 1.40 ) & ( 7.158 ) & ( 8.76 ) & + 9.000 & ( 1.30 ) & ( 1.42 ) & ( 1.55 ) & ( 7.258 ) & ( 8.88 ) & + 10.000 & ( 1.39 ) & ( 1.51 ) & ( 1.65 ) & ( 7.323 ) & ( 8.98 ) & + comments : similar to previous evaluations , we consider the first eight natural - parity levels above , and the first one below , the -particle threshold .resonance energies are deduced from excitation energies and the reaction q - value ( tab . [tab : master ] ) .resonance strengths and radiative widths for the seven higher - energy resonances have been measured by gai et al . .the total widths of the and 178 kev resonances , which are equal to the radiative widths , and of the and 2056 kev resonances , which are equal to the -particle widths , are known from the literature . 
for the -28 and 178 kev resonances we follow lugaro et al . by adopting for the -particle reduced widths values of and , respectively , based on the results of the -particle transfer experiment by cunsolo et al .we assume a factor of two uncertainty for the adopted value of the subthreshold resonance .note that in lugaro et al . there is an apparent confusion between two levels , as it is the 6.2 mev state that was observed by ref . , not the 6.4 mev state . the non - resonant s - factor is adopted from the measurement by grres et al .however , it includes contributions from both direct capture and from broad - resonance tails .since interference effects are expected between the levels , we assume a factor of 3 uncertainty in the non - resonant s - factor . above t = 2.13 gk the total rateis extrapolated using hauser - feshbach results .+ & & & & & & + + & & & & & & + 0.010 & 2.62 & 4.78 & 9.25 & -1.435 & 6.38 & 1.66 + 0.011 & 4.47 & 8.38 & 1.63 & -1.383 & 6.45 & 1.79 + 0.012 & 4.36 & 8.15 & 1.55 & -1.337 & 6.39 & 1.46 + 0.013 & 2.60 & 4.88 & 9.39 & -1.296 & 6.48 & 1.67 + 0.014 & 1.02 & 1.95 & 3.80 & -1.260 & 6.48 & 2.75 + 0.015 & 2.97 & 5.63 & 1.08 & -1.226 & 6.39 & 1.24 + 0.016 & 6.44 & 1.18 & 2.31 & -1.195 & 6.37 & 2.91 + 0.018 & 1.44 & 2.71 & 5.22 & -1.141 & 6.37 & 2.53 + 0.020 & 1.57 & 2.86 & 5.48 & -1.094 & 6.23 & 3.56 + 0.025 & 1.90 & 3.50 & 6.61 & -1.000 & 6.31 & 9.76 + 0.030 & 2.41 & 4.41 & 8.61 & -9.290 & 6.31 & 3.06 + 0.040 & 7.88 & 1.44 & 2.77 & -8.251 & 6.19 & 4.20 + 0.050 & 1.42 & 2.52 & 4.71 & -7.503 & 6.01 & 5.26 + 0.060 & 5.71 & 1.04 & 1.93 & -6.904 & 5.97 & 6.69 + 0.070 & 7.56 & 1.78 & 4.85 & -6.385 & 8.86 & 8.24 + 0.080 & 9.08 & 1.85 & 5.37 & -5.912 & 8.18 & 5.10 + 0.090 & 4.81 & 6.06 & 7.82 & -5.345 & 2.45 & 5.02 + 0.100 & 1.19 & 1.42 & 1.70 & -4.800 & 1.80 & 3.39 + 0.110 & 1.14 & 1.34 & 1.57 & -4.345 & 1.62 & 3.82 + 0.120 & 5.09 & 5.90 & 6.82 & -3.967 & 1.47 & 3.87 + 0.130 & 1.26 & 1.44 & 1.64 & -3.648 & 1.35 & 3.47 + 0.140 & 1.94 & 2.20 & 2.49 & -3.375 & 1.25 & 3.29 + 0.150 & 2.07 & 2.33 & 2.61 & -3.139 & 1.17 & 3.23 + 0.160 & 1.63 & 1.82 & 2.03 & -2.933 & 1.11 & 3.41 + 0.180 & 4.99 & 5.53 & 6.12 & -2.592 & 1.01 & 3.19 + 0.200 & 7.58 & 8.33 & 9.17 & -2.321 & 9.53 & 3.33 + 0.250 & 9.60 & 1.05 & 1.15 & -1.837 & 8.87 & 3.28 + 0.300 & 2.29 & 2.50 & 2.73 & -1.520 & 8.79 & 3.83 + 0.350 & 2.13 & 2.33 & 2.54 & -1.297 & 8.91 & 3.33 + 0.400 & 1.10 & 1.21 & 1.32 & -1.132 & 9.09 & 3.21 + 0.450 & 3.89 & 4.28 & 4.69 & -1.006 & 9.25 & 3.29 + 0.500 & 1.06 & 1.17 & 1.28 & -9.057 & 9.32 & 3.29 + 0.600 & 4.98 & 5.46 & 5.96 & -7.514 & 8.82 & 3.72 + 0.700 & 1.83 & 1.97 & 2.11 & -6.232 & 7.08 & 5.43 + 0.800 & 6.56 & 6.92 & 7.27 & -4.975 & 5.11 & 2.71 + 0.900 & 2.26 & 2.36 & 2.47 & -3.746 & 4.41 & 2.92 + 1.000 & 6.91 & 7.23 & 7.57 & -2.627 & 4.54 & 3.18 + 1.250 & 6.13 & 6.44 & 6.78 & -4.392 & 5.11 & 4.22 + 1.500 & 2.74 & 2.89 & 3.05 & 1.063 & 5.35 & 3.48 + 1.750 & 7.98 & 8.42 & 8.89 & 2.131 & 5.42 & 3.24 + 2.000 & 1.76 & 1.86 & 1.96 & 2.923 & 5.39 & 2.91 + 2.500 & 5.23 & 5.51 & 5.80 & 4.009 & 5.24 & 2.53 + 3.000 & 1.06 & 1.11 & 1.17 & 4.713 & 5.07 & 2.49 + 3.500 & 1.73 & 1.81 & 1.91 & 5.202 & 4.95 & 3.44 + 4.000 & 2.47 & 2.59 & 2.72 & 5.556 & 4.88 & 2.69 + 5.000 & 3.94 & 4.13 & 4.34 & 6.024 & 4.88 & 1.74 + 6.000 & 5.22 & 5.48 & 5.76 & 6.306 & 4.97 & 2.45 + 7.000 & 6.22 & 6.54 & 6.87 & 6.483 & 5.09 & 3.08 + 8.000 & 6.95 & 7.31 & 7.70 & 6.595 & 5.20 & 3.73 + 9.000 & 7.45 & 7.84 & 8.27 & 6.666 & 5.31 & 4.50 + 10.000 & 7.76 & 8.18 & 8.63 & 6.708 & 5.40 & 5.45 + comments : in total , 17 
resonances are taken into account for calculating the reaction rates .resonance energies are derived from level energies and the reaction q - value ( tab . [tab : master ] ) , except for the levels at 5672 and 5790 kev ( e and 1375 kev ) , for which more accurate values are adopted from ref .resonance strengths are adopted from rolfs , charlesworth and azuma ; kieser et al . ; becker et al . ; and grres et al . ( where the latter resonance strength uncertainties listed in their tab .i have been modified to include an additional 7% uncertainty from the stopping power ) .note that the strengths from these data sets are in mutual agreement .a number of levels have been disregarded : ( i ) e kev ( e kev with ) since the formation via is isospin - forbidden ( also , the experimental upper limit for the strength is less than the observed strength of a lower - lying resonance ) ; ( ii ) levels with since their population as resonances in is forbidden according to angular momentum selection rules ; and ( iii ) e kev ( ; e kev ) since this ( unobserved ) resonance is presumably negligible compared to the observed e kev ( ) resonance . for the doublet at e kev we use the summed strength adopted by ref .for the lowest - lying observed resonance , e kev , both the resonance strength and the lifetime have been measured ; from these results we deduce values for and in order to integrate the rate contribution of this resonance numerically .the direct capture s - factor is adopted from grres et al .note that their result is not based on experimental spectroscopic factors , but was estimated using an arbitrary value of for the -particle spectroscopic factors of all final states ; we assign a factor of 2 uncertainty to the direct capture s - factor ( yielding a fractional uncertainty of 0.79 ; see the numerical example at the end of sec .5.1.2 of paper i ) .the case of the undetected , low - energy resonance at e kev ( e kev with ) requires further mention .the level is weakly populated in the ( , t) study of middleton et al . andthe differential cross sections for many levels are listed in their tab .i. the observed intensity can only be related to an experimental _ upper limit _ of the -particle spectroscopic factor ( since the state may have been populated via non - direct transfer ) .experimental spectroscopic factors for levels that are strongly populated in ( , t) are presented by tab . 2 of cooper .from these results we derive spectroscopic factors of s and s for the e and 4860 kev levels , respectively .since the -particle partial width for the latter state is experimentally known ( ev ) we find an experimental upper limit of ev , where the _ ratio _ of single - particle -widths is found from the ratio of penetration factors .the population of this t=1 state in is suppressed by isospin selection rules .consequently , we may not use in this case for the mean value of the dimensionless reduced -particle width of the porter - thomas distribution ( see discussion in paper i ) .since mean values of for isospin - forbidden population via -particle capture from a t=0 target to a t=1 state do not exist at present , we arbitrarily assume an isospin suppression factor of 0.001 ( see also ref . 
) ; thus this level is randomly sampled using .raising by an order of magnitude would increase the total rates only at low temperatures , near t=80 mk , by about a factor of 8 .+ & & & & & & + + & & & & & & + 0.010 & 1.60 & 2.13 & 2.88 & -1.397 & 2.99 & 2.12 + 0.011 & 2.84 & 3.82 & 5.16 & -1.345 & 3.02 & 1.91 + 0.012 & 2.83 & 3.82 & 5.17 & -1.299 & 3.04 & 1.34 + 0.013 & 1.72 & 2.30 & 3.14 & -1.258 & 3.01 & 3.03 + 0.014 & 7.06 & 9.40 & 1.29 & -1.221 & 3.01 & 2.53 + 0.015 & 2.01 & 2.71 & 3.71 & -1.187 & 3.06 & 2.86 + 0.016 & 4.47 & 6.00 & 8.24 & -1.156 & 3.06 & 2.80 + 0.018 & 1.05 & 1.40 & 1.91 & -1.102 & 2.98 & 3.14 + 0.020 & 1.15 & 1.52 & 2.09 & -1.055 & 2.99 & 3.75 + 0.025 & 1.41 & 1.88 & 2.61 & -9.606 & 3.02 & 3.95 + 0.030 & 1.86 & 2.48 & 3.35 & -8.888 & 2.97 & 1.99 + 0.040 & 6.53 & 8.69 & 1.19 & -7.842 & 3.02 & 3.15 + 0.050 & 1.11 & 1.49 & 2.02 & -7.097 & 2.99 & 1.49 + 0.060 & 3.29 & 4.38 & 5.94 & -6.529 & 2.96 & 3.67 + 0.070 & 3.15 & 4.15 & 5.57 & -6.074 & 2.88 & 3.36 + 0.080 & 1.90 & 2.52 & 3.44 & -5.663 & 3.03 & 3.98 + 0.090 & 1.42 & 2.43 & 4.44 & -5.204 & 5.63 & 8.44 + 0.100 & 9.90 & 1.92 & 3.79 & -4.769 & 6.66 & 9.92 + 0.110 & 3.80 & 7.55 & 1.51 & -4.402 & 6.86 & 4.66 + 0.120 & 8.11 & 1.62 & 3.24 & -4.096 & 6.90 & 3.95 + 0.130 & 1.08 & 2.15 & 4.32 & -3.837 & 6.89 & 4.13 + 0.140 & 1.00 & 1.98 & 3.95 & -3.615 & 6.82 & 5.43 + 0.150 & 7.01 & 1.36 & 2.69 & -3.422 & 6.66 & 1.02 + 0.160 & 4.02 & 7.49 & 1.45 & -3.250 & 6.34 & 2.57 + 0.180 & 8.97 & 1.44 & 2.55 & -2.952 & 5.18 & 1.51 + 0.200 & 1.45 & 1.95 & 2.92 & -2.691 & 3.65 & 4.15 + 0.250 & 3.84 & 4.34 & 4.98 & -2.155 & 1.38 & 1.15 + 0.300 & 2.05 & 2.26 & 2.48 & -1.761 & 9.61 & 6.55 + 0.350 & 3.95 & 4.30 & 4.68 & -1.466 & 8.43 & 5.12 + 0.400 & 3.87 & 4.19 & 4.55 & -1.238 & 8.12 & 3.72 + 0.450 & 2.35 & 2.54 & 2.77 & -1.058 & 8.27 & 4.29 + 0.500 & 1.00 & 1.09 & 1.19 & -9.121 & 8.57 & 3.88 + 0.600 & 8.96 & 9.81 & 1.08 & -6.927 & 9.12 & 2.30 + 0.700 & 4.26 & 4.69 & 5.15 & -5.364 & 9.45 & 2.41 + 0.800 & 1.38 & 1.52 & 1.67 & -4.188 & 9.46 & 2.80 + 0.900 & 3.56 & 3.91 & 4.27 & -3.244 & 9.04 & 3.02 + 1.000 & 8.16 & 8.85 & 9.60 & -2.425 & 8.09 & 4.24 + 1.250 & 5.20 & 5.47 & 5.78 & -6.018 & 5.18 & 1.13 + 1.500 & 2.46 & 2.57 & 2.68 & 9.430 & 4.25 & 1.63 + 1.750 & 8.29 & 8.64 & 9.02 & 2.157 & 4.20 & 2.90 + 2.000 & 2.12 & 2.21 & 2.30 & 3.094 & 4.21 & 3.19 + 2.500 & 7.92 & 8.25 & 8.60 & 4.413 & 4.13 & 3.45 + 3.000 & 1.90 & 1.98 & 2.06 & 5.288 & 3.98 & 3.96 + 3.500 & 3.55 & 3.68 & 3.83 & 5.910 & 3.82 & 4.11 + 4.000 & 5.66 & 5.87 & 6.09 & 6.375 & 3.66 & 3.96 + 5.000 & 1.09 & 1.13 & 1.17 & 7.029 & 3.42 & 4.02 + 6.000 & 1.69 & 1.75 & 1.80 & 7.465 & 3.30 & 2.51 + 7.000 & 2.30 & 2.38 & 2.46 & 7.774 & 3.25 & 3.52 + 8.000 & 2.89 & 2.99 & 3.08 & 8.003 & 3.25 & 3.36 + 9.000 & 3.44 & 3.55 & 3.67 & 8.176 & 3.26 & 2.69 + 10.000 & 3.94 & 4.06 & 4.20 & 8.310 & 3.28 & 2.79 + comments : strengths have been measured directly for 61 resonances from mev up to mev .the experimental data cover the entire temperature range of interest .there is a high level of coherence in these data sets as they all use the mev ( 5/2 ) or mev ( 1/2 ) resonances as references .we use the input data extracted from the same references as in angulo et al . ( nacre ) , except for the resonances between and 2.086 mev , where we adopt the new and more precise measurement of wilmes et al . . below 0.5 mevwe use ( as in nacre ) results from the ( , t ) transfer reaction experiment of de oliveira et al . 
, in particular for the kev resonance whose contribution dominates the reaction rate in the range of 0.1 to 0.2 gk . the published resonance strength , ev , assumes a factor of 2 uncertainty for the dwba analysis .resonance energies are obtained from the excitation energies listed in table 19.9 of tilley et al . and kev . the largest contribution from the three near - threshold levelsis associated with the 3.91 mev state , but their total absolute contribution is too small to be of any astrophysical importance .the direct capture s - factor was calculated by de oliveira et al . and amounts to mev b. + & & & & & & + + & & & & & & + 0.010 & 6.29 & 9.20 & 1.34 & -1.544 & 3.82 & 1.80 + 0.011 & 1.83 & 2.67 & 3.88 & -1.487 & 3.83 & 2.22 + 0.012 & 2.79 & 4.09 & 6.01 & -1.437 & 3.84 & 3.38 + 0.013 & 2.52 & 3.68 & 5.35 & -1.392 & 3.82 & 5.08 + 0.014 & 1.45 & 2.11 & 3.09 & -1.351 & 3.81 & 4.18 + 0.015 & 5.87 & 8.50 & 1.24 & -1.314 & 3.78 & 3.07 + 0.016 & 1.72 & 2.48 & 3.61 & -1.280 & 3.72 & 2.79 + 0.018 & 6.56 & 9.61 & 1.41 & -1.221 & 3.85 & 1.84 + 0.020 & 1.14 & 1.66 & 2.44 & -1.169 & 3.79 & 3.17 + 0.025 & 3.34 & 4.93 & 7.16 & -1.066 & 3.80 & 5.00 + 0.030 & 8.77 & 1.28 & 1.85 & -9.877 & 3.81 & 4.11 + 0.040 & 8.35 & 1.21 & 1.76 & -8.730 & 3.75 & 3.30 + 0.050 & 2.81 & 4.17 & 6.03 & -7.917 & 3.83 & 7.59 + 0.060 & 1.43 & 2.08 & 3.05 & -7.295 & 3.76 & 3.98 + 0.070 & 2.04 & 2.99 & 4.30 & -6.798 & 3.77 & 3.52 + 0.080 & 1.25 & 1.79 & 2.61 & -6.389 & 3.74 & 4.85 + 0.090 & 4.50 & 6.53 & 9.58 & -6.029 & 3.85 & 9.68 + 0.100 & 2.23 & 4.48 & 1.10 & -5.598 & 8.03 & 2.36 + 0.110 & 2.12 & 6.00 & 1.77 & -5.115 & 1.05 & 1.24 + 0.120 & 1.48 & 4.38 & 1.31 & -4.688 & 1.09 & 2.12 + 0.130 & 5.59 & 1.67 & 5.00 & -4.324 & 1.09 & 1.82 + 0.140 & 1.25 & 3.73 & 1.12 & -4.013 & 1.09 & 1.88 + 0.150 & 1.85 & 5.48 & 1.65 & -3.744 & 1.09 & 1.96 + 0.160 & 1.93 & 5.73 & 1.72 & -3.510 & 1.09 & 2.04 + 0.180 & 9.52 & 2.82 & 8.47 & -3.120 & 1.09 & 2.16 + 0.200 & 2.11 & 6.28 & 1.87 & -2.810 & 1.09 & 2.26 + 0.250 & 5.34 & 1.58 & 4.74 & -2.257 & 1.09 & 2.40 + 0.300 & 2.04 & 6.00 & 1.80 & -1.893 & 1.09 & 2.50 + 0.350 & 2.64 & 7.77 & 2.33 & -1.637 & 1.09 & 2.61 + 0.400 & 1.77 & 5.16 & 1.55 & -1.447 & 1.09 & 2.95 + 0.450 & 7.69 & 2.22 & 6.65 & -1.301 & 1.07 & 4.67 + 0.500 & 2.57 & 7.13 & 2.10 & -1.183 & 1.04 & 1.29 + 0.600 & 1.94 & 4.43 & 1.18 & -9.955 & 8.97 & 1.34 + 0.700 & ( 1.12 ) & ( 2.58 ) & ( 5.97 ) & ( -8.261 ) & ( 8.37 ) & + 0.800 & ( 4.94 ) & ( 1.14 ) & ( 2.64 ) & ( -6.775 ) & ( 8.37 ) & + 0.900 & ( 1.65 ) & ( 3.81 ) & ( 8.81 ) & ( -5.569 ) & ( 8.37 ) & + 1.000 & ( 4.45 ) & ( 1.03 ) & ( 2.37 ) & ( -4.579 ) & ( 8.37 ) & + 1.250 & ( 2.83 ) & ( 6.53 ) & ( 1.51 ) & ( -2.729 ) & ( 8.37 ) & + 1.500 & ( 9.80 ) & ( 2.26 ) & ( 5.23 ) & ( -1.486 ) & ( 8.37 ) & + 1.750 & ( 2.46 ) & ( 5.68 ) & ( 1.31 ) & ( -5.648 ) & ( 8.37 ) & + 2.000 & ( 5.01 ) & ( 1.16 ) & ( 2.67 ) & ( 1.464 ) & ( 8.37 ) & + 2.500 & ( 1.40 ) & ( 3.23 ) & ( 7.46 ) & ( 1.172 ) & ( 8.37 ) & + 3.000 & ( 2.85 ) & ( 6.59 ) & ( 1.52 ) & ( 1.885 ) & ( 8.37 ) & + 3.500 & ( 4.85 ) & ( 1.12 ) & ( 2.59 ) & ( 2.417 ) & ( 8.37 ) & + 4.000 & ( 7.36 ) & ( 1.70 ) & ( 3.93 ) & ( 2.833 ) & ( 8.37 ) & + 5.000 & ( 1.37 ) & ( 3.16 ) & ( 7.31 ) & ( 3.454 ) & ( 8.37 ) & + 6.000 & ( 2.15 ) & ( 4.98 ) & ( 1.15 ) & ( 3.907 ) & ( 8.37 ) & + 7.000 & ( 3.07 ) & ( 7.08 ) & ( 1.64 ) & ( 4.261 ) & ( 8.37 ) & + 8.000 & ( 4.09 ) & ( 9.44 ) & ( 2.18 ) & ( 4.548 ) & ( 8.37 ) & + 9.000 & ( 5.20 ) & ( 1.20 ) & ( 2.78 ) & ( 4.789 ) & ( 8.37 ) & + 10.000 & ( 6.62 ) & ( 1.53 ) & ( 3.53 ) & ( 5.030 ) & ( 8.37 ) & + 
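as an aside on reading these rate tables: the low, median and high rate columns and the tabulated lognormal parameters are related through the lognormal approximation of paper i (eq. 39). a minimal sketch of that conversion, assuming the low and high rates are the 16th and 84th percentiles of the monte carlo rate distribution (the function name and the example numbers below are ours, not taken from the tables):

import math

def lognormal_params(rate_low, rate_med, rate_high):
    """Lognormal mu, sigma from tabulated rate percentiles.

    Assumes rate_low / rate_high are the 16th / 84th percentiles and
    rate_med the median of the Monte Carlo rate distribution; this is
    our reading of eq. (39) of paper I, not the production code.
    """
    mu = math.log(rate_med)                       # ln of the median rate
    sigma = 0.5 * math.log(rate_high / rate_low)  # half the 68% log-width
    return mu, sigma

# fictitious table row: low, median, high rate in cm^3 mol^-1 s^-1
mu, sigma = lognormal_params(1.11e-7, 1.49e-7, 2.02e-7)
factor_uncertainty = math.exp(sigma)              # ~1.35 for this example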
comments :resonance energies are calculated from excitation energies and the q - value ( tab . [tab : master ] ) .total widths are deduced from the averaged dsam lifetime measured values , except for the first two resonances at 505 and 849 kev , for which the results of davids et al . are used .alpha - particle partial widths are calculated using the -particle branching ratios adopted by ref . , except for the two first resonances .for those we use results from -particle transfer experiments to mirror states and assume , as in fisker et al . , the equality of reduced -particle widths between mirror levels .this hypothesis has been questioned because of an apparent disagreement between the reduced -particle widths of the 849 kev resonance ( as derived from the total width and the -particle branching ratio ) and of the corresponding mirror state ( as obtained from a ( , t) transfer experiment ) . however , a new measurement of the -particle branching ratio has not confirmed this disagreement : for the first two resonances the measured -particle branching ratios are compatible with the values deduced from transfer reactions .we adopt a factor of uncertainty for the resulting -particle partial widths . for the mirror level of the 1563 kev resonance ,the available spectroscopic data ( and ) are not judged sufficient to extract a reliable -particle partial width and thus we disregard this resonance . for the direct capture componentwe adopt a constant value of 20 mev b , with an assumed uncertainty of 40% . above 0.6 gk the total reaction rateis extrapolated using results computed with the talys statistical model code .+ & & & & & & + + & & & & & & + 0.01 & 6.67 & 7.20 & 7.73 & -5.559 & 7.36 & + 0.011 & 7.03 & 7.58 & 8.14 & -5.324 & 7.35 & + 0.012 & 5.64 & 6.08 & 6.53 & -5.115 & 7.33 & + 0.013 & 3.63 & 3.91 & 4.20 & -4.929 & 7.31 & + 0.014 & 1.94 & 2.09 & 2.25 & -4.762 & 7.30 & + 0.015 & 8.91 & 9.60 & 1.03 & -4.609 & 7.30 & + 0.016 & 3.59 & 3.87 & 4.15 & -4.470 & 7.27 & + 0.018 & 4.21 & 4.54 & 4.87 & -4.224 & 7.23 & + 0.02 & 3.51 & 3.78 & 4.05 & -4.012 & 7.20 & + 0.025 & 2.43 & 2.62 & 2.80 & -3.588 & 7.12 & + 0.03 & 6.12 & 6.59 & 7.05 & -3.265 & 7.06 & + 0.04 & 6.66 & 7.16 & 7.66 & -2.797 & 6.98 & + 0.05 & 1.85 & 1.98 & 2.12 & -2.464 & 6.96 & + 0.06 & 2.31 & 2.48 & 2.66 & -2.212 & 6.98 & + 0.07 & 1.73 & 1.86 & 1.99 & -2.010 & 6.99 & + 0.08 & 9.03 & 9.71 & 1.04 & -1.845 & 7.02 & + 0.09 & 3.64 & 3.92 & 4.19 & -1.705 & 7.02 & + 0.1 & 1.21 & 1.30 & 1.39 & -1.586 & 7.05 & + 0.11 & 3.44 & 3.70 & 3.96 & -1.481 & 7.02 & + 0.12 & 8.66 & 9.31 & 9.97 & -1.389 & 7.01 & + 0.13 & 1.98 & 2.12 & 2.27 & -1.306 & 7.03 & + 0.14 & 4.15 & 4.46 & 4.77 & -1.232 & 7.01 & + 0.15 & 8.13 & 8.74 & 9.36 & -1.165 & 7.00 & + 0.16 & 1.50 & 1.62 & 1.73 & -1.103 & 6.97 & + 0.18 & 4.45 & 4.78 & 5.11 & -9.948 & 6.97 & + 0.2 & 1.13 & 1.21 & 1.30 & -9.020 & 6.97 & + 0.25 & 7.23 & 7.77 & 8.31 & -7.160 & 6.92 & + 0.3 & 2.95 & 3.17 & 3.39 & -5.754 & 6.90 & + 0.35 & 9.04 & 9.70 & 1.04 & -4.636 & 6.84 & + 0.4 & 2.26 & 2.43 & 2.59 & -3.717 & 6.81 & + 0.45 & 4.90 & 5.25 & 5.61 & -2.947 & 6.76 & + 0.5 & 9.50 & 1.02 & 1.09 & -2.283 & 6.72 & + 0.6 & 2.81 & 3.01 & 3.21 & -1.200 & 6.63 & + 0.7 & 6.66 & 7.12 & 7.59 & -3.397 & 6.52 & + 0.8 & 1.35 & 1.44 & 1.54 & 3.646 & 6.42 & + 0.9 & 2.45 & 2.61 & 2.78 & 9.594 & 6.29 & + 1.0 & 4.07 & 4.34 & 4.61 & 1.468 & 6.15 & + 1.25 & 1.12 & 1.19 & 1.26 & 2.476 & 5.80 & + 1.5 & 2.42 & 2.56 & 2.70 & 3.243 & 5.44 & + 1.75 & 4.44 & 4.68 & 4.92 & 3.846 & 5.15 & + 2.0 & 7.30 & 7.67 & 8.04 & 4.340 & 4.87 & + 2.5 & 
1.59 & 1.66 & 1.73 & 5.112 & 4.37 & + 3.0 & 2.92 & 3.04 & 3.17 & 5.717 & 4.11 & + 3.5 & 4.71 & 4.92 & 5.12 & 6.198 & 4.17 & + 4.0 & 6.96 & 7.27 & 7.58 & 6.589 & 4.27 & + 5.0 & 1.28 & 1.33 & 1.39 & 7.193 & 4.12 & + 6.0 & 2.02 & 2.11 & 2.20 & 7.654 & 4.27 & + 7.0 & 2.90 & 3.02 & 3.15 & 8.013 & 4.13 & + 8.0 & 3.88 & 4.05 & 4.22 & 8.306 & 4.20 & + 9.0 & 4.95 & 5.17 & 5.39 & 8.551 & 4.26 & + 10.0 & 6.10 & 6.37 & 6.64 & 8.759 & 4.24 & + comments : _ this is the only reaction analyzed here for which the rate uncertainties are not derived from the monte carlo method_. the reaction rates , including uncertainties , are adopted from iliadis et al . . for temperatures of t gk, the rates are calculated assuming a constant s factor of mev b ( see fig . 2 in iliadis et al .the two resonances at and 3.26 mev ( tilley et al . ) are negligible for the total rate .the lognormal parameters and are computed from columns 2 - 4 by using eq .( 39 ) from paper i. + & & & & & & + + & & & & & & + 0.010 & 5.10 & 7.34 & 1.07 & -1.569 & 3.74 & 2.78 + 0.011 & 1.53 & 2.22 & 3.24 & -1.512 & 3.73 & 3.20 + 0.012 & 2.43 & 3.51 & 5.07 & -1.461 & 3.70 & 2.46 + 0.013 & 2.18 & 3.17 & 4.62 & -1.416 & 3.77 & 2.23 + 0.014 & 1.29 & 1.87 & 2.71 & -1.375 & 3.73 & 3.80 + 0.015 & 5.19 & 7.54 & 1.10 & -1.338 & 3.81 & 7.61 + 0.016 & 1.55 & 2.27 & 3.26 & -1.304 & 3.72 & 3.48 + 0.018 & 6.28 & 9.00 & 1.31 & -1.244 & 3.70 & 5.71 + 0.020 & 1.08 & 1.56 & 2.25 & -1.193 & 3.67 & 3.35 + 0.025 & 3.41 & 4.93 & 7.20 & -1.089 & 3.70 & 1.98 + 0.030 & 9.20 & 1.32 & 1.91 & -1.010 & 3.71 & 8.31 + 0.040 & 9.01 & 1.30 & 1.89 & -8.953 & 3.73 & 4.00 + 0.050 & 3.20 & 4.66 & 6.72 & -8.136 & 3.67 & 7.37 + 0.060 & 1.67 & 2.39 & 3.46 & -7.511 & 3.70 & 1.19 + 0.070 & 2.40 & 3.45 & 5.03 & -7.014 & 3.69 & 5.30 + 0.080 & 1.47 & 2.11 & 3.09 & -6.602 & 3.69 & 6.26 + 0.090 & 4.70 & 6.78 & 9.71 & -6.256 & 3.63 & 5.44 + 0.100 & 9.27 & 1.33 & 1.92 & -5.958 & 3.65 & 2.78 + 0.110 & 1.28 & 1.83 & 2.64 & -5.696 & 3.63 & 5.45 + 0.120 & 1.28 & 1.84 & 2.67 & -5.465 & 3.64 & 3.38 + 0.130 & 1.01 & 1.45 & 2.10 & -5.258 & 3.64 & 6.57 + 0.140 & 6.66 & 9.48 & 1.37 & -5.071 & 3.52 & 9.78 + 0.150 & 3.65 & 5.12 & 7.44 & -4.901 & 3.56 & 1.31 + 0.160 & 1.68 & 2.38 & 3.39 & -4.748 & 3.53 & 1.65 + 0.180 & 2.61 & 3.73 & 5.29 & -4.474 & 3.53 & 2.36 + 0.200 & 2.97 & 4.12 & 5.79 & -4.233 & 3.33 & 1.74 + 0.250 & 3.32 & 4.22 & 5.34 & -3.541 & 2.41 & 3.74 + 0.300 & 2.21 & 2.86 & 3.71 & -2.888 & 2.60 & 3.70 + 0.350 & 2.51 & 3.21 & 4.12 & -2.416 & 2.52 & 3.69 + 0.400 & 8.77 & 1.11 & 1.41 & -2.062 & 2.40 & 3.97 + 0.450 & 1.41 & 1.75 & 2.19 & -1.786 & 2.25 & 5.14 + 0.500 & 1.31 & 1.60 & 1.98 & -1.564 & 2.10 & 6.61 + 0.600 & 3.82 & 4.56 & 5.49 & -1.229 & 1.82 & 8.35 + 0.700 & 4.34 & 5.08 & 5.98 & -9.885 & 1.61 & 6.21 + 0.800 & 2.69 & 3.12 & 3.62 & -8.072 & 1.48 & 3.26 + 0.900 & 1.11 & 1.28 & 1.47 & -6.662 & 1.40 & 2.44 + 1.000 & 3.43 & 3.94 & 4.50 & -5.539 & 1.35 & 2.54 + 1.250 & 2.54 & 2.90 & 3.30 & -3.542 & 1.30 & 4.08 + 1.500 & 9.32 & 1.06 & 1.21 & -2.243 & 1.29 & 5.15 + 1.750 & 2.30 & 2.62 & 2.97 & -1.341 & 1.29 & 5.88 + 2.000 & 4.45 & 5.06 & 5.74 & -6.827 & 1.29 & 6.58 + 2.500 & 1.08 & 1.23 & 1.40 & 2.081 & 1.26 & 7.39 + 3.000 & 1.95 & 2.20 & 2.48 & 7.893 & 1.21 & 8.83 + 3.500 & 3.00 & 3.37 & 3.77 & 1.214 & 1.13 & 1.02 + 4.000 & 4.29 & 4.76 & 5.27 & 1.560 & 1.03 & 1.07 + 5.000 & 8.05 & 8.71 & 9.44 & 2.166 & 7.97 & 1.25 + 6.000 & 1.49 & 1.58 & 1.68 & 2.761 & 5.94 & 9.35 + 7.000 & 2.72 & 2.86 & 3.00 & 3.352 & 5.05 & 8.39 + 8.000 & ( 4.70 ) & ( 4.94 ) & ( 5.20 ) & ( 3.901 ) & ( 5.13 ) & + 9.000 & ( 
7.60 ) & ( 8.01 ) & ( 8.47 ) & ( 4.385 ) & ( 5.49 ) & + 10.000 & ( 1.15 ) & ( 1.21 ) & ( 1.28 ) & ( 4.800 ) & ( 5.83 ) & + comments : in total , 22 resonances at energies of e - 7671 kev are considered .resonance energies are calculated using the excitation energies listed in tab .20.17 of tilley et al .resonance strengths are adopted from refs . . for the e kev resonance , the -particle and -raypartial widths are obtained from the measured resonance strength together with a value of ev from ( , ) scattering . for the e kev resonance ,the partial widths are deduced from measured values of the resonance strength and the mean lifetime .the -particle width is identified with the larger solution of the quadratic equation , in agreement with the -particle width deduced from the ( , d ) study of ref .the direct capture s - factor is adopted from fig . 1 of mohr , where we disregard the transitions arising from the e kev resonance tail ( they are explicitly taken into account in the numerical integration of the resonant reaction rate ) . + & & & & & & + + & & & & & & + 0.010 & 2.95 & 3.58 & 4.38 & -5.629 & 2.02 & 7.75 + 0.011 & 3.10 & 3.78 & 4.64 & -5.393 & 2.01 & 1.77 + 0.012 & 2.51 & 3.09 & 3.81 & -5.183 & 2.08 & 2.58 + 0.013 & 1.64 & 1.99 & 2.45 & -4.996 & 2.03 & 7.69 + 0.014 & 8.83 & 1.08 & 1.33 & -4.827 & 2.05 & 7.02 + 0.015 & 4.08 & 4.98 & 6.14 & -4.675 & 2.07 & 3.50 + 0.016 & 1.65 & 2.02 & 2.50 & -4.534 & 2.05 & 5.34 + 0.018 & 1.98 & 2.42 & 2.96 & -4.287 & 2.03 & 2.96 + 0.020 & 1.69 & 2.06 & 2.51 & -4.072 & 1.99 & 4.32 + 0.025 & 1.67 & 1.97 & 2.32 & -3.616 & 1.62 & 2.78 + 0.030 & 8.35 & 9.69 & 1.12 & -3.227 & 1.46 & 5.61 + 0.040 & 2.13 & 2.47 & 2.88 & -2.672 & 1.53 & 2.19 + 0.050 & 6.56 & 7.62 & 8.88 & -2.330 & 1.52 & 2.21 + 0.060 & 6.61 & 7.66 & 8.88 & -2.099 & 1.48 & 3.45 + 0.070 & 3.61 & 4.13 & 4.76 & -1.930 & 1.40 & 3.91 + 0.080 & 1.38 & 1.58 & 1.79 & -1.797 & 1.32 & 2.82 + 0.090 & 4.31 & 4.93 & 5.66 & -1.682 & 1.37 & 3.36 + 0.100 & 1.21 & 1.39 & 1.61 & -1.579 & 1.44 & 7.56 + 0.110 & 3.13 & 3.64 & 4.25 & -1.482 & 1.54 & 8.33 + 0.120 & 7.69 & 9.02 & 1.07 & -1.391 & 1.64 & 1.22 + 0.130 & 1.78 & 2.11 & 2.52 & -1.306 & 1.75 & 1.27 + 0.140 & 3.87 & 4.65 & 5.65 & -1.227 & 1.92 & 3.24 + 0.150 & 7.96 & 9.65 & 1.19 & -1.154 & 2.03 & 3.39 + 0.160 & 1.53 & 1.87 & 2.33 & -1.088 & 2.16 & 5.58 + 0.180 & 4.85 & 5.99 & 7.57 & -9.710 & 2.30 & 4.94 + 0.200 & 1.29 & 1.60 & 2.02 & -8.729 & 2.32 & 7.83 + 0.250 & 8.64 & 1.05 & 1.32 & -6.841 & 2.21 & 1.05 + 0.300 & 3.89 & 4.61 & 5.57 & -5.371 & 1.84 & 5.84 + 0.350 & 1.64 & 1.87 & 2.15 & -3.974 & 1.37 & 2.27 + 0.400 & 6.87 & 7.71 & 8.69 & -2.561 & 1.18 & 4.84 + 0.450 & 2.53 & 2.85 & 3.22 & -1.253 & 1.21 & 4.77 + 0.500 & 7.85 & 8.86 & 1.01 & -1.186 & 1.26 & 9.20 + 0.600 & 4.59 & 5.20 & 5.94 & 1.651 & 1.29 & 1.61 + 0.700 & 1.65 & 1.87 & 2.13 & 2.932 & 1.28 & 1.11 + 0.800 & 4.31 & 4.88 & 5.55 & 3.889 & 1.26 & 1.98 + 0.900 & 9.03 & 1.02 & 1.15 & 4.626 & 1.22 & 1.06 + 1.000 & 1.62 & 1.82 & 2.06 & 5.206 & 1.22 & 1.66 + 1.250 & 4.49 & 5.02 & 5.66 & 6.222 & 1.15 & 8.66 + 1.500 & 8.59 & 9.59 & 1.08 & 6.868 & 1.13 & 1.14 + 1.750 & 1.33 & 1.48 & 1.66 & 7.305 & 1.10 & 7.23 + 2.000 & 1.82 & 2.02 & 2.25 & 7.613 & 1.07 & 5.69 + 2.500 & 2.69 & 2.99 & 3.32 & 8.004 & 1.05 & 3.17 + 3.000 & 3.38 & 3.74 & 4.13 & 8.227 & 1.02 & 5.57 + 3.500 & 3.85 & 4.25 & 4.71 & 8.356 & 9.98 & 1.53 + 4.000 & 4.14 & 4.56 & 5.03 & 8.425 & 9.86 & 1.79 + 5.000 & 4.33 & 4.78 & 5.26 & 8.471 & 9.55 & 5.42 + 6.000 & 4.24 & 4.66 & 5.14 & 8.449 & 9.60 & 1.29 + 7.000 & 4.04 & 4.44 & 4.90 & 8.400 & 9.72 & 
5.26 + 8.000 & 3.80 & 4.18 & 4.61 & 8.340 & 9.59 & 6.81 + 9.000 & 3.58 & 3.93 & 4.32 & 8.276 & 9.56 & 4.09 + 10.000 & 3.34 & 3.68 & 4.04 & 8.211 & 9.47 & 6.12 + comments : in total , 16 resonances in the range of e kev are taken into account , including two subthreshold resonances at kev ( ) and kev ( ) .the energy of the resonance at e kev is obtained from the corrected excitation energy e kev reported in chafa et al .for the e and 490 kev resonances , the strengths are adopted from fox et al .for e kev , strengths are reported in ref .these results have to be renormalized ( see comments in ref . ) by using the correct resonance strength for e=609 kev in (p,) . for the e kev resonance , the strength value from ref . and the renormalized value from ref . are in good agreement .strengths are also presented in ref . , but have been disregarded since they seem to deviate from the results of refs . by a factor of 2 .two - level interferences between the resonances at kev and kev , between the resonances at kev and kev , and between the resonances at kev and kev , are explicitly taken into account .since the signs of the interferences are unknown , the signs are sampled randomly using a binary probability density function ( see sec 4.4 in paper i ) .the direct capture s - factor is adopted from the recent measurement of newton et al . .+ & & & & & & + + & & & & & & + 0.010 & 5.05 & 5.89 & 6.99 & -5.578 & 1.65 & 5.08 + 0.011 & 5.35 & 6.21 & 7.28 & -5.343 & 1.56 & 4.30 + 0.012 & 4.37 & 5.04 & 5.85 & -5.134 & 1.49 & 2.34 + 0.013 & 2.89 & 3.30 & 3.82 & -4.946 & 1.42 & 4.30 + 0.014 & 1.60 & 1.82 & 2.09 & -4.775 & 1.35 & 2.65 + 0.015 & 7.69 & 8.75 & 10.00 & -4.618 & 1.32 & 5.43 + 0.016 & 3.32 & 3.77 & 4.30 & -4.472 & 1.29 & 6.47 + 0.018 & 5.87 & 6.81 & 7.92 & -4.183 & 1.49 & 6.69 + 0.020 & 1.29 & 1.57 & 1.92 & -3.869 & 1.98 & 4.34 + 0.025 & 1.25 & 1.52 & 1.85 & -3.182 & 2.02 & 3.89 + 0.030 & 1.44 & 1.73 & 2.09 & -2.708 & 1.88 & 3.13 + 0.040 & 5.11 & 6.07 & 7.14 & -2.122 & 1.70 & 4.95 + 0.050 & 1.60 & 1.89 & 2.24 & -1.778 & 1.69 & 2.67 + 0.060 & 1.51 & 1.79 & 2.10 & -1.554 & 1.68 & 2.57 + 0.070 & 7.23 & 8.59 & 1.02 & -1.397 & 1.71 & 4.52 + 0.080 & 2.33 & 2.75 & 3.27 & -1.280 & 1.73 & 2.72 + 0.090 & 6.03 & 7.07 & 8.31 & -1.186 & 1.62 & 1.88 + 0.100 & 1.59 & 1.80 & 2.06 & -1.092 & 1.29 & 2.21 + 0.110 & 4.88 & 5.27 & 5.73 & -9.848 & 8.26 & 1.50 + 0.120 & 1.58 & 1.69 & 1.79 & -8.688 & 6.24 & 2.27 + 0.130 & 4.84 & 5.12 & 5.41 & -7.578 & 5.71 & 3.05 + 0.140 & 1.32 & 1.40 & 1.48 & -6.573 & 5.96 & 2.23 + 0.150 & 3.19 & 3.38 & 3.58 & -5.691 & 5.87 & 2.87 + 0.160 & 6.91 & 7.35 & 7.80 & -4.914 & 6.06 & 2.78 + 0.180 & 2.52 & 2.68 & 2.84 & -3.620 & 5.99 & 1.03 + 0.200 & 7.01 & 7.43 & 7.89 & -2.600 & 6.01 & 2.33 + 0.250 & 4.29 & 4.54 & 4.82 & -7.885 & 5.74 & 3.99 + 0.300 & 1.59 & 1.68 & 1.77 & 5.161 & 5.41 & 2.71 + 0.350 & 5.87 & 6.28 & 6.74 & 1.839 & 6.91 & 1.77 + 0.400 & 2.33 & 2.55 & 2.82 & 3.242 & 9.70 & 1.24 + 0.450 & 8.24 & 9.13 & 1.02 & 4.517 & 1.08 & 9.51 + 0.500 & 2.41 & 2.68 & 3.00 & 5.593 & 1.11 & 8.86 + 0.600 & 1.26 & 1.40 & 1.56 & 7.243 & 1.08 & 9.60 + 0.700 & 4.15 & 4.58 & 5.08 & 8.432 & 1.01 & 1.15 + 0.800 & 1.03 & 1.12 & 1.24 & 9.328 & 9.39 & 1.36 + 0.900 & 2.09 & 2.27 & 2.49 & 1.003 & 8.69 & 1.55 + 1.000 & 3.74 & 4.03 & 4.38 & 1.061 & 8.03 & 1.63 + 1.250 & 1.12 & 1.19 & 1.28 & 1.169 & 6.58 & 1.78 + 1.500 & 2.51 & 2.64 & 2.80 & 1.249 & 5.46 & 1.24 + 1.750 & 4.76 & 4.99 & 5.24 & 1.312 & 4.73 & 7.52 + 2.000 & 8.03 & 8.39 & 8.76 & 1.364 & 4.44 & 5.91 + 2.500 & ( 1.71 ) & ( 1.79 ) & ( 1.87 ) & ( 1.440 ) 
& ( 4.43 ) & + 3.000 & ( 3.04 ) & ( 3.18 ) & ( 3.33 ) & ( 1.497 ) & ( 4.43 ) & + 3.500 & ( 4.86 ) & ( 5.08 ) & ( 5.31 ) & ( 1.544 ) & ( 4.43 ) & + 4.000 & ( 7.14 ) & ( 7.46 ) & ( 7.80 ) & ( 1.583 ) & ( 4.43 ) & + 5.000 & ( 1.30 ) & ( 1.36 ) & ( 1.42 ) & ( 1.642 ) & ( 4.43 ) & + 6.000 & ( 2.02 ) & ( 2.12 ) & ( 2.21 ) & ( 1.687 ) & ( 4.43 ) & + 7.000 & ( 2.85 ) & ( 2.98 ) & ( 3.12 ) & ( 1.721 ) & ( 4.43 ) & + 8.000 & ( 3.76 ) & ( 3.93 ) & ( 4.10 ) & ( 1.749 ) & ( 4.43 ) & + 9.000 & ( 4.71 ) & ( 4.92 ) & ( 5.14 ) & ( 1.771 ) & ( 4.43 ) & + 10.000 & ( 5.90 ) & ( 6.17 ) & ( 6.45 ) & ( 1.794 ) & ( 4.43 ) & + comments : in total , 24 resonances in the range of e kev are taken into account , including two subthreshold resonances at kev ( ) and kev ( ) .the energy of the resonance at e kev is obtained from the corrected excitation energy e kev reported in chafa et al .for the e kev resonance , the weighted average values of the energies and strengths measured by chafa et al . , newton et al . and moazen et al . are adopted .for all resonances above e kev , the partial widths are adopted from the r - matrix analysis ( see tab . 3 of kieser et al .two - level interferences between the resonances at kev and kev , and between the resonances at kev and kev , are explicitly taken into account . since the signs of the interferences are unknown , the signs are sampled randomly using a binary probability density function ( see sec 4.4 in paper i ) .+ & & & & & & + + & & & & & & + 0.010 & 9.63 & 1.75 & 3.16 & -5.240 & 5.88 & 4.90 + 0.011 & 7.72 & 1.35 & 2.38 & -5.035 & 5.57 & 9.17 + 0.012 & 4.68 & 7.89 & 1.34 & -4.858 & 5.21 & 2.31 + 0.013 & 2.37 & 3.71 & 6.11 & -4.702 & 4.74 & 5.17 + 0.014 & 1.01 & 1.51 & 2.38 & -4.562 & 4.28 & 6.99 + 0.015 & 3.87 & 5.49 & 8.19 & -4.432 & 3.81 & 7.68 + 0.016 & 1.34 & 1.83 & 2.62 & -4.312 & 3.39 & 8.60 + 0.018 & 1.27 & 1.67 & 2.24 & -4.092 & 2.86 & 3.69 + 0.020 & 9.38 & 1.20 & 1.56 & -3.896 & 2.54 & 2.34 + 0.025 & 5.71 & 7.15 & 9.16 & -3.486 & 2.42 & 5.14 + 0.030 & 1.42 & 1.79 & 2.29 & -3.165 & 2.40 & 1.98 + 0.040 & 1.77 & 2.21 & 2.95 & -2.677 & 3.90 & 2.13 + 0.050 & 1.08 & 1.35 & 1.78 & -2.263 & 4.85 & 3.90 + 0.060 & 9.32 & 1.23 & 1.67 & -1.818 & 3.46 & 4.55 + 0.070 & 3.46 & 4.60 & 6.22 & -1.459 & 2.95 & 6.30 + 0.080 & 5.34 & 7.11 & 9.54 & -1.185 & 2.89 & 2.20 + 0.090 & 4.44 & 5.91 & 7.90 & -9.735 & 2.90 & 2.15 + 0.100 & 2.38 & 3.17 & 4.24 & -8.054 & 2.90 & 2.36 + 0.110 & 9.32 & 1.24 & 1.66 & -6.691 & 2.90 & 2.52 + 0.120 & 2.87 & 3.82 & 5.10 & -5.566 & 2.91 & 2.62 + 0.130 & 7.36 & 9.80 & 1.31 & -4.623 & 2.91 & 2.72 + 0.140 & 1.64 & 2.18 & 2.91 & -3.824 & 2.91 & 2.77 + 0.150 & 3.26 & 4.33 & 5.79 & -3.138 & 2.91 & 2.81 + 0.160 & 5.90 & 7.85 & 1.05 & -2.543 & 2.91 & 2.83 + 0.180 & 1.56 & 2.08 & 2.78 & -1.568 & 2.91 & 2.84 + 0.200 & 3.36 & 4.46 & 5.96 & -8.048 & 2.91 & 2.85 + 0.250 & 1.26 & 1.68 & 2.25 & 5.206 & 2.90 & 2.84 + 0.300 & 2.92 & 3.87 & 5.17 & 1.356 & 2.89 & 2.76 + 0.350 & 5.14 & 6.80 & 9.07 & 1.920 & 2.88 & 2.63 + 0.400 & 7.68 & 1.02 & 1.35 & 2.321 & 2.86 & 2.48 + 0.450 & 1.03 & 1.37 & 1.81 & 2.617 & 2.82 & 2.33 + 0.500 & 1.31 & 1.72 & 2.27 & 2.846 & 2.78 & 2.11 + 0.600 & 1.87 & 2.42 & 3.15 & 3.188 & 2.63 & 3.49 + 0.700 & 2.54 & 3.20 & 4.08 & 3.471 & 2.38 & 6.04 + 0.800 & 3.52 & 4.28 & 5.33 & 3.766 & 2.10 & 1.45 + 0.900 & 5.02 & 6.00 & 7.25 & 4.101 & 1.85 & 2.13 + 1.000 & 7.33 & 8.59 & 1.02 & 4.465 & 1.73 & 8.19 + 1.250 & 1.81 & 2.10 & 2.50 & 5.362 & 1.70 & 1.51 + 1.500 & 3.76 & 4.34 & 5.14 & 6.087 & 1.66 & 1.26 + 1.750 & 6.53 & 7.52 & 8.81 & 6.632 & 1.60 & 8.37 + 
2.000 & 9.88 & 1.14 & 1.33 & 7.044 & 1.55 & 4.47 + 2.500 & 1.75 & 2.00 & 2.30 & 7.604 & 1.43 & 2.66 + 3.000 & 2.49 & 2.84 & 3.25 & 7.953 & 1.37 & 1.79 + 3.500 & 3.17 & 3.58 & 4.07 & 8.186 & 1.29 & 1.01 + 4.000 & 3.74 & 4.21 & 4.76 & 8.348 & 1.23 & 7.40 + 5.000 & 4.65 & 5.20 & 5.85 & 8.559 & 1.17 & 3.63 + 6.000 & ( 6.16 ) & ( 6.91 ) & ( 7.76 ) & ( 8.841 ) & ( 1.15 ) & + 7.000 & ( 8.59 ) & ( 9.64 ) & ( 1.08 ) & ( 9.174 ) & ( 1.15 ) & + 8.000 & ( 1.12 ) & ( 1.25 ) & ( 1.41 ) & ( 9.436 ) & ( 1.15 ) & + 9.000 & ( 1.38 ) & ( 1.55 ) & ( 1.74 ) & ( 9.648 ) & ( 1.15 ) & + 10.000 & ( 1.65 ) & ( 1.85 ) & ( 2.07 ) & ( 9.823 ) & ( 1.15 ) & + comments : resonance energies are calculated from excitation energies and the q - value , except when a more precise value of the resonance energy was measured directly .resonance strengths have been measured directly for 19 resonances at energies of e kev by wiescher et al . and vogelaar et al . . at variance with angulo et al . , we use published and derived values of partial widths in order to integrate broad resonance contributions numerically .additional data were provided by + studies and by other experiments . for e kev , the proton partial width is deduced from indirect measurements ; the ( , d ) transfer experiment by champagne and pitt finds a value of = 2.0 ev , while a recent study using the trojan horse method obtains = ( 3.1 ev . for the radiative and total widthwe adopt the values quoted in wiescher et al . as a private communication from k. allen ( ev , kev ) ; a 40% uncertainty is assumed for these values . for e kev , the proton partial width is calculated from the measured ( p, ) resonance strength and the derived value agrees with the result of the recent trojan horse study . the radiative width , ev , is again adopted from ref . ( private communication from k. allen ) , while an upper limit of kev is given in ref .for e kev , the resonance energy , strength , total width and proton partial width ( deduced from ) have all been measured precisely .the direct capture s - factor is adopted from wiescher et al . . 
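before the comments continue with the high-temperature extrapolation, it may help to recall how measured energies and strengths of narrow resonances enter the rate: through the standard narrow-resonance sum sketched below. the constants are the usual textbook ones; the reduced mass and the two resonances in the example are purely illustrative, not values from this evaluation, which in addition integrates broad resonances numerically and samples all inputs:

import math

def narrow_resonance_rate(t9, mu_amu, resonances):
    """N_A<sigma v> in cm^3 mol^-1 s^-1 from a sum over narrow resonances.

    t9         : temperature in GK
    mu_amu     : reduced mass in atomic mass units
    resonances : iterable of (E_r, omega_gamma), both in MeV
    Standard narrow-resonance expression; an orientation sketch only.
    """
    prefactor = 1.5399e11 / (mu_amu * t9) ** 1.5
    return prefactor * sum(wg * math.exp(-11.605 * er / t9)
                           for er, wg in resonances)

# purely illustrative: two invented resonances at 90 and 143 keV
print(narrow_resonance_rate(0.1, 0.947, [(0.090, 1.0e-9), (0.143, 2.0e-8)]))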
above the rate is extrapolated using hauser - feshbach results .+ & & & & & & + + & & & & & & + 0.010 & 1.49 & 1.93 & 2.49 & -4.540 & 2.57 & 3.13 + 0.011 & 1.14 & 1.52 & 1.99 & -4.334 & 2.78 & 1.34 + 0.012 & 6.64 & 9.00 & 1.19 & -4.156 & 3.00 & 2.63 + 0.013 & 3.11 & 4.35 & 5.94 & -3.999 & 3.22 & 4.75 + 0.014 & 1.21 & 1.80 & 2.50 & -3.858 & 3.59 & 8.34 + 0.015 & 4.21 & 6.60 & 9.72 & -3.728 & 4.05 & 9.10 + 0.016 & 1.33 & 2.24 & 3.42 & -3.607 & 4.55 & 1.29 + 0.018 & 1.03 & 2.01 & 3.39 & -3.389 & 5.52 & 2.84 + 0.020 & 6.70 & 1.52 & 2.61 & -3.192 & 6.36 & 4.55 + 0.025 & 3.24 & 9.17 & 1.71 & -2.787 & 7.71 & 6.73 + 0.030 & 6.99 & 2.34 & 4.45 & -2.469 & 8.40 & 8.62 + 0.040 & 7.54 & 2.58 & 5.05 & -1.997 & 8.64 & 8.18 + 0.050 & 3.35 & 9.04 & 1.63 & -1.636 & 7.05 & 7.57 + 0.060 & 2.21 & 2.93 & 3.92 & -1.273 & 2.65 & 4.55 + 0.070 & 7.71 & 8.54 & 9.45 & -9.367 & 1.00 & 1.28 + 0.080 & 1.16 & 1.25 & 1.35 & -6.682 & 7.66 & 2.74 + 0.090 & 9.57 & 1.03 & 1.10 & -4.578 & 7.32 & 2.54 + 0.100 & 5.12 & 5.49 & 5.90 & -2.902 & 7.25 & 3.28 + 0.110 & 1.99 & 2.14 & 2.30 & -1.541 & 7.27 & 3.45 + 0.120 & 6.13 & 6.58 & 7.07 & -4.176 & 7.27 & 3.07 + 0.130 & 1.57 & 1.69 & 1.81 & 5.248 & 7.24 & 2.60 + 0.140 & 3.50 & 3.76 & 4.04 & 1.325 & 7.25 & 2.85 + 0.150 & 6.95 & 7.47 & 8.02 & 2.011 & 7.26 & 2.46 + 0.160 & 1.26 & 1.35 & 1.45 & 2.605 & 7.24 & 3.19 + 0.180 & 3.35 & 3.59 & 3.86 & 3.582 & 7.26 & 3.17 + 0.200 & 7.21 & 7.72 & 8.30 & 4.348 & 7.26 & 3.99 + 0.250 & 2.74 & 2.95 & 3.17 & 5.685 & 7.32 & 2.61 + 0.300 & 6.43 & 6.96 & 7.53 & 6.545 & 7.96 & 3.09 + 0.350 & 1.16 & 1.29 & 1.43 & 7.162 & 1.01 & 1.24 + 0.400 & 1.83 & 2.10 & 2.45 & 7.656 & 1.41 & 1.21 + 0.450 & 2.69 & 3.36 & 4.21 & 8.123 & 2.08 & 2.42 + 0.500 & 4.02 & 5.55 & 7.44 & 8.611 & 2.77 & 3.39 + 0.600 & 1.11 & 1.69 & 2.35 & 9.703 & 3.48 & 3.71 + 0.700 & 3.49 & 5.33 & 7.03 & 1.083 & 3.30 & 4.49 + 0.800 & 9.42 & 1.42 & 1.77 & 1.179 & 3.04 & 6.05 + 0.900 & 2.20 & 3.17 & 3.87 & 1.261 & 2.71 & 6.39 + 1.000 & 4.30 & 6.17 & 7.43 & 1.327 & 2.66 & 7.66 + 1.250 & 1.50 & 2.07 & 2.45 & 1.448 & 2.44 & 7.56 + 1.500 & 3.33 & 4.64 & 5.53 & 1.529 & 2.48 & 7.58 + 1.750 & 5.80 & 8.11 & 9.78 & 1.585 & 2.52 & 6.26 + 2.000 & 8.54 & 1.21 & 1.49 & 1.626 & 2.65 & 5.62 + 2.500 & 1.43 & 2.04 & 2.57 & 1.679 & 2.74 & 5.02 + 3.000 & 1.97 & 2.71 & 3.63 & 1.710 & 2.85 & 4.24 + 3.500 & 2.39 & 3.32 & 4.50 & 1.731 & 2.90 & 5.14 + 4.000 & 2.70 & 3.87 & 5.23 & 1.745 & 3.03 & 6.57 + 5.000 & 3.27 & 4.51 & 6.31 & 1.763 & 3.00 & 8.61 + 6.000 & 3.72 & 5.24 & 7.11 & 1.776 & 2.95 & 1.08 + 7.000 & 4.20 & 5.41 & 7.74 & 1.785 & 2.75 & 1.27 + 8.000 & 4.80 & 6.17 & 8.41 & 1.796 & 2.53 & 1.43 + 9.000 & 5.50 & 6.66 & 9.10 & 1.807 & 2.27 & 1.54 + 10.000 & 6.26 & 7.68 & 9.84 & 1.817 & 2.04 & 1.46 + comments : up to a resonance energy of mev , the energies and partial widths are adopted from the same sources as for the (p, reaction ( see comments of tab .[ tab : o18pgresults ] ) . at higher energies ,partial widths have been measured up to mev .a broad resonance near kev , which has not been detected in the ( p, ) channel , was observed by yagi , mak et al . and lorentz - wirzba et al .these measurements are in agreement regarding the proton width , but not for the -particle width ( = 317 kev , 150 kev or 90 kev ) .thus we adopt a value of kev .the effects of the interference between this resonance and the broad 1/2 resonance at e kev are included in our results .we do not introduce any nonresonant rate contribution as in lorentz - wirzba et al . 
or mak et al .presumably this contribution originates from the tails of higher - lying broad resonances that are already taken into account since we numerically integrate the partial rates of all 70 resonances .+ & & & & & & + + & & & & & & + 0.010 & 5.80 & 2.42 & 5.99 & -1.421 & 1.40 & 1.06 + 0.011 & 2.59 & 8.08 & 1.88 & -1.363 & 1.27 & 1.07 + 0.012 & 3.26 & 1.17 & 2.77 & -1.314 & 1.46 & 1.65 + 0.013 & 1.54 & 8.43 & 2.16 & -1.272 & 1.77 & 1.91 + 0.014 & 3.56 & 3.29 & 1.00 & -1.237 & 2.08 & 1.76 + 0.015 & 5.24 & 7.53 & 3.03 & -1.206 & 2.37 & 1.46 + 0.016 & 5.44 & 1.12 & 6.05 & -1.180 & 2.61 & 1.17 + 0.018 & 2.68 & 9.39 & 8.97 & -1.135 & 2.94 & 7.18 + 0.020 & 6.15 & 3.22 & 4.71 & -1.098 & 3.07 & 4.35 + 0.025 & 2.66 & 2.66 & 5.71 & -1.023 & 2.46 & 6.74 + 0.030 & 2.90 & 2.89 & 1.98 & -9.345 & 1.99 & 8.28 + 0.040 & 1.94 & 2.72 & 1.42 & -7.773 & 2.28 & 8.10 + 0.050 & 3.97 & 4.99 & 2.11 & -6.799 & 2.22 & 1.18 + 0.060 & 2.66 & 3.15 & 1.29 & -6.154 & 2.20 & 1.23 + 0.070 & 2.61 & 2.98 & 1.29 & -5.696 & 2.18 & 1.11 + 0.080 & 8.49 & 8.84 & 3.98 & -5.343 & 1.93 & 6.32 + 0.090 & 3.81 & 1.69 & 6.26 & -5.022 & 1.35 & 8.26 + 0.100 & 2.08 & 5.25 & 1.30 & -4.671 & 9.25 & 1.41 + 0.110 & 8.09 & 1.91 & 4.80 & -4.308 & 9.00 & 8.58 + 0.120 & 2.17 & 5.14 & 1.27 & -3.979 & 8.86 & 3.19 + 0.130 & 3.85 & 8.74 & 2.04 & -3.696 & 8.45 & 2.98 + 0.140 & 4.58 & 9.94 & 2.23 & -3.453 & 8.01 & 3.17 + 0.150 & 3.88 & 8.17 & 1.76 & -3.243 & 7.60 & 3.64 + 0.160 & 2.53 & 5.15 & 1.07 & -3.059 & 7.21 & 4.85 + 0.180 & 5.95 & 1.11 & 2.14 & -2.751 & 6.42 & 1.48 + 0.200 & 7.88 & 1.35 & 2.41 & -2.500 & 5.56 & 5.03 + 0.250 & 1.18 & 1.57 & 2.24 & -2.023 & 3.28 & 3.37 + 0.300 & 4.70 & 5.35 & 6.41 & -1.672 & 1.68 & 5.55 + 0.350 & 7.42 & 8.01 & 8.81 & -1.403 & 9.31 & 2.85 + 0.400 & 6.12 & 6.51 & 6.93 & -1.194 & 6.50 & 5.50 + 0.450 & 3.21 & 3.38 & 3.57 & -1.029 & 5.51 & 1.06 + 0.500 & 1.21 & 1.27 & 1.34 & -8.968 & 5.15 & 4.58 + 0.600 & 8.81 & 9.24 & 9.72 & -6.986 & 4.96 & 2.97 + 0.700 & 3.58 & 3.76 & 3.95 & -5.584 & 4.95 & 1.98 + 0.800 & 1.01 & 1.06 & 1.11 & -4.547 & 4.97 & 2.40 + 0.900 & 2.24 & 2.35 & 2.47 & -3.751 & 4.98 & 2.48 + 1.000 & 4.21 & 4.42 & 4.64 & -3.119 & 4.96 & 2.37 + 1.250 & 1.34 & 1.40 & 1.47 & -1.963 & 4.74 & 3.12 + 1.500 & 3.17 & 3.33 & 3.49 & -1.101 & 4.75 & 2.83 + 1.750 & 6.70 & 7.13 & 7.66 & -3.331 & 6.86 & 1.01 + 2.000 & ( 1.32 ) & ( 1.48 ) & ( 1.66 ) & ( 3.893 ) & ( 1.15 ) & + 2.500 & ( 6.80 ) & ( 7.62 ) & ( 8.55 ) & ( 2.031 ) & ( 1.15 ) & + 3.000 & ( 2.45 ) & ( 2.75 ) & ( 3.08 ) & ( 3.314 ) & ( 1.15 ) & + 3.500 & ( 6.84 ) & ( 7.68 ) & ( 8.61 ) & ( 4.341 ) & ( 1.15 ) & + 4.000 & ( 1.58 ) & ( 1.77 ) & ( 1.99 ) & ( 5.177 ) & ( 1.15 ) & + 5.000 & ( 5.64 ) & ( 6.32 ) & ( 7.09 ) & ( 6.449 ) & ( 1.15 ) & + 6.000 & ( 1.40 ) & ( 1.57 ) & ( 1.76 ) & ( 7.360 ) & ( 1.15 ) & + 7.000 & ( 2.74 ) & ( 3.07 ) & ( 3.45 ) & ( 8.031 ) & ( 1.15 ) & + 8.000 & ( 4.55 ) & ( 5.10 ) & ( 5.72 ) & ( 8.537 ) & ( 1.15 ) & + 9.000 & ( 6.71 ) & ( 7.53 ) & ( 8.44 ) & ( 8.926 ) & ( 1.15 ) & + 10.000 & ( 9.90 ) & ( 1.11 ) & ( 1.25 ) & ( 9.315 ) & ( 1.15 ) & + comments : a total of 21 resonances are taken into account for calculating the reaction rate . for resonances in the range of e kev , resonance energies are adopted from either endt or vogelaar et al . , while resonance strengths are adopted from ref . 
and dababneh et al .note that for the very weakly observed e kev resonance we adopt a strength uncertainty ( 50% ) that is larger than the originally published value .for the e kev resonance , the -particle partial width is calculated from the measured resonance strength , while the -ray partial width is equal to the total width ( see berg and wiehard ) . for resonances in the e kev region ,the energies and strengths are adopted from trautvetter et al .1978 . for the e kev resonance , the total widthis known ( kev ) , but not enough information is available to derive values for and .thus we do not take the tail of this broad resonance into account .for the two low - energy resonances at e and 174 kev , we use the results of the ( , d) work of giesen et al .for e kev we find a 2 assignment more convincing than the originally proposed 3 assignment ( see the angular distribution shown in fig . ) ; we adopt a factor of 2 for the uncertainty of the measured -particle spectroscopic factor .the level corresponding to e is only weakly populated in ref . and , in our opinion , even a transfer can not be excluded ( as can be seen by comparing the angular distributions for nearby levels ) .thus we treat the -particle partial width of this resonance as an upper limit , where an s - wave spectroscopic factor of can be estimated from the data of ref . ) . note that our assumptions differ from those in the original work of ref . , where a assignment was adopted ( in fact , our procedure is more consistent with giesen s thesis ) .the direct capture s - factor is adopted from trautvetter et al .it is interesting to note that vastly different results are obtained by buchmann , dauria and mccorquodale and descouvemont .for example , at gk , the calculated direct capture rate in refs . amounts to , , , respectively .the direct capture component and the low - energy tail of the broad e kev resonance are only expected to influence the total rate at very low temperatures of gk . 
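as mentioned above, the α-particle partial width of the weakly populated low-energy level is treated as an upper limit; in the monte carlo procedure such upper limits are sampled from a porter-thomas density truncated at the experimental bound (our reading of the prescription in paper i). a minimal sketch of that sampling step, with placeholder numbers:

import numpy as np

rng = np.random.default_rng(0)

def sample_reduced_width(theta2_mean, theta2_upper, n=100000):
    """Dimensionless reduced widths theta^2 drawn from a Porter-Thomas
    (chi-squared with one degree of freedom) density of mean theta2_mean,
    truncated at the experimental upper limit theta2_upper.
    Sketch of the upper-limit treatment; numbers are placeholders.
    """
    draws = theta2_mean * rng.chisquare(df=1, size=n)
    return draws[draws <= theta2_upper]

# placeholder mean and upper limit for the reduced alpha-particle width
theta2_samples = sample_reduced_width(theta2_mean=0.01, theta2_upper=0.05)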
for resonances above e mev ,see the comment section in paper iii .+ & & & & & & + + & & & & & & + 0.010 & 2.02 & 2.97 & 4.29 & -6.339 & 3.77 & 3.91 + 0.011 & 2.60 & 3.85 & 5.63 & -6.082 & 3.83 & 3.85 + 0.012 & 2.57 & 3.69 & 5.36 & -5.856 & 3.72 & 3.51 + 0.013 & 1.92 & 2.81 & 4.10 & -5.653 & 3.84 & 2.23 + 0.014 & 1.19 & 1.75 & 2.53 & -5.471 & 3.73 & 3.97 + 0.015 & 6.32 & 9.15 & 1.35 & -5.304 & 3.81 & 4.01 + 0.016 & 2.89 & 4.17 & 6.13 & -5.152 & 3.82 & 3.35 + 0.018 & 4.19 & 6.07 & 8.87 & -4.885 & 3.79 & 3.65 + 0.020 & 4.22 & 6.12 & 8.93 & -4.654 & 3.79 & 6.23 + 0.025 & 4.35 & 6.26 & 9.12 & -4.191 & 3.70 & 2.14 + 0.030 & 1.47 & 2.15 & 3.12 & -3.838 & 3.77 & 1.77 + 0.040 & 2.46 & 3.58 & 5.28 & -3.326 & 3.81 & 4.76 + 0.050 & 9.43 & 1.36 & 2.00 & -2.962 & 3.81 & 3.27 + 0.060 & 1.51 & 2.18 & 3.19 & -2.685 & 3.79 & 4.54 + 0.070 & 1.38 & 2.00 & 2.88 & -2.463 & 3.75 & 5.18 + 0.080 & 8.51 & 1.24 & 1.81 & -2.281 & 3.79 & 2.85 + 0.090 & 3.92 & 5.73 & 8.35 & -2.128 & 3.85 & 5.02 + 0.100 & 1.51 & 2.17 & 3.16 & -1.994 & 3.70 & 5.65 + 0.110 & 4.79 & 6.90 & 1.01 & -1.879 & 3.82 & 8.00 + 0.120 & 1.32 & 1.91 & 2.75 & -1.777 & 3.71 & 3.75 + 0.130 & 3.27 & 4.77 & 6.97 & -1.686 & 3.76 & 3.25 + 0.140 & 7.33 & 1.07 & 1.57 & -1.605 & 3.78 & 4.00 + 0.150 & 1.57 & 2.28 & 3.35 & -1.529 & 3.77 & 3.79 + 0.160 & 3.10 & 4.54 & 6.48 & -1.461 & 3.71 & 4.50 + 0.180 & 1.04 & 1.50 & 2.18 & -1.340 & 3.75 & 6.50 + 0.200 & 2.94 & 4.27 & 6.09 & -1.237 & 3.74 & 4.43 + 0.250 & 2.39 & 3.39 & 4.89 & -1.029 & 3.61 & 2.66 + 0.300 & 1.17 & 1.67 & 2.42 & -8.693 & 3.65 & 8.64 + 0.350 & 4.45 & 6.29 & 8.80 & -7.371 & 3.45 & 3.41 + 0.400 & 1.47 & 1.99 & 2.74 & -6.212 & 3.11 & 9.85 + 0.450 & 4.50 & 5.98 & 8.02 & -5.115 & 2.88 & 9.46 + 0.500 & 1.23 & 1.61 & 2.13 & -4.123 & 2.73 & 1.12 + 0.600 & 6.46 & 8.53 & 1.14 & -2.454 & 2.91 & 2.15 + 0.700 & 2.29 & 3.05 & 4.14 & -1.175 & 3.05 & 3.70 + 0.800 & 6.04 & 8.09 & 1.11 & -1.983 & 3.12 & 4.04 + 0.900 & 1.29 & 1.74 & 2.39 & 5.641 & 3.17 & 3.69 + 1.000 & 2.35 & 3.18 & 4.37 & 1.168 & 3.18 & 3.77 + 1.250 & 6.97 & 9.36 & 1.28 & 2.246 & 3.12 & 3.81 + 1.500 & 1.43 & 1.90 & 2.58 & 2.954 & 3.03 & 3.67 + 1.750 & 2.39 & 3.18 & 4.22 & 3.465 & 2.90 & 2.70 + 2.000 & 3.61 & 4.70 & 6.18 & 3.858 & 2.75 & 3.92 + 2.500 & 6.57 & 8.43 & 1.09 & 4.442 & 2.58 & 2.15 + 3.000 & 1.02 & 1.30 & 1.69 & 4.877 & 2.48 & 1.85 + 3.500 & 1.45 & 1.83 & 2.37 & 5.221 & 2.50 & 2.33 + 4.000 & 1.90 & 2.43 & 3.14 & 5.499 & 2.56 & 2.53 + 5.000 & 2.86 & 3.70 & 4.85 & 5.921 & 2.72 & 2.60 + 6.000 & 3.74 & 4.93 & 6.67 & 6.213 & 2.88 & 2.89 + 7.000 & 4.53 & 6.03 & 8.09 & 6.408 & 2.93 & 1.94 + 8.000 & 5.01 & 6.67 & 9.07 & 6.514 & 3.00 & 2.32 + 9.000 & 5.21 & 7.00 & 9.69 & 6.562 & 3.04 & 3.56 + 10.000 & 5.21 & 6.94 & 9.45 & 6.553 & 3.03 & 2.81 + comments : in total , 7 resonances in the range of e=596 - 2226 kev are taken into account .resonance energies are calculated from the excitation energies measured by hahn et al . and park et al . , except for the expected s - wave resonance at e=600 kev for which the resonance energy has been measured in elastic scattering studies ( bardayan et al .proton partial widths are either adopted from experiment ( refs . ) or are calculated by using the measured spectroscopic factors of the mirror states ( li et al .gamma - ray partial widths are adopted either from the measured mean lifetimes of the mirror states ( tilley et al . ) , and corrected for the e energy dependence , or are calculated from the shell model . for all resonances considered , the proton partial width exceeds the -ray partial width . 
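since the proton partial width exceeds the γ-ray partial width for all of these resonances, each resonance strength reduces essentially to the spin statistical factor times the γ-ray width; a minimal sketch of that combination (the spins and widths in the example are placeholders, not adopted values):

def resonance_strength(j_r, j_p, j_t, gamma_p, gamma_g):
    """omega*gamma = (2J_r+1)/[(2j_p+1)(2j_t+1)] * Gamma_p*Gamma_gamma/Gamma.

    With Gamma_p >> Gamma_gamma this tends to omega * Gamma_gamma.
    Spins and widths below are placeholders, not adopted values.
    """
    omega = (2.0 * j_r + 1.0) / ((2.0 * j_p + 1.0) * (2.0 * j_t + 1.0))
    return omega * gamma_p * gamma_g / (gamma_p + gamma_g)

# placeholder: J=3/2 resonance, proton (1/2) on a spin-5/2 target, widths in MeV
wg = resonance_strength(1.5, 0.5, 2.5, gamma_p=1.0e-3, gamma_g=1.0e-6)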
for the broad resonances at e=596 , 600 , 665 and 1182kev the reaction rate contributions are found from a numerical integration .the direct capture into 5 bound states is computed by using measured spectroscopic factors of mirror states .the resulting total direct capture s - factor below e=2.5 mev is approximately constant and amounts to s(e)=2.4 mev b. below t=0.5 gk the direct capture dominates the total rates .our uncertainties in the total rate are significantly lager than the unrealistic values ( % below t=0.5 gk ) reported by ref .our reaction rate is in much better agreement with the theoretical calculations reported in dufour and descouvemont compared to those described in chatterjee , okolowicz and ploszajczak .+ & & & & & & + + & & & & & & + 0.010 & 4.80 & 2.18 & 2.07 & -5.653 & 1.73 & 4.43 + 0.011 & 8.74 & 5.25 & 4.09 & -5.352 & 1.73 & 3.05 + 0.012 & 1.22 & 9.69 & 5.18 & -5.081 & 1.69 & 3.07 + 0.013 & 1.69 & 1.21 & 4.63 & -4.840 & 1.56 & 3.98 + 0.014 & 1.69 & 1.01 & 3.18 & -4.629 & 1.42 & 4.68 + 0.015 & 1.38 & 6.52 & 1.85 & -4.440 & 1.33 & 5.62 + 0.016 & 9.29 & 3.42 & 8.23 & -4.270 & 1.15 & 4.91 + 0.018 & 1.98 & 5.59 & 1.20 & -3.986 & 1.01 & 4.77 + 0.020 & 2.11 & 5.47 & 1.19 & -3.756 & 9.94 & 4.98 + 0.025 & 1.06 & 3.92 & 9.37 & -3.338 & 1.21 & 8.01 + 0.030 & 1.29 & 6.84 & 2.00 & -3.056 & 1.38 & 4.66 + 0.040 & 4.24 & 2.37 & 1.03 & -2.685 & 1.48 & 1.36 + 0.050 & 7.15 & 2.57 & 1.16 & -2.429 & 1.33 & 1.35 + 0.060 & 6.36 & 1.80 & 6.95 & -2.231 & 1.18 & 2.13 + 0.070 & 3.86 & 8.92 & 2.73 & -2.073 & 1.02 & 2.33 + 0.080 & 1.77 & 3.88 & 9.87 & -1.929 & 9.34 & 1.98 + 0.090 & 6.23 & 1.33 & 3.05 & -1.808 & 8.81 & 1.89 + 0.100 & 1.87 & 3.93 & 8.83 & -1.701 & 8.58 & 1.13 + 0.110 & 5.01 & 1.08 & 2.40 & -1.601 & 8.97 & 2.01 + 0.120 & 1.16 & 2.50 & 5.54 & -1.515 & 9.08 & 3.21 + 0.130 & 2.49 & 5.63 & 1.27 & -1.435 & 9.24 & 2.52 + 0.140 & 5.17 & 1.15 & 2.54 & -1.364 & 9.37 & 3.30 + 0.150 & 1.01 & 2.21 & 4.85 & -1.297 & 9.24 & 3.38 + 0.160 & 1.89 & 4.15 & 9.44 & -1.233 & 9.30 & 3.53 + 0.180 & 6.53 & 1.34 & 2.86 & -1.114 & 9.02 & 6.60 + 0.200 & 2.34 & 4.35 & 9.03 & -9.943 & 8.44 & 8.13 + 0.250 & 3.42 & 5.94 & 1.11 & -7.355 & 7.23 & 6.65 + 0.300 & 2.62 & 4.52 & 8.16 & -5.348 & 6.58 & 3.56 + 0.350 & 1.18 & 2.00 & 3.57 & -3.864 & 6.38 & 2.77 + 0.400 & 3.65 & 6.46 & 1.15 & -2.708 & 6.44 & 2.25 + 0.450 & 9.34 & 1.59 & 2.84 & -1.808 & 5.94 & 9.03 + 0.500 & 1.96 & 3.37 & 5.96 & -1.054 & 6.21 & 2.04 + 0.600 & 6.59 & 1.14 & 2.06 & 1.633 & 6.15 & 1.43 + 0.700 & ( 1.46 ) & ( 2.77 ) & ( 5.23 ) & ( 1.018 ) & ( 6.37 ) & + 0.800 & ( 2.85 ) & ( 5.39 ) & ( 1.02 ) & ( 1.684 ) & ( 6.37 ) & + 0.900 & ( 4.88 ) & ( 9.22 ) & ( 1.74 ) & ( 2.221 ) & ( 6.37 ) & + 1.000 & ( 7.62 ) & ( 1.44 ) & ( 2.72 ) & ( 2.667 ) & ( 6.37 ) & + 1.250 & ( 1.79 ) & ( 3.39 ) & ( 6.40 ) & ( 3.522 ) & ( 6.37 ) & + 1.500 & ( 3.27 ) & ( 6.18 ) & ( 1.17 ) & ( 4.123 ) & ( 6.37 ) & + 1.750 & ( 5.17 ) & ( 9.77 ) & ( 1.85 ) & ( 4.582 ) & ( 6.37 ) & + 2.000 & ( 7.41 ) & ( 1.40 ) & ( 2.65 ) & ( 4.941 ) & ( 6.37 ) & + 2.500 & ( 1.25 ) & ( 2.36 ) & ( 4.45 ) & ( 5.462 ) & ( 6.37 ) & + 3.000 & ( 1.78 ) & ( 3.37 ) & ( 6.36 ) & ( 5.819 ) & ( 6.37 ) & + 3.500 & ( 2.32 ) & ( 4.38 ) & ( 8.27 ) & ( 6.082 ) & ( 6.37 ) & + 4.000 & ( 2.84 ) & ( 5.37 ) & ( 1.02 ) & ( 6.287 ) & ( 6.37 ) & + 5.000 & ( 3.90 ) & ( 7.36 ) & ( 1.39 ) & ( 6.602 ) & ( 6.37 ) & + 6.000 & ( 4.98 ) & ( 9.41 ) & ( 1.78 ) & ( 6.846 ) & ( 6.37 ) & + 7.000 & ( 6.08 ) & ( 1.15 ) & ( 2.17 ) & ( 7.047 ) & ( 6.37 ) & + 8.000 & ( 7.17 ) & ( 1.36 ) & ( 2.56 ) & ( 7.212 ) & ( 6.37 ) & + 9.000 & ( 8.21 ) & 
( 1.55 ) & ( 2.93 ) & ( 7.347 ) & ( 6.37 ) & + 10.000 & ( 9.40 ) & ( 1.78 ) & ( 3.36 ) & ( 7.482 ) & ( 6.37 ) & + comments : the same data as for the (p,) reaction are used as input .the radiative widths are deduced from analog levels when known ; otherwise , we calculate by using and , which are derived from the statistics of 25 radiative widths in assuming a lognormal distribution .the direct capture contribution , based on measured +p spectroscopic factors , is adopted from utku et al . , where we assume a factor uncertainty of an order of magnitude .+ & & & & & & + + & & & & & & + 0.010 & 7.75 & 2.77 & 2.55 & -4.933 & 1.57 & 7.45 + 0.011 & 1.29 & 6.79 & 5.03 & -4.635 & 1.59 & 4.67 + 0.012 & 1.79 & 1.28 & 6.21 & -4.365 & 1.55 & 5.83 + 0.013 & 2.51 & 1.59 & 5.20 & -4.123 & 1.41 & 7.38 + 0.014 & 2.63 & 1.47 & 3.48 & -3.909 & 1.26 & 1.15 + 0.015 & 2.05 & 9.42 & 1.88 & -3.723 & 1.14 & 1.40 + 0.016 & 1.50 & 4.99 & 8.82 & -3.551 & 9.76 & 1.57 + 0.018 & 3.19 & 7.80 & 1.22 & -3.269 & 8.07 & 1.67 + 0.020 & 3.52 & 7.72 & 1.17 & -3.038 & 7.77 & 1.85 + 0.025 & 1.70 & 5.73 & 9.43 & -2.621 & 1.02 & 2.24 + 0.030 & 1.97 & 9.81 & 2.30 & -2.335 & 1.24 & 1.16 + 0.040 & 6.96 & 3.19 & 1.25 & -1.962 & 1.31 & 1.74 + 0.050 & 1.16 & 3.54 & 1.50 & -1.705 & 1.20 & 1.58 + 0.060 & 1.03 & 2.51 & 7.76 & -1.509 & 1.00 & 1.96 + 0.070 & 6.17 & 1.51 & 3.21 & -1.340 & 8.34 & 1.15 + 0.080 & 2.81 & 6.94 & 1.24 & -1.196 & 7.43 & 1.57 + 0.090 & 1.05 & 2.63 & 4.45 & -1.068 & 6.88 & 4.54 + 0.100 & 3.27 & 8.26 & 1.40 & -9.548 & 6.79 & 7.90 + 0.110 & 9.13 & 2.42 & 3.97 & -8.514 & 6.78 & 1.08 + 0.120 & 2.26 & 5.92 & 1.01 & -7.605 & 6.80 & 1.34 + 0.130 & 5.22 & 1.23 & 2.32 & -6.782 & 6.78 & 1.49 + 0.140 & 1.15 & 3.21 & 5.03 & -5.986 & 6.74 & 1.82 + 0.150 & 2.33 & 6.14 & 1.01 & -5.296 & 6.65 & 1.86 + 0.160 & 4.50 & 1.09 & 1.92 & -4.642 & 6.56 & 2.04 + 0.180 & 1.65 & 3.97 & 6.17 & -3.415 & 6.06 & 2.11 + 0.200 & 5.40 & 1.17 & 1.77 & -2.298 & 5.39 & 2.09 + 0.250 & 7.49 & 1.21 & 1.69 & 1.374 & 3.71 & 9.65 + 0.300 & 5.82 & 8.28 & 1.07 & 2.080 & 2.85 & 3.11 + 0.350 & 2.83 & 3.76 & 4.71 & 3.606 & 2.44 & 1.05 + 0.400 & 1.07 & 1.35 & 1.63 & 4.889 & 2.04 & 6.09 + 0.450 & ( 3.22 ) & ( 3.84 ) & ( 4.57 ) & ( 5.951 ) & ( 1.75 ) & + 0.500 & ( 7.06 ) & ( 8.41 ) & ( 1.00 ) & ( 6.734 ) & ( 1.75 ) & + 0.600 & ( 2.41 ) & ( 2.87 ) & ( 3.42 ) & ( 7.961 ) & ( 1.75 ) & + 0.700 & ( 6.11 ) & ( 7.28 ) & ( 8.67 ) & ( 8.892 ) & ( 1.75 ) & + 0.800 & ( 1.28 ) & ( 1.52 ) & ( 1.81 ) & ( 9.631 ) & ( 1.75 ) & + 0.900 & ( 2.35 ) & ( 2.80 ) & ( 3.33 ) & ( 1.024 ) & ( 1.75 ) & + 1.000 & ( 3.91 ) & ( 4.66 ) & ( 5.55 ) & ( 1.075 ) & ( 1.75 ) & + 1.250 & ( 1.06 ) & ( 1.26 ) & ( 1.50 ) & ( 1.175 ) & ( 1.75 ) & + 1.500 & ( 2.18 ) & ( 2.60 ) & ( 3.10 ) & ( 1.247 ) & ( 1.75 ) & + 1.750 & ( 3.85 ) & ( 4.59 ) & ( 5.46 ) & ( 1.304 ) & ( 1.75 ) & + 2.000 & ( 6.11 ) & ( 7.27 ) & ( 8.66 ) & ( 1.350 ) & ( 1.75 ) & + 2.500 & ( 1.23 ) & ( 1.47 ) & ( 1.75 ) & ( 1.420 ) & ( 1.75 ) & + 3.000 & ( 2.05 ) & ( 2.45 ) & ( 2.91 ) & ( 1.471 ) & ( 1.75 ) & + 3.500 & ( 3.00 ) & ( 3.58 ) & ( 4.26 ) & ( 1.509 ) & ( 1.75 ) & + 4.000 & ( 4.02 ) & ( 4.79 ) & ( 5.71 ) & ( 1.538 ) & ( 1.75 ) & + 5.000 & ( 6.07 ) & ( 7.23 ) & ( 8.61 ) & ( 1.579 ) & ( 1.75 ) & + 6.000 & ( 7.95 ) & ( 9.47 ) & ( 1.13 ) & ( 1.606 ) & ( 1.75 ) & + 7.000 & ( 9.59 ) & ( 1.14 ) & ( 1.36 ) & ( 1.625 ) & ( 1.75 ) & + 8.000 & ( 1.10 ) & ( 1.30 ) & ( 1.55 ) & ( 1.638 ) & ( 1.75 ) & + 9.000 & ( 1.20 ) & ( 1.43 ) & ( 1.71 ) & ( 1.648 ) & ( 1.75 ) & + 10.000 & ( 1.32 ) & ( 1.57 ) & ( 1.87 ) & ( 1.657 ) & ( 1.75 ) & + comments : 
compilations of data or summary tables for +p can be found in refs. however, whenever possible, we prefer to consult the original papers. most resonance energies are calculated from excitation energies as measured by utku et al., using a proton separation energy of 6411.2 ± 0.6 kev. for the lowest-energy resonances, when the analog assignments are known, we use the reanalyzed results of smotrich et al. from ( ) to estimate the α-particle partial widths. following ref., the analogs of the 6.419 and 6.449 mev levels in (corresponding to the 8 and 38 kev resonances) were assumed to be the states at 6.497 and 6.528 mev ( ). they were previously thought to possibly dominate the reaction rate at low temperatures; neutron spectroscopic factors were previously extracted from analog transfer reactions and used to calculate proton partial widths. this procedure now seems obsolete with the recent neutron and proton transfer experiment of adekola et al., who assign to the 8 kev resonance (corresponding to = 1/2 or 3/2) and observe a strong population of the subthreshold 6.290 mev level (e kev, = (1/2,3/2)). with these new experimental results, the analog assignments for the levels corresponding to the e, 8 and 38 kev resonances need to be reassessed. here we make the following assumptions: (i) the subthreshold 6.290 mev level has and is the analog of the 6.255 mev level in ; (ii) the e kev resonance has , corresponding to the 6.088 mev level in ; and (iii) the e kev resonance has , as previously assumed, and may thus interfere with the e kev resonance; the proton widths are adopted from adekola et al. needless to say, these assumptions are subject to caution since the spins and parities of these levels are not known unambiguously. also, there are many missing (unobserved) levels in compared to , which we prefer to ignore in the absence of experimental evidence. for the e kev resonance, the total width was measured by utku et al., while an upper limit for the proton width is adopted from kozub et al. (as for the e kev resonance). a lower limit of was obtained by visser et al. for the e kev (7/2) resonance, yielding ev. since this value exceeds the wigner limit, we use the latter result instead. two important resonances have been directly observed: the 3/2 resonance at 665 kev, for which we use the precise energy and partial widths measured by bardayan et al., and the 3/2 resonance at 330 kev. partial widths (or upper limits) for the next two resonances are extracted from refs. because of conflicting experimental data, the reported resonances above 900 kev are not included in our calculation of the reaction rate. for instance, the spin, parity and proton partial width given by bardayan et al. for the assumed 1009 kev resonance were shown to be incompatible, leading to an unrealistic spectroscopic factor. in addition, this resonance was not observed by murphy et al. the broad 1/2 resonance, predicted by dufour and descouvemont at 1.49 mev above the proton threshold and apparently observed recently, was also not detected by murphy et al. on the other hand, several resonances reported by ref. were not seen in previous work. accordingly, we match the reaction rate at 0.4 gk with the hauser-feshbach result obtained with the talys code. the contribution of the e kev resonance (with unknown spin) is negligible.
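the comments continue below with the bimodal rate density caused by the unknown interference sign between two of these resonances; in the monte carlo rates such a sign is sampled as a binary random variable (sec. 4.4 of paper i). the sketch below is schematic, using made-up magnitudes rather than this work's energy-dependent r-matrix amplitudes:

import numpy as np

rng = np.random.default_rng(1)

def interfering_term(s1, s2, n_samples=5000):
    """Combine two interfering contributions (e.g. single-level S-factor
    terms at a fixed energy) with a randomly sampled sign, as a stand-in
    for the binary sampling of unknown interference signs. Schematic:
    the real calculation interferes energy-dependent amplitudes of
    levels with the same J^pi."""
    signs = rng.choice([-1.0, 1.0], size=n_samples)
    return s1 + s2 + 2.0 * signs * np.sqrt(s1 * s2)

# made-up magnitudes; the spread of these samples propagates into the rate
samples = interfering_term(3.0e-3, 1.2e-3)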
on the contrary, the e kev subthreshold resonance could increase the low rate by a factor of 3 near 0.1 gk ( if the e and 665 kev resonances interfere destructively ) .note the bimodal rate probability density function near t=0.1 gk , which is caused by the unknown interference sign for these two resonances .+ & & & & & & + + & & & & & & + 0.010 & 1.71 & 2.46 & 3.57 & -7.048 & 3.65 & 3.03 + 0.011 & 2.69 & 3.86 & 5.61 & -6.772 & 3.67 & 6.62 + 0.012 & 3.11 & 4.49 & 6.51 & -6.527 & 3.67 & 3.19 + 0.013 & 2.79 & 4.00 & 5.72 & -6.309 & 3.69 & 4.09 + 0.014 & 1.95 & 2.83 & 4.10 & -6.112 & 3.68 & 1.32 + 0.015 & 1.19 & 1.70 & 2.48 & -5.933 & 3.67 & 4.21 + 0.016 & 6.03 & 8.71 & 1.25 & -5.770 & 3.70 & 2.13 + 0.018 & 1.09 & 1.56 & 2.24 & -5.482 & 3.65 & 2.45 + 0.020 & 1.31 & 1.90 & 2.70 & -5.233 & 3.66 & 3.07 + 0.025 & 1.93 & 2.76 & 4.01 & -4.734 & 3.69 & 7.81 + 0.030 & 8.77 & 1.24 & 1.78 & -4.353 & 3.63 & 8.34 + 0.040 & 2.18 & 3.11 & 4.46 & -3.800 & 3.65 & 6.60 + 0.050 & 1.10 & 1.57 & 2.28 & -3.408 & 3.64 & 3.84 + 0.060 & 2.23 & 3.17 & 4.55 & -3.108 & 3.61 & 5.09 + 0.070 & 2.40 & 3.42 & 4.98 & -2.870 & 3.68 & 9.06 + 0.080 & 1.72 & 2.46 & 3.56 & -2.673 & 3.63 & 2.86 + 0.090 & 9.07 & 1.29 & 1.87 & -2.507 & 3.62 & 6.07 + 0.100 & 3.74 & 5.40 & 7.74 & -2.364 & 3.62 & 3.33 + 0.110 & 1.32 & 1.86 & 2.67 & -2.240 & 3.64 & 9.47 + 0.120 & 4.03 & 5.70 & 8.14 & -2.128 & 3.57 & 6.00 + 0.130 & 1.07 & 1.52 & 2.20 & -2.030 & 3.64 & 7.64 + 0.140 & 2.58 & 3.70 & 5.34 & -1.941 & 3.65 & 2.62 + 0.150 & 5.82 & 8.36 & 1.20 & -1.860 & 3.63 & 2.32 + 0.160 & 1.24 & 1.76 & 2.51 & -1.785 & 3.52 & 8.45 + 0.180 & 4.92 & 6.86 & 9.76 & -1.649 & 3.43 & 4.16 + 0.200 & 1.92 & 2.71 & 3.80 & -1.512 & 3.52 & 1.53 + 0.250 & 5.56 & 9.63 & 1.81 & -1.151 & 5.87 & 7.00 + 0.300 & 1.05 & 2.01 & 3.98 & -8.492 & 6.59 & 1.16 + 0.350 & 9.74 & 1.90 & 3.69 & -6.266 & 6.62 & 4.38 + 0.400 & 5.20 & 9.97 & 1.92 & -4.603 & 6.52 & 3.54 + 0.450 & 1.89 & 3.59 & 6.80 & -3.323 & 6.42 & 3.14 + 0.500 & 5.23 & 9.87 & 1.85 & -2.312 & 6.31 & 3.03 + 0.600 & 2.37 & 4.37 & 8.05 & -8.204 & 6.10 & 3.85 + 0.700 & 6.92 & 1.25 & 2.25 & 2.254 & 5.88 & 6.48 + 0.800 & 1.54 & 2.69 & 4.77 & 1.001 & 5.65 & 1.23 + 0.900 & 2.89 & 4.88 & 8.47 & 1.600 & 5.40 & 2.15 + 1.000 & 4.78 & 7.86 & 1.33 & 2.080 & 5.14 & 3.34 + 1.250 & ( 1.23 ) & ( 1.86 ) & ( 2.99 ) & ( 2.950 ) & ( 4.54 ) & + 1.500 & ( 2.31 ) & ( 3.36 ) & ( 5.12 ) & ( 3.537 ) & ( 4.06 ) & + 1.750 & ( 3.65 ) & ( 5.13 ) & ( 7.50 ) & ( 3.957 ) & ( 3.70 ) & + 2.000 & ( 5.11 ) & ( 7.02 ) & ( 9.95 ) & ( 4.268 ) & ( 3.44 ) & + 2.500 & ( 7.96 ) & ( 1.07 ) & ( 1.46 ) & ( 4.683 ) & ( 3.14 ) & + 3.000 & ( 1.03 ) & ( 1.38 ) & ( 1.84 ) & ( 4.930 ) & ( 2.99 ) & + 3.500 & ( 1.21 ) & ( 1.60 ) & ( 2.13 ) & ( 5.082 ) & ( 2.92 ) & + 4.000 & ( 1.32 ) & ( 1.76 ) & ( 2.33 ) & ( 5.174 ) & ( 2.88 ) & + 5.000 & ( 1.45 ) & ( 1.92 ) & ( 2.53 ) & ( 5.261 ) & ( 2.85 ) & + 6.000 & ( 1.48 ) & ( 1.96 ) & ( 2.57 ) & ( 5.280 ) & ( 2.83 ) & + 7.000 & ( 1.45 ) & ( 1.93 ) & ( 2.54 ) & ( 5.263 ) & ( 2.83 ) & + 8.000 & ( 1.40 ) & ( 1.86 ) & ( 2.45 ) & ( 5.227 ) & ( 2.83 ) & + 9.000 & ( 1.33 ) & ( 1.77 ) & ( 2.34 ) & ( 5.181 ) & ( 2.83 ) & + 10.000 & ( 1.27 ) & ( 1.69 ) & ( 2.23 ) & ( 5.130 ) & ( 2.83 ) & + comments : because of the limited available spectroscopic information on , only four levels are considered for the calculation of the reaction rate .they are located at =2645 , 2849 , 3001 and 3086 kev .the higher lying states have well determined spins and parities , have assigned analogs in and , and their total widths ( =19.8 and 35.9 kev ) have been measured 
.the 2645 kev level is assigned a spin and parity of either 1 or 3 , while 3 or 3 are assigned to the 2849 kev level . with the revised mass ,the proton emission threshold is now at 2193 kev and , consequently , the resonances are located at = 452 , 656 , 808 and 893 kev .only upper limits have been obtained for the strength of these resonances ( mev , , , mev , respectively ) .hence , to calculate the rate , we use shell model results for with an % estimated uncertainty .the calculated resonance strengths are well within the experimental upper limits except for the first and most important level at 2645 kev .if its spin is 1 , then the calculated strength ( 6 mev ) is below the experimental upper limit ( mev ) of couder et al .. however , fortune et al . argue that it is more likely a 3 state and calculate a lower limit of mev , marginally consistent with the experimental upper limit .clearly , new experiments are needed to settle this issue and to account for this uncertainty .we adopt a value of =9 mev to cover the interval between the shell model result of 6 mev and experiment . because of the uncertain parity assignment, we use =8 mev for the second level , based on shell model calculations .the direct capture s - factor is adopted from vancraeynest et al .( note that the energy in their eq .( 10 ) must be in units of mev although their s - factor is given in units of kevb . ) above gk the reaction rate should be extrapolated using the statistical model .below this temperature , our results are in agreement with refs . , except that the rate uncertainty below 0.2 gk is higher because of the 40% uncertainty assigned to the direct capture s - factor , and is lower above 0.2 gk since we adopt shell model results instead of experimental upper limits .+ & & & & & & + + & & & & & & + 0.010 & 5.55 & 7.61 & 1.05 & -6.474 & 3.22 & 6.75 + 0.011 & 8.02 & 1.09 & 1.51 & -6.207 & 3.19 & 7.06 + 0.012 & 8.54 & 1.17 & 1.60 & -5.971 & 3.16 & 7.04 + 0.013 & 7.04 & 9.57 & 1.31 & -5.761 & 3.12 & 1.05 + 0.014 & 4.72 & 6.41 & 8.70 & -5.571 & 3.09 & 8.38 + 0.015 & 2.65 & 3.59 & 4.86 & -5.398 & 3.07 & 1.04 + 0.016 & 1.28 & 1.73 & 2.34 & -5.241 & 3.05 & 8.12 + 0.018 & 2.08 & 2.80 & 3.78 & -4.962 & 2.99 & 9.38 + 0.020 & 2.30 & 3.07 & 4.10 & -4.723 & 2.94 & 1.14 + 0.025 & 2.78 & 3.65 & 4.86 & -4.245 & 2.83 & 1.86 + 0.030 & 1.07 & 1.39 & 1.82 & -3.881 & 2.73 & 1.58 + 0.040 & 2.12 & 2.73 & 3.53 & -3.353 & 2.57 & 1.63 + 0.050 & 9.12 & 1.15 & 1.47 & -2.979 & 2.42 & 1.77 + 0.060 & 1.58 & 1.98 & 2.50 & -2.694 & 2.33 & 1.36 + 0.070 & 1.53 & 1.89 & 2.40 & -2.468 & 2.24 & 2.46 + 0.080 & 9.99 & 1.23 & 1.52 & -2.281 & 2.15 & 1.41 + 0.090 & 4.82 & 5.91 & 7.31 & -2.124 & 2.11 & 1.39 + 0.100 & 1.88 & 2.29 & 2.83 & -1.989 & 2.06 & 2.26 + 0.110 & 6.13 & 7.42 & 9.09 & -1.871 & 2.02 & 5.90 + 0.120 & 1.74 & 2.11 & 2.58 & -1.767 & 2.00 & 1.03 + 0.130 & 4.44 & 5.34 & 6.51 & -1.674 & 1.93 & 1.09 + 0.140 & 1.03 & 1.24 & 1.50 & -1.590 & 1.93 & 8.11 + 0.150 & 2.21 & 2.67 & 3.22 & -1.514 & 1.89 & 4.64 + 0.160 & 4.44 & 5.35 & 6.45 & -1.444 & 1.89 & 3.38 + 0.180 & 1.54 & 1.85 & 2.21 & -1.320 & 1.86 & 5.04 + 0.200 & 4.53 & 5.39 & 6.49 & -1.213 & 1.82 & 5.58 + 0.250 & 4.33 & 5.05 & 5.94 & -9.890 & 1.60 & 8.26 + 0.300 & 2.69 & 3.09 & 3.58 & -8.077 & 1.44 & 1.15 + 0.350 & 1.19 & 1.35 & 1.53 & -6.609 & 1.28 & 5.13 + 0.400 & 3.86 & 4.36 & 4.94 & -5.435 & 1.25 & 3.60 + 0.450 & 1.00 & 1.13 & 1.29 & -4.480 & 1.25 & 3.71 + 0.500 & 2.19 & 2.48 & 2.80 & -3.699 & 1.23 & 3.92 + 0.600 & 7.34 & 8.36 & 9.45 & -2.484 & 1.26 & 5.27 + 0.700 & 1.84 & 2.09 & 
2.37 & -1.565 & 1.28 & 1.19 + 0.800 & 3.91 & 4.46 & 5.09 & -8.059 & 1.32 & 1.29 + 0.900 & 7.86 & 8.90 & 1.01 & -1.148 & 1.27 & 1.05 + 1.000 & 1.54 & 1.73 & 1.95 & 5.514 & 1.17 & 4.80 + 1.250 & 7.37 & 7.94 & 8.65 & 2.077 & 8.28 & 9.20 + 1.500 & 2.54 & 2.69 & 2.86 & 3.295 & 6.11 & 1.25 + 1.750 & 6.46 & 6.76 & 7.11 & 4.217 & 4.93 & 1.35 + 2.000 & 1.31 & 1.36 & 1.43 & 4.919 & 4.47 & 1.32 + 2.500 & ( 3.54 ) & ( 3.68 ) & ( 3.85 ) & ( 5.911 ) & ( 4.13 ) & + 3.000 & ( 6.85 ) & ( 7.15 ) & ( 7.49 ) & ( 6.574 ) & ( 4.57 ) & + 3.500 & ( 1.09 ) & ( 1.14 ) & ( 1.21 ) & ( 7.046 ) & ( 5.12 ) & + 4.000 & ( 1.54 ) & ( 1.62 ) & ( 1.73 ) & ( 7.397 ) & ( 5.78 ) & + 5.000 & ( 2.46 ) & ( 2.62 ) & ( 2.82 ) & ( 7.876 ) & ( 6.87 ) & + 6.000 & ( 3.27 ) & ( 3.53 ) & ( 3.82 ) & ( 8.170 ) & ( 7.86 ) & + 7.000 & ( 3.90 ) & ( 4.21 ) & ( 4.60 ) & ( 8.350 ) & ( 8.48 ) & + 8.000 & ( 4.32 ) & ( 4.69 ) & ( 5.15 ) & ( 8.458 ) & ( 8.92 ) & + 9.000 & ( 4.54 ) & ( 4.96 ) & ( 5.46 ) & ( 8.513 ) & ( 9.16 ) & + 10.000 & ( 4.63 ) & ( 5.07 ) & ( 5.61 ) & ( 8.536 ) & ( 9.56 ) & + comments : this rate has a substantial contribution from a subthreshold resonance , which depends critically on its energy with respect to the +p threshold .we have revised the q - value for this reaction by including a recent measurement of the mass excess in a weighted average with that listed in ref . . our recommended value , kev , is about 0.5 kev higher than the result of ref . .our recommended excitation energy is 2424.9(4 ) kev , which is a weighted average of e kev and e kev .consequently , our revised resonance energy is e kev .rolfs et al . have measured the individual s - factors for resonance and direct capture ( dc ) into this state .their absolute cross section scale was derived relative to the known strength of the e kev resonance as well as from the cross section for the (p,) reaction , which itself is normalized to the cross section of tanner et al . .the accepted value for the strength of the 1113 kev resonance is ev .however , there have been two recent measurements of ev and ev .we have used a weighted average of these three results : ev . for the normalization based on the (p,) reaction , we instead use the evaluated cross sections of iliadis et al . . taking these results together , we find that the s - factors of ref . should be increased by 2.9% , which is within their quoted uncertainties .we have re - fit these s - factors using a more realistic bound - state potential than was used in the original fits .this results in a slightly smaller value for the spectroscopic factor of the subthreshold state : compared to the original value of 0.9(1 ) .our uncertainty includes an estimate of the uncertainty associated with the choice of parameters for the bound - state potential .we have also re - fit the s - factor for resonance capture into the tail of this state .taking ev ( derived from the measured lifetime ) , this fit provides an independent value of .thus , we adopt an average of . to calculate the contribution of the tail of this state to the total reaction rate , it is also necessary to know the dimensionless reduced width . herewe have used an observed " value of and adopt an uncertainty of 15% .our value of is somewhat lower than the result of terakawa et al . 
( ) , but is consistent with the best - fit " value of , derived from a measurement of the asymptotic normalization coefficient ( anc ) by ref .however , this latter value assumes a resonance energy of e kev and they show that the anc varies greatly with the assumed resonance energy .thus , it is not possible to scale this result to our energy of e kev with a reliable estimate for the uncertainty . as a result, we have used our value derived from the data of rolfs et al . of .it should be noted that we also obtain for the e kev state , which is in poor agreement with from terakawa et al . . although the reason for this disagreement is not clear , the fit to the dc data is extremely good and it is s that we use for the calculation of the reaction rate. note that in this case s is not well described by a second - order polynomial .however , we are able to fit the data with two second - order polynomials , each of which having a different cutoff energy .the uncertainties assigned to each term are chosen to yield an overall uncertainty of % over the entire temperature range , which accounts for uncertainties in the fits to the dc data as well as our estimate of the systematic uncertainty associated with the choice of potential parameters for the dc calculations .there are 8 known resonances in the (p,) reaction , extending up to an energy of e kev .resonance energies are derived from the excitation energies appearing in endsf . with the exception of the aforementioned 1113 kev resonance ,resonance strengths are obtained from the compilation of endt and van der leun .since no uncertainty is listed for the strength of the 2036 kev resonance , we have arbitrarily chosen a value of 25% .in addition to the subthreshold resonance , the contributions of the broad resonances at 1738 kev and 2036 kev were integrated numerically in our calculation of the reaction rate .+ & & & & & & + + & & & & & & + 0.010 & 1.10 & 1.96 & 3.53 & -1.858 & 5.72 & 1.31 + 0.011 & 9.14 & 1.59 & 2.81 & -1.791 & 5.67 & 1.15 + 0.012 & 3.45 & 5.99 & 1.07 & -1.732 & 5.61 & 6.82 + 0.013 & 6.94 & 1.20 & 2.15 & -1.679 & 5.63 & 1.68 + 0.014 & 8.54 & 1.46 & 2.58 & -1.631 & 5.55 & 8.93 + 0.015 & 6.59 & 1.15 & 1.99 & -1.587 & 5.57 & 3.45 + 0.016 & 3.60 & 6.24 & 1.08 & -1.547 & 5.57 & 9.24 + 0.018 & 4.18 & 7.16 & 1.25 & -1.477 & 5.51 & 1.04 + 0.020 & 1.82 & 3.09 & 5.41 & -1.416 & 5.50 & 1.22 + 0.025 & 3.72 & 6.44 & 1.13 & -1.294 & 5.48 & 1.05 + 0.030 & 2.26 & 1.64 & 6.77 & -1.171 & 1.57 & 3.45 + 0.040 & 1.13 & 1.20 & 5.09 & -9.700 & 2.19 & 1.23 + 0.050 & 2.23 & 2.37 & 1.01 & -8.481 & 2.23 & 1.27 + 0.060 & 7.17 & 7.65 & 3.25 & -7.674 & 2.23 & 1.28 + 0.070 & 2.22 & 2.37 & 1.00 & -7.100 & 2.23 & 1.28 + 0.080 & 1.59 & 1.70 & 7.21 & -6.672 & 2.23 & 1.27 + 0.090 & 4.33 & 4.64 & 1.96 & -6.342 & 2.22 & 1.26 + 0.100 & 5.99 & 6.42 & 2.71 & -6.079 & 2.21 & 1.25 + 0.110 & 5.07 & 5.43 & 2.29 & -5.865 & 2.19 & 1.22 + 0.120 & 2.97 & 3.19 & 1.34 & -5.688 & 2.16 & 1.19 + 0.130 & 1.31 & 1.41 & 5.94 & -5.538 & 2.12 & 1.12 + 0.140 & 4.73 & 5.00 & 2.11 & -5.407 & 2.02 & 9.71 + 0.150 & 1.67 & 1.51 & 6.30 & -5.282 & 1.70 & 5.37 + 0.160 & 1.49 & 4.95 & 1.74 & -5.131 & 1.09 & 3.60 + 0.180 & 4.66 & 5.75 & 7.08 & -4.660 & 2.14 & 2.95 + 0.200 & 6.31 & 7.68 & 9.49 & -4.171 & 2.06 & 5.43 + 0.250 & 4.55 & 5.54 & 6.83 & -3.283 & 2.06 & 5.45 + 0.300 & 1.62 & 1.97 & 2.43 & -2.695 & 2.05 & 5.46 + 0.350 & 1.04 & 1.26 & 1.55 & -2.279 & 2.05 & 5.50 + 0.400 & 2.28 & 2.78 & 3.43 & -1.970 & 2.04 & 5.52 + 0.450 & 2.49 & 3.03 & 3.73 & -1.731 & 2.04 & 5.57 + 0.500 & 1.66 & 2.02 & 
2.48 & -1.542 & 2.03 & 5.69 + 0.600 & 2.81 & 3.40 & 4.17 & -1.259 & 1.97 & 6.40 + 0.700 & 2.16 & 2.58 & 3.12 & -1.056 & 1.87 & 6.73 + 0.800 & 1.06 & 1.26 & 1.50 & -8.979 & 1.72 & 4.53 + 0.900 & 4.08 & 4.82 & 5.71 & -7.636 & 1.67 & 6.77 + 1.000 & 1.35 & 1.60 & 1.93 & -6.428 & 1.80 & 4.61 + 1.250 & 1.74 & 2.08 & 2.58 & -3.857 & 2.01 & 1.48 + 1.500 & 1.26 & 1.48 & 1.80 & -1.898 & 1.81 & 1.44 + 1.750 & 5.60 & 6.46 & 7.61 & -4.269 & 1.56 & 1.20 + 2.000 & 1.75 & 1.99 & 2.29 & 6.962 & 1.36 & 9.07 + 2.500 & 8.73 & 9.69 & 1.08 & 2.276 & 1.09 & 4.42 + 3.000 & 2.52 & 2.77 & 3.04 & 3.322 & 9.42 & 2.01 + 3.500 & 5.32 & 5.78 & 6.30 & 4.059 & 8.44 & 1.01 + 4.000 & 9.23 & 9.96 & 1.08 & 4.603 & 7.75 & 6.33 + 5.000 & 1.97 & 2.10 & 2.25 & 5.349 & 6.81 & 4.63 + 6.000 & 3.22 & 3.42 & 3.64 & 5.836 & 6.17 & 4.01 + 7.000 & 4.56 & 4.83 & 5.12 & 6.181 & 5.73 & 4.15 + 8.000 & 5.94 & 6.27 & 6.62 & 6.441 & 5.43 & 4.37 + 9.000 & ( 7.35 ) & ( 7.76 ) & ( 8.18 ) & ( 6.654 ) & ( 5.31 ) & + 10.000 & ( 9.42 ) & ( 9.93 ) & ( 1.05 ) & ( 6.901 ) & ( 5.31 ) & + comments : for temperatures below t gk , the rate of the (,) mg reaction is dominated by direct capture ( dc ) and by possible , unobserved low - energy resonances at e , , and kev , which correspond to states in mg at e , 9305.39 , and 9532.48 kev , respectively . the direct capture component is estimated by scaling a calculated dc rate using relative spectroscopic factors for ( , d) mg .these were converted to absolute spectroscopic factors using the ratio together with the absolute spectroscopic factor for ( , d) .this procedure yields . it should be noted that the spectroscopic factors in ref . agree well with those obtained from the known -particle widths for the 892 and 1058 kev resonances .in addition , this ground - state spectroscopic factor is consistent with inferred from ( p , p ) measurements .the theoretical dc rate was calculated using the same potential parameters as used for the dwba fits to the stripping data .this procedure assumes that the ( , d ) reaction proceeds solely via transfer of an -particle cluster . while the angular distributions are suggestive of this , to be conservative we have assigned a factor of 2 uncertainty to the dc rate .the three possible resonances were selected based on their t=0 assignments and favorable j values , which allow for transfer in the case of the kev resonance and transfer for the other two states .resonances have been measured by smulders , highland and thwaites , fifield et al . and schmalbrock et al .resonance energies and total widths have been updated using the excitation energies and widths appearing in endsf .the resonance strengths and partial widths that we have adopted are obtained from a weighted average of the published resonance strengths with the following corrections : the strengths of smulders must be multiplied by a factor of 0.83 to convert laboratory stopping powers to center - of - mass values , as pointed out by schmalbrock et al .this also affects the results of highland and thwaites , who report strengths measured relative to that of smulders for the 2548 kev resonance . the original value of ref . , ev , becomes ev after the center - of - mass conversion .the weighted average of this value with the more precise result from schmalbrock et al . is ev .thus , we have scaled the strengths of highland and thwaites to this value . finally , the strengths of fifield et al . 
were recalculated using ( i ) updated stopping powers from srim-2008 , and ( ii ) our adopted strength for the standard resonance at e kev in (,) ( see paper iii ) . to calculate partial widths ,we have also made use of resonance strengths for the (p,) and (p,) mg reactions listed in paper iii as well as strengths for states populated by the (p , p ) and (, ) reactions .our recommended rates differ markedly from those in the nacre compilation for t.16 gk and t 2 gk ( see paper iv ) .we suspect that the difference at low temperatures stems from the fact that we have included the tails of low - lying resonances , which was apparently not done for the nacre rate . in fact , the latter rate is similar to our classical rate .nacre considered resonances up to e kev and matched to a hauser - feshbach rate at t=1 gk .however , there are numerous resonances known above this and the level density here may not be high enough to warrant a statistical - model approach . in our case, we have included resonances up to e kev and have matched to hauser - feshbach results near t=8 gk .thus , it is likely that the difference between the rates at high temperatures is a result of the choice of matching temperatures . in our view, the matching temperature should be higher than that employed by nacre .+ & & & & & & + + & & & & & & + 0.010 & 5.38 & 5.85 & 4.87 & -6.121 & 4.75 & 2.75 + 0.011 & 7.57 & 3.23 & 4.47 & -5.909 & 4.66 & 2.79 + 0.012 & 8.00 & 1.26 & 2.79 & -5.726 & 4.53 & 3.05 + 0.013 & 6.54 & 3.99 & 1.31 & -5.563 & 4.38 & 3.40 + 0.014 & 4.42 & 1.09 & 5.03 & -5.417 & 4.20 & 3.92 + 0.015 & 2.56 & 2.76 & 1.53 & -5.284 & 4.00 & 4.47 + 0.016 & 1.25 & 6.59 & 4.21 & -5.161 & 3.80 & 5.09 + 0.018 & 2.16 & 5.77 & 2.20 & -4.941 & 3.40 & 6.42 + 0.020 & 2.43 & 5.36 & 8.08 & -4.748 & 3.02 & 7.73 + 0.025 & 3.53 & 6.40 & 8.49 & -4.337 & 2.16 & 1.02 + 0.030 & 7.31 & 9.42 & 1.76 & -3.887 & 1.11 & 1.34 + 0.040 & 3.70 & 4.51 & 5.48 & -2.843 & 2.02 & 2.02 + 0.050 & 2.90 & 3.53 & 4.30 & -2.176 & 1.97 & 4.11 + 0.060 & 2.36 & 2.88 & 3.50 & -1.736 & 1.98 & 4.04 + 0.070 & 5.29 & 6.44 & 7.84 & -1.425 & 1.98 & 4.02 + 0.080 & 5.29 & 6.45 & 7.85 & -1.195 & 1.98 & 4.01 + 0.090 & 3.11 & 3.79 & 4.61 & -1.018 & 1.97 & 4.00 + 0.100 & 1.26 & 1.54 & 1.87 & -8.779 & 1.97 & 4.00 + 0.110 & 3.92 & 4.78 & 5.81 & -7.646 & 1.97 & 4.03 + 0.120 & 9.99 & 1.22 & 1.48 & -6.711 & 1.97 & 4.15 + 0.130 & 2.19 & 2.67 & 3.24 & -5.925 & 1.95 & 4.49 + 0.140 & 4.31 & 5.24 & 6.34 & -5.252 & 1.93 & 5.27 + 0.150 & 7.83 & 9.46 & 1.14 & -4.660 & 1.88 & 6.85 + 0.160 & 1.35 & 1.61 & 1.93 & -4.125 & 1.79 & 9.79 + 0.180 & 3.66 & 4.26 & 4.98 & -3.152 & 1.55 & 1.71 + 0.200 & 9.39 & 1.07 & 1.22 & -2.236 & 1.29 & 6.60 + 0.250 & 7.47 & 8.43 & 9.52 & -1.702 & 1.22 & 2.98 + 0.300 & 3.60 & 4.12 & 4.70 & 1.415 & 1.33 & 5.62 + 0.350 & 1.15 & 1.31 & 1.51 & 2.576 & 1.37 & 5.69 + 0.400 & 2.73 & 3.13 & 3.59 & 3.444 & 1.38 & 5.66 + 0.450 & 5.34 & 6.11 & 7.00 & 4.113 & 1.36 & 6.02 + 0.500 & 9.12 & 1.04 & 1.19 & 4.646 & 1.33 & 6.74 + 0.600 & 2.06 & 2.34 & 2.64 & 5.455 & 1.24 & 9.37 + 0.700 & 3.82 & 4.28 & 4.78 & 6.059 & 1.12 & 1.08 + 0.800 & 6.29 & 6.96 & 7.69 & 6.546 & 1.02 & 9.64 + 0.900 & 9.59 & 1.05 & 1.15 & 6.957 & 9.23 & 9.51 + 1.000 & 1.38 & 1.50 & 1.63 & 7.311 & 8.54 & 1.31 + 1.250 & 2.82 & 3.02 & 3.26 & 8.016 & 7.44 & 2.68 + 1.500 & 4.74 & 5.05 & 5.42 & 8.530 & 6.75 & 3.37 + 1.750 & 6.98 & 7.41 & 7.89 & 8.912 & 6.22 & 3.62 + 2.000 & 9.37 & 9.91 & 1.05 & 9.203 & 5.78 & 3.58 + 2.500 & 1.41 & 1.48 & 1.56 & 9.606 & 5.12 & 3.51 + 3.000 & 1.84 & 1.93 & 2.02 & 9.866 & 4.68 & 2.99 + 
3.500 & 2.20 & 2.29 & 2.40 & 1.004 & 4.42 & 2.34 + 4.000 & 2.49 & 2.60 & 2.71 & 1.016 & 4.30 & 1.95 + 5.000 & 2.88 & 3.01 & 3.14 & 1.031 & 4.31 & 1.67 + 6.000 & 3.10 & 3.24 & 3.38 & 1.039 & 4.48 & 1.82 + 7.000 & 3.18 & 3.33 & 3.49 & 1.041 & 4.69 & 2.22 + 8.000 & 3.18 & 3.33 & 3.51 & 1.042 & 4.89 & 2.63 + 9.000 & ( 3.13 ) & ( 3.30 ) & ( 3.47 ) & ( 1.040 ) & ( 5.06 ) & + 10.000 & ( 3.28 ) & ( 3.45 ) & ( 3.63 ) & ( 1.045 ) & ( 5.06 ) & + comments : the same input information as in iliadis et al . is used for the calculation of the reaction rates . in total ,46 narrow resonances in the range of e=17 - 1937 kev are taken into account .the direct capture s - factor is adopted from rolfs et al . .+ & & & & & & + + & & & & & & + 0.010 & 4.05 & 6.67 & 1.08 & -5.567 & 4.80 & 5.22 + 0.011 & 1.54 & 2.42 & 3.77 & -5.208 & 4.39 & 6.15 + 0.012 & 3.15 & 4.76 & 7.28 & -4.910 & 4.12 & 7.26 + 0.013 & 3.97 & 5.85 & 8.79 & -4.658 & 3.95 & 8.32 + 0.014 & 3.44 & 5.02 & 7.47 & -4.444 & 3.85 & 8.50 + 0.015 & 2.20 & 3.19 & 4.76 & -4.258 & 3.80 & 8.60 + 0.016 & 1.11 & 1.60 & 2.39 & -4.097 & 3.79 & 7.64 + 0.018 & 1.60 & 2.33 & 3.49 & -3.829 & 3.85 & 5.69 + 0.020 & 1.33 & 1.96 & 2.93 & -3.617 & 3.95 & 4.96 + 0.025 & 5.62 & 8.57 & 1.32 & -3.239 & 4.25 & 5.54 + 0.030 & 6.45 & 1.01 & 1.60 & -2.992 & 4.52 & 6.29 + 0.040 & 1.24 & 2.02 & 3.34 & -2.693 & 4.91 & 5.49 + 0.050 & 6.89 & 1.15 & 1.94 & -2.519 & 5.11 & 5.24 + 0.060 & 2.26 & 3.71 & 6.23 & -2.401 & 4.98 & 9.03 + 0.070 & 7.04 & 1.05 & 1.63 & -2.296 & 4.14 & 4.17 + 0.080 & 2.84 & 3.84 & 5.16 & -2.168 & 3.05 & 6.09 + 0.090 & 1.29 & 1.89 & 2.64 & -2.010 & 3.39 & 1.11 + 0.100 & 5.55 & 9.18 & 1.58 & -1.850 & 4.68 & 3.02 + 0.110 & 2.05 & 3.91 & 7.72 & -1.705 & 5.79 & 5.00 + 0.120 & 6.42 & 1.43 & 3.02 & -1.579 & 6.70 & 6.18 + 0.130 & 1.76 & 4.34 & 9.64 & -1.470 & 7.34 & 6.99 + 0.140 & 4.32 & 1.15 & 2.64 & -1.374 & 7.77 & 7.44 + 0.150 & 9.47 & 2.69 & 6.29 & -1.290 & 8.06 & 7.65 + 0.160 & 1.95 & 5.67 & 1.33 & -1.216 & 8.22 & 8.09 + 0.180 & 7.22 & 2.04 & 4.71 & -1.088 & 8.02 & 7.97 + 0.200 & 2.56 & 6.06 & 1.33 & -9.743 & 7.03 & 7.80 + 0.250 & 8.28 & 1.07 & 1.46 & -6.826 & 2.57 & 2.49 + 0.300 & 1.49 & 1.72 & 1.98 & -4.063 & 1.49 & 7.08 + 0.350 & 1.28 & 1.49 & 1.72 & -1.906 & 1.51 & 5.63 + 0.400 & 6.66 & 7.72 & 8.96 & -2.592 & 1.52 & 7.32 + 0.450 & 2.42 & 2.80 & 3.24 & 1.030 & 1.50 & 9.52 + 0.500 & 6.86 & 7.89 & 9.10 & 2.067 & 1.45 & 1.25 + 0.600 & 3.34 & 3.80 & 4.33 & 3.639 & 1.33 & 1.78 + 0.700 & 1.06 & 1.19 & 1.35 & 4.784 & 1.20 & 2.13 + 0.800 & 2.59 & 2.87 & 3.21 & 5.663 & 1.08 & 2.10 + 0.900 & 5.28 & 5.80 & 6.40 & 6.365 & 9.88 & 1.51 + 1.000 & 9.44 & 1.03 & 1.13 & 6.940 & 9.21 & 8.75 + 1.250 & 2.79 & 3.03 & 3.30 & 8.017 & 8.57 & 6.89 + 1.500 & 5.90 & 6.41 & 7.00 & 8.768 & 8.75 & 1.82 + 1.750 & 1.02 & 1.11 & 1.22 & 9.322 & 9.05 & 3.46 + 2.000 & 1.56 & 1.70 & 1.87 & 9.744 & 9.25 & 4.72 + 2.500 & 2.84 & 3.09 & 3.40 & 1.034 & 9.27 & 5.74 + 3.000 & 4.23 & 4.59 & 5.04 & 1.074 & 8.98 & 6.67 + 3.500 & 5.59 & 6.05 & 6.62 & 1.102 & 8.62 & 6.38 + 4.000 & 6.85 & 7.37 & 8.04 & 1.121 & 8.23 & 6.56 + 5.000 & ( 9.19 ) & ( 9.98 ) & ( 1.08 ) & ( 1.151 ) & ( 8.20 ) & + 6.000 & ( 1.13 ) & ( 1.23 ) & ( 1.33 ) & ( 1.172 ) & ( 8.20 ) & + 7.000 & ( 1.30 ) & ( 1.41 ) & ( 1.53 ) & ( 1.186 ) & ( 8.20 ) & + 8.000 & ( 1.45 ) & ( 1.57 ) & ( 1.71 ) & ( 1.197 ) & ( 8.20 ) & + 9.000 & ( 1.56 ) & ( 1.70 ) & ( 1.84 ) & ( 1.204 ) & ( 8.20 ) & + 10.000 & ( 1.65 ) & ( 1.80 ) & ( 1.95 ) & ( 1.210 ) & ( 8.20 ) & + comments : in total , 55 resonances with energies of e=28 - 1822 kev are taken into account .above e 
kev , the resonance strengths are adopted from ref . , which have been normalized relative to the strength of the e kev resonance ( ev ) .the direct capture s - factor is adopted from ref . , with an estimated uncertainty of % .our treatment of the threshold states differs in two main respects from the analysis of hale et al .first , for the e kev resonance , we consider the spectroscopic factor of as a mean value rather than an upper limit , in agreement with the original interpretation in hale s ph.d . thesis .second , we entirely disregard the levels at and 9000 kev that were reported by powers et al . , who concluded that the existence of these states should be considered as tentative .these levels should have been populated in the much more sensitive study of ref . , but no evidence for their existence was seen .+ & & & & & & + + & & & & & & + 0.010 & 3.42 & 8.27 & 1.98 & -1.775 & 8.89 & 5.21 + 0.011 & 1.26 & 2.79 & 6.00 & -1.694 & 7.93 & 3.23 + 0.012 & 1.14 & 2.39 & 4.92 & -1.626 & 7.35 & 1.71 + 0.013 & 3.54 & 7.10 & 1.43 & -1.569 & 7.08 & 1.07 + 0.014 & 4.70 & 9.45 & 1.89 & -1.520 & 7.02 & 1.62 + 0.015 & 3.16 & 6.45 & 1.30 & -1.478 & 7.11 & 2.23 + 0.016 & 1.25 & 2.60 & 5.30 & -1.441 & 7.28 & 1.93 + 0.018 & 5.49 & 1.21 & 2.60 & -1.380 & 7.75 & 2.37 + 0.020 & 6.98 & 1.61 & 3.64 & -1.331 & 8.27 & 3.19 + 0.025 & 5.95 & 1.26 & 2.85 & -1.241 & 7.88 & 3.13 + 0.030 & 2.17 & 2.16 & 7.85 & -1.125 & 1.77 & 1.17 + 0.040 & 1.44 & 1.48 & 5.41 & -9.454 & 2.04 & 1.52 + 0.050 & 6.70 & 6.92 & 2.53 & -8.379 & 2.05 & 1.54 + 0.060 & 8.50 & 8.57 & 3.11 & -7.662 & 1.91 & 1.32 + 0.070 & 2.55 & 1.48 & 5.00 & -7.126 & 1.49 & 7.27 + 0.080 & 4.17 & 1.62 & 3.58 & -6.656 & 1.22 & 9.23 + 0.090 & 2.52 & 1.12 & 4.13 & -6.218 & 1.46 & 2.28 + 0.100 & 7.53 & 6.04 & 2.53 & -5.834 & 1.75 & 4.36 + 0.110 & 1.75 & 1.75 & 7.43 & -5.507 & 1.93 & 7.20 + 0.120 & 2.80 & 2.89 & 1.23 & -5.230 & 1.98 & 8.51 + 0.130 & 3.09 & 3.08 & 1.32 & -4.990 & 1.92 & 7.51 + 0.140 & 2.79 & 2.36 & 9.94 & -4.778 & 1.76 & 4.96 + 0.150 & 2.41 & 1.42 & 5.78 & -4.586 & 1.54 & 2.44 + 0.160 & 1.88 & 7.64 & 2.75 & -4.409 & 1.32 & 9.99 + 0.180 & 7.23 & 2.04 & 4.90 & -4.081 & 9.72 & 1.28 + 0.200 & 1.60 & 4.01 & 8.81 & -3.781 & 8.41 & 7.01 + 0.250 & 1.02 & 1.98 & 4.20 & -3.152 & 6.98 & 4.14 + 0.300 & 1.18 & 1.98 & 3.41 & -2.694 & 5.43 & 1.69 + 0.350 & 3.72 & 5.96 & 9.21 & -2.356 & 4.68 & 3.81 + 0.400 & 4.96 & 7.92 & 1.18 & -2.099 & 4.41 & 1.04 + 0.450 & 3.71 & 5.99 & 8.72 & -1.898 & 4.34 & 1.52 + 0.500 & 1.86 & 3.01 & 4.39 & -1.736 & 4.32 & 1.75 + 0.600 & 2.19 & 3.47 & 5.00 & -1.492 & 4.16 & 1.72 + 0.700 & 1.48 & 2.21 & 3.06 & -1.306 & 3.63 & 1.12 + 0.800 & 7.76 & 1.07 & 1.42 & -1.146 & 3.07 & 4.21 + 0.900 & 3.35 & 4.43 & 5.88 & -1.002 & 2.91 & 1.38 + 1.000 & 1.19 & 1.57 & 2.14 & -8.741 & 3.02 & 7.00 + 1.250 & 1.43 & 1.93 & 2.70 & -6.228 & 3.20 & 9.44 + 1.500 & ( 9.76 ) & ( 1.34 ) & ( 1.85 ) & ( -4.310 ) & ( 3.19 ) & + 1.750 & ( 5.08 ) & ( 6.99 ) & ( 9.62 ) & ( -2.661 ) & ( 3.19 ) & + 2.000 & ( 2.02 ) & ( 2.78 ) & ( 3.83 ) & ( -1.279 ) & ( 3.19 ) & + 2.500 & ( 1.72 ) & ( 2.36 ) & ( 3.26 ) & ( 8.607 ) & ( 3.19 ) & + 3.000 & ( 8.36 ) & ( 1.15 ) & ( 1.58 ) & ( 2.443 ) & ( 3.19 ) & + 3.500 & ( 2.82 ) & ( 3.88 ) & ( 5.34 ) & ( 3.658 ) & ( 3.19 ) & + 4.000 & ( 7.43 ) & ( 1.02 ) & ( 1.41 ) & ( 4.627 ) & ( 3.19 ) & + 5.000 & ( 3.17 ) & ( 4.37 ) & ( 6.01 ) & ( 6.080 ) & ( 3.19 ) & + 6.000 & ( 8.97 ) & ( 1.23 ) & ( 1.70 ) & ( 7.118 ) & ( 3.19 ) & + 7.000 & ( 1.96 ) & ( 2.70 ) & ( 3.71 ) & ( 7.900 ) & ( 3.19 ) & + 8.000 & ( 3.61 ) & ( 4.97 ) & ( 6.84 ) & ( 8.511 ) & ( 
3.19 ) & + 9.000 & ( 5.85 ) & ( 8.06 ) & ( 1.11 ) & ( 8.994 ) & ( 3.19 ) & + 10.000 & ( 8.74 ) & ( 1.20 ) & ( 1.66 ) & ( 9.395 ) & ( 3.19 ) & + comments : in total , 40 resonances below e kev are taken into account . of these , 10 have been directly measured by wolke et al .another 7 resonances have measured strengths in the competing ( ,n ) reaction .most of these rate contributions are integrated numerically , with and computed from results presented in refs . assuming and an average -ray partial width of ev . one more level has been observed below the neutron threshold by giesen et al . at e kev ( ) .in addition , there are upper limit contributions from 22 states , 18 of which have been estimated indirectly . for these levels ,upper limits of are obtained from the excitation function shown in ref . , with and adopted from koehler .the remaining 4 upper limit contributions are obtained from ugalde et al .results from a ( , ) experiment unambiguously determine the values of the e and 334 kev resonances , and also show that the level corresponding to a previously assumed e kev resonance has in fact unnatural parity .furthermore , according to ref . , a doublet exists near 330 kev and , consequently , we apply the spectroscopic factor upper limit from ref . to both states .+ & & & & & & + + & & & & & & + 0.010 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.011 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.012 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.013 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.014 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.015 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.016 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.018 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.020 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.025 & 0.00 & 0.00 & 0.00 & 0.000 & 0.00 & 0.00 + 0.030 & 5.84 & 3.62 & 9.96 & -2.017 & 1.37 & 1.21 + 0.040 & 1.53 & 1.05 & 2.91 & -1.546 & 1.43 & 1.30 + 0.050 & 3.12 & 2.15 & 5.97 & -1.262 & 1.44 & 1.31 + 0.060 & 5.73 & 3.52 & 9.66 & -1.073 & 1.37 & 1.21 + 0.070 & 5.73 & 2.73 & 7.06 & -9.366 & 1.23 & 1.04 + 0.080 & 2.14 & 8.00 & 1.84 & -8.334 & 1.07 & 9.16 + 0.090 & 8.37 & 2.62 & 5.33 & -7.522 & 9.47 & 7.89 + 0.100 & 6.57 & 1.81 & 3.38 & -6.865 & 8.76 & 5.73 + 0.110 & 1.56 & 3.94 & 7.46 & -6.322 & 8.45 & 3.52 + 0.120 & 1.52 & 3.60 & 7.28 & -5.866 & 8.32 & 1.83 + 0.130 & 7.54 & 1.71 & 3.63 & -5.476 & 8.18 & 6.87 + 0.140 & 2.27 & 4.97 & 1.06 & -5.137 & 7.85 & 9.26 + 0.150 & 4.72 & 9.76 & 2.04 & -4.837 & 7.24 & 1.28 + 0.160 & 7.59 & 1.42 & 2.79 & -4.567 & 6.37 & 9.27 + 0.180 & 1.06 & 1.54 & 2.51 & -4.096 & 4.36 & 4.10 + 0.200 & 6.83 & 8.44 & 1.14 & -3.697 & 2.74 & 5.61 + 0.250 & 1.46 & 1.63 & 1.83 & -2.944 & 1.16 & 3.32 + 0.300 & 2.49 & 2.73 & 3.01 & -2.432 & 9.55 & 2.66 + 0.350 & 9.67 & 1.06 & 1.16 & -2.067 & 9.17 & 1.36 + 0.400 & 1.51 & 1.65 & 1.80 & -1.792 & 8.87 & 1.92 + 0.450 & 1.32 & 1.43 & 1.56 & -1.576 & 8.33 & 3.47 + 0.500 & 8.06 & 8.67 & 9.35 & -1.396 & 7.32 & 6.40 + 0.600 & 1.85 & 1.92 & 2.01 & -1.086 & 4.31 & 1.65 + 0.700 & 2.83 & 2.91 & 3.00 & -8.141 & 3.00 & 8.96 + 0.800 & 2.76 & 2.84 & 2.93 & -5.862 & 3.08 & 6.35 + 0.900 & 1.79 & 1.85 & 1.91 & -3.992 & 3.37 & 3.80 + 1.000 & 8.36 & 8.68 & 9.01 & -2.444 & 3.80 & 5.27 + 1.250 & 1.51 & 1.59 & 1.68 & 4.664 & 5.39 & 1.33 + 1.500 & ( 1.33 ) & ( 1.41 ) & ( 1.50 ) & ( 2.648 ) & ( 6.07 ) & + 1.750 & ( 8.16 ) & ( 8.67 ) & ( 9.21 ) & ( 4.463 ) & ( 6.07 ) & + 2.000 & ( 3.55 ) & ( 3.78 ) & ( 4.01 ) & ( 5.934 ) & ( 6.07 ) & + 2.500 & ( 3.33 ) & ( 3.53 ) & ( 3.76 ) & ( 8.170 ) & ( 6.07 ) & + 
3.000 & ( 1.71 ) & ( 1.82 ) & ( 1.93 ) & ( 9.807 ) & ( 6.07 ) & + 3.500 & ( 6.03 ) & ( 6.41 ) & ( 6.81 ) & ( 1.107 ) & ( 6.07 ) & + 4.000 & ( 1.65 ) & ( 1.75 ) & ( 1.86 ) & ( 1.207 ) & ( 6.07 ) & + 5.000 & ( 7.41 ) & ( 7.87 ) & ( 8.36 ) & ( 1.358 ) & ( 6.07 ) & + 6.000 & ( 2.19 ) & ( 2.33 ) & ( 2.48 ) & ( 1.466 ) & ( 6.07 ) & + 7.000 & ( 4.94 ) & ( 5.25 ) & ( 5.58 ) & ( 1.547 ) & ( 6.07 ) & + 8.000 & ( 9.36 ) & ( 9.94 ) & ( 1.06 ) & ( 1.611 ) & ( 6.07 ) & + 9.000 & ( 1.56 ) & ( 1.65 ) & ( 1.76 ) & ( 1.662 ) & ( 6.07 ) & + 10.000 & ( 2.39 ) & ( 2.54 ) & ( 2.70 ) & ( 1.705 ) & ( 6.07 ) & + comments : in total , 41 resonances between the neutron threshold at 479 kev and e kev are taken into account . of these , 22 have been directly measured by refs . .those studies were performed in the same laboratory with continuing improvements on the target and detection systems .thus we assume that the most recent work supersedes the other three studies . for resonances beyond the energy range covered by jaeger et al . , the results of ref . are adopted .neutron and -ray partial widths are obtained from koehler when available . for all other wide resonancesan average value of ev is used for the -ray partial width . of the remaining resonances ,only upper limits for are available .the values are adopted from ref . , except for the e kev resonance , for which the results of the ( , d) mg measurement by ugalde et al . are used . since this studywas performed at one angle only , we interpret the reported spectroscopic factor as an upper limit rather than a mean value .results from a ( , ) experiment demonstrated that the level corresponding to a previously assumed e kev resonance has in fact unnatural parity .+ & & & & & & + + & & & & & & + 0.010 & 2.99 & 4.39 & 6.43 & -7.451 & 3.88 & 3.48 + 0.011 & 5.70 & 8.33 & 1.22 & -7.156 & 3.83 & 1.04 + 0.012 & 7.71 & 1.13 & 1.65 & -6.896 & 3.85 & 7.28 + 0.013 & 7.98 & 1.17 & 1.70 & -6.662 & 3.82 & 4.22 + 0.014 & 6.45 & 9.42 & 1.39 & -6.453 & 3.84 & 2.53 + 0.015 & 4.37 & 6.43 & 9.49 & -6.261 & 3.90 & 2.69 + 0.016 & 2.52 & 3.69 & 5.38 & -6.087 & 3.84 & 3.89 + 0.018 & 5.53 & 8.02 & 1.18 & -5.778 & 3.85 & 6.59 + 0.020 & 7.90 & 1.15 & 1.69 & -5.512 & 3.83 & 2.35 + 0.025 & 1.61 & 2.36 & 3.46 & -4.980 & 3.85 & 1.25 + 0.030 & 9.38 & 1.38 & 2.02 & -4.573 & 3.84 & 5.61 + 0.040 & 3.41 & 5.02 & 7.40 & -3.983 & 3.87 & 2.55 + 0.050 & 2.55 & 3.62 & 5.15 & -3.555 & 3.54 & 1.54 + 0.060 & 5.43 & 6.62 & 8.04 & -3.035 & 1.98 & 5.52 + 0.070 & 1.09 & 1.35 & 1.66 & -2.503 & 2.12 & 3.61 + 0.080 & 6.31 & 7.77 & 9.57 & -2.097 & 2.10 & 3.13 + 0.090 & 1.46 & 1.79 & 2.20 & -1.784 & 2.08 & 2.84 + 0.100 & 1.77 & 2.17 & 2.66 & -1.534 & 2.06 & 2.69 + 0.110 & 1.34 & 1.65 & 2.02 & -1.332 & 2.05 & 2.53 + 0.120 & 7.20 & 8.83 & 1.08 & -1.164 & 2.03 & 2.51 + 0.130 & 2.95 & 3.62 & 4.42 & -1.023 & 2.03 & 2.43 + 0.140 & 9.81 & 1.20 & 1.47 & -9.027 & 2.02 & 2.39 + 0.150 & 2.76 & 3.38 & 4.12 & -7.994 & 2.02 & 2.36 + 0.160 & 6.78 & 8.29 & 1.01 & -7.096 & 2.01 & 2.35 + 0.180 & 2.98 & 3.65 & 4.44 & -5.615 & 2.00 & 2.36 + 0.200 & 9.60 & 1.17 & 1.43 & -4.447 & 2.00 & 2.29 + 0.250 & 7.49 & 9.13 & 1.11 & -2.394 & 1.99 & 2.11 + 0.300 & 2.80 & 3.41 & 4.15 & -1.076 & 1.99 & 1.95 + 0.350 & 6.92 & 8.44 & 1.03 & -1.705 & 1.99 & 1.84 + 0.400 & 1.33 & 1.62 & 1.97 & 4.828 & 1.98 & 1.81 + 0.450 & 2.17 & 2.64 & 3.22 & 9.718 & 1.98 & 1.83 + 0.500 & 3.17 & 3.86 & 4.69 & 1.350 & 1.96 & 1.96 + 0.600 & 5.53 & 6.68 & 8.09 & 1.900 & 1.91 & 3.40 + 0.700 & 8.46 & 1.01 & 1.21 & 2.311 & 1.77 & 1.19 + 0.800 & 1.26 & 1.47 & 1.71 & 2.688 & 
1.53 & 4.01 + 0.900 & 1.93 & 2.17 & 2.46 & 3.081 & 1.24 & 8.38 + 1.000 & ( 2.99 ) & ( 3.28 ) & ( 3.63 ) & ( 3.49 ) & ( 9.75 ) & + 1.250 & ( 8.25 ) & ( 8.86 ) & ( 9.53 ) & ( 4.48 ) & ( 7.29 ) & + 1.500 & ( 1.81 ) & ( 1.94 ) & ( 2.09 ) & ( 5.27 ) & ( 7.47 ) & + 1.750 & ( 3.23 ) & ( 3.48 ) & ( 3.77 ) & ( 5.85 ) & ( 7.81 ) & + 2.000 & ( 4.97 ) & ( 5.38 ) & ( 5.83 ) & ( 6.29 ) & ( 8.02 ) & + 2.500 & ( 8.86 ) & ( 9.61 ) & ( 1.04 ) & ( 6.87 ) & ( 8.21 ) & + 3.000 & ( 1.26 ) & ( 1.36 ) & ( 1.48 ) & ( 7.22 ) & ( 8.29 ) & + 3.500 & ( 1.57 ) & ( 1.70 ) & ( 1.85 ) & ( 7.44 ) & ( 8.33 ) & + 4.000 & ( 1.81 ) & ( 1.96 ) & ( 2.13 ) & ( 7.58 ) & ( 8.35 ) & + 5.000 & ( 2.10 ) & ( 2.28 ) & ( 2.48 ) & ( 7.73 ) & ( 8.40 ) & + 6.000 & ( 2.21 ) & ( 2.40 ) & ( 2.61 ) & ( 7.78 ) & ( 8.44 ) & + 7.000 & ( 2.21 ) & ( 2.41 ) & ( 2.62 ) & ( 7.79 ) & ( 8.47 ) & + 8.000 & ( 2.16 ) & ( 2.35 ) & ( 2.56 ) & ( 7.76 ) & ( 8.49 ) & + 9.000 & ( 2.08 ) & ( 2.26 ) & ( 2.46 ) & ( 7.72 ) & ( 8.51 ) & + 10.000 & ( 1.99 ) & ( 2.16 ) & ( 2.35 ) & ( 7.68 ) & ( 8.52 ) & + comments : the value of q=5504.18.34 kev is obtained from the mass excesses reported by mukherjee et al . .the resonance energies are either calculated from excitation energies ( ruiz et al . ) or are directly adopted from experiment ( dauria et al . ) , depending on which procedure yielded a smaller uncertainty . in total ,6 narrow resonances in the range of e=206 - 1101 kev are taken into account .the resonance at e=329 kev , corresponding to a mg level at e=5837 kev , has been disregarded according to the suggestion of seweryniak et al . .the direct capture s - factor is adopted from bateman et al . .+ & & & & & & + + & & & & & & + 0.010 & 7.56 & 2.08 & 5.69 & -6.836 & 1.01 & 9.83 + 0.011 & 7.27 & 1.70 & 3.95 & -6.397 & 8.55 & 1.48 + 0.012 & 3.17 & 6.53 & 1.33 & -6.031 & 7.27 & 1.76 + 0.013 & 7.64 & 1.42 & 2.60 & -5.723 & 6.25 & 1.90 + 0.014 & 1.15 & 1.97 & 3.35 & -5.459 & 5.44 & 1.89 + 0.015 & 1.19 & 1.91 & 3.04 & -5.232 & 4.83 & 1.78 + 0.016 & 9.06 & 1.39 & 2.12 & -5.033 & 4.38 & 1.41 + 0.018 & 2.54 & 3.77 & 5.47 & -4.703 & 3.91 & 5.84 + 0.020 & 3.57 & 5.19 & 7.53 & -4.441 & 3.82 & 2.55 + 0.025 & 4.28 & 6.31 & 9.35 & -3.960 & 3.95 & 3.95 + 0.030 & 1.57 & 2.20 & 3.16 & -3.605 & 3.53 & 6.49 + 0.040 & 3.12 & 4.43 & 6.29 & -3.075 & 3.57 & 4.06 + 0.050 & 8.80 & 1.37 & 2.12 & -2.732 & 4.49 & 3.50 + 0.060 & 8.52 & 1.42 & 2.34 & -2.498 & 5.08 & 7.01 + 0.070 & 8.82 & 1.52 & 2.85 & -2.257 & 5.77 & 5.78 + 0.080 & 1.82 & 3.41 & 8.48 & -1.939 & 7.28 & 4.07 + 0.090 & 3.50 & 6.36 & 1.52 & -1.646 & 6.99 & 3.96 + 0.100 & 4.08 & 6.93 & 1.56 & -1.407 & 6.46 & 4.09 + 0.110 & 3.05 & 4.94 & 1.03 & -1.212 & 5.99 & 4.23 + 0.120 & 1.62 & 2.52 & 5.03 & -1.049 & 5.60 & 4.32 + 0.130 & 6.63 & 1.01 & 1.91 & -9.118 & 5.27 & 4.35 + 0.140 & 2.21 & 3.28 & 5.96 & -7.942 & 4.98 & 4.33 + 0.150 & 6.30 & 9.10 & 1.60 & -6.925 & 4.72 & 4.28 + 0.160 & 1.57 & 2.23 & 3.80 & -6.035 & 4.49 & 4.20 + 0.180 & 7.18 & 9.91 & 1.60 & -4.550 & 4.09 & 3.99 + 0.200 & 2.45 & 3.29 & 5.04 & -3.359 & 3.74 & 3.72 + 0.250 & 2.25 & 2.90 & 4.08 & -1.199 & 3.07 & 2.93 + 0.300 & 1.00 & 1.26 & 1.66 & 2.558 & 2.60 & 2.05 + 0.350 & 2.93 & 3.60 & 4.59 & 1.302 & 2.29 & 1.30 + 0.400 & 6.61 & 7.96 & 9.87 & 2.090 & 2.06 & 7.55 + 0.450 & 1.24 & 1.48 & 1.80 & 2.706 & 1.89 & 4.27 + 0.500 & 2.07 & 2.45 & 2.92 & 3.204 & 1.76 & 2.52 + 0.600 & 4.55 & 5.28 & 6.17 & 3.971 & 1.54 & 1.10 + 0.700 & 8.25 & 9.43 & 1.08 & 4.550 & 1.37 & 6.63 + 0.800 & 1.33 & 1.51 & 1.71 & 5.016 & 1.25 & 5.29 + 0.900 & 1.98 & 2.22 & 2.49 & 5.403 & 1.15 & 8.63 + 1.000 & 
2.77 & 3.08 & 3.44 & 5.732 & 1.08 & 1.14 + 1.250 & ( 5.54 ) & ( 6.13 ) & ( 6.78 ) & ( 6.418 ) & ( 1.01 ) & + 1.500 & ( 9.49 ) & ( 1.05 ) & ( 1.16 ) & ( 6.956 ) & ( 1.01 ) & + 1.750 & ( 1.41 ) & ( 1.56 ) & ( 1.73 ) & ( 7.352 ) & ( 1.01 ) & + 2.000 & ( 1.91 ) & ( 2.12 ) & ( 2.34 ) & ( 7.657 ) & ( 1.01 ) & + 2.500 & ( 2.97 ) & ( 3.28 ) & ( 3.63 ) & ( 8.097 ) & ( 1.01 ) & + 3.000 & ( 3.97 ) & ( 4.39 ) & ( 4.86 ) & ( 8.388 ) & ( 1.01 ) & + 3.500 & ( 4.90 ) & ( 5.42 ) & ( 6.00 ) & ( 8.598 ) & ( 1.01 ) & + 4.000 & ( 5.74 ) & ( 6.35 ) & ( 7.02 ) & ( 8.756 ) & ( 1.01 ) & + 5.000 & ( 7.16 ) & ( 7.92 ) & ( 8.76 ) & ( 8.977 ) & ( 1.01 ) & + 6.000 & ( 8.29 ) & ( 9.17 ) & ( 1.01 ) & ( 9.123 ) & ( 1.01 ) & + 7.000 & ( 9.18 ) & ( 1.02 ) & ( 1.12 ) & ( 9.226 ) & ( 1.01 ) & + 8.000 & ( 9.91 ) & ( 1.10 ) & ( 1.21 ) & ( 9.302 ) & ( 1.01 ) & + 9.000 & ( 1.05 ) & ( 1.16 ) & ( 1.28 ) & ( 9.359 ) & ( 1.01 ) & + 10.000 & ( 1.11 ) & ( 1.23 ) & ( 1.36 ) & ( 9.416 ) & ( 1.01 ) & + comments : measured resonance energies and strengths are adopted from seuthe et al . and stegmller et al .for some of the observed resonances we calculated resonance energies from the excitation energies reported in jenkins et al .for the observed e=204 kev resonance , the experimental yield together with the measured branching ratio ( 100% ) results in a resonance strength of =(1.4.3) ev ( i.e. , any unobserved -ray transitions can be excluded ) .the unobserved e=200 kev resonance can most likely be disregarded , but the unobserved e=189 kev resonance strongly influences the reaction rates . for the latter resonance , we estimate an upper limit of ev from the results of the ( , d ) work of schmidt et al . . the measurement of jenkins et al . identified the e=7623 and 7647 kev levels in mg as ( unobserved ) d - wave resonances ( i.e. , the s - wave contributions assumed in ref . can be disregarded ; see also the arguments based on the shell model in comisel et al . ) . in total, 13 narrow resonances with energies of e=43761 kev are taken into account . the unobserved resonance at e=4 kev and the direct capture process ( seuthe et al . 
) do not contribute significantly to the total rates .+ & & & & & & + + & & & & & & + 0.010 & 8.34 & 1.21 & 1.77 & -7.349 & 3.83 & 4.76 + 0.011 & 1.59 & 2.33 & 3.41 & -7.053 & 3.80 & 2.57 + 0.012 & 2.17 & 3.15 & 4.60 & -6.793 & 3.82 & 2.72 + 0.013 & 2.22 & 3.27 & 4.84 & -6.559 & 3.91 & 1.84 + 0.014 & 1.87 & 2.73 & 3.99 & -6.347 & 3.85 & 5.04 + 0.015 & 1.24 & 1.81 & 2.64 & -6.158 & 3.84 & 6.04 + 0.016 & 7.13 & 1.05 & 1.54 & -5.982 & 3.84 & 2.40 + 0.018 & 1.57 & 2.28 & 3.36 & -5.673 & 3.84 & 3.16 + 0.020 & 2.27 & 3.32 & 4.83 & -5.406 & 3.82 & 4.04 + 0.025 & 4.67 & 6.91 & 1.01 & -4.873 & 3.88 & 3.92 + 0.030 & 2.71 & 4.02 & 5.86 & -4.467 & 3.87 & 4.34 + 0.040 & 1.28 & 1.88 & 2.72 & -3.852 & 3.75 & 4.75 + 0.050 & 1.70 & 7.15 & 1.94 & -3.273 & 1.06 & 1.08 + 0.060 & 1.20 & 9.88 & 2.93 & -2.803 & 1.47 & 1.96 + 0.070 & 3.71 & 3.49 & 1.05 & -2.454 & 1.62 & 2.38 + 0.080 & 5.36 & 5.01 & 1.49 & -2.187 & 1.60 & 2.38 + 0.090 & 5.21 & 3.98 & 1.17 & -1.969 & 1.40 & 1.95 + 0.100 & 4.79 & 2.22 & 6.13 & -1.780 & 1.09 & 1.45 + 0.110 & 4.40 & 1.08 & 2.53 & -1.607 & 7.48 & 1.12 + 0.120 & 3.36 & 5.41 & 9.60 & -1.440 & 4.60 & 7.76 + 0.130 & 2.06 & 2.71 & 3.72 & -1.281 & 2.73 & 2.27 + 0.140 & 1.01 & 1.23 & 1.48 & -1.131 & 1.87 & 2.02 + 0.150 & 4.12 & 4.85 & 5.67 & -9.937 & 1.61 & 1.12 + 0.160 & 1.43 & 1.67 & 1.94 & -8.699 & 1.55 & 6.55 + 0.180 & 1.16 & 1.37 & 1.59 & -6.597 & 1.58 & 4.75 + 0.200 & 6.32 & 7.45 & 8.74 & -4.901 & 1.61 & 4.62 + 0.250 & 1.31 & 1.56 & 1.84 & -1.860 & 1.67 & 4.23 + 0.300 & 9.63 & 1.14 & 1.35 & 1.335 & 1.70 & 3.84 + 0.350 & 3.88 & 4.61 & 5.47 & 1.528 & 1.71 & 3.64 + 0.400 & 1.08 & 1.28 & 1.52 & 2.550 & 1.72 & 3.55 + 0.450 & 2.35 & 2.79 & 3.31 & 3.328 & 1.72 & 3.50 + 0.500 & 4.32 & 5.14 & 6.09 & 3.938 & 1.71 & 3.47 + 0.600 & 1.06 & 1.25 & 1.48 & 4.830 & 1.67 & 3.70 + 0.700 & 1.99 & 2.34 & 2.75 & 5.455 & 1.61 & 4.58 + 0.800 & 3.20 & 3.74 & 4.36 & 5.924 & 1.52 & 6.68 + 0.900 & 4.69 & 5.42 & 6.26 & 6.296 & 1.42 & 9.95 + 1.000 & 6.47 & 7.37 & 8.43 & 6.605 & 1.32 & 1.36 + 1.250 & 1.22 & 1.35 & 1.51 & 7.211 & 1.06 & 2.04 + 1.500 & 1.98 & 2.16 & 2.35 & 7.678 & 8.69 & 2.10 + 1.750 & 2.94 & 3.16 & 3.40 & 8.059 & 7.43 & 1.62 + 2.000 & 4.07 & 4.34 & 4.64 & 8.377 & 6.67 & 1.16 + 2.500 & 6.74 & 7.15 & 7.59 & 8.876 & 5.99 & 6.21 + 3.000 & 9.70 & 1.03 & 1.09 & 9.237 & 5.82 & 3.65 + 3.500 & 1.26 & 1.34 & 1.42 & 9.502 & 5.81 & 4.18 + 4.000 & 1.54 & 1.63 & 1.73 & 9.697 & 5.87 & 4.72 + 5.000 & 1.98 & 2.10 & 2.23 & 9.951 & 6.00 & 4.33 + 6.000 & 2.27 & 2.42 & 2.57 & 1.009 & 6.10 & 4.20 + 7.000 & 2.45 & 2.61 & 2.77 & 1.017 & 6.17 & 4.22 + 8.000 & 2.54 & 2.70 & 2.88 & 1.021 & 6.21 & 4.37 + 9.000 & 2.57 & 2.74 & 2.91 & 1.022 & 6.25 & 4.70 + 10.000 & 2.56 & 2.72 & 2.90 & 1.021 & 6.27 & 4.97 + comments : excitation energies and spectroscopic factors of threshold states are presented in hale et al .for the e=138 kev resonance , the directly measured upper limit of the resonance strength ( rowland et al . ) is taken into account .the measured resonance strengths for e kev are adopted from endt , but are renormalized relative to the e=512 kev resonance ( see tab.1 in iliadis et al . ) . in total ,54 resonances with energies in the range of e=6 - 2256 kev are taken into account .the direct capture cross section is adopted from hale et al . 
.+ & & & & & & + + & & & & & & + 0.010 & 1.37 & 1.88 & 2.64 & -6.843 & 3.37 & 5.84 + 0.011 & 2.70 & 3.76 & 5.26 & -6.544 & 3.39 & 2.38 + 0.012 & 3.78 & 5.31 & 7.44 & -6.280 & 3.42 & 9.23 + 0.013 & 3.99 & 5.65 & 7.90 & -6.044 & 3.44 & 8.10 + 0.014 & 3.33 & 4.72 & 6.60 & -5.832 & 3.44 & 1.15 + 0.015 & 2.28 & 3.22 & 4.49 & -5.640 & 3.41 & 1.28 + 0.016 & 1.31 & 1.84 & 2.56 & -5.465 & 3.37 & 1.12 + 0.018 & 2.89 & 4.00 & 5.47 & -5.157 & 3.24 & 9.16 + 0.020 & 4.13 & 5.58 & 7.59 & -4.893 & 3.10 & 2.32 + 0.025 & 8.53 & 1.11 & 1.48 & -4.363 & 2.89 & 9.69 + 0.030 & 5.09 & 6.55 & 8.69 & -3.955 & 2.78 & 1.18 + 0.040 & 2.16 & 2.74 & 3.54 & -3.352 & 2.57 & 6.87 + 0.050 & 1.81 & 2.28 & 2.87 & -2.911 & 2.38 & 2.13 + 0.060 & 7.18 & 8.78 & 1.08 & -2.546 & 2.10 & 8.48 + 0.070 & 2.04 & 2.43 & 2.92 & -2.213 & 1.83 & 8.02 + 0.080 & 3.69 & 4.37 & 5.20 & -1.925 & 1.74 & 6.40 + 0.090 & 4.01 & 4.74 & 5.64 & -1.686 & 1.73 & 5.79 + 0.100 & 2.84 & 3.35 & 4.00 & -1.490 & 1.71 & 6.03 + 0.110 & 1.46 & 1.71 & 2.03 & -1.327 & 1.68 & 6.85 + 0.120 & 5.89 & 6.88 & 8.11 & -1.188 & 1.60 & 8.61 + 0.130 & 2.02 & 2.33 & 2.72 & -1.066 & 1.49 & 9.93 + 0.140 & 6.13 & 7.01 & 8.04 & -9.564 & 1.36 & 9.52 + 0.150 & 1.71 & 1.93 & 2.20 & -8.550 & 1.26 & 5.51 + 0.160 & 4.41 & 4.96 & 5.61 & -7.607 & 1.21 & 4.01 + 0.180 & 2.40 & 2.72 & 3.09 & -5.906 & 1.26 & 1.42 + 0.200 & 1.03 & 1.17 & 1.34 & -4.443 & 1.34 & 2.64 + 0.250 & 1.64 & 1.88 & 2.17 & -1.666 & 1.39 & 3.09 + 0.300 & 1.11 & 1.26 & 1.44 & 2.341 & 1.32 & 3.54 + 0.350 & 4.46 & 5.00 & 5.67 & 1.614 & 1.20 & 4.42 + 0.400 & 1.34 & 1.48 & 1.65 & 2.698 & 1.05 & 5.64 + 0.450 & 3.43 & 3.72 & 4.07 & 3.621 & 8.71 & 5.88 + 0.500 & 7.95 & 8.52 & 9.16 & 4.447 & 7.17 & 3.39 + 0.600 & 3.36 & 3.57 & 3.80 & 5.879 & 6.10 & 3.97 + 0.700 & 1.06 & 1.13 & 1.20 & 7.030 & 6.36 & 3.28 + 0.800 & 2.64 & 2.82 & 3.01 & 7.943 & 6.61 & 3.44 + 0.900 & 5.49 & 5.87 & 6.27 & 8.677 & 6.64 & 3.56 + 1.000 & 1.01 & 1.07 & 1.15 & 9.283 & 6.53 & 3.49 + 1.250 & 3.23 & 3.43 & 3.64 & 1.044 & 6.00 & 4.03 + 1.500 & 7.66 & 8.09 & 8.55 & 1.130 & 5.57 & 1.07 + 1.750 & 1.50 & 1.58 & 1.67 & 1.197 & 5.33 & 1.89 + 2.000 & 2.58 & 2.71 & 2.86 & 1.251 & 5.19 & 1.90 + 2.500 & 5.76 & 6.06 & 6.37 & 1.331 & 5.05 & 1.14 + 3.000 & 1.01 & 1.06 & 1.12 & 1.388 & 5.03 & 1.06 + 3.500 & 1.53 & 1.61 & 1.69 & 1.429 & 5.10 & 1.42 + 4.000 & ( 2.40 ) & ( 2.52 ) & ( 2.65 ) & ( 1.474 ) & ( 5.11 ) & + 5.000 & ( 4.87 ) & ( 5.13 ) & ( 5.40 ) & ( 1.545 ) & ( 5.11 ) & + 6.000 & ( 8.26 ) & ( 8.70 ) & ( 9.15 ) & ( 1.598 ) & ( 5.11 ) & + 7.000 & ( 1.25 ) & ( 1.31 ) & ( 1.38 ) & ( 1.639 ) & ( 5.11 ) & + 8.000 & ( 1.72 ) & ( 1.81 ) & ( 1.91 ) & ( 1.671 ) & ( 5.11 ) & + 9.000 & ( 2.25 ) & ( 2.36 ) & ( 2.49 ) & ( 1.698 ) & ( 5.11 ) & + 10.000 & ( 2.92 ) & ( 3.08 ) & ( 3.24 ) & ( 1.724 ) & ( 5.11 ) & + comments : excitation energies and spectroscopic factors of threshold states are presented in hale et al .for the e=138 kev resonance , the directly measured upper limit for the ( p, ) resonance strength ( rowland et al . ) also influences the ( p, ) reaction rates . for e kev ,the resonance strengths are adopted from tab .vi of hale et al .the resonance strengths for e kev are renormalized relative to the e=338 kev resonance ( rowland et al . ) . 
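Once energies and strengths are fixed, each narrow resonance contributes to the rate through the standard narrow-resonance sum, N_A<σv> = 1.5399×10¹¹ (μT₉)^(-3/2) Σᵢ (ωγ)ᵢ exp(-11.605 Eᵢ/T₉) cm³ mol⁻¹ s⁻¹, with Eᵢ and ωγᵢ in MeV and the reduced mass μ in u. The sketch below only illustrates this bookkeeping with hypothetical resonance parameters; the evaluated rates additionally propagate the uncertainties of the input quantities, which a single-valued sum like this does not.

```python
# Minimal sketch of the standard narrow-resonance reaction rate sum,
#   N_A<sigma v> = 1.5399e11 * (mu*T9)**(-3/2) * sum_i wg_i * exp(-11.605*E_i/T9),
# with resonance energies E_i and strengths wg_i in MeV, reduced mass mu in u,
# and the result in cm^3 mol^-1 s^-1. The resonance list below is hypothetical
# and serves only to show the structure of the calculation.
import math

def narrow_resonance_rate(T9, mu, resonances):
    """resonances: iterable of (E_r [MeV], omega_gamma [MeV]) pairs."""
    prefac = 1.5399e11 / (mu * T9) ** 1.5
    return prefac * sum(wg * math.exp(-11.605 * Er / T9) for Er, wg in resonances)

# Hypothetical proton-capture case (mu ~ 0.966 u) with two made-up resonances.
example = [(0.138, 5.0e-15), (0.512, 9.1e-8)]  # (E_r, omega*gamma) in MeV
for T9 in (0.05, 0.1, 0.3, 1.0):
    rate = narrow_resonance_rate(T9, 0.966, example)
    print(f"T9 = {T9:5.2f}:  N_A<sv> = {rate:.3e} cm^3 mol^-1 s^-1")
```

The exponential factor makes the lowest-lying resonances dominant at the lowest temperatures, which is why the threshold-state inputs discussed above control the low-temperature behavior of the rate.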
in total ,52 resonances with energies in the range of e=6 - 2328 kev are taken into account .+ & & & & & & + + & & & & & & + 0.010 & 5.26 & 7.66 & 1.12 & -8.316 & 3.84 & 2.81 + 0.011 & 1.19 & 1.75 & 2.57 & -8.003 & 3.87 & 4.84 + 0.012 & 1.92 & 2.81 & 4.20 & -7.725 & 3.89 & 3.50 + 0.013 & 2.26 & 3.32 & 4.87 & -7.479 & 3.83 & 3.81 + 0.014 & 2.12 & 3.11 & 4.60 & -7.254 & 3.86 & 2.91 + 0.015 & 1.60 & 2.37 & 3.48 & -7.052 & 3.84 & 4.03 + 0.016 & 1.02 & 1.51 & 2.22 & -6.866 & 3.86 & 6.94 + 0.018 & 2.75 & 4.07 & 5.92 & -6.538 & 3.87 & 6.19 + 0.020 & 4.64 & 6.84 & 1.00 & -6.255 & 3.87 & 2.60 + 0.025 & 1.33 & 1.97 & 2.89 & -5.689 & 3.86 & 3.44 + 0.030 & 1.00 & 1.48 & 2.15 & -5.257 & 3.84 & 3.22 + 0.040 & 5.44 & 7.94 & 1.18 & -4.628 & 3.85 & 4.32 + 0.050 & 4.73 & 6.93 & 1.02 & -4.181 & 3.84 & 1.93 + 0.060 & 1.43 & 2.11 & 3.07 & -3.840 & 3.87 & 5.08 + 0.070 & 2.16 & 3.18 & 4.64 & -3.568 & 3.87 & 5.49 + 0.080 & 2.03 & 2.99 & 4.32 & -3.345 & 3.84 & 3.83 + 0.090 & 1.33 & 1.96 & 2.90 & -3.156 & 3.89 & 2.59 + 0.100 & 6.76 & 1.00 & 1.47 & -2.993 & 3.89 & 1.82 + 0.110 & 2.81 & 4.18 & 6.15 & -2.851 & 3.91 & 3.55 + 0.120 & 1.00 & 1.47 & 2.15 & -2.725 & 3.83 & 2.14 + 0.130 & 3.09 & 4.55 & 6.68 & -2.612 & 3.84 & 6.90 + 0.140 & 8.57 & 1.24 & 1.82 & -2.511 & 3.81 & 3.34 + 0.150 & 2.17 & 3.16 & 4.63 & -2.418 & 3.82 & 2.96 + 0.160 & 5.08 & 7.40 & 1.08 & -2.332 & 3.82 & 2.40 + 0.180 & 2.35 & 3.40 & 4.99 & -2.180 & 3.81 & 4.40 + 0.200 & 9.16 & 1.34 & 1.95 & -2.043 & 3.89 & 9.61 + 0.250 & 1.57 & 2.33 & 3.66 & -1.755 & 4.45 & 1.37 + 0.300 & 1.51 & 2.30 & 3.72 & -1.526 & 4.64 & 1.27 + 0.350 & 9.22 & 1.38 & 2.16 & -1.347 & 4.38 & 6.57 + 0.400 & 3.85 & 5.64 & 8.36 & -1.207 & 3.91 & 2.76 + 0.450 & 1.22 & 1.71 & 2.41 & -1.097 & 3.47 & 6.18 + 0.500 & 3.09 & 4.22 & 5.73 & -1.008 & 3.07 & 2.67 + 0.600 & 1.31 & 1.68 & 2.14 & -8.695 & 2.50 & 6.66 + 0.700 & 3.73 & 4.70 & 5.94 & -7.660 & 2.36 & 8.52 + 0.800 & 8.51 & 1.08 & 1.38 & -6.826 & 2.43 & 1.15 + 0.900 & 1.70 & 2.18 & 2.88 & -6.116 & 2.66 & 3.20 + 1.000 & 3.10 & 4.08 & 5.46 & -5.494 & 2.87 & 4.26 + 1.250 & 1.09 & 1.50 & 2.11 & -4.191 & 3.33 & 2.38 + 1.500 & 3.04 & 4.30 & 6.08 & -3.146 & 3.52 & 2.12 + 1.750 & 7.23 & 1.03 & 1.47 & -2.270 & 3.59 & 6.12 + 2.000 & 1.49 & 2.12 & 3.07 & -1.545 & 3.63 & 1.34 + 2.500 & 4.80 & 6.76 & 9.73 & -3.839 & 3.60 & 1.25 + 3.000 & 1.13 & 1.62 & 2.34 & 4.841 & 3.64 & 3.91 + 3.500 & 2.26 & 3.24 & 4.71 & 1.181 & 3.67 & 4.75 + 4.000 & 3.93 & 5.68 & 8.16 & 1.737 & 3.67 & 4.92 + 5.000 & 9.31 & 1.35 & 1.96 & 2.602 & 3.75 & 1.09 + 6.000 & 1.74 & 2.53 & 3.67 & 3.232 & 3.75 & 2.12 + 7.000 & 2.87 & 4.14 & 6.14 & 3.732 & 3.78 & 1.35 + 8.000 & 4.12 & 6.06 & 8.88 & 4.104 & 3.78 & 4.11 + 9.000 & 5.67 & 8.22 & 1.20 & 4.413 & 3.77 & 3.40 + 10.000 & 7.29 & 1.07 & 1.56 & 4.668 & 3.77 & 3.26 + comments : the contributions of two resonances and the direct capture process are taken into account for the calculation of the reaction rates .the s - factor for the direct capture to the ground state ( j=5/2 ) is calculated here by using the measured spectroscopic factor of for the mirror state in .( the shell model value of was used previously ) .the two resonances are located at e and 1651 kev .the first resonance corresponds to an excitation energy of e kev and has a spin - parity of 1/2 .the proton width is adopted from ref . 
, while the -ray partial width has been measured in a coulomb dissociation experiment .the second resonance corresponds to e kev and represents most likely the 3/2 shell model state .we obtain its proton and -ray partial widths using the shell model results of ref . . a g - wave ( ) resonance , corresponding to the mirror state at e kev ,is expected to occur near e mev .neither has the level been observed in , nor has a value been reported for the shell model -ray partial width and , consequently , we disregard this state . note that it is highly unlikely that any of the resonances observed in the scattering study of he et al . have a significant effect on the total reaction rates since they are located too high in energy ( e mev ) .+ & & & & & & + + & & & & & & + 0.010 & 2.45 & 3.59 & 5.19 & -7.932 & 3.80 & 8.89 + 0.011 & 5.57 & 8.21 & 1.19 & -7.619 & 3.80 & 2.68 + 0.012 & 9.01 & 1.31 & 1.94 & -7.341 & 3.84 & 7.73 + 0.013 & 1.05 & 1.55 & 2.27 & -7.095 & 3.84 & 2.52 + 0.014 & 9.88 & 1.45 & 2.10 & -6.871 & 3.80 & 3.91 + 0.015 & 7.42 & 1.09 & 1.60 & -6.669 & 3.83 & 1.45 + 0.016 & 4.82 & 7.15 & 1.03 & -6.482 & 3.83 & 1.14 + 0.018 & 1.27 & 1.86 & 2.74 & -6.155 & 3.84 & 1.69 + 0.020 & 2.16 & 3.14 & 4.59 & -5.872 & 3.83 & 4.34 + 0.025 & 6.31 & 9.14 & 1.34 & -5.305 & 3.83 & 3.88 + 0.030 & 4.60 & 6.72 & 9.91 & -4.875 & 3.82 & 6.09 + 0.040 & 2.47 & 3.60 & 5.30 & -4.247 & 3.78 & 3.64 + 0.050 & 2.14 & 3.12 & 4.56 & -3.801 & 3.80 & 2.01 + 0.060 & 6.32 & 9.26 & 1.35 & -3.461 & 3.82 & 3.10 + 0.070 & 9.72 & 1.40 & 2.04 & -3.190 & 3.77 & 2.90 + 0.080 & 8.84 & 1.30 & 1.90 & -2.967 & 3.84 & 1.88 + 0.090 & 5.80 & 8.54 & 1.25 & -2.779 & 3.89 & 3.88 + 0.100 & 2.96 & 4.32 & 6.28 & -2.617 & 3.76 & 3.02 + 0.110 & 1.20 & 1.77 & 2.61 & -2.476 & 3.93 & 5.90 + 0.120 & 4.25 & 6.24 & 9.18 & -2.350 & 3.84 & 2.27 + 0.130 & 1.28 & 1.87 & 2.73 & -2.240 & 3.82 & 3.25 + 0.140 & 3.57 & 5.18 & 7.63 & -2.138 & 3.81 & 4.17 + 0.150 & 8.93 & 1.29 & 1.88 & -2.047 & 3.81 & 9.70 + 0.160 & 2.12 & 3.05 & 4.42 & -1.960 & 3.69 & 3.06 + 0.180 & 1.16 & 1.58 & 2.18 & -1.796 & 3.25 & 5.60 + 0.200 & 7.20 & 9.66 & 1.29 & -1.616 & 2.92 & 4.49 + 0.250 & 5.95 & 8.82 & 1.29 & -1.165 & 3.87 & 6.56 + 0.300 & 1.65 & 2.47 & 3.64 & -8.319 & 4.02 & 1.59 + 0.350 & 1.78 & 2.66 & 3.91 & -5.942 & 4.01 & 1.78 + 0.400 & 1.04 & 1.55 & 2.28 & -4.177 & 3.96 & 1.75 + 0.450 & 4.09 & 6.03 & 8.80 & -2.818 & 3.90 & 1.63 + 0.500 & 1.21 & 1.77 & 2.56 & -1.740 & 3.83 & 1.48 + 0.600 & 6.01 & 8.75 & 1.25 & -1.430 & 3.68 & 1.18 + 0.700 & 1.87 & 2.69 & 3.78 & 9.832 & 3.52 & 9.77 + 0.800 & 4.39 & 6.23 & 8.60 & 1.819 & 3.38 & 8.64 + 0.900 & 8.46 & 1.18 & 1.62 & 2.461 & 3.25 & 8.22 + 1.000 & 1.42 & 1.97 & 2.67 & 2.970 & 3.15 & 7.74 + 1.250 & 3.56 & 4.81 & 6.43 & 3.869 & 2.96 & 6.11 + 1.500 & 6.45 & 8.59 & 1.13 & 4.450 & 2.84 & 4.57 + 1.750 & ( 9.00 ) & ( 1.21 ) & ( 1.63 ) & ( 4.798 ) & ( 2.99 ) & + 2.000 & ( 1.18 ) & ( 1.59 ) & ( 2.14 ) & ( 5.069 ) & ( 2.99 ) & + 2.500 & ( 1.75 ) & ( 2.36 ) & ( 3.18 ) & ( 5.464 ) & ( 2.99 ) & + 3.000 & ( 2.32 ) & ( 3.12 ) & ( 4.21 ) & ( 5.743 ) & ( 2.99 ) & + 3.500 & ( 2.87 ) & ( 3.87 ) & ( 5.21 ) & ( 5.958 ) & ( 2.99 ) & + 4.000 & ( 3.40 ) & ( 4.59 ) & ( 6.19 ) & ( 6.129 ) & ( 2.99 ) & + 5.000 & ( 4.46 ) & ( 6.01 ) & ( 8.09 ) & ( 6.398 ) & ( 2.99 ) & + 6.000 & ( 5.51 ) & ( 7.42 ) & ( 1.00 ) & ( 6.609 ) & ( 2.99 ) & + 7.000 & ( 6.61 ) & ( 8.91 ) & ( 1.20 ) & ( 6.792 ) & ( 2.99 ) & + 8.000 & ( 7.74 ) & ( 1.04 ) & ( 1.41 ) & ( 6.950 ) & ( 2.99 ) & + 9.000 & ( 8.92 ) & ( 1.20 ) & ( 1.62 ) & ( 7.092 ) & ( 2.99 ) & + 10.000 & ( 1.03 ) & ( 1.39 ) & 
( 1.87 ) & ( 7.235 ) & ( 2.99 ) & + comments : in total , 6 resonances with energies of e mev are taken into account .the resonance energies are deduced from the excitation energies measured by lotay et al . and visser et al .spectroscopic factors are adopted from the mirror states in , measured using the ( d , p ) reaction ( tomandl et al .gamma - ray partial widths are obtained from the shell model ( herndl et al . ) for unbound states , or from endt for bound states .information on -ray branching ratios is adopted from refs .the rate uncertainties result from the following uncertainties of input parameters : ( i ) resonance energies ( kev ) , ( ii ) spectroscopic factors ( % ) , ( iii ) -ray transition strengths ( % ) , and ( iv ) direct capture s factor ( % ) .note that the rate uncertainties presented here do not take into account the fact that the experimental spin and parity assignments for some of the low - energy resonances are not unambiguous .+ & & & & & & + + & & & & & & + 0.010 & 3.05 & 4.29 & 6.07 & -7.913 & 3.47 & 1.59 + 0.011 & 7.05 & 9.82 & 1.39 & -7.600 & 3.43 & 6.81 + 0.012 & 1.13 & 1.59 & 2.22 & -7.322 & 3.42 & 3.22 + 0.013 & 1.36 & 1.90 & 2.67 & -7.074 & 3.36 & 7.26 + 0.014 & 1.26 & 1.76 & 2.51 & -6.850 & 3.45 & 8.16 + 0.015 & 9.69 & 1.35 & 1.89 & -6.647 & 3.43 & 6.18 + 0.016 & 6.27 & 8.73 & 1.21 & -6.461 & 3.35 & 6.95 + 0.018 & 1.67 & 2.32 & 3.28 & -6.132 & 3.42 & 1.74 + 0.020 & 2.86 & 4.00 & 5.62 & -5.848 & 3.37 & 5.94 + 0.025 & 8.45 & 1.16 & 1.64 & -5.280 & 3.31 & 1.13 + 0.030 & 6.45 & 8.92 & 1.25 & -4.846 & 3.31 & 1.13 + 0.040 & 3.69 & 5.01 & 6.94 & -4.213 & 3.21 & 1.41 + 0.050 & 8.29 & 9.94 & 1.20 & -3.684 & 1.86 & 7.27 + 0.060 & 1.33 & 1.56 & 1.81 & -2.949 & 1.56 & 9.99 + 0.070 & 3.90 & 4.50 & 5.13 & -2.383 & 1.37 & 1.12 + 0.080 & 2.71 & 3.08 & 3.47 & -1.960 & 1.23 & 1.10 + 0.090 & 7.19 & 8.08 & 9.03 & -1.633 & 1.14 & 9.53 + 0.100 & 9.73 & 1.09 & 1.20 & -1.374 & 1.07 & 8.35 + 0.110 & 8.08 & 8.96 & 9.91 & -1.162 & 1.02 & 7.86 + 0.120 & 4.66 & 5.15 & 5.67 & -9.876 & 9.90 & 7.92 + 0.130 & 2.03 & 2.24 & 2.46 & -8.407 & 9.64 & 7.58 + 0.140 & 7.11 & 7.82 & 8.57 & -7.155 & 9.46 & 7.09 + 0.150 & 2.09 & 2.30 & 2.51 & -6.078 & 9.32 & 6.63 + 0.160 & 5.34 & 5.86 & 6.41 & -5.141 & 9.22 & 6.25 + 0.180 & 2.51 & 2.75 & 3.00 & -3.596 & 9.10 & 6.31 + 0.200 & 8.49 & 9.31 & 1.02 & -2.376 & 9.04 & 6.26 + 0.250 & 7.27 & 7.96 & 8.68 & -2.305 & 9.03 & 5.51 + 0.300 & 2.89 & 3.17 & 3.46 & 1.151 & 9.08 & 5.07 + 0.350 & 7.49 & 8.21 & 8.97 & 2.105 & 9.12 & 4.70 + 0.400 & 1.50 & 1.64 & 1.79 & 2.798 & 9.12 & 4.55 + 0.450 & 2.53 & 2.77 & 3.03 & 3.322 & 9.08 & 4.01 + 0.500 & 3.81 & 4.17 & 4.56 & 3.731 & 9.00 & 3.44 + 0.600 & 6.95 & 7.58 & 8.25 & 4.327 & 8.71 & 2.35 + 0.700 & 1.05 & 1.15 & 1.25 & 4.743 & 8.33 & 1.82 + 0.800 & 1.44 & 1.56 & 1.69 & 5.051 & 7.88 & 1.89 + 0.900 & 1.85 & 1.99 & 2.14 & 5.292 & 7.41 & 2.50 + 1.000 & 2.26 & 2.42 & 2.59 & 5.489 & 6.93 & 3.27 + 1.250 & 3.36 & 3.56 & 3.77 & 5.877 & 5.79 & 4.11 + 1.500 & 4.61 & 4.84 & 5.08 & 6.182 & 4.92 & 2.65 + 1.750 & 5.97 & 6.25 & 6.53 & 6.437 & 4.46 & 1.37 + 2.000 & 7.42 & 7.74 & 8.08 & 6.652 & 4.32 & 1.32 + 2.500 & 1.03 & 1.08 & 1.13 & 6.986 & 4.44 & 4.52 + 3.000 & 1.31 & 1.37 & 1.44 & 7.226 & 4.61 & 6.74 + 3.500 & 1.56 & 1.64 & 1.72 & 7.402 & 4.70 & 6.35 + 4.000 & 1.78 & 1.87 & 1.96 & 7.533 & 4.75 & 4.85 + 5.000 & 2.13 & 2.23 & 2.34 & 7.711 & 4.82 & 2.68 + 6.000 & 2.36 & 2.48 & 2.60 & 7.816 & 4.92 & 2.36 + 7.000 & 2.50 & 2.63 & 2.77 & 7.876 & 5.04 & 3.48 + 8.000 & 2.58 & 2.71 & 2.86 & 7.906 & 5.16 & 3.82 + 9.000 & 2.60 & 2.74 & 2.89 & 
7.915 & 5.28 & 3.48 + 10.000 & ( 2.75 ) & ( 2.90 ) & ( 3.06 ) & ( 7.973 ) & ( 5.35 ) & + comments : the reaction rate is calculated from the same input information as in powell et al . , except that ( i ) the strengths of the higher - lying resonances ( e mev ) have been normalized to the weighted average strength of e=790 kev from trautvetter and engel et al . , and ( ii ) the new rate incorporates the updated q - value ( see tab .[ tab : master ] ) . in total ,9 resonances with energies in the range of e=214 - 2311 kev are taken into account .the partial rates for the e=214 and 402 kev resonances have been found by numerical integration in order to account for the low - energy resonance tails ( see powell et al . ) .+ & & & & & & + + & & & & & & + 0.010 & 8.88 & 1.59 & 3.02 & -2.137 & 6.00 & 5.52 + 0.011 & 1.92 & 3.51 & 6.36 & -2.060 & 6.00 & 3.08 + 0.012 & 1.73 & 3.10 & 5.71 & -1.992 & 5.95 & 3.81 + 0.013 & 7.49 & 1.34 & 2.48 & -1.931 & 6.00 & 2.74 + 0.014 & 1.85 & 3.26 & 6.12 & -1.876 & 5.96 & 6.23 + 0.015 & 2.72 & 4.78 & 8.77 & -1.826 & 5.93 & 3.23 + 0.016 & 2.60 & 4.58 & 8.42 & -1.781 & 5.88 & 2.94 + 0.018 & 8.30 & 1.47 & 2.65 & -1.700 & 5.84 & 2.27 + 0.020 & 9.20 & 1.59 & 2.89 & -1.630 & 5.79 & 3.14 + 0.025 & 1.39 & 1.40 & 6.03 & -1.451 & 1.79 & 7.96 + 0.030 & 3.75 & 4.53 & 1.96 & -1.302 & 2.13 & 1.41 + 0.040 & 4.77 & 5.76 & 2.51 & -1.116 & 2.18 & 1.49 + 0.050 & 3.20 & 3.90 & 1.69 & -1.004 & 2.18 & 1.49 + 0.060 & 5.06 & 6.16 & 2.67 & -9.307 & 2.17 & 1.48 + 0.070 & 9.41 & 1.14 & 4.95 & -8.784 & 2.15 & 1.44 + 0.080 & 4.69 & 5.61 & 2.42 & -8.392 & 2.05 & 1.26 + 0.090 & 2.37 & 1.32 & 5.04 & -8.051 & 1.53 & 3.67 + 0.100 & 2.14 & 8.58 & 2.55 & -7.631 & 1.34 & 5.72 + 0.110 & 1.33 & 1.23 & 5.20 & -7.149 & 1.82 & 8.52 + 0.120 & 1.07 & 1.12 & 4.78 & -6.708 & 2.05 & 1.41 + 0.130 & 4.86 & 5.13 & 2.19 & -6.328 & 2.13 & 1.59 + 0.140 & 1.28 & 1.35 & 5.75 & -6.002 & 2.16 & 1.63 + 0.150 & 2.16 & 2.27 & 9.71 & -5.720 & 2.15 & 1.63 + 0.160 & 2.54 & 2.68 & 1.14 & -5.472 & 2.13 & 1.58 + 0.180 & 1.54 & 1.61 & 6.84 & -5.059 & 2.03 & 1.34 + 0.200 & 4.41 & 4.24 & 1.79 & -4.724 & 1.85 & 9.23 + 0.250 & 6.95 & 2.28 & 6.87 & -4.065 & 1.07 & 1.12 + 0.300 & 4.84 & 7.03 & 1.12 & -3.485 & 4.18 & 2.34 + 0.350 & 8.91 & 1.12 & 1.46 & -2.980 & 2.55 & 1.06 + 0.400 & 5.15 & 6.21 & 7.61 & -2.580 & 1.99 & 3.44 + 0.450 & 1.31 & 1.53 & 1.81 & -2.259 & 1.65 & 4.21 + 0.500 & 1.87 & 2.13 & 2.44 & -1.997 & 1.37 & 5.58 + 0.600 & 1.16 & 1.27 & 1.41 & -1.587 & 9.65 & 3.60 + 0.700 & 2.49 & 2.69 & 2.91 & -1.283 & 7.73 & 9.89 + 0.800 & 2.62 & 2.81 & 3.02 & -1.048 & 7.16 & 7.53 + 0.900 & 1.68 & 1.80 & 1.93 & -8.624 & 7.05 & 5.93 + 1.000 & 7.51 & 8.06 & 8.65 & -7.123 & 7.01 & 5.23 + 1.250 & 1.16 & 1.24 & 1.33 & -4.388 & 6.79 & 5.88 + 1.500 & 7.42 & 7.92 & 8.45 & -2.535 & 6.53 & 5.81 + 1.750 & 2.85 & 3.04 & 3.24 & -1.191 & 6.35 & 6.36 + 2.000 & 7.99 & 8.50 & 9.04 & -1.626 & 6.23 & 8.50 + 2.500 & 3.58 & 3.79 & 4.02 & 1.333 & 5.92 & 1.65 + 3.000 & 1.04 & 1.10 & 1.16 & 2.397 & 5.50 & 2.29 + 3.500 & 2.36 & 2.48 & 2.61 & 3.211 & 5.16 & 1.94 + 4.000 & 4.50 & 4.72 & 4.97 & 3.856 & 5.00 & 1.49 + 5.000 & 1.17 & 1.23 & 1.29 & 4.812 & 5.14 & 1.31 + 6.000 & 2.25 & 2.38 & 2.52 & 5.473 & 5.59 & 2.43 + 7.000 & ( 3.57 ) & ( 3.77 ) & ( 3.99 ) & ( 5.933 ) & ( 5.67 ) & + 8.000 & ( 5.08 ) & ( 5.38 ) & ( 5.69 ) & ( 6.288 ) & ( 5.67 ) & + 9.000 & ( 6.72 ) & ( 7.11 ) & ( 7.52 ) & ( 6.567 ) & ( 5.67 ) & + 10.000 & ( 8.88 ) & ( 9.40 ) & ( 9.94 ) & ( 6.845 ) & ( 5.67 ) & + comments : for temperatures below t , the rate of the (,) reaction is dominated by direct 
capture ( dc ) and by possible , unobserved low - energy resonances at e , 530 , 821 , 932 and 969 kev , which correspond to known states in at e , 10.514 , 10.806 , 10.916 , and 10.953 mev , respectively . the dc component was estimated by scaling a calculated dc rate using relative spectroscopic factors for ( , d) mg .these were converted to absolute spectroscopic factors using the same procedure described for the (,) mg reaction .again , to be conservative we have assigned a factor of 2 uncertainty to the dc rate .the 5 possible resonances were selected based on their t = 0 assignments and favorable j values , which allow for transfer .upper limits on the -particle widths were calculated using a potential model for the lower three states . for the upper two states ,the results of direct ( ) measurements were used to determine upper limits on the -particle widths .resonances have been measured by smulders and endt , weinman et al . , lyons et al . , maas et al . , cseh et al . , and strandberg et al .resonance energies and total widths have been updated using the excitation energies and widths appearing in endsf and endt .the resonance strengths and partial widths that we adopted are obtained from a weighted average of the published resonance strengths , excluding those of weinman et al . , which were reported without uncertainties .the earler studies used stopping powers and standard resonance strengths that are now considered to be archaic .nonetheless , there is excellent agreement amongst the various data sets and thus no corrections were made . to calculate partial widths, we also made use of resonance strengths for the (p , ) mg and (p,) reactions ( see paper iii for input values ) .overall , our classical reaction rate is similar to that reported by strandberg et al .however , at their lowest temperature , t , our rate is significantly larger because we have also included the direct capture and the possible low - energy resonances listed above , which have a negligible effect at higher temperatures .+ & & & & & & + + & & & & & & + 0.010 & 1.08 & 1.56 & 2.27 & -7.324 & 3.73 & 1.29 + 0.011 & 6.04 & 8.20 & 1.13 & -6.927 & 3.17 & 8.12 + 0.012 & 2.85 & 3.77 & 4.99 & -6.545 & 2.81 & 6.58 + 0.013 & 1.16 & 1.62 & 2.28 & -6.168 & 3.37 & 6.46 + 0.014 & 3.55 & 5.10 & 7.40 & -5.824 & 3.69 & 1.90 + 0.015 & 7.40 & 1.07 & 1.57 & -5.519 & 3.79 & 7.92 + 0.016 & 1.07 & 1.56 & 2.29 & -5.251 & 3.82 & 6.03 + 0.018 & 9.21 & 1.35 & 1.98 & -4.806 & 3.83 & 5.48 + 0.020 & 3.21 & 4.69 & 6.89 & -4.451 & 3.83 & 5.37 + 0.025 & 1.82 & 2.67 & 3.92 & -3.816 & 3.83 & 5.31 + 0.030 & 1.19 & 1.74 & 2.55 & -3.398 & 3.83 & 5.32 + 0.040 & 2.08 & 3.00 & 4.39 & -2.883 & 3.76 & 8.47 + 0.050 & 4.90 & 6.82 & 9.62 & -2.571 & 3.38 & 5.39 + 0.060 & 4.96 & 6.56 & 8.73 & -2.344 & 2.86 & 1.98 + 0.070 & 3.04 & 3.99 & 5.24 & -2.164 & 2.75 & 1.00 + 0.080 & 1.30 & 1.72 & 2.31 & -2.017 & 2.92 & 7.06 + 0.090 & 4.26 & 5.73 & 7.85 & -1.897 & 3.07 & 1.30 + 0.100 & 1.19 & 1.60 & 2.18 & -1.794 & 3.03 & 2.19 + 0.110 & 3.21 & 4.14 & 5.48 & -1.699 & 2.68 & 5.16 + 0.120 & 9.78 & 1.17 & 1.43 & -1.595 & 1.94 & 1.17 + 0.130 & 3.63 & 4.05 & 4.56 & -1.471 & 1.18 & 7.43 + 0.140 & 1.48 & 1.62 & 1.77 & -1.333 & 9.13 & 1.91 + 0.150 & 5.82 & 6.40 & 7.04 & -1.196 & 9.59 & 1.25 + 0.160 & 2.09 & 2.31 & 2.56 & -1.068 & 1.02 & 1.20 + 0.180 & 1.89 & 2.10 & 2.34 & -8.466 & 1.08 & 8.10 + 0.200 & 1.14 & 1.26 & 1.41 & -6.674 & 1.09 & 7.43 + 0.250 & 2.89 & 3.20 & 3.56 & -3.440 & 1.05 & 1.10 + 0.300 & 2.51 & 2.77 & 3.05 & -1.285 & 9.74 & 1.92 + 0.350 & 1.19 & 1.29 & 
1.42 & 2.597 & 8.94 & 2.88 + 0.400 & 3.84 & 4.16 & 4.52 & 1.427 & 8.16 & 3.66 + 0.450 & 9.69 & 1.04 & 1.12 & 2.345 & 7.45 & 3.94 + 0.500 & 2.05 & 2.19 & 2.35 & 3.088 & 6.84 & 3.71 + 0.600 & 6.45 & 6.84 & 7.25 & 4.226 & 5.91 & 2.46 + 0.700 & 1.50 & 1.58 & 1.66 & 5.060 & 5.32 & 1.43 + 0.800 & 2.85 & 2.99 & 3.15 & 5.701 & 5.02 & 1.65 + 0.900 & 4.74 & 4.98 & 5.23 & 6.211 & 4.90 & 4.58 + 1.000 & 7.19 & 7.53 & 7.91 & 6.626 & 4.88 & 1.24 + 1.250 & 1.55 & 1.62 & 1.71 & 7.395 & 4.93 & 4.87 + 1.500 & 2.65 & 2.77 & 2.91 & 7.929 & 4.90 & 8.38 + 1.750 & 3.95 & 4.12 & 4.32 & 8.327 & 4.75 & 1.06 + 2.000 & 5.40 & 5.62 & 5.88 & 8.637 & 4.51 & 1.18 + 2.500 & 8.57 & 8.88 & 9.25 & 9.095 & 3.98 & 1.18 + 3.000 & 1.19 & 1.22 & 1.27 & 9.415 & 3.51 & 1.03 + 3.500 & ( 1.48 ) & ( 1.53 ) & ( 1.58 ) & ( 9.633 ) & ( 3.33 ) & + 4.000 & ( 1.73 ) & ( 1.79 ) & ( 1.85 ) & ( 9.793 ) & ( 3.33 ) & + 5.000 & ( 2.20 ) & ( 2.27 ) & ( 2.35 ) & ( 1.003 ) & ( 3.33 ) & + 6.000 & ( 2.61 ) & ( 2.70 ) & ( 2.79 ) & ( 1.020 ) & ( 3.33 ) & + 7.000 & ( 2.97 ) & ( 3.07 ) & ( 3.18 ) & ( 1.033 ) & ( 3.33 ) & + 8.000 & ( 3.30 ) & ( 3.41 ) & ( 3.52 ) & ( 1.044 ) & ( 3.33 ) & + 9.000 & ( 3.58 ) & ( 3.70 ) & ( 3.83 ) & ( 1.052 ) & ( 3.33 ) & + 10.000 & ( 3.89 ) & ( 4.03 ) & ( 4.16 ) & ( 1.060 ) & ( 3.33 ) & + comments : altogether , 82 resonances at energies of e kev are taken into account in calculating the total reaction rates for the formation of either the ground or isomeric state .for measured resonances ( e kev ) the energies and strengths are adopted from endt , but the latter values are renormalized using the standard values presented in tab . 1 of iliadis et alfor the threshold states the spin - parity assignments and the proton partial widths are adopted from iliadis et al .for the e kev resonance the parity is not known experimentally ; we adopt j=4 as predicted by the shell model , implying transfer .( note that this assumption differs from the one adopted in ref . where an upper limit on the proton partial width was derived for a p - wave resonance ) .we disregarded the ( ground state ) resonance strengths measured by arazi et al . 
using accelerator mass spectrometry since their strength of the 189 kev resonance in particular seems far too small ( see also formicola et al .the direct capture s - factor is adopted from endt and rolfs .+ & & & & & & + + & & & & & & + 0.010 & 8.40 & 1.22 & 1.77 & -7.348 & 3.76 & 1.10 + 0.011 & 4.74 & 6.47 & 8.93 & -6.951 & 3.20 & 5.73 + 0.012 & 2.27 & 3.01 & 4.02 & -6.567 & 2.89 & 1.28 + 0.013 & 9.29 & 1.30 & 1.85 & -6.190 & 3.47 & 6.24 + 0.014 & 2.84 & 4.13 & 6.04 & -5.845 & 3.80 & 1.22 + 0.015 & 5.92 & 8.71 & 1.28 & -5.540 & 3.90 & 2.60 + 0.016 & 8.57 & 1.27 & 1.87 & -5.272 & 3.93 & 1.51 + 0.018 & 7.38 & 1.09 & 1.61 & -4.827 & 3.94 & 1.47 + 0.020 & 2.57 & 3.81 & 5.63 & -4.472 & 3.94 & 1.56 + 0.025 & 1.46 & 2.16 & 3.20 & -3.837 & 3.94 & 1.67 + 0.030 & 9.54 & 1.41 & 2.09 & -3.419 & 3.94 & 1.74 + 0.040 & 1.67 & 2.44 & 3.59 & -2.904 & 3.86 & 4.90 + 0.050 & 3.98 & 5.57 & 7.89 & -2.591 & 3.44 & 6.95 + 0.060 & 4.09 & 5.42 & 7.22 & -2.364 & 2.86 & 3.27 + 0.070 & 2.55 & 3.33 & 4.35 & -2.182 & 2.68 & 2.30 + 0.080 & 1.10 & 1.45 & 1.92 & -2.035 & 2.82 & 2.66 + 0.090 & 3.63 & 4.85 & 6.53 & -1.914 & 2.95 & 6.56 + 0.100 & 1.01 & 1.34 & 1.80 & -1.812 & 2.94 & 1.23 + 0.110 & 2.64 & 3.40 & 4.46 & -1.719 & 2.64 & 3.33 + 0.120 & 7.83 & 9.36 & 1.15 & -1.617 & 1.94 & 9.23 + 0.130 & 2.91 & 3.25 & 3.66 & -1.493 & 1.17 & 6.24 + 0.140 & 1.21 & 1.32 & 1.45 & -1.353 & 9.21 & 1.39 + 0.150 & 4.86 & 5.34 & 5.89 & -1.214 & 9.69 & 1.19 + 0.160 & 1.76 & 1.95 & 2.16 & -1.085 & 1.03 & 1.08 + 0.180 & 1.61 & 1.79 & 2.00 & -8.625 & 1.08 & 6.57 + 0.200 & 9.71 & 1.08 & 1.20 & -6.830 & 1.08 & 5.94 + 0.250 & 2.46 & 2.73 & 3.03 & -3.602 & 1.04 & 8.92 + 0.300 & 2.12 & 2.33 & 2.57 & -1.455 & 9.80 & 1.60 + 0.350 & 9.91 & 1.08 & 1.19 & 8.017 & 9.05 & 2.51 + 0.400 & 3.18 & 3.45 & 3.75 & 1.238 & 8.32 & 3.25 + 0.450 & 7.93 & 8.55 & 9.24 & 2.147 & 7.64 & 3.52 + 0.500 & 1.66 & 1.78 & 1.91 & 2.881 & 7.05 & 3.40 + 0.600 & 5.14 & 5.46 & 5.81 & 4.001 & 6.14 & 2.54 + 0.700 & 1.17 & 1.24 & 1.31 & 4.818 & 5.54 & 1.78 + 0.800 & 2.19 & 2.31 & 2.43 & 5.442 & 5.18 & 1.97 + 0.900 & 3.60 & 3.78 & 3.98 & 5.936 & 4.98 & 3.40 + 1.000 & 5.39 & 5.65 & 5.94 & 6.338 & 4.88 & 6.81 + 1.250 & 1.13 & 1.18 & 1.24 & 7.078 & 4.80 & 2.46 + 1.500 & 1.89 & 1.98 & 2.07 & 7.591 & 4.72 & 4.61 + 1.750 & 2.77 & 2.89 & 3.03 & 7.973 & 4.57 & 6.24 + 2.000 & 3.75 & 3.90 & 4.08 & 8.272 & 4.35 & 7.22 + 2.500 & 5.87 & 6.08 & 6.33 & 8.716 & 3.86 & 7.67 + 3.000 & 8.08 & 8.34 & 8.63 & 9.030 & 3.43 & 6.85 + 3.500 & ( 9.93 ) & ( 1.03 ) & ( 1.06 ) & ( 9.237 ) & ( 3.36 ) & + 4.000 & ( 1.17 ) & ( 1.21 ) & ( 1.25 ) & ( 9.397 ) & ( 3.36 ) & + 5.000 & ( 1.48 ) & ( 1.53 ) & ( 1.58 ) & ( 9.636 ) & ( 3.36 ) & + 6.000 & ( 1.76 ) & ( 1.82 ) & ( 1.88 ) & ( 9.808 ) & ( 3.36 ) & + 7.000 & ( 2.00 ) & ( 2.07 ) & ( 2.14 ) & ( 9.938 ) & ( 3.36 ) & + 8.000 & ( 2.22 ) & ( 2.30 ) & ( 2.37 ) & ( 1.004 ) & ( 3.36 ) & + 9.000 & ( 2.41 ) & ( 2.49 ) & ( 2.58 ) & ( 1.012 ) & ( 3.36 ) & + 10.000 & ( 2.62 ) & ( 2.71 ) & ( 2.80 ) & ( 1.021 ) & ( 3.36 ) & + comments : the reaction rates for the formation of the ground state are calculated from the same information used to compute the total reaction rates ( see tab .[ tab : mgpgt ] ) , but in addition the ground state -ray branching ratios , , have to be taken into account ; these are adopted from endt and rolfs , except for the e and 292 kev resonances , for which the more accurate results of iliadis are used ( and 0.79 , respectively ) .+ & & & & & & + + & & & & & & + 0.010 & 2.37 & 3.40 & 4.87 & -7.476 & 3.60 & 1.18 + 0.011 & 1.28 & 1.74 & 2.39 & -7.082 & 3.12 
& 5.29 + 0.012 & 5.79 & 7.59 & 9.99 & -6.705 & 2.75 & 4.46 + 0.013 & 2.26 & 3.12 & 4.38 & -6.333 & 3.31 & 6.65 + 0.014 & 6.77 & 9.72 & 1.41 & -5.989 & 3.66 & 1.82 + 0.015 & 1.40 & 2.04 & 2.99 & -5.685 & 3.78 & 6.04 + 0.016 & 2.03 & 2.96 & 4.35 & -5.418 & 3.82 & 4.05 + 0.018 & 1.74 & 2.55 & 3.75 & -4.972 & 3.83 & 3.57 + 0.020 & 6.08 & 8.89 & 1.31 & -4.617 & 3.83 & 3.52 + 0.025 & 3.46 & 5.05 & 7.42 & -3.983 & 3.83 & 3.51 + 0.030 & 2.26 & 3.30 & 4.84 & -3.565 & 3.83 & 3.61 + 0.040 & 3.91 & 5.68 & 8.29 & -3.050 & 3.77 & 6.55 + 0.050 & 8.97 & 1.26 & 1.79 & -2.740 & 3.46 & 4.71 + 0.060 & 8.62 & 1.15 & 1.55 & -2.518 & 2.95 & 3.25 + 0.070 & 5.09 & 6.67 & 8.77 & -2.343 & 2.74 & 4.91 + 0.080 & 2.12 & 2.81 & 3.73 & -2.199 & 2.84 & 4.98 + 0.090 & 7.06 & 9.35 & 1.26 & -2.078 & 2.91 & 1.63 + 0.100 & 2.10 & 2.72 & 3.61 & -1.971 & 2.73 & 4.07 + 0.110 & 6.34 & 7.78 & 9.82 & -1.866 & 2.22 & 9.07 + 0.120 & 2.08 & 2.38 & 2.81 & -1.754 & 1.56 & 1.09 + 0.130 & 7.36 & 8.12 & 9.02 & -1.632 & 1.04 & 3.45 + 0.140 & 2.70 & 2.93 & 3.19 & -1.504 & 8.37 & 6.75 + 0.150 & 9.65 & 1.05 & 1.14 & -1.377 & 8.46 & 3.15 + 0.160 & 3.22 & 3.53 & 3.86 & -1.256 & 9.09 & 8.64 + 0.180 & 2.73 & 3.02 & 3.33 & -1.041 & 9.94 & 1.41 + 0.200 & 1.61 & 1.79 & 1.98 & -8.630 & 1.01 & 1.60 + 0.250 & 4.25 & 4.68 & 5.15 & -5.365 & 9.52 & 2.30 + 0.300 & 3.90 & 4.25 & 4.63 & -3.158 & 8.60 & 2.64 + 0.350 & 1.94 & 2.10 & 2.27 & -1.562 & 7.77 & 2.32 + 0.400 & 6.58 & 7.06 & 7.58 & -3.472 & 7.07 & 1.70 + 0.450 & 1.73 & 1.85 & 1.97 & 6.151 & 6.50 & 1.21 + 0.500 & 3.82 & 4.06 & 4.31 & 1.401 & 6.04 & 8.89 + 0.600 & 1.30 & 1.37 & 1.45 & 2.616 & 5.45 & 1.01 + 0.700 & 3.21 & 3.38 & 3.56 & 3.520 & 5.27 & 5.46 + 0.800 & 6.46 & 6.80 & 7.17 & 4.220 & 5.37 & 2.41 + 0.900 & 1.13 & 1.19 & 1.26 & 4.780 & 5.58 & 5.69 + 1.000 & 1.78 & 1.88 & 1.99 & 5.238 & 5.78 & 9.35 + 1.250 & 4.15 & 4.38 & 4.66 & 6.087 & 6.04 & 1.64 + 1.500 & 7.49 & 7.88 & 8.39 & 6.676 & 5.96 & 2.00 + 1.750 & 1.16 & 1.22 & 1.29 & 7.112 & 5.69 & 2.15 + 2.000 & 1.63 & 1.71 & 1.81 & 7.450 & 5.35 & 2.17 + 2.500 & 2.68 & 2.79 & 2.93 & 7.938 & 4.66 & 1.99 + 3.000 & 3.76 & 3.90 & 4.06 & 8.271 & 4.11 & 1.65 + 3.500 & 4.78 & 4.94 & 5.13 & 8.507 & 3.71 & 1.30 + 4.000 & ( 5.65 ) & ( 5.85 ) & ( 6.06 ) & ( 8.674 ) & ( 3.52 ) & + 5.000 & ( 7.17 ) & ( 7.42 ) & ( 7.69 ) & ( 8.913 ) & ( 3.52 ) & + 6.000 & ( 8.52 ) & ( 8.82 ) & ( 9.14 ) & ( 9.085 ) & ( 3.52 ) & + 7.000 & ( 9.70 ) & ( 1.00 ) & ( 1.04 ) & ( 9.215 ) & ( 3.52 ) & + 8.000 & ( 1.08 ) & ( 1.11 ) & ( 1.15 ) & ( 9.318 ) & ( 3.52 ) & + 9.000 & ( 1.17 ) & ( 1.21 ) & ( 1.25 ) & ( 9.401 ) & ( 3.52 ) & + 10.000 & ( 1.27 ) & ( 1.32 ) & ( 1.36 ) & ( 9.484 ) & ( 3.52 ) & + comments : the reaction rates for the formation of the isomeric state at e kev are calculated from the same information used to compute the total reaction rates ( see tab .[ tab : mgpgt ] ) , but in addition the isomeric state -ray branching ratios , , have to be taken into account ; these are adopted from endt and rolfs , except for the e and 292 kev resonances , for which the more accurate results of iliadis are used ( and 0.79 , respectively ) .+ & & & & & & + + & & & & & & + 0.010 & 1.41 & 2.07 & 3.23 & -7.753 & 4.28 & 5.62 + 0.011 & 1.27 & 2.33 & 4.29 & -7.283 & 6.05 & 1.65 + 0.012 & 1.03 & 1.90 & 3.45 & -6.844 & 6.03 & 2.24 + 0.013 & 4.76 & 8.35 & 1.45 & -6.466 & 5.59 & 2.60 + 0.014 & 1.28 & 2.13 & 3.59 & -6.141 & 5.18 & 2.96 + 0.015 & 2.20 & 3.54 & 5.71 & -5.861 & 4.84 & 3.20 + 0.016 & 2.62 & 4.11 & 6.45 & -5.616 & 4.57 & 3.78 + 0.018 & 1.59 & 2.39 & 3.64 & -5.209 & 4.20 & 3.83 + 0.020 & 4.14 
& 6.13 & 9.13 & -4.885 & 4.00 & 4.04 + 0.025 & 1.38 & 2.02 & 2.95 & -4.305 & 3.83 & 5.17 + 0.030 & 7.26 & 1.09 & 1.57 & -3.907 & 3.87 & 1.28 + 0.040 & 2.84 & 6.80 & 1.57 & -3.264 & 7.56 & 3.89 + 0.050 & 4.75 & 1.02 & 2.17 & -2.763 & 6.75 & 3.62 + 0.060 & 1.85 & 3.35 & 6.16 & -2.412 & 5.46 & 2.19 + 0.070 & 2.63 & 4.30 & 6.95 & -2.158 & 4.55 & 1.05 + 0.080 & 2.04 & 3.03 & 4.48 & -1.962 & 3.86 & 4.39 + 0.090 & 1.05 & 1.47 & 2.04 & -1.804 & 3.30 & 1.91 + 0.100 & 4.23 & 5.66 & 7.42 & -1.670 & 2.84 & 1.51 + 0.110 & 1.48 & 1.90 & 2.39 & -1.549 & 2.42 & 1.14 + 0.120 & 4.97 & 6.10 & 7.45 & -1.431 & 2.03 & 5.11 + 0.130 & 1.72 & 2.04 & 2.42 & -1.310 & 1.69 & 4.75 + 0.140 & 6.23 & 7.13 & 8.24 & -1.185 & 1.40 & 3.73 + 0.150 & 2.24 & 2.50 & 2.82 & -1.059 & 1.17 & 6.06 + 0.160 & 7.60 & 8.35 & 9.24 & -9.387 & 9.88 & 3.82 + 0.180 & 6.66 & 7.20 & 7.82 & -7.235 & 8.02 & 7.35 + 0.200 & 4.04 & 4.33 & 4.69 & -5.439 & 7.43 & 6.64 + 0.250 & 1.09 & 1.17 & 1.26 & -2.143 & 7.26 & 5.93 + 0.300 & 9.90 & 1.06 & 1.14 & 6.046 & 7.25 & 5.57 + 0.350 & 4.74 & 5.08 & 5.46 & 1.626 & 7.15 & 5.88 + 0.400 & 1.52 & 1.63 & 1.75 & 2.792 & 6.99 & 6.18 + 0.450 & 3.77 & 4.02 & 4.30 & 3.694 & 6.78 & 6.22 + 0.500 & 7.73 & 8.23 & 8.80 & 4.411 & 6.57 & 5.86 + 0.600 & 2.25 & 2.39 & 2.54 & 5.477 & 6.18 & 4.22 + 0.700 & 4.79 & 5.07 & 5.37 & 6.229 & 5.85 & 2.92 + 0.800 & 8.36 & 8.84 & 9.34 & 6.785 & 5.59 & 2.40 + 0.900 & 1.29 & 1.36 & 1.43 & 7.212 & 5.38 & 2.61 + 1.000 & 1.81 & 1.90 & 2.00 & 7.552 & 5.20 & 3.66 + 1.250 & 3.34 & 3.50 & 3.67 & 8.161 & 4.82 & 5.48 + 1.500 & 5.08 & 5.30 & 5.54 & 8.576 & 4.50 & 6.28 + 1.750 & 6.95 & 7.25 & 7.56 & 8.888 & 4.26 & 7.62 + 2.000 & 8.96 & 9.33 & 9.70 & 9.140 & 4.08 & 6.19 + 2.500 & 1.34 & 1.39 & 1.45 & 9.543 & 3.88 & 3.78 + 3.000 & 1.86 & 1.94 & 2.01 & 9.871 & 3.77 & 3.97 + 3.500 & 2.48 & 2.57 & 2.66 & 1.015 & 3.68 & 3.87 + 4.000 & 3.17 & 3.29 & 3.41 & 1.040 & 3.64 & 3.59 + 5.000 & ( 4.69 ) & ( 4.87 ) & ( 5.05 ) & ( 1.079 ) & ( 3.65 ) & + 6.000 & ( 6.23 ) & ( 6.46 ) & ( 6.70 ) & ( 1.108 ) & ( 3.65 ) & + 7.000 & ( 7.68 ) & ( 7.97 ) & ( 8.27 ) & ( 1.129 ) & ( 3.65 ) & + 8.000 & ( 9.02 ) & ( 9.35 ) & ( 9.70 ) & ( 1.145 ) & ( 3.65 ) & + 9.000 & ( 1.02 ) & ( 1.06 ) & ( 1.10 ) & ( 1.157 ) & ( 3.65 ) & + 10.000 & ( 1.15 ) & ( 1.20 ) & ( 1.24 ) & ( 1.169 ) & ( 3.65 ) & + comments : in total , 133 resonances in the range of e=16 - 2867 kev are taken into account for the calculation of the total rates .the direct capture component is adopted from iliadis et al . .the measured resonance strengths listed in endt have been normalized to the standard strengths given in tab . 1 of iliadis et al . .the rate contribution of threshold states is estimated by using information from the ( , d) study of champagne et al .these stripping data have been reanalyzed in 2000 ( c. rowland , priv .comm . ) with a modern version of the dwba code dwuck4 . for e=8324 kev ( e=53 kev )we adopt the assignment from ref . ( on which the value of listed in ref . was based ) ; however , this assignment must be regarded as tentative since a angular distribution ( implying ) fits the stripping data almost equally well .similar arguments apply to e=8376 kev ( e=105 kev ) .both and angular distributions fit the stripping data .we adopt here which seems to fit slightly better , although was reported in the original analysis of ref . ( note also that listed in ref . 
was based on the originally reported value ) .the quantum numbers of these two levels , and those for e=8361 kev ( e=90 kev ) , need to be determined unambiguously in future work .+ & & & & & & + + & & & & & & + 0.010 & 2.90 & 4.55 & 7.38 & -8.592 & 8.55 & 4.46 + 0.011 & 8.16 & 1.26 & 2.05 & -8.257 & 9.90 & 6.21 + 0.012 & 1.52 & 2.37 & 3.93 & -7.959 & 1.16 & 7.83 + 0.013 & 2.13 & 3.32 & 5.57 & -7.691 & 1.35 & 9.41 + 0.014 & 2.28 & 3.58 & 6.11 & -7.447 & 1.58 & 1.07 + 0.015 & 1.96 & 3.09 & 5.40 & -7.222 & 1.83 & 1.19 + 0.016 & 1.41 & 2.23 & 4.12 & -7.012 & 2.10 & 1.24 + 0.018 & 4.63 & 7.60 & 1.67 & -6.631 & 2.68 & 1.26 + 0.020 & 9.65 & 1.62 & 1.46 & -6.282 & 3.29 & 1.15 + 0.025 & 4.34 & 1.14 & 2.63 & -5.492 & 4.64 & 5.77 + 0.030 & 5.97 & 9.31 & 9.77 & -4.768 & 5.30 & 1.64 + 0.040 & 1.82 & 3.53 & 2.14 & -3.615 & 4.45 & 5.73 + 0.050 & 1.99 & 7.13 & 7.39 & -2.858 & 3.05 & 1.23 + 0.060 & 8.34 & 1.04 & 3.84 & -2.352 & 2.06 & 1.82 + 0.070 & 6.14 & 3.15 & 7.21 & -1.997 & 1.49 & 2.27 + 0.080 & 1.26 & 3.93 & 7.72 & -1.732 & 1.23 & 2.53 + 0.090 & 1.14 & 2.96 & 5.40 & -1.528 & 1.19 & 3.33 + 0.100 & 5.96 & 1.53 & 2.74 & -1.366 & 1.29 & 4.28 + 0.110 & 2.08 & 6.03 & 1.11 & -1.236 & 1.44 & 4.72 + 0.120 & 5.23 & 1.91 & 3.73 & -1.128 & 1.59 & 4.65 + 0.130 & 1.07 & 5.05 & 1.06 & -1.037 & 1.74 & 4.33 + 0.140 & 1.96 & 1.13 & 2.61 & -9.607 & 1.88 & 3.95 + 0.150 & 3.23 & 2.25 & 5.77 & -8.950 & 2.00 & 3.57 + 0.160 & 5.00 & 4.06 & 1.16 & -8.381 & 2.11 & 3.24 + 0.180 & 1.02 & 1.06 & 3.71 & -7.447 & 2.29 & 2.73 + 0.200 & 1.76 & 2.23 & 9.33 & -6.716 & 2.44 & 2.36 + 0.250 & 4.54 & 7.95 & 4.66 & -5.443 & 2.70 & 1.78 + 0.300 & 6.92 & 1.68 & 1.23 & -4.696 & 2.89 & 1.43 + 0.350 & 1.19 & 3.00 & 2.51 & -4.076 & 2.91 & 1.17 + 0.400 & 1.59 & 4.36 & 4.05 & -3.665 & 2.92 & 9.50 + 0.450 & 2.00 & 5.68 & 5.80 & -3.349 & 2.91 & 7.62 + 0.500 & 2.43 & 6.90 & 7.63 & -3.090 & 2.86 & 5.92 + 0.600 & 3.60 & 8.96 & 1.12 & -2.679 & 2.73 & 3.28 + 0.700 & 5.90 & 1.11 & 1.39 & -2.323 & 2.52 & 1.87 + 0.800 & 1.03 & 1.32 & 1.61 & -2.008 & 2.31 & 1.72 + 0.900 & 1.80 & 1.61 & 1.78 & -1.719 & 2.11 & 2.53 + 1.000 & 3.10 & 1.82 & 1.92 & -1.464 & 1.91 & 4.82 + 1.250 & 1.03 & 3.33 & 2.10 & -8.180 & 1.51 & 9.72 + 1.500 & ( 2.58 ) & ( 5.84 ) & ( 2.32 ) & ( -2.774 ) & ( 1.19 ) & + 1.750 & ( 5.26 ) & ( 9.94 ) & ( 2.71 ) & ( 1.937 ) & ( 9.58 ) & + 2.000 & ( 9.45 ) & ( 1.57 ) & ( 3.35 ) & ( 6.051 ) & ( 7.84 ) & + 2.500 & ( 2.19 ) & ( 3.29 ) & ( 5.51 ) & ( 1.271 ) & ( 5.64 ) & + 3.000 & ( 4.01 ) & ( 5.72 ) & ( 8.73 ) & ( 1.788 ) & ( 4.47 ) & + 3.500 & ( 6.15 ) & ( 8.76 ) & ( 1.28 ) & ( 2.190 ) & ( 3.89 ) & + 4.000 & ( 8.69 ) & ( 1.21 ) & ( 1.75 ) & ( 2.513 ) & ( 3.63 ) & + 5.000 & ( 1.38 ) & ( 1.93 ) & ( 2.78 ) & ( 2.973 ) & ( 3.57 ) & + 6.000 & ( 1.76 ) & ( 2.50 ) & ( 3.64 ) & ( 3.230 ) & ( 3.63 ) & + 7.000 & ( 1.97 ) & ( 2.79 ) & ( 4.04 ) & ( 3.338 ) & ( 3.63 ) & + 8.000 & ( 1.96 ) & ( 2.77 ) & ( 4.00 ) & ( 3.333 ) & ( 3.58 ) & + 9.000 & ( 1.79 ) & ( 2.50 ) & ( 3.54 ) & ( 3.225 ) & ( 3.46 ) & + 10.000 & ( 1.54 ) & ( 2.13 ) & ( 3.00 ) & ( 3.066 ) & ( 3.33 ) & + comments : four resonances , none of them observed directly , are taken into account for calculating the total reaction rates .the energy of the lowest - lying resonance ( e kev ) is calculated from the measured excitation energy of e kev .based on a comparison to the shell model and to the mirror nucleus structure , this level corresponds most likely to .the energies of the remaining three resonances are computed by using the coulomb shift calculations of ref .for the energy uncertainties we assume a 
value of kev , which should be regarded as a rough estimate only .( note that the coulomb shift calculations of ref . overpredict the energy of the level by 180 kev . )all proton and -ray partial widths used here are based on the shell model results of ref .the direct capture s - factor is calculated using shell model spectroscopic factors and is in reasonable agreement with ref . .+ & & & & & & + + & & & & & & + 0.010 & 9.69 & 1.55 & 2.62 & -8.451 & 1.45 & 1.01 + 0.011 & 2.62 & 4.25 & 7.27 & -8.118 & 1.51 & 1.03 + 0.012 & 4.90 & 7.97 & 1.38 & -7.822 & 1.57 & 1.06 + 0.013 & 6.70 & 1.09 & 1.91 & -7.557 & 1.63 & 1.08 + 0.014 & 7.08 & 1.16 & 2.02 & -7.318 & 1.69 & 1.11 + 0.015 & 6.05 & 9.99 & 1.80 & -7.100 & 1.74 & 1.11 + 0.016 & 4.40 & 7.11 & 1.29 & -6.900 & 1.79 & 1.13 + 0.018 & 1.40 & 2.33 & 4.30 & -6.546 & 1.89 & 1.13 + 0.020 & 2.80 & 4.63 & 9.12 & -6.240 & 1.98 & 1.12 + 0.025 & 1.15 & 1.92 & 4.40 & -5.623 & 2.18 & 1.10 + 0.030 & 1.10 & 1.91 & 1.17 & -5.147 & 2.35 & 1.02 + 0.040 & 8.85 & 1.63 & 1.52 & -4.438 & 2.64 & 8.24 + 0.050 & 1.08 & 2.17 & 5.65 & -3.920 & 2.84 & 6.50 + 0.060 & 4.33 & 1.10 & 3.77 & -3.512 & 2.94 & 4.78 + 0.070 & 8.29 & 5.36 & 8.32 & -3.178 & 2.98 & 3.40 + 0.080 & 1.00 & 1.99 & 9.91 & -2.898 & 2.96 & 2.59 + 0.090 & 8.78 & 3.77 & 7.72 & -2.658 & 2.89 & 2.20 + 0.100 & 5.66 & 4.22 & 4.49 & -2.451 & 2.82 & 2.24 + 0.110 & 3.20 & 2.83 & 2.11 & -2.272 & 2.73 & 2.12 + 0.120 & 1.85 & 1.51 & 8.20 & -2.112 & 2.62 & 2.17 + 0.130 & 1.04 & 6.15 & 2.67 & -1.969 & 2.48 & 2.15 + 0.140 & 5.70 & 2.11 & 7.81 & -1.840 & 2.33 & 2.01 + 0.150 & 2.74 & 6.38 & 2.08 & -1.720 & 2.16 & 1.77 + 0.160 & 1.16 & 1.76 & 5.12 & -1.610 & 1.99 & 1.47 + 0.180 & 1.33 & 1.02 & 2.69 & -1.412 & 1.69 & 9.35 + 0.200 & 1.03 & 4.82 & 1.24 & -1.239 & 1.47 & 6.98 + 0.250 & 4.84 & 1.29 & 3.45 & -8.896 & 1.19 & 6.10 + 0.300 & 6.98 & 1.62 & 4.28 & -6.333 & 1.03 & 4.98 + 0.350 & 5.20 & 1.11 & 2.74 & -4.429 & 9.00 & 2.79 + 0.400 & 2.35 & 4.76 & 1.09 & -2.990 & 8.00 & 1.68 + 0.450 & 7.66 & 1.47 & 3.15 & -1.872 & 7.21 & 9.94 + 0.500 & 1.97 & 3.61 & 7.28 & -9.818 & 6.58 & 5.89 + 0.600 & 8.00 & 1.38 & 2.51 & 3.396 & 5.67 & 2.02 + 0.700 & 2.15 & 3.52 & 5.91 & 1.268 & 5.03 & 6.05 + 0.800 & 4.49 & 7.01 & 1.11 & 1.952 & 4.57 & 2.88 + 0.900 & 7.84 & 1.19 & 1.81 & 2.473 & 4.23 & 3.55 + 1.000 & 1.21 & 1.79 & 2.65 & 2.883 & 3.96 & 5.05 + 1.250 & 2.60 & 3.65 & 5.14 & 3.593 & 3.51 & 1.16 + 1.500 & ( 4.23 ) & ( 5.90 ) & ( 8.24 ) & ( 4.078 ) & ( 3.34 ) & + 1.750 & ( 6.29 ) & ( 8.79 ) & ( 1.23 ) & ( 4.476 ) & ( 3.34 ) & + 2.000 & ( 8.57 ) & ( 1.20 ) & ( 1.67 ) & ( 4.785 ) & ( 3.34 ) & + 2.500 & ( 1.33 ) & ( 1.86 ) & ( 2.60 ) & ( 5.226 ) & ( 3.34 ) & + 3.000 & ( 1.80 ) & ( 2.51 ) & ( 3.51 ) & ( 5.527 ) & ( 3.34 ) & + 3.500 & ( 2.24 ) & ( 3.13 ) & ( 4.37 ) & ( 5.747 ) & ( 3.34 ) & + 4.000 & ( 2.66 ) & ( 3.71 ) & ( 5.18 ) & ( 5.916 ) & ( 3.34 ) & + 5.000 & ( 3.40 ) & ( 4.75 ) & ( 6.63 ) & ( 6.162 ) & ( 3.34 ) & + 6.000 & ( 4.08 ) & ( 5.69 ) & ( 7.95 ) & ( 6.345 ) & ( 3.34 ) & + 7.000 & ( 4.67 ) & ( 6.52 ) & ( 9.10 ) & ( 6.480 ) & ( 3.34 ) & + 8.000 & ( 5.23 ) & ( 7.30 ) & ( 1.02 ) & ( 6.594 ) & ( 3.34 ) & + 9.000 & ( 5.76 ) & ( 8.05 ) & ( 1.12 ) & ( 6.690 ) & ( 3.34 ) & + 10.000 & ( 6.35 ) & ( 8.86 ) & ( 1.24 ) & ( 6.787 ) & ( 3.34 ) & + comments : the total rate has contributions from the direct capture process and from 5 resonances located at e kev .the direct capture s - factor as well as the proton and -ray partial widths of the resonances are based on the shell - model .our rate does not take into account several potentially important 
systematic effects .first , only one level has been observed in within 1 mev of the proton threshold , at e kev , but its spin - parity is unknown .second , the energies of the resonances at e , 500 , 510 and 730 kev are not based on experimental excitation energies , but are derived from coulomb shift calculations ; the adopted value of 100 kev for the resonance energy uncertainty must be regarded as a rough value only .third , the coulomb displacement energy calculations of ref . use as a starting point the experimental excitation energies of the mirror nucleus ; however , for several of these states the spin - parities are also unknown and thus have been based on a comparison to the shell model .fourth , a number of levels observed in the mirror nucleus remain unaccounted for in the derivation of the reaction rates , although their contributions are expected to be small .+ & & & & & & + + & & & & & & + 0.010 & 1.01 & 1.46 & 2.13 & -8.481 & 3.77 & 2.76 + 0.011 & 2.77 & 4.05 & 5.90 & -8.150 & 3.81 & 1.12 + 0.012 & 5.26 & 7.70 & 1.11 & -7.856 & 3.74 & 3.08 + 0.013 & 7.07 & 1.04 & 1.50 & -7.595 & 3.79 & 2.72 + 0.014 & 7.60 & 1.10 & 1.60 & -7.359 & 3.73 & 1.62 + 0.015 & 6.44 & 9.35 & 1.37 & -7.144 & 3.80 & 2.58 + 0.016 & 4.64 & 6.69 & 9.78 & -6.947 & 3.73 & 6.15 + 0.018 & 1.46 & 2.17 & 3.18 & -6.600 & 3.89 & 2.61 + 0.020 & 2.94 & 4.28 & 6.32 & -6.301 & 3.79 & 4.21 + 0.025 & 1.17 & 1.69 & 2.45 & -5.704 & 3.81 & 3.36 + 0.030 & 1.12 & 1.63 & 2.39 & -5.247 & 3.80 & 3.57 + 0.040 & 8.85 & 1.28 & 1.86 & -4.580 & 3.78 & 3.43 + 0.050 & 1.80 & 2.42 & 3.23 & -4.056 & 2.94 & 3.11 + 0.060 & 3.05 & 4.34 & 6.23 & -3.537 & 3.65 & 8.56 + 0.070 & 1.94 & 2.84 & 4.12 & -3.119 & 3.79 & 4.91 + 0.080 & 4.61 & 6.69 & 9.75 & -2.803 & 3.77 & 3.62 + 0.090 & 5.34 & 7.71 & 1.12 & -2.558 & 3.74 & 3.70 + 0.100 & 3.74 & 5.39 & 7.86 & -2.364 & 3.72 & 4.67 + 0.110 & 1.82 & 2.62 & 3.81 & -2.206 & 3.71 & 5.16 + 0.120 & 6.72 & 9.69 & 1.40 & -2.075 & 3.70 & 5.39 + 0.130 & 2.04 & 2.93 & 4.22 & -1.965 & 3.66 & 6.79 + 0.140 & 5.50 & 7.75 & 1.10 & -1.867 & 3.52 & 1.02 + 0.150 & 1.44 & 1.98 & 2.73 & -1.774 & 3.20 & 1.01 + 0.160 & 4.03 & 5.43 & 7.37 & -1.673 & 3.00 & 3.63 + 0.180 & 3.71 & 5.50 & 8.06 & -1.441 & 3.92 & 8.62 + 0.200 & 3.40 & 5.41 & 8.39 & -1.213 & 4.53 & 2.20 + 0.250 & 2.57 & 4.13 & 6.48 & -7.798 & 4.69 & 5.35 + 0.300 & 4.55 & 7.29 & 1.14 & -4.925 & 4.66 & 6.07 + 0.350 & 3.42 & 5.48 & 8.54 & -2.907 & 4.63 & 6.42 + 0.400 & 1.52 & 2.43 & 3.77 & -1.420 & 4.61 & 6.59 + 0.450 & 4.73 & 7.56 & 1.17 & -2.848 & 4.60 & 6.72 + 0.500 & 1.16 & 1.84 & 2.85 & 6.071 & 4.59 & 6.76 + 0.600 & 4.26 & 6.77 & 1.05 & 1.909 & 4.57 & 6.62 + 0.700 & 1.04 & 1.66 & 2.56 & 2.803 & 4.56 & 6.36 + 0.800 & 1.98 & 3.16 & 4.87 & 3.449 & 4.55 & 6.00 + 0.900 & 3.22 & 5.12 & 7.88 & 3.931 & 4.53 & 5.54 + 1.000 & 4.69 & 7.41 & 1.14 & 4.302 & 4.51 & 5.00 + 1.250 & 8.87 & 1.38 & 2.12 & 4.930 & 4.42 & 3.56 + 1.500 & 1.32 & 2.03 & 3.08 & 5.313 & 4.31 & 2.77 + 1.750 & 1.71 & 2.60 & 3.91 & 5.565 & 4.18 & 3.22 + 2.000 & 2.07 & 3.08 & 4.60 & 5.738 & 4.04 & 4.67 + 2.500 & 2.62 & 3.79 & 5.54 & 5.949 & 3.80 & 8.96 + 3.000 & ( 2.98 ) & ( 4.28 ) & ( 6.14 ) & ( 6.059 ) & ( 3.60 ) & + 3.500 & ( 3.73 ) & ( 5.35 ) & ( 7.67 ) & ( 6.282 ) & ( 3.60 ) & + 4.000 & ( 4.42 ) & ( 6.34 ) & ( 9.10 ) & ( 6.453 ) & ( 3.60 ) & + 5.000 & ( 5.65 ) & ( 8.10 ) & ( 1.16 ) & ( 6.697 ) & ( 3.60 ) & + 6.000 & ( 6.73 ) & ( 9.65 ) & ( 1.38 ) & ( 6.873 ) & ( 3.60 ) & + 7.000 & ( 7.69 ) & ( 1.10 ) & ( 1.58 ) & ( 7.006 ) & ( 3.60 ) & + 8.000 & ( 8.56 ) & ( 1.23 ) & ( 1.76 ) & ( 7.113 ) & ( 3.60 ) & + 
9.000 & ( 9.36 ) & ( 1.34 ) & ( 1.93 ) & ( 7.202 ) & ( 3.60 ) & + 10.000 & ( 1.02 ) & ( 1.47 ) & ( 2.10 ) & ( 7.292 ) & ( 3.60 ) & + comments : the value of q=5513.7.5 kev is obtained from eronen et al . . in total ,6 resonances in the range of e=163 - 965 kev are taken into account . for e , 407 and 438kev the energies are obtained from the adjusted excitation energies of wrede .note that , unlike ref . , we prefer to calculate all resonance energies from excitation energies . in particular , we do not use the proton energy from the -delayed proton decay of since it is less precise than the value derived from the excitation energy . for e , 882 and 965kev the energies are found from the excitation energies listed in column 1 of tab .i in parpottas et al . ; however , we added an average value of 8 kev , by which the excitation energies measured in ref . to be too low ( see comments in ref . ) . for the spin and parity assignments of the first three resonances , we follow the suggestions of parpottas et al . and bardayanin particular , the assignments advocated in caggiano et al . and bardayan et al . disagree with the measured (p , t) angular distribution of the e=5921 kev level ( bardayan et al .this level has most likely a spin and parity of j=3 .for the last three resonances the j values are uncertain , but note that the reported values of 2 , 2 , 0 are inconsistent both with the known level scheme of the mg mirror and the shell model .spectroscopic factors , -ray partial widths and the direct capture s - factor are adopted from iliadis et al .in particular , we use for the former two quantities the shell - model values listed in their tab .the resonance at e=5 kev , corresponding to a level at e=5518 kev , makes a negligible contribution to the total rates .+ & & & & & & + + & & & & & & + 0.010 & 2.80 & 4.09 & 6.00 & -8.379 & 3.85 & 1.56 + 0.011 & 7.60 & 1.12 & 1.63 & -8.048 & 3.84 & 4.69 + 0.012 & 1.44 & 2.11 & 3.09 & -7.754 & 3.84 & 2.14 + 0.013 & 2.05 & 2.99 & 4.31 & -7.489 & 3.77 & 5.83 + 0.014 & 2.42 & 3.59 & 5.30 & -7.240 & 3.94 & 3.36 + 0.015 & 2.50 & 4.07 & 7.48 & -6.992 & 5.61 & 6.17 + 0.016 & 2.19 & 4.57 & 1.27 & -6.744 & 8.46 & 1.18 + 0.018 & 1.39 & 6.05 & 2.28 & -6.271 & 1.27 & 7.85 + 0.020 & 9.63 & 4.78 & 1.67 & -5.844 & 1.37 & 8.08 + 0.025 & 6.69 & 2.73 & 8.07 & -4.982 & 1.34 & 1.29 + 0.030 & 2.98 & 1.49 & 5.89 & -4.349 & 1.53 & 6.75 + 0.040 & 8.04 & 7.35 & 3.13 & -3.515 & 1.77 & 1.90 + 0.050 & 1.53 & 1.25 & 5.21 & -2.997 & 1.69 & 1.57 + 0.060 & 8.08 & 3.91 & 1.54 & -2.637 & 1.41 & 6.36 + 0.070 & 2.29 & 5.71 & 1.81 & -2.349 & 9.65 & 8.51 + 0.080 & 5.33 & 7.80 & 1.48 & -2.086 & 5.18 & 2.91 + 0.090 & 7.78 & 9.54 & 1.25 & -1.844 & 2.56 & 1.63 + 0.100 & 6.95 & 8.17 & 9.68 & -1.632 & 1.69 & 6.49 + 0.110 & 4.24 & 4.94 & 5.76 & -1.452 & 1.52 & 4.50 + 0.120 & 1.93 & 2.24 & 2.60 & -1.301 & 1.49 & 3.55 + 0.130 & 7.00 & 8.10 & 9.39 & -1.172 & 1.47 & 4.78 + 0.140 & 2.12 & 2.45 & 2.83 & -1.062 & 1.44 & 6.12 + 0.150 & 5.59 & 6.41 & 7.38 & -9.653 & 1.40 & 7.95 + 0.160 & 1.31 & 1.50 & 1.72 & -8.803 & 1.35 & 1.05 + 0.180 & 5.66 & 6.38 & 7.22 & -7.355 & 1.22 & 1.71 + 0.200 & 1.92 & 2.13 & 2.38 & -6.149 & 1.08 & 2.02 + 0.250 & 2.13 & 2.30 & 2.49 & -3.770 & 7.81 & 8.01 + 0.300 & 1.31 & 1.40 & 1.50 & -1.964 & 6.63 & 6.20 + 0.350 & 5.38 & 5.74 & 6.12 & -5.554 & 6.59 & 4.45 + 0.400 & 1.62 & 1.74 & 1.86 & 5.518 & 6.93 & 5.24 + 0.450 & 3.89 & 4.19 & 4.50 & 1.432 & 7.33 & 5.83 + 0.500 & 7.88 & 8.50 & 9.18 & 2.140 & 7.67 & 5.45 + 0.600 & 2.26 & 2.45 & 2.65 & 3.197 & 8.15 & 3.81 + 0.700 & 4.72 & 
5.14 & 5.59 & 3.939 & 8.41 & 3.05 + 0.800 & 8.15 & 8.87 & 9.66 & 4.485 & 8.51 & 2.64 + 0.900 & 1.24 & 1.35 & 1.47 & 4.904 & 8.47 & 2.86 + 1.000 & 1.73 & 1.88 & 2.04 & 5.237 & 8.33 & 3.46 + 1.250 & 3.20 & 3.45 & 3.73 & 5.844 & 7.70 & 6.19 + 1.500 & 4.90 & 5.25 & 5.64 & 6.265 & 6.97 & 8.21 + 1.750 & 6.75 & 7.19 & 7.67 & 6.578 & 6.37 & 6.89 + 2.000 & ( 9.44 ) & ( 1.01 ) & ( 1.07 ) & ( 6.914 ) & ( 6.32 ) & + 2.500 & ( 1.57 ) & ( 1.67 ) & ( 1.78 ) & ( 7.422 ) & ( 6.32 ) & + 3.000 & ( 2.21 ) & ( 2.36 ) & ( 2.51 ) & ( 7.764 ) & ( 6.32 ) & + 3.500 & ( 2.84 ) & ( 3.03 ) & ( 3.22 ) & ( 8.015 ) & ( 6.32 ) & + 4.000 & ( 3.43 ) & ( 3.65 ) & ( 3.89 ) & ( 8.204 ) & ( 6.32 ) & + 5.000 & ( 4.49 ) & ( 4.78 ) & ( 5.09 ) & ( 8.472 ) & ( 6.32 ) & + 6.000 & ( 5.43 ) & ( 5.78 ) & ( 6.16 ) & ( 8.663 ) & ( 6.32 ) & + 7.000 & ( 6.22 ) & ( 6.62 ) & ( 7.05 ) & ( 8.798 ) & ( 6.32 ) & + 8.000 & ( 6.96 ) & ( 7.42 ) & ( 7.90 ) & ( 8.912 ) & ( 6.32 ) & + 9.000 & ( 7.63 ) & ( 8.13 ) & ( 8.66 ) & ( 9.003 ) & ( 6.32 ) & + 10.000 & ( 8.37 ) & ( 8.91 ) & ( 9.49 ) & ( 9.095 ) & ( 6.32 ) & + comments : measured resonance energies and strengths are reported in vogelaar . for the strength of the e=189 kev resonance , the weighted average of the values presented in vogelaar and ruiz et al . has been adopted . for almost all resonances below e kev , including the e=189 kev resonance and the threshold states , resonance energies are computed from the excitation energies determined by lotay et al . . for unobserved low - energy resonances ,we compute proton partial widths using c and , except : ( i ) for e kev , for which the lowest possible orbital angular momentum is ; ( ii ) for e kev , for which an s - wave spectroscopic factor of c.002 has been measured by ref . ( assuming ) ; and ( iii ) for e kev , where a proton width upper limit has been computed from the experimental upper limit of the resonance strength ( assuming ) . in total , 19 resonances with e=6 - 895 kev are taken into account .the direct capture component is adopted from the calculation of champagne et al . 
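Where a resonance strength has been measured independently more than once, as for the E = 189 keV resonance discussed above, the adopted value is a weighted average of the individual results. A minimal sketch of the usual inverse-variance weighting is given below; the numerical strengths and uncertainties are placeholders chosen for illustration, not the published values.
\begin{verbatim}
import numpy as np

def weighted_average(values, errors):
    # Inverse-variance weighted mean of independent measurements
    # and the corresponding uncertainty of the mean.
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mean, sigma

# Illustrative strengths and uncertainties (MeV); not the measured values.
wg, dwg = weighted_average([2.9e-7, 3.4e-7], [0.6e-7, 0.7e-7])
\end{verbatim}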
.+ & & & & & & + + & & & & & & + 0.010 & 3.30 & 4.88 & 7.14 & -8.361 & 3.87 & 5.26 + 0.011 & 9.13 & 1.35 & 1.96 & -8.030 & 3.84 & 9.71 + 0.012 & 1.70 & 2.50 & 3.66 & -7.737 & 3.86 & 2.42 + 0.013 & 2.38 & 3.46 & 5.03 & -7.474 & 3.78 & 2.42 + 0.014 & 2.81 & 4.11 & 5.87 & -7.228 & 3.68 & 1.84 + 0.015 & 2.92 & 4.74 & 7.59 & -6.983 & 4.65 & 2.42 + 0.016 & 2.70 & 5.76 & 1.33 & -6.730 & 7.34 & 2.53 + 0.018 & 1.84 & 9.75 & 2.96 & -6.237 & 1.23 & 8.16 + 0.020 & 1.31 & 8.04 & 2.51 & -5.803 & 1.37 & 9.57 + 0.025 & 6.60 & 2.65 & 7.48 & -4.987 & 1.26 & 8.05 + 0.030 & 2.03 & 7.00 & 1.60 & -4.431 & 1.14 & 1.27 + 0.040 & 2.41 & 8.33 & 1.68 & -3.728 & 1.09 & 1.92 + 0.050 & 1.59 & 5.64 & 1.29 & -3.303 & 1.13 & 1.51 + 0.060 & 2.95 & 9.66 & 2.43 & -3.010 & 1.02 & 7.66 + 0.070 & 9.71 & 1.50 & 2.69 & -2.717 & 4.59 & 5.65 + 0.080 & 3.78 & 4.44 & 5.22 & -2.384 & 1.60 & 9.78 + 0.090 & 7.61 & 8.64 & 9.84 & -2.087 & 1.31 & 7.53 + 0.100 & 8.71 & 9.79 & 1.10 & -1.844 & 1.19 & 5.99 + 0.110 & 6.44 & 7.16 & 7.98 & -1.645 & 1.08 & 4.73 + 0.120 & 3.40 & 3.76 & 4.15 & -1.479 & 1.00 & 4.00 + 0.130 & 1.39 & 1.52 & 1.67 & -1.339 & 9.37 & 3.28 + 0.140 & 4.64 & 5.06 & 5.51 & -1.219 & 8.82 & 2.85 + 0.150 & 1.32 & 1.43 & 1.55 & -1.116 & 8.34 & 3.08 + 0.160 & 3.28 & 3.55 & 3.84 & -1.025 & 7.92 & 3.42 + 0.180 & 1.52 & 1.63 & 1.75 & -8.721 & 7.17 & 3.14 + 0.200 & 5.26 & 5.61 & 5.98 & -7.485 & 6.47 & 2.39 + 0.250 & 5.44 & 5.71 & 5.99 & -5.166 & 4.89 & 5.28 + 0.300 & 2.97 & 3.08 & 3.20 & -3.479 & 3.76 & 4.99 + 0.350 & 1.12 & 1.16 & 1.19 & -2.158 & 3.17 & 5.66 + 0.400 & 3.30 & 3.40 & 3.50 & -1.078 & 2.94 & 4.40 + 0.450 & 8.20 & 8.44 & 8.68 & -1.699 & 2.89 & 3.54 + 0.500 & 1.79 & 1.84 & 1.90 & 6.124 & 2.95 & 2.85 + 0.600 & 6.52 & 6.72 & 6.94 & 1.906 & 3.15 & 8.06 + 0.700 & 1.83 & 1.89 & 1.95 & 2.939 & 3.28 & 1.20 + 0.800 & 4.25 & 4.39 & 4.54 & 3.782 & 3.35 & 2.05 + 0.900 & 8.52 & 8.80 & 9.11 & 4.478 & 3.41 & 3.80 + 1.000 & 1.53 & 1.58 & 1.63 & 5.061 & 3.46 & 6.87 + 1.250 & 4.62 & 4.76 & 4.94 & 6.169 & 3.50 & 1.72 + 1.500 & 1.01 & 1.04 & 1.08 & 6.952 & 3.42 & 2.56 + 1.750 & 1.82 & 1.87 & 1.94 & 7.538 & 3.25 & 2.85 + 2.000 & 2.88 & 2.96 & 3.05 & 7.995 & 3.07 & 2.74 + 2.500 & 5.62 & 5.76 & 5.92 & 8.660 & 2.76 & 1.81 + 3.000 & 8.89 & 9.10 & 9.34 & 9.117 & 2.55 & 9.74 + 3.500 & 1.24 & 1.27 & 1.30 & 9.446 & 2.44 & 5.20 + 4.000 & 1.58 & 1.62 & 1.65 & 9.691 & 2.38 & 3.09 + 5.000 & 2.19 & 2.24 & 2.30 & 1.002 & 2.32 & 1.55 + 6.000 & 2.68 & 2.74 & 2.80 & 1.022 & 2.30 & 1.01 + 7.000 & 3.03 & 3.10 & 3.17 & 1.034 & 2.29 & 8.15 + 8.000 & 3.27 & 3.35 & 3.42 & 1.042 & 2.30 & 7.45 + 9.000 & 3.43 & 3.51 & 3.59 & 1.047 & 2.31 & 6.89 + 10.000 & 3.52 & 3.60 & 3.68 & 1.049 & 2.32 & 6.83 + comments : a total of 109 resonances at energies of e kev are taken into account for calculating the reaction rates .the strengths of measured resonances ( e kev ) are renormalized to the standard strengths listed in tab . 1 of iliadis et alfor the contribution of threshold states and direct capture , see the comments in ref . 
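For orientation, once the resonance energies and (renormalized) strengths described in these comments are fixed, the narrow-resonance contribution to the total rate follows the standard expression $N_A\langle\sigma v\rangle = 1.5399\times 10^{11}\,(\mu T_9)^{-3/2}\sum_i (\omega\gamma)_i \exp(-11.605\,E_i/T_9)$ cm$^3$ mol$^{-1}$ s$^{-1}$, with $E_i$ and $(\omega\gamma)_i$ in MeV and $\mu$ the reduced mass in u. A minimal sketch of this sum, using placeholder resonance parameters rather than the adopted ones:
\begin{verbatim}
import numpy as np

def narrow_resonance_rate(T9, mu, resonances):
    # N_A<sigma v> in cm^3 mol^-1 s^-1 from a sum over narrow resonances.
    # T9: temperature in GK; mu: reduced mass in u;
    # resonances: iterable of (E_r, omega_gamma), both in MeV.
    s = sum(wg * np.exp(-11.605 * E_r / T9) for E_r, wg in resonances)
    return 1.5399e11 * (mu * T9) ** (-1.5) * s

# Hypothetical resonance list; energies and strengths are placeholders.
print(narrow_resonance_rate(T9=0.1, mu=0.97,
                            resonances=[(0.190, 2.0e-7), (0.435, 9.0e-8)]))
\end{verbatim}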
.+ & & & & & & + + & & & & & & + 0.010 & 3.11 & 9.05 & 2.08 & -9.000 & 9.20 & 2.13 + 0.011 & 9.83 & 2.96 & 6.96 & -8.652 & 9.49 & 2.43 + 0.012 & 3.24 & 1.24 & 3.27 & -8.285 & 1.13 & 4.74 + 0.013 & 1.65 & 1.22 & 3.56 & -7.845 & 1.51 & 1.06 + 0.014 & 1.11 & 9.73 & 2.88 & -7.417 & 1.72 & 1.51 + 0.015 & 5.08 & 4.56 & 1.35 & -7.036 & 1.79 & 1.68 + 0.016 & 1.46 & 1.32 & 3.88 & -6.700 & 1.80 & 1.71 + 0.018 & 4.02 & 3.56 & 1.03 & -6.139 & 1.74 & 1.64 + 0.020 & 3.55 & 3.07 & 8.90 & -5.691 & 1.67 & 1.53 + 0.025 & 1.33 & 9.13 & 2.58 & -4.883 & 1.50 & 1.20 + 0.030 & 3.73 & 1.86 & 5.10 & -4.343 & 1.35 & 9.91 + 0.040 & 4.21 & 1.50 & 3.62 & -3.666 & 1.17 & 1.01 + 0.050 & 2.66 & 9.11 & 1.87 & -3.259 & 1.10 & 1.28 + 0.060 & 4.07 & 1.38 & 2.62 & -2.989 & 1.08 & 1.39 + 0.070 & 2.78 & 9.11 & 1.81 & -2.797 & 1.06 & 1.26 + 0.080 & 1.22 & 3.84 & 7.78 & -2.650 & 9.93 & 9.38 + 0.090 & 4.77 & 1.31 & 2.61 & -2.522 & 8.75 & 6.04 + 0.100 & 1.85 & 4.34 & 8.51 & -2.394 & 8.06 & 2.67 + 0.110 & 6.62 & 1.52 & 3.25 & -2.264 & 8.68 & 6.84 + 0.120 & 2.14 & 5.37 & 1.37 & -2.135 & 9.83 & 2.21 + 0.130 & 6.24 & 1.84 & 5.10 & -2.014 & 1.08 & 2.41 + 0.140 & 1.74 & 5.72 & 1.61 & -1.904 & 1.13 & 7.21 + 0.150 & 4.57 & 1.56 & 4.35 & -1.806 & 1.15 & 1.16 + 0.160 & 1.11 & 3.83 & 1.04 & -1.717 & 1.14 & 1.26 + 0.180 & 5.69 & 1.75 & 4.55 & -1.561 & 1.02 & 5.78 + 0.200 & 2.96 & 6.76 & 1.54 & -1.418 & 7.92 & 6.58 + 0.250 & 2.31 & 2.84 & 3.61 & -1.045 & 2.42 & 1.91 + 0.300 & 6.43 & 7.67 & 9.18 & -7.173 & 1.77 & 4.71 + 0.350 & 7.43 & 8.88 & 1.06 & -4.726 & 1.77 & 3.60 + 0.400 & 4.70 & 5.59 & 6.63 & -2.886 & 1.73 & 3.15 + 0.450 & 1.98 & 2.33 & 2.76 & -1.456 & 1.68 & 3.00 + 0.500 & 6.25 & 7.33 & 8.59 & -3.119 & 1.62 & 3.21 + 0.600 & 3.56 & 4.11 & 4.76 & 1.413 & 1.49 & 3.75 + 0.700 & 1.27 & 1.45 & 1.65 & 2.671 & 1.35 & 4.45 + 0.800 & 3.44 & 3.88 & 4.35 & 3.655 & 1.21 & 3.86 + 0.900 & 7.92 & 8.81 & 9.74 & 4.477 & 1.06 & 3.55 + 1.000 & 1.65 & 1.81 & 1.98 & 5.198 & 9.16 & 4.17 + 1.250 & 7.90 & 8.47 & 9.05 & 6.741 & 6.93 & 3.15 + 1.500 & 2.79 & 2.97 & 3.17 & 7.997 & 6.59 & 2.41 + 1.750 & 7.69 & 8.19 & 8.75 & 9.012 & 6.48 & 4.39 + 2.000 & 1.76 & 1.86 & 1.98 & 9.833 & 6.19 & 6.16 + 2.500 & 6.08 & 6.40 & 6.77 & 1.107 & 5.44 & 7.11 + 3.000 & 1.47 & 1.54 & 1.62 & 1.195 & 4.79 & 4.73 + 3.500 & 2.85 & 2.97 & 3.10 & 1.260 & 4.32 & 2.43 + 4.000 & 4.72 & 4.91 & 5.11 & 1.311 & 3.98 & 2.40 + 5.000 & ( 1.19 ) & ( 1.24 ) & ( 1.29 ) & ( 1.403 ) & ( 3.95 ) & + 6.000 & ( 2.44 ) & ( 2.54 ) & ( 2.65 ) & ( 1.475 ) & ( 3.95 ) & + 7.000 & ( 4.28 ) & ( 4.45 ) & ( 4.63 ) & ( 1.531 ) & ( 3.95 ) & + 8.000 & ( 6.73 ) & ( 7.00 ) & ( 7.29 ) & ( 1.576 ) & ( 3.95 ) & + 9.000 & ( 9.74 ) & ( 1.01 ) & ( 1.05 ) & ( 1.613 ) & ( 3.95 ) & + 10.000 & ( 1.41 ) & ( 1.47 ) & ( 1.53 ) & ( 1.650 ) & ( 3.95 ) & + comments : a total of 91 resonances at energies of e kev are taken into account for calculating the reaction rates .the strengths of measured resonances ( e kev ) are adopted from endt .note that no carefully measured ( that is , standard ) strength exists for this reaction . for the contribution of threshold states , see the comments in iliadis et al . .the levels at e , 11985 and 12015.2 kev ( e , 400 , 431 kev ) have ambiguous assignments . 
herewe disregard these states based on the unnatural parity assignments ( , , ) from the shell model ; see endt and booten .+ & & & & & & + + & & & & & & + 0.010 & 8.24 & 1.21 & 1.76 & -8.962 & 3.81 & 2.47 + 0.011 & 2.68 & 3.90 & 5.68 & -8.614 & 3.84 & 2.89 + 0.012 & 5.80 & 8.48 & 1.25 & -8.305 & 3.82 & 2.44 + 0.013 & 9.05 & 1.33 & 1.96 & -8.031 & 3.90 & 2.60 + 0.014 & 1.10 & 1.59 & 2.32 & -7.782 & 3.81 & 4.79 + 0.015 & 1.04 & 1.52 & 2.21 & -7.557 & 3.85 & 4.31 + 0.016 & 8.18 & 1.21 & 1.78 & -7.350 & 3.87 & 3.22 + 0.018 & 3.16 & 4.64 & 6.80 & -6.985 & 3.86 & 2.89 + 0.020 & 7.36 & 1.09 & 1.60 & -6.670 & 3.89 & 2.97 + 0.025 & 4.01 & 5.84 & 8.60 & -6.040 & 3.87 & 2.02 + 0.030 & 4.91 & 7.13 & 1.05 & -5.560 & 3.90 & 7.40 + 0.040 & 5.28 & 7.77 & 1.15 & -4.860 & 3.92 & 7.48 + 0.050 & 7.90 & 1.18 & 1.82 & -4.352 & 6.24 & 2.44 + 0.060 & 4.11 & 6.80 & 3.34 & -3.902 & 1.49 & 5.58 + 0.070 & 1.30 & 6.54 & 1.30 & -3.440 & 2.23 & 1.59 + 0.080 & 7.04 & 9.88 & 1.28 & -2.992 & 2.42 & 1.47 + 0.090 & 5.05 & 5.41 & 4.51 & -2.606 & 2.23 & 1.16 + 0.100 & 1.84 & 1.38 & 7.85 & -2.286 & 1.91 & 1.45 + 0.110 & 3.04 & 1.75 & 7.75 & -2.031 & 1.66 & 2.85 + 0.120 & 3.36 & 1.51 & 5.30 & -1.815 & 1.43 & 3.70 + 0.130 & 2.50 & 9.24 & 2.65 & -1.634 & 1.23 & 4.62 + 0.140 & 1.39 & 4.31 & 1.04 & -1.480 & 1.06 & 5.57 + 0.150 & 6.06 & 1.62 & 3.40 & -1.347 & 9.24 & 6.39 + 0.160 & 2.19 & 5.09 & 9.56 & -1.231 & 8.06 & 6.89 + 0.180 & 1.79 & 3.32 & 5.47 & -1.039 & 6.28 & 6.19 + 0.200 & 9.00 & 1.46 & 2.25 & -8.873 & 5.14 & 3.62 + 0.250 & 1.35 & 2.10 & 3.14 & -6.193 & 4.44 & 1.56 + 0.300 & 7.19 & 1.24 & 1.88 & -4.457 & 5.21 & 5.53 + 0.350 & 2.15 & 4.32 & 6.81 & -3.253 & 6.19 & 8.90 + 0.400 & 4.69 & 1.06 & 1.77 & -2.377 & 7.06 & 9.94 + 0.450 & 8.39 & 2.09 & 3.67 & -1.716 & 7.79 & 9.87 + 0.500 & 1.31 & 3.51 & 6.56 & -1.204 & 8.40 & 9.43 + 0.600 & 2.48 & 7.38 & 1.51 & -4.718 & 9.33 & 8.43 + 0.700 & 3.76 & 1.21 & 2.66 & 1.740 & 9.98 & 7.57 + 0.800 & 5.01 & 1.70 & 3.94 & 3.616 & 1.04 & 6.79 + 0.900 & 6.23 & 2.17 & 5.25 & 6.153 & 1.06 & 6.01 + 1.000 & 7.36 & 2.62 & 6.51 & 8.110 & 1.07 & 5.19 + 1.250 & 1.05 & 3.56 & 9.21 & 1.166 & 1.03 & 3.13 + 1.500 & 1.50 & 4.41 & 1.12 & 1.439 & 9.30 & 1.81 + 1.750 & 2.27 & 5.36 & 1.29 & 1.695 & 8.04 & 1.73 + 2.000 & 3.43 & 6.65 & 1.46 & 1.948 & 6.84 & 2.43 + 2.500 & 7.29 & 1.13 & 1.96 & 2.471 & 4.87 & 1.99 + 3.000 & 1.35 & 1.94 & 2.86 & 2.979 & 3.81 & 2.85 + 3.500 & 2.24 & 3.17 & 4.48 & 3.458 & 3.42 & 9.60 + 4.000 & 3.46 & 4.82 & 6.78 & 3.881 & 3.38 & 2.99 + 5.000 & 6.88 & 9.70 & 1.39 & 4.584 & 3.54 & 8.74 + 6.000 & 1.16 & 1.67 & 2.42 & 5.116 & 3.67 & 7.10 + 7.000 & 1.74 & 2.51 & 3.62 & 5.527 & 3.70 & 6.15 + 8.000 & 2.39 & 3.43 & 5.03 & 5.842 & 3.72 & 8.43 + 9.000 & 3.06 & 4.45 & 6.47 & 6.097 & 3.79 & 2.51 + 10.000 & 3.74 & 5.41 & 7.90 & 6.296 & 3.80 & 3.95 + comments : the contributions from the direct capture to the ground state and from 3 resonances are taken into account for the calculation of the total rates .the direct capture s - factor is obtained here using the experimental spectroscopic factor of from the mg mirror state .our s - factor is slightly lower than what has been reported by guo et al . ( using the anc method ) and slightly higher than the shell model based value of herndl et al . .the resonances are located at e , 772 and 1090 kev .the energy of the first resonance ( 3/2 ) is calculated using a measured excitation energy of e kev ( from -ray spectroscopy ) which disagrees with the previously reported value by caggiano et al . 
.the energy of the second resonance ( ) is adopted from ref .the energy of the third resonance ( ) is a rough estimate that is based on the excitation energy of e kev listed in moon et al . ( which , in turn , was extracted from fig . 4 of ref . ) . all protonpartial widths are computed using experimental spectroscopic factors from the mg mirror levels .the -ray partial widths are either adopted from the shell model or are calculated using the measured lifetime of the mirror state .note that no other resonances are expected to occur below an energy of e kev , according to the study of moon et al . .+ & & & & & & + + & & & & & & + 0.010 & 5.05 & 1.31 & 2.53 & -7.355 & 8.54 & 4.90 + 0.011 & 3.11 & 5.87 & 9.64 & -6.968 & 6.10 & 3.67 + 0.012 & 8.67 & 1.39 & 2.15 & -6.647 & 4.82 & 1.14 + 0.013 & 1.31 & 2.08 & 3.17 & -6.376 & 4.75 & 1.07 + 0.014 & 1.25 & 2.17 & 3.35 & -6.145 & 5.49 & 3.09 + 0.015 & 8.38 & 1.67 & 2.71 & -5.945 & 6.54 & 4.90 + 0.016 & 4.20 & 9.72 & 1.72 & -5.771 & 7.64 & 5.24 + 0.018 & 6.07 & 1.77 & 3.85 & -5.483 & 9.68 & 4.27 + 0.020 & 5.08 & 1.78 & 4.56 & -5.253 & 1.13 & 3.11 + 0.025 & 4.02 & 1.24 & 4.09 & -4.814 & 1.13 & 1.35 + 0.030 & 3.44 & 6.35 & 1.23 & -4.418 & 6.73 & 7.69 + 0.040 & 2.85 & 4.44 & 6.77 & -3.767 & 4.50 & 4.96 + 0.050 & 1.78 & 3.26 & 5.49 & -3.340 & 5.87 & 1.11 + 0.060 & 8.01 & 1.30 & 2.08 & -2.968 & 4.90 & 3.29 + 0.070 & 2.67 & 4.22 & 6.44 & -2.620 & 4.43 & 1.40 + 0.080 & 4.67 & 7.13 & 1.04 & -2.338 & 4.07 & 7.25 + 0.090 & 4.42 & 6.53 & 9.55 & -2.115 & 3.87 & 2.08 + 0.100 & 2.61 & 3.82 & 5.62 & -1.938 & 3.84 & 3.82 + 0.110 & 1.09 & 1.60 & 2.40 & -1.795 & 3.93 & 3.55 + 0.120 & 3.51 & 5.25 & 7.97 & -1.676 & 4.07 & 3.44 + 0.130 & 9.37 & 1.43 & 2.21 & -1.576 & 4.22 & 3.88 + 0.140 & 2.17 & 3.36 & 5.29 & -1.490 & 4.36 & 4.09 + 0.150 & 4.57 & 7.12 & 1.12 & -1.415 & 4.42 & 4.89 + 0.160 & 9.12 & 1.41 & 2.22 & -1.347 & 4.37 & 6.53 + 0.180 & 3.53 & 5.21 & 7.75 & -1.216 & 3.95 & 2.67 + 0.200 & 1.35 & 1.99 & 2.87 & -1.083 & 3.71 & 8.01 + 0.250 & 2.87 & 4.33 & 6.57 & -7.747 & 4.14 & 4.01 + 0.300 & 2.84 & 4.29 & 6.51 & -5.454 & 4.16 & 8.38 + 0.350 & 1.49 & 2.24 & 3.34 & -3.806 & 4.02 & 9.70 + 0.400 & 5.15 & 7.64 & 1.13 & -2.579 & 3.88 & 9.38 + 0.450 & 1.34 & 1.97 & 2.86 & -1.635 & 3.74 & 8.51 + 0.500 & 2.87 & 4.16 & 5.97 & -8.852 & 3.60 & 7.24 + 0.600 & 8.98 & 1.27 & 1.77 & 2.327 & 3.35 & 5.46 + 0.700 & 2.03 & 2.80 & 3.84 & 1.030 & 3.12 & 6.01 + 0.800 & 3.77 & 5.11 & 6.88 & 1.630 & 2.95 & 6.69 + 0.900 & 6.11 & 8.17 & 1.09 & 2.099 & 2.83 & 6.57 + 1.000 & 8.93 & 1.19 & 1.58 & 2.475 & 2.76 & 7.47 + 1.250 & 1.78 & 2.35 & 3.07 & 3.155 & 2.68 & 4.33 + 1.500 & 2.83 & 3.71 & 4.81 & 3.613 & 2.65 & 4.95 + 1.750 & 3.98 & 5.16 & 6.65 & 3.945 & 2.59 & 1.20 + 2.000 & 5.17 & 6.65 & 8.54 & 4.200 & 2.52 & 2.23 + 2.500 & 7.60 & 9.54 & 1.21 & 4.565 & 2.36 & 3.78 + 3.000 & 9.86 & 1.22 & 1.53 & 4.810 & 2.23 & 4.01 + 3.500 & 1.18 & 1.44 & 1.79 & 4.981 & 2.13 & 3.58 + 4.000 & 1.34 & 1.62 & 2.01 & 5.101 & 2.07 & 3.07 + 5.000 & ( 1.79 ) & ( 2.20 ) & ( 2.70 ) & ( 5.392 ) & ( 2.06 ) & + 6.000 & ( 2.29 ) & ( 2.82 ) & ( 3.46 ) & ( 5.641 ) & ( 2.06 ) & + 7.000 & ( 2.81 ) & ( 3.46 ) & ( 4.25 ) & ( 5.845 ) & ( 2.06 ) & + 8.000 & ( 3.35 ) & ( 4.12 ) & ( 5.06 ) & ( 6.020 ) & ( 2.06 ) & + 9.000 & ( 3.89 ) & ( 4.78 ) & ( 5.87 ) & ( 6.169 ) & ( 2.06 ) & + 10.000 & ( 4.44 ) & ( 5.46 ) & ( 6.71 ) & ( 6.303 ) & ( 2.06 ) & + + & & & & & & + + & & & & & & + 0.010 & 6.12 & 8.94 & 1.31 & -8.991 & 3.85 & 3.42 + 0.011 & 1.98 & 2.91 & 4.27 & -8.643 & 3.85 & 5.86 + 0.012 & 4.32 & 6.36 & 9.32 & -8.335 & 3.85 & 3.83 + 
0.013 & 6.79 & 1.00 & 1.47 & -8.059 & 3.87 & 5.13 + 0.014 & 8.24 & 1.21 & 1.78 & -7.810 & 3.85 & 1.05 + 0.015 & 7.92 & 1.16 & 1.70 & -7.583 & 3.85 & 2.33 + 0.016 & 6.21 & 9.11 & 1.34 & -7.378 & 3.88 & 5.41 + 0.018 & 2.41 & 3.54 & 5.19 & -7.011 & 3.86 & 5.17 + 0.020 & 5.66 & 8.30 & 1.21 & -6.696 & 3.83 & 8.54 + 0.025 & 3.11 & 4.57 & 6.71 & -6.065 & 3.86 & 2.70 + 0.030 & 3.79 & 5.57 & 8.18 & -5.585 & 3.86 & 1.34 + 0.040 & 4.21 & 6.15 & 9.04 & -4.884 & 3.85 & 4.25 + 0.050 & 6.09 & 8.96 & 1.31 & -4.386 & 3.88 & 3.33 + 0.060 & 2.72 & 3.98 & 5.84 & -4.006 & 3.84 & 3.44 + 0.070 & 5.62 & 8.26 & 1.20 & -3.704 & 3.86 & 4.89 + 0.080 & 6.77 & 9.97 & 1.46 & -3.454 & 3.87 & 2.95 + 0.090 & 5.70 & 8.33 & 1.21 & -3.242 & 3.80 & 2.86 + 0.100 & 4.46 & 6.06 & 8.46 & -3.042 & 3.19 & 7.66 + 0.110 & 5.41 & 6.35 & 7.58 & -2.808 & 1.73 & 3.00 + 0.120 & 7.83 & 8.79 & 9.87 & -2.546 & 1.16 & 1.96 + 0.130 & 9.05 & 1.01 & 1.14 & -2.301 & 1.14 & 2.28 + 0.140 & 7.70 & 8.63 & 9.66 & -2.087 & 1.14 & 1.68 + 0.150 & 4.96 & 5.55 & 6.21 & -1.901 & 1.13 & 2.40 + 0.160 & 2.53 & 2.83 & 3.16 & -1.738 & 1.12 & 2.69 + 0.180 & 3.77 & 4.20 & 4.69 & -1.468 & 1.09 & 3.39 + 0.200 & 3.22 & 3.58 & 3.99 & -1.254 & 1.08 & 3.79 + 0.250 & 1.46 & 1.62 & 1.79 & -8.730 & 1.05 & 4.58 + 0.300 & 1.76 & 1.95 & 2.16 & -6.241 & 1.04 & 5.32 + 0.350 & 1.00 & 1.11 & 1.23 & -4.499 & 1.03 & 5.94 + 0.400 & 3.62 & 4.00 & 4.43 & -3.219 & 1.02 & 6.21 + 0.450 & 9.59 & 1.06 & 1.17 & -2.244 & 1.02 & 6.34 + 0.500 & 2.06 & 2.27 & 2.52 & -1.481 & 1.01 & 6.32 + 0.600 & 6.23 & 6.88 & 7.62 & -3.730 & 1.01 & 6.31 + 0.700 & 1.33 & 1.47 & 1.62 & 3.831 & 1.01 & 6.27 + 0.800 & 2.28 & 2.52 & 2.78 & 9.238 & 1.00 & 6.26 + 0.900 & 3.40 & 3.76 & 4.15 & 1.324 & 1.00 & 6.12 + 1.000 & 4.62 & 5.10 & 5.64 & 1.630 & 9.98 & 6.25 + 1.250 & 7.79 & 8.58 & 9.46 & 2.150 & 9.72 & 7.14 + 1.500 & 1.16 & 1.27 & 1.39 & 2.540 & 8.77 & 1.45 + 1.750 & 1.85 & 1.99 & 2.14 & 2.990 & 7.14 & 1.79 + 2.000 & 3.23 & 3.44 & 3.67 & 3.540 & 6.37 & 9.58 + 2.500 & 9.69 & 1.04 & 1.12 & 4.646 & 7.26 & 1.14 + 3.000 & 2.28 & 2.46 & 2.66 & 5.506 & 7.85 & 5.74 + 3.500 & 4.24 & 4.60 & 4.98 & 6.131 & 8.02 & 3.93 + 4.000 & 6.73 & 7.29 & 7.89 & 6.592 & 8.02 & 3.70 + 5.000 & 1.25 & 1.35 & 1.46 & 7.208 & 7.83 & 4.19 + 6.000 & 1.83 & 1.97 & 2.13 & 7.587 & 7.58 & 5.24 + 7.000 & 2.35 & 2.52 & 2.71 & 7.834 & 7.35 & 6.36 + 8.000 & 2.77 & 2.98 & 3.20 & 7.999 & 7.14 & 7.47 + 9.000 & 3.11 & 3.33 & 3.58 & 8.113 & 6.97 & 8.45 + 10.000 & 3.37 & 3.60 & 3.86 & 8.191 & 6.82 & 9.12 + comments : in total , 10 resonances with e=357 - 2991 kev are taken into account for the estimation of the rates .the resonance energies are calculated from the excitation energies listed in endt and the q - value ( see tab . [tab : master ] ) , except for two broad resonances ( e=1594 and 2009 kev ) whose energies are adopted from graff et al .for the resonance strengths we use the average values presented in angulo et al . . below e=1.1 mevthe nonresonant rate contribution is dominated by direct capture and the tail of the broad e=1594 kev resonance .the corresponding s - factor is adopted from graff et al . .the tail of the e=357 kev resonance is found to be negligible compared to other contributions .the present recommended rates are in agreement with those of angulo et al . 
, but our rate uncertainties are smaller .+ & & & & & & + + & & & & & & + 0.010 & 1.37 & 2.01 & 2.97 & -8.910 & 3.83 & 5.06 + 0.011 & 4.41 & 6.50 & 9.64 & -8.562 & 3.92 & 3.17 + 0.012 & 9.79 & 1.44 & 2.10 & -8.253 & 3.87 & 2.73 + 0.013 & 1.55 & 2.28 & 3.33 & -7.977 & 3.87 & 1.32 + 0.014 & 1.88 & 2.74 & 3.99 & -7.728 & 3.82 & 3.88 + 0.015 & 1.77 & 2.61 & 3.86 & -7.502 & 3.89 & 9.27 + 0.016 & 1.41 & 2.06 & 3.01 & -7.296 & 3.84 & 2.58 + 0.018 & 5.51 & 8.12 & 1.20 & -6.928 & 3.87 & 3.38 + 0.020 & 1.38 & 2.05 & 3.04 & -6.606 & 4.03 & 2.41 + 0.025 & 1.76 & 7.77 & 3.43 & -5.778 & 1.37 & 2.68 + 0.030 & 2.22 & 2.18 & 8.53 & -5.025 & 1.85 & 1.18 + 0.040 & 4.89 & 4.85 & 1.56 & -4.040 & 1.96 & 2.63 + 0.050 & 1.85 & 1.80 & 5.44 & -3.452 & 1.96 & 3.12 + 0.060 & 8.81 & 8.73 & 2.65 & -3.065 & 1.96 & 3.20 + 0.070 & 1.35 & 1.33 & 4.12 & -2.790 & 1.92 & 3.01 + 0.080 & 1.05 & 9.97 & 3.13 & -2.583 & 1.80 & 2.54 + 0.090 & 5.54 & 4.71 & 1.49 & -2.416 & 1.58 & 1.76 + 0.100 & 3.72 & 1.77 & 5.28 & -2.262 & 1.18 & 8.67 + 0.110 & 3.72 & 7.59 & 1.71 & -2.097 & 6.98 & 3.56 + 0.120 & 3.54 & 5.10 & 7.47 & -1.909 & 3.68 & 1.74 + 0.130 & 2.64 & 3.56 & 4.79 & -1.715 & 3.00 & 2.36 + 0.140 & 1.59 & 2.15 & 2.92 & -1.535 & 3.06 & 5.51 + 0.150 & 7.83 & 1.06 & 1.45 & -1.375 & 3.10 & 5.41 + 0.160 & 3.22 & 4.36 & 5.98 & -1.234 & 3.10 & 5.33 + 0.180 & 3.45 & 4.64 & 6.33 & -9.973 & 3.04 & 7.26 + 0.200 & 2.33 & 3.09 & 4.18 & -8.074 & 2.92 & 1.25 + 0.250 & 7.69 & 9.75 & 1.27 & -4.619 & 2.51 & 4.32 + 0.300 & 8.42 & 1.02 & 1.27 & -2.268 & 2.08 & 8.66 + 0.350 & 4.81 & 5.65 & 6.78 & -5.603 & 1.74 & 1.13 + 0.400 & 1.80 & 2.06 & 2.41 & 7.326 & 1.50 & 1.12 + 0.450 & 5.00 & 5.66 & 6.48 & 1.740 & 1.33 & 9.24 + 0.500 & 1.13 & 1.27 & 1.43 & 2.544 & 1.21 & 6.90 + 0.600 & 3.77 & 4.18 & 4.66 & 3.736 & 1.07 & 3.27 + 0.700 & 8.75 & 9.63 & 1.06 & 4.569 & 9.98 & 1.57 + 0.800 & 1.61 & 1.77 & 1.95 & 5.180 & 9.53 & 8.57 + 0.900 & 2.58 & 2.82 & 3.09 & 5.643 & 9.21 & 5.78 + 1.000 & 3.71 & 4.05 & 4.44 & 6.005 & 8.96 & 4.26 + 1.250 & 7.00 & 7.60 & 8.29 & 6.634 & 8.48 & 3.99 + 1.500 & 1.05 & 1.14 & 1.23 & 7.036 & 8.13 & 3.89 + 1.750 & 1.39 & 1.50 & 1.63 & 7.315 & 7.86 & 2.82 + 2.000 & 1.72 & 1.85 & 2.00 & 7.525 & 7.61 & 2.16 + 2.500 & 2.37 & 2.54 & 2.72 & 7.839 & 7.10 & 1.58 + 3.000 & 3.04 & 3.25 & 3.46 & 8.085 & 6.58 & 3.15 + 3.500 & 3.77 & 4.01 & 4.25 & 8.296 & 6.13 & 3.43 + 4.000 & 4.55 & 4.82 & 5.10 & 8.480 & 5.80 & 4.67 + 5.000 & 6.17 & 6.50 & 6.87 & 8.782 & 5.41 & 9.58 + 6.000 & ( 7.69 ) & ( 8.11 ) & ( 8.56 ) & ( 9.001 ) & ( 5.40 ) & + 7.000 & ( 9.06 ) & ( 9.56 ) & ( 1.01 ) & ( 9.165 ) & ( 5.40 ) & + 8.000 & ( 1.03 ) & ( 1.09 ) & ( 1.15 ) & ( 9.293 ) & ( 5.40 ) & + 9.000 & ( 1.14 ) & ( 1.20 ) & ( 1.27 ) & ( 9.392 ) & ( 5.40 ) & + 10.000 & ( 1.24 ) & ( 1.30 ) & ( 1.38 ) & ( 9.476 ) & ( 5.40 ) & + comments : the same input information as in iliadis et al . is used for the calculation of the rates , with the exception of the unobserved e=296 kev resonance , which has previously been disregarded in the reaction rate calculation . according to endt ,the corresponding e=5890 kev state in is the analog of the e=5232 kev state ( j;t=3;1 ) in . with this assumption , a spectroscopic factor of s.025can be deduced from the results of a (d , p) study ( mackh et al . ) , in reasonable agreement with the shell model value reported in baxter and hinds . 
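Spectroscopic factors such as the one quoted above enter the rate through the proton partial width of an unobserved resonance, $\Gamma_p \approx C^2S\,\Gamma_{sp}$ (with $\Gamma_{sp}$ the single-particle width at the resonance energy), which together with the $\gamma$-ray partial width sets the resonance strength $\omega\gamma = \omega\,\Gamma_p\Gamma_\gamma/(\Gamma_p+\Gamma_\gamma)$, where $\omega = (2J_r+1)/[(2j_p+1)(2J_T+1)]$. A minimal sketch under these standard relations; every numerical input below is illustrative, not an adopted value.
\begin{verbatim}
def resonance_strength(J_r, J_T, Gamma_p, Gamma_gamma, j_p=0.5):
    # omega*gamma for a (p,gamma) resonance, in the same units as the widths,
    # assuming the proton and gamma channels dominate the total width.
    omega = (2.0 * J_r + 1.0) / ((2.0 * j_p + 1.0) * (2.0 * J_T + 1.0))
    return omega * Gamma_p * Gamma_gamma / (Gamma_p + Gamma_gamma)

# Placeholder inputs: C2S and the single-particle width Gamma_sp fix Gamma_p.
C2S, Gamma_sp = 0.025, 1.0e-6            # Gamma_sp in MeV (illustrative)
Gamma_p = C2S * Gamma_sp
print(resonance_strength(J_r=3.0, J_T=2.5,
                         Gamma_p=Gamma_p, Gamma_gamma=5.0e-8))
\end{verbatim}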
in total ,79 resonances with energies in the range of e=107 - 3075 kev are taken into account .+ & & & & & & + + & & & & & & + 0.010 & 9.47 & 1.87 & 4.58 & -8.441 & 2.78 & 1.59 + 0.011 & 1.42 & 3.90 & 6.38 & -7.939 & 2.83 & 5.46 + 0.012 & 1.55 & 3.37 & 3.97 & -7.504 & 2.73 & 8.39 + 0.013 & 8.27 & 1.48 & 1.32 & -7.132 & 2.58 & 1.36 + 0.014 & 2.49 & 3.76 & 2.70 & -6.812 & 2.45 & 1.91 + 0.015 & 4.57 & 6.20 & 3.78 & -6.535 & 2.35 & 2.44 + 0.016 & 5.81 & 7.14 & 3.84 & -6.293 & 2.28 & 2.92 + 0.018 & 3.62 & 4.06 & 1.93 & -5.890 & 2.20 & 3.59 + 0.020 & 9.19 & 1.01 & 4.65 & -5.570 & 2.19 & 3.81 + 0.025 & 2.47 & 3.08 & 1.58 & -4.997 & 2.25 & 3.28 + 0.030 & 9.39 & 1.30 & 7.94 & -4.619 & 2.34 & 2.46 + 0.040 & 7.47 & 1.28 & 1.03 & -4.150 & 2.40 & 1.16 + 0.050 & 1.37 & 1.91 & 1.84 & -3.858 & 2.21 & 7.55 + 0.060 & 2.76 & 1.27 & 1.22 & -3.630 & 1.78 & 2.94 + 0.070 & 5.35 & 1.05 & 5.07 & -3.413 & 1.22 & 6.37 + 0.080 & 6.34 & 1.27 & 2.44 & -3.194 & 8.14 & 2.00 + 0.090 & 5.05 & 1.08 & 2.13 & -2.988 & 6.89 & 5.89 + 0.100 & 3.03 & 6.79 & 1.43 & -2.805 & 6.93 & 1.46 + 0.110 & 1.46 & 3.31 & 7.11 & -2.647 & 7.03 & 1.68 + 0.120 & 6.20 & 1.34 & 2.81 & -2.506 & 6.69 & 1.54 + 0.130 & 2.70 & 5.43 & 9.93 & -2.368 & 6.01 & 1.17 + 0.140 & 1.11 & 2.47 & 4.48 & -2.220 & 6.75 & 8.12 + 0.150 & 4.25 & 1.13 & 3.03 & -2.061 & 9.07 & 6.70 + 0.160 & 1.54 & 6.13 & 1.91 & -1.901 & 1.12 & 1.66 + 0.180 & 2.24 & 1.37 & 4.51 & -1.600 & 1.30 & 2.69 + 0.200 & 3.02 & 1.75 & 5.72 & -1.344 & 1.26 & 2.69 + 0.250 & 4.27 & 1.74 & 5.39 & -8.742 & 1.08 & 2.40 + 0.300 & 1.29 & 3.80 & 1.08 & -5.585 & 9.10 & 2.33 + 0.350 & 1.54 & 3.54 & 9.15 & -3.304 & 7.68 & 2.41 + 0.400 & 1.03 & 1.95 & 4.56 & -1.566 & 6.48 & 2.49 + 0.450 & 4.60 & 7.63 & 1.60 & -1.930 & 5.49 & 2.46 + 0.500 & 1.54 & 2.34 & 4.42 & 9.227 & 4.69 & 2.27 + 0.600 & 9.59 & 1.31 & 2.10 & 2.625 & 3.57 & 1.50 + 0.700 & 3.53 & 4.62 & 6.56 & 3.859 & 2.91 & 7.17 + 0.800 & 9.29 & 1.19 & 1.58 & 4.790 & 2.51 & 2.76 + 0.900 & 1.96 & 2.47 & 3.14 & 5.513 & 2.28 & 9.94 + 1.000 & 3.55 & 4.40 & 5.48 & 6.089 & 2.13 & 3.77 + 1.250 & 1.02 & 1.23 & 1.49 & 7.114 & 1.91 & 1.04 + 1.500 & 2.02 & 2.40 & 2.87 & 7.786 & 1.78 & 2.47 + 1.750 & 3.28 & 3.85 & 4.56 & 8.260 & 1.67 & 5.11 + 2.000 & 4.73 & 5.49 & 6.43 & 8.615 & 1.55 & 8.18 + 2.500 & 7.99 & 9.07 & 1.04 & 9.119 & 1.35 & 1.45 + 3.000 & 1.16 & 1.29 & 1.45 & 9.468 & 1.16 & 1.99 + 3.500 & 1.53 & 1.68 & 1.86 & 9.732 & 1.00 & 2.36 + 4.000 & 1.91 & 2.07 & 2.26 & 9.942 & 8.73 & 2.54 + 5.000 & 2.66 & 2.83 & 3.04 & 1.025 & 6.88 & 2.40 + 6.000 & ( 3.64 ) & ( 3.89 ) & ( 4.16 ) & ( 1.057 ) & ( 6.65 ) & + 7.000 & ( 4.74 ) & ( 5.06 ) & ( 5.41 ) & ( 1.083 ) & ( 6.65 ) & + 8.000 & ( 5.80 ) & ( 6.20 ) & ( 6.63 ) & ( 1.104 ) & ( 6.65 ) & + 9.000 & ( 6.87 ) & ( 7.34 ) & ( 7.84 ) & ( 1.120 ) & ( 6.65 ) & + 10.000 & ( 8.13 ) & ( 8.69 ) & ( 9.29 ) & ( 1.137 ) & ( 6.65 ) & + comments : the same input information as in iliadis et al . is used for the calculation of the rates , except that the energies and resonance strengths of the threshold states have been adjusted according to a slight change in the reaction q - value ( audi , wapstra and thibault ) . 
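The adjustment described above is a direct consequence of the relation between excitation energy and center-of-mass resonance energy, $E_r = E_x - Q$: a revision $\Delta Q$ of the reaction Q-value shifts every threshold-state resonance energy by $-\Delta Q$, while the excitation energies themselves are unchanged. A minimal sketch; the level energies and Q-values below are placeholders only.
\begin{verbatim}
def resonance_energies(Ex_keV, Q_keV):
    # Center-of-mass resonance energies E_r = E_x - Q for levels above threshold.
    return [Ex - Q_keV for Ex in Ex_keV if Ex > Q_keV]

# Illustrative: a 0.4 keV revision of Q shifts each threshold-state
# resonance energy by 0.4 keV in the opposite direction.
levels = [8310.0, 8360.0, 8420.0]        # hypothetical excitation energies (keV)
print(resonance_energies(levels, Q_keV=8271.0))
print(resonance_energies(levels, Q_keV=8271.4))
\end{verbatim}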
in total ,97 resonances with energies in the range of e=17 - 2929 kev are taken into account .+ & & & & & & + + & & & & & & + 0.010 & 1.24 & 2.00 & 3.20 & -9.602 & 4.73 & 3.14 + 0.011 & 4.75 & 7.65 & 1.22 & -9.238 & 4.74 & 3.93 + 0.012 & 1.21 & 1.94 & 3.12 & -8.914 & 4.76 & 4.43 + 0.013 & 2.16 & 3.50 & 5.59 & -8.624 & 4.85 & 5.58 + 0.014 & 2.96 & 4.69 & 7.60 & -8.365 & 4.77 & 5.51 + 0.015 & 3.10 & 4.97 & 8.04 & -8.128 & 4.75 & 4.08 + 0.016 & 2.78 & 4.42 & 7.08 & -7.910 & 4.69 & 3.00 + 0.018 & 1.27 & 2.03 & 3.24 & -7.528 & 4.78 & 2.85 + 0.020 & 3.43 & 5.47 & 8.71 & -7.198 & 4.73 & 2.58 + 0.025 & 2.60 & 4.10 & 6.55 & -6.536 & 4.71 & 2.06 + 0.030 & 3.90 & 6.26 & 9.90 & -6.034 & 4.66 & 2.97 + 0.040 & 6.00 & 9.67 & 1.54 & -5.299 & 4.79 & 1.43 + 0.050 & 1.10 & 1.78 & 2.85 & -4.778 & 4.78 & 2.02 + 0.060 & 5.87 & 9.39 & 1.53 & -4.380 & 4.81 & 3.13 + 0.070 & 1.41 & 2.27 & 3.62 & -4.063 & 4.77 & 3.69 + 0.080 & 1.92 & 3.06 & 4.95 & -3.802 & 4.78 & 4.13 + 0.090 & 1.76 & 2.79 & 4.42 & -3.581 & 4.69 & 5.89 + 0.100 & 1.18 & 1.87 & 3.05 & -3.391 & 4.75 & 6.37 + 0.110 & 6.05 & 9.80 & 1.56 & -3.226 & 4.76 & 3.17 + 0.120 & 2.69 & 4.27 & 6.84 & -3.078 & 4.73 & 3.20 + 0.130 & 9.97 & 1.57 & 2.53 & -2.947 & 4.69 & 4.04 + 0.140 & 3.24 & 5.21 & 8.38 & -2.828 & 4.77 & 2.43 + 0.150 & 9.42 & 1.53 & 2.44 & -2.721 & 4.77 & 4.23 + 0.160 & 2.51 & 3.98 & 6.37 & -2.625 & 4.69 & 9.14 + 0.180 & 1.43 & 2.29 & 3.71 & -2.449 & 4.77 & 4.73 + 0.200 & 6.45 & 1.03 & 1.65 & -2.300 & 4.76 & 3.80 + 0.250 & 1.28 & 2.04 & 3.25 & -2.001 & 4.69 & 1.25 + 0.300 & 1.23 & 1.98 & 3.19 & -1.773 & 4.72 & 1.89 + 0.350 & 7.56 & 1.22 & 1.94 & -1.593 & 4.72 & 6.03 + 0.400 & 3.36 & 5.37 & 8.71 & -1.443 & 4.78 & 2.29 + 0.450 & 1.18 & 1.87 & 3.03 & -1.318 & 4.68 & 7.85 + 0.500 & 3.54 & 5.66 & 8.95 & -1.209 & 4.67 & 5.09 + 0.600 & 2.08 & 3.34 & 5.32 & -1.031 & 4.75 & 2.32 + 0.700 & 8.65 & 1.39 & 2.21 & -8.888 & 4.69 & 3.87 + 0.800 & 2.80 & 4.44 & 7.13 & -7.719 & 4.71 & 3.89 + 0.900 & 7.59 & 1.20 & 1.93 & -6.722 & 4.73 & 6.71 + 1.000 & 1.78 & 2.81 & 4.46 & -5.873 & 4.56 & 3.51 + 1.250 & 9.64 & 1.51 & 2.36 & -4.192 & 4.57 & 2.84 + 1.500 & 3.47 & 5.50 & 8.74 & -2.901 & 4.59 & 3.30 + 1.750 & 9.53 & 1.50 & 2.41 & -1.888 & 4.62 & 5.55 + 2.000 & 2.18 & 3.44 & 5.43 & -1.064 & 4.64 & 4.09 + 2.500 & 7.98 & 1.28 & 2.02 & 2.434 & 4.65 & 5.28 + 3.000 & 2.15 & 3.38 & 5.28 & 1.218 & 4.60 & 3.29 + 3.500 & 4.70 & 7.47 & 1.19 & 2.011 & 4.68 & 1.02 + 4.000 & 8.82 & 1.40 & 2.25 & 2.642 & 4.66 & 3.17 + 5.000 & 2.40 & 3.82 & 6.06 & 3.642 & 4.68 & 5.28 + 6.000 & 5.18 & 8.16 & 1.31 & 4.409 & 4.67 & 6.51 + 7.000 & 9.36 & 1.49 & 2.41 & 5.007 & 4.72 & 4.03 + 8.000 & 1.52 & 2.41 & 3.84 & 5.486 & 4.69 & 3.46 + 9.000 & 2.27 & 3.67 & 5.76 & 5.897 & 4.70 & 4.66 + 10.000 & 3.23 & 5.16 & 8.31 & 6.247 & 4.75 & 2.94 + comments : no unbound states have been observed in the compound nucleus .based on the comparison to the structure of the mg mirror nucleus it can be concluded that the lowest - lying resonance in +p is expected near a relatively high energy of 1 mev .in fact , a coulomb displacement energy calculation finds e kev , which is adopted in the present work ; we estimate an uncertainty of 100 kev , although this value should be regarded as a rough guess only .the proton and -ray partial widths are calculated by using the shell model .the total reaction rates are dominated by the direct capture process into the ground and first excited states of at all temperatures of interest .we adopt the direct capture s - factor from ref . 
, which is based on shell model spectroscopic factors .higher lying resonances are expected to make a minor contribution to the total rate .+ & & & & & & + + & & & & & & + 0.010 & 4.66 & 6.84 & 1.01 & -9.479 & 3.84 & 2.42 + 0.011 & 1.79 & 2.65 & 3.87 & -9.113 & 3.89 & 3.37 + 0.012 & 4.57 & 6.75 & 9.96 & -8.789 & 3.92 & 2.60 + 0.013 & 8.14 & 1.20 & 1.75 & -8.502 & 3.83 & 4.75 + 0.014 & 1.10 & 1.63 & 2.40 & -8.241 & 3.87 & 2.63 + 0.015 & 1.19 & 1.73 & 2.54 & -8.004 & 3.85 & 1.51 + 0.016 & 1.04 & 1.52 & 2.20 & -7.788 & 3.81 & 3.14 + 0.018 & 4.73 & 7.00 & 1.03 & -7.404 & 3.87 & 3.30 + 0.020 & 1.29 & 1.89 & 2.79 & -7.074 & 3.83 & 2.42 + 0.025 & 9.59 & 1.42 & 2.10 & -6.412 & 3.92 & 4.30 + 0.030 & 1.47 & 2.17 & 3.24 & -5.909 & 3.93 & 5.09 + 0.040 & 2.32 & 3.39 & 5.00 & -5.174 & 3.86 & 3.20 + 0.050 & 4.28 & 6.17 & 8.99 & -4.653 & 3.81 & 4.94 + 0.060 & 2.25 & 3.32 & 4.84 & -4.255 & 3.88 & 2.60 + 0.070 & 5.45 & 7.96 & 1.17 & -3.937 & 3.86 & 1.65 + 0.080 & 9.32 & 1.31 & 1.84 & -3.657 & 3.47 & 6.53 + 0.090 & 2.08 & 3.21 & 5.15 & -3.335 & 4.60 & 6.20 + 0.100 & 5.88 & 1.00 & 1.72 & -2.993 & 5.41 & 1.16 + 0.110 & 1.21 & 2.06 & 3.42 & -2.691 & 5.28 & 3.79 + 0.120 & 1.57 & 2.60 & 4.19 & -2.438 & 5.00 & 4.08 + 0.130 & 1.38 & 2.23 & 3.49 & -2.223 & 4.74 & 4.15 + 0.140 & 8.81 & 1.39 & 2.14 & -2.040 & 4.54 & 4.14 + 0.150 & 4.36 & 6.78 & 1.03 & -1.882 & 4.37 & 4.03 + 0.160 & 1.76 & 2.69 & 4.03 & -1.744 & 4.24 & 3.90 + 0.180 & 1.75 & 2.63 & 3.90 & -1.515 & 4.05 & 3.89 + 0.200 & 1.08 & 1.61 & 2.36 & -1.334 & 3.92 & 4.05 + 0.250 & 2.83 & 4.15 & 6.02 & -1.009 & 3.77 & 2.46 + 0.300 & 2.51 & 3.64 & 5.29 & -7.921 & 3.77 & 2.08 + 0.350 & 1.21 & 1.77 & 2.63 & -6.329 & 3.93 & 3.40 + 0.400 & 4.08 & 6.14 & 9.42 & -5.085 & 4.17 & 1.18 + 0.450 & 1.10 & 1.70 & 2.67 & -4.071 & 4.42 & 1.37 + 0.500 & 2.50 & 3.97 & 6.37 & -3.223 & 4.59 & 1.05 + 0.600 & 9.45 & 1.54 & 2.45 & -1.881 & 4.65 & 2.35 + 0.700 & 2.66 & 4.27 & 6.52 & -8.679 & 4.41 & 4.08 + 0.800 & 6.09 & 9.39 & 1.38 & -7.662 & 4.05 & 3.80 + 0.900 & 1.20 & 1.78 & 2.52 & 5.589 & 3.68 & 2.32 + 1.000 & 2.10 & 2.97 & 4.12 & 1.081 & 3.35 & 1.06 + 1.250 & 5.91 & 7.79 & 1.02 & 2.052 & 2.78 & 2.20 + 1.500 & 1.18 & 1.51 & 1.94 & 2.718 & 2.49 & 1.88 + 1.750 & 1.94 & 2.44 & 3.09 & 3.196 & 2.36 & 2.94 + 2.000 & ( 2.80 ) & ( 3.55 ) & ( 4.48 ) & ( 3.569 ) & ( 2.35 ) & + 2.500 & ( 4.86 ) & ( 6.14 ) & ( 7.77 ) & ( 4.118 ) & ( 2.35 ) & + 3.000 & ( 7.16 ) & ( 9.06 ) & ( 1.15 ) & ( 4.506 ) & ( 2.35 ) & + 3.500 & ( 9.49 ) & ( 1.20 ) & ( 1.52 ) & ( 4.787 ) & ( 2.35 ) & + 4.000 & ( 1.19 ) & ( 1.51 ) & ( 1.91 ) & ( 5.017 ) & ( 2.35 ) & + 5.000 & ( 1.66 ) & ( 2.11 ) & ( 2.66 ) & ( 5.350 ) & ( 2.35 ) & + 6.000 & ( 2.13 ) & ( 2.69 ) & ( 3.41 ) & ( 5.596 ) & ( 2.35 ) & + 7.000 & ( 2.57 ) & ( 3.24 ) & ( 4.10 ) & ( 5.782 ) & ( 2.35 ) & + 8.000 & ( 2.98 ) & ( 3.77 ) & ( 4.76 ) & ( 5.931 ) & ( 2.35 ) & + 9.000 & ( 3.37 ) & ( 4.26 ) & ( 5.38 ) & ( 6.054 ) & ( 2.35 ) & + 10.000 & ( 3.80 ) & ( 4.81 ) & ( 6.08 ) & ( 6.176 ) & ( 2.35 ) & + comments : altogether 8 resonances at energies of e kev are considered for calculating the reaction rates .their energies are obtained from measured excitation energies and the reaction q - value .the exception is the e kev resonance for which the corresponding level has not been observed .its excitation energy is obtained by applying the isobaric multiplet mass equation ( imme ) , with a corresponding uncertainty of about 40 kev ( see iliadis et al . 
for details ) .the spins and parities of the resonances are not known unambiguously .they are based here on experimental restrictions , comparison of mirror reaction cross sections , and the application of the imme .note that the present resonance energies and some assignments differ in general from those of iliadis et al . since new information became recently available .in particular we assume that the e kev level from fynbo et al . is the same as the e kev ( 3 ) level of yokota et al . ( comparison of the excitation energies in refs . shows that the former values are too high by about kev ) .obviously , an experimental verification of the present assignments is desirable .proton partial widths are calculated by using values of mirror states ( we estimate a value of for e kev from the published ( d , p ) angular distribution of mackh et al .gamma - ray partial widths are estimated using measured lifetimes of mirror states , except for e kev for which no measured lifetime is available ; here we roughly estimate the transition strengths using rul s and the known -ray branching and mixing ratios of the e kev mirror in .the direct capture s - factor is also computed using experimental values of mirror states .+ & & & & & & + + & & & & & & + 0.010 & 1.08 & 1.59 & 2.35 & -9.394 & 3.84 & 5.78 + 0.011 & 4.18 & 6.15 & 8.92 & -9.029 & 3.75 & 4.65 + 0.012 & 1.08 & 1.58 & 2.31 & -8.704 & 3.84 & 2.13 + 0.013 & 1.94 & 2.85 & 4.17 & -8.415 & 3.82 & 4.30 + 0.014 & 2.62 & 3.87 & 5.66 & -8.154 & 3.89 & 4.22 + 0.015 & 2.83 & 4.17 & 6.10 & -7.917 & 3.87 & 2.19 + 0.016 & 2.46 & 3.60 & 5.30 & -7.700 & 3.88 & 7.66 + 0.018 & 1.15 & 1.68 & 2.44 & -7.317 & 3.84 & 3.24 + 0.020 & 3.12 & 4.54 & 6.70 & -6.986 & 3.85 & 3.56 + 0.025 & 2.36 & 3.44 & 5.03 & -6.323 & 3.80 & 4.04 + 0.030 & 3.61 & 5.30 & 7.75 & -5.820 & 3.84 & 4.13 + 0.040 & 6.23 & 8.87 & 1.27 & -5.078 & 3.52 & 3.84 + 0.050 & 1.14 & 1.74 & 2.82 & -4.317 & 4.55 & 6.22 + 0.060 & 1.28 & 2.05 & 3.38 & -3.611 & 4.92 & 7.12 + 0.070 & 2.22 & 3.49 & 5.59 & -3.098 & 4.65 & 3.35 + 0.080 & 1.06 & 1.62 & 2.53 & -2.714 & 4.40 & 2.96 + 0.090 & 2.11 & 3.16 & 4.82 & -2.418 & 4.21 & 3.51 + 0.100 & 2.26 & 3.35 & 5.00 & -2.182 & 4.05 & 3.68 + 0.110 & 1.55 & 2.27 & 3.36 & -1.990 & 3.93 & 3.68 + 0.120 & 7.67 & 1.11 & 1.63 & -1.831 & 3.81 & 3.48 + 0.130 & 2.96 & 4.26 & 6.15 & -1.697 & 3.70 & 3.29 + 0.140 & 9.54 & 1.35 & 1.93 & -1.581 & 3.55 & 3.74 + 0.150 & 2.71 & 3.77 & 5.27 & -1.479 & 3.33 & 6.67 + 0.160 & 7.11 & 9.58 & 1.31 & -1.385 & 3.03 & 1.51 + 0.180 & 4.31 & 5.33 & 6.75 & -1.213 & 2.25 & 4.37 + 0.200 & 2.27 & 2.66 & 3.13 & -1.053 & 1.62 & 2.57 + 0.250 & 6.78 & 7.68 & 8.75 & -7.169 & 1.28 & 1.43 + 0.300 & 7.86 & 8.87 & 10.00 & -4.726 & 1.21 & 1.03 + 0.350 & 4.78 & 5.36 & 5.97 & -2.928 & 1.12 & 6.47 + 0.400 & 1.91 & 2.12 & 2.35 & -1.551 & 1.04 & 2.25 + 0.450 & 5.72 & 6.31 & 6.96 & -4.598 & 9.83 & 1.38 + 0.500 & 1.40 & 1.53 & 1.69 & 4.278 & 9.44 & 2.16 + 0.600 & 5.44 & 5.96 & 6.52 & 1.785 & 9.11 & 2.11 + 0.700 & 1.46 & 1.60 & 1.75 & 2.773 & 9.07 & 3.09 + 0.800 & 3.09 & 3.40 & 3.71 & 3.524 & 9.11 & 4.32 + 0.900 & 5.58 & 6.12 & 6.69 & 4.114 & 9.12 & 3.79 + 1.000 & 8.99 & 9.85 & 1.08 & 4.589 & 9.07 & 3.42 + 1.250 & 2.16 & 2.36 & 2.57 & 5.462 & 8.67 & 2.94 + 1.500 & 4.00 & 4.34 & 4.69 & 6.072 & 8.00 & 3.89 + 1.750 & 6.42 & 6.92 & 7.42 & 6.538 & 7.24 & 4.77 + 2.000 & 9.45 & 1.01 & 1.07 & 6.916 & 6.52 & 4.61 + 2.500 & 1.72 & 1.81 & 1.91 & 7.502 & 5.46 & 5.71 + 3.000 & ( 2.66 ) & ( 2.80 ) & ( 2.94 ) & ( 7.936 ) & ( 4.94 ) & + 3.500 & ( 3.67 ) & ( 3.86 ) & ( 4.06 ) & ( 8.259 ) & ( 
4.94 ) & + 4.000 & ( 4.74 ) & ( 4.98 ) & ( 5.23 ) & ( 8.512 ) & ( 4.94 ) & + 5.000 & ( 6.83 ) & ( 7.18 ) & ( 7.54 ) & ( 8.879 ) & ( 4.94 ) & + 6.000 & ( 8.87 ) & ( 9.32 ) & ( 9.80 ) & ( 9.140 ) & ( 4.94 ) & + 7.000 & ( 1.08 ) & ( 1.13 ) & ( 1.19 ) & ( 9.332 ) & ( 4.94 ) & + 8.000 & ( 1.25 ) & ( 1.31 ) & ( 1.38 ) & ( 9.482 ) & ( 4.94 ) & + 9.000 & ( 1.40 ) & ( 1.48 ) & ( 1.55 ) & ( 9.599 ) & ( 4.94 ) & + 10.000 & ( 1.58 ) & ( 1.66 ) & ( 1.74 ) & ( 9.717 ) & ( 4.94 ) & + comments : in total , 41 resonances at e kev are taken into account for the calculation of the total reaction rate .the direct capture s - factor is adopted from iliadis et al .resonance energies and strengths are adopted from endt , where the latter values are renormalized using the standard strengths given in tab . 1 of iliadis et al . .the subthreshold level at e kev ( 2 ) has not been taken into account since its contribution is negligible .for information on unobserved low - energy resonances , see the comments in ref . .two levels are omitted in the present work : ( i ) e kev , since its observation has only been reported as a private communication ( it has not been observed in proton transfer ) , and ( ii ) e kev , since ref . considers it to be identical to the e kev level .the interference of the two 1 resonances at e and 1468 kev is explicitly taken into account ( the interference has an unknown sign and is thus sampled by using a binary probability density ; see sec 4.4 in paper i ) .+ & & & & & & + + & & & & & & + 0.010 & 2.78 & 1.54 & 9.29 & -1.017 & 2.85 & 2.58 + 0.011 & 1.14 & 1.69 & 3.76 & -9.797 & 2.82 & 2.77 + 0.012 & 2.78 & 1.15 & 9.69 & -9.482 & 2.86 & 2.58 + 0.013 & 6.77 & 7.70 & 1.82 & -9.177 & 2.78 & 2.68 + 0.014 & 7.69 & 5.35 & 2.53 & -8.918 & 2.83 & 2.66 + 0.015 & 8.74 & 1.04 & 2.83 & -8.676 & 2.81 & 2.71 + 0.016 & 8.46 & 1.24 & 2.53 & -8.450 & 2.78 & 2.84 + 0.018 & 4.29 & 5.76 & 1.23 & -8.061 & 2.76 & 2.80 + 0.020 & 1.20 & 1.56 & 3.50 & -7.726 & 2.75 & 2.74 + 0.025 & 1.15 & 4.62 & 2.96 & -7.051 & 2.67 & 2.47 + 0.030 & 2.92 & 1.83 & 5.14 & -6.511 & 2.39 & 2.81 + 0.040 & 1.01 & 1.86 & 3.02 & -5.470 & 5.66 & 1.56 + 0.050 & 5.86 & 9.93 & 1.70 & -4.606 & 5.31 & 2.06 + 0.060 & 2.28 & 3.72 & 6.04 & -4.013 & 4.93 & 1.91 + 0.070 & 1.66 & 2.64 & 4.18 & -3.588 & 4.68 & 7.86 + 0.080 & 4.11 & 6.70 & 1.05 & -3.265 & 4.75 & 1.82 + 0.090 & 4.97 & 8.42 & 1.38 & -3.012 & 5.15 & 1.06 + 0.100 & 3.65 & 6.48 & 1.15 & -2.807 & 5.71 & 4.74 + 0.110 & 1.94 & 3.58 & 6.80 & -2.634 & 6.17 & 2.07 + 0.120 & 9.42 & 1.66 & 3.19 & -2.478 & 5.96 & 9.39 + 0.130 & 5.36 & 8.19 & 1.40 & -2.317 & 4.79 & 1.85 + 0.140 & 3.38 & 4.66 & 6.72 & -2.147 & 3.48 & 5.89 + 0.150 & 1.98 & 2.63 & 3.51 & -1.976 & 2.86 & 4.73 + 0.160 & 9.92 & 1.31 & 1.72 & -1.816 & 2.73 & 3.02 + 0.180 & 1.56 & 2.06 & 2.69 & -1.540 & 2.72 & 9.17 + 0.200 & 1.46 & 1.90 & 2.50 & -1.317 & 2.78 & 6.01 + 0.250 & 7.76 & 1.02 & 1.34 & -9.188 & 2.73 & 4.22 + 0.300 & 1.09 & 1.42 & 1.86 & -6.555 & 2.66 & 6.03 + 0.350 & 7.76 & 9.73 & 1.23 & -4.628 & 2.35 & 1.18 + 0.400 & 3.72 & 4.59 & 5.63 & -3.081 & 2.04 & 7.63 + 0.450 & 1.43 & 1.71 & 2.06 & -1.762 & 1.81 & 7.47 + 0.500 & 4.52 & 5.42 & 6.46 & -6.141 & 1.76 & 4.79 + 0.600 & 2.93 & 3.50 & 4.25 & 1.259 & 1.88 & 5.49 + 0.700 & 1.19 & 1.44 & 1.76 & 2.669 & 1.98 & 4.89 + 0.800 & 3.53 & 4.28 & 5.22 & 3.758 & 1.96 & 3.44 + 0.900 & 8.52 & 1.02 & 1.24 & 4.629 & 1.86 & 5.97 + 1.000 & 1.79 & 2.12 & 2.52 & 5.356 & 1.72 & 1.05 + 1.250 & 8.19 & 9.25 & 1.05 & 6.833 & 1.24 & 2.18 + 1.500 & 2.93 & 3.22 & 3.55 & 8.078 & 9.53 & 3.91 + 1.750 & 8.64 & 
9.48 & 1.04 & 9.158 & 9.49 & 4.92 + 2.000 & 2.13 & 2.35 & 2.60 & 1.006 & 9.81 & 2.69 + 2.500 & ( 7.77 ) & ( 8.56 ) & ( 9.44 ) & ( 1.136 ) & ( 9.78 ) & + 3.000 & ( 2.00 ) & ( 2.20 ) & ( 2.43 ) & ( 1.230 ) & ( 9.78 ) & + 3.500 & ( 4.22 ) & ( 4.66 ) & ( 5.13 ) & ( 1.305 ) & ( 9.78 ) & + 4.000 & ( 7.82 ) & ( 8.62 ) & ( 9.51 ) & ( 1.367 ) & ( 9.78 ) & + 5.000 & ( 2.04 ) & ( 2.25 ) & ( 2.48 ) & ( 1.462 ) & ( 9.78 ) & + 6.000 & ( 4.11 ) & ( 4.54 ) & ( 5.00 ) & ( 1.533 ) & ( 9.78 ) & + 7.000 & ( 7.12 ) & ( 7.85 ) & ( 8.65 ) & ( 1.588 ) & ( 9.78 ) & + 8.000 & ( 1.10 ) & ( 1.21 ) & ( 1.34 ) & ( 1.631 ) & ( 9.78 ) & + 9.000 & ( 1.56 ) & ( 1.72 ) & ( 1.90 ) & ( 1.666 ) & ( 9.78 ) & + 10.000 & ( 2.22 ) & ( 2.44 ) & ( 2.70 ) & ( 1.701 ) & ( 9.78 ) & + comments : in total , 41 resonances at e kev are taken into account for the calculation of the total reaction rate .resonance energies and strengths are adopted from endt .note that no reliable resonance strength standard is available for this reaction .the subthreshold level at e kev ( 2 ) has not been taken into account since its contribution is expected to be small . for information on unobserved low - energy resonances , see the comments in ref . .two levels are omitted in the present work : ( i ) e kev , since its observation has only been reported as a private communication ( it has not been observed in proton transfer ) , and ( ii ) e kev , since ref . considers it to be identical to the e kev level .the existence of some -particle strength near the proton threshold is apparent from the measured ( , d) spectrum , shown in fig . 4 of ref .the interference of the two 1 resonances at e and 1468 kev is explicitly taken into account , giving rise to the bimodal reaction rate probability density function at temperatures below 40 mk , as can be seen in the panel below .( the interference has an unknown sign and is thus sampled by using a binary probability density ; see sec 4.4 in paper i ) .+ & & & & & & + + & & & & & & + 0.010 & 1.53 & 2.44 & 3.92 & -1.027 & 4.71 & 2.23 + 0.011 & 7.00 & 1.12 & 1.78 & -9.890 & 4.71 & 1.93 + 0.012 & 2.01 & 3.28 & 5.23 & -9.553 & 4.76 & 5.31 + 0.013 & 4.17 & 6.64 & 1.06 & -9.251 & 4.73 & 2.25 + 0.014 & 6.41 & 1.01 & 1.66 & -8.978 & 4.75 & 9.00 + 0.015 & 7.59 & 1.20 & 1.95 & -8.731 & 4.73 & 5.32 + 0.016 & 7.29 & 1.16 & 1.86 & -8.505 & 4.70 & 2.36 + 0.018 & 4.06 & 6.48 & 1.02 & -8.103 & 4.63 & 7.54 + 0.020 & 1.27 & 2.06 & 3.31 & -7.757 & 4.75 & 4.70 + 0.025 & 1.26 & 2.04 & 3.27 & -7.067 & 4.74 & 2.72 + 0.030 & 2.48 & 3.98 & 6.30 & -6.540 & 4.69 & 3.98 + 0.040 & 5.34 & 8.49 & 1.35 & -5.773 & 4.70 & 1.88 + 0.050 & 1.26 & 2.04 & 3.27 & -5.225 & 4.77 & 2.42 + 0.060 & 7.98 & 1.30 & 2.08 & -4.810 & 4.79 & 5.02 + 0.070 & 2.23 & 3.58 & 5.65 & -4.479 & 4.72 & 3.66 + 0.080 & 3.40 & 5.51 & 8.80 & -4.204 & 4.68 & 3.80 + 0.090 & 3.44 & 5.53 & 8.82 & -3.974 & 4.67 & 2.67 + 0.100 & 2.57 & 4.09 & 6.54 & -3.774 & 4.67 & 2.83 + 0.110 & 1.52 & 2.41 & 3.86 & -3.596 & 4.69 & 6.58 + 0.120 & 8.41 & 1.34 & 2.21 & -3.422 & 5.13 & 1.01 + 0.130 & 4.90 & 8.64 & 1.88 & -3.228 & 7.37 & 6.62 + 0.140 & 3.27 & 7.44 & 2.30 & -3.010 & 9.78 & 3.97 + 0.150 & 2.55 & 7.15 & 2.33 & -2.790 & 1.09 & 8.93 + 0.160 & 1.97 & 5.75 & 1.81 & -2.585 & 1.10 & 1.78 + 0.180 & 7.11 & 1.95 & 5.54 & -2.234 & 1.02 & 4.03 + 0.200 & 1.28 & 3.26 & 8.47 & -1.953 & 9.46 & 2.89 + 0.250 & 2.22 & 4.93 & 1.10 & -1.452 & 8.01 & 2.32 + 0.300 & 6.61 & 1.33 & 2.68 & -1.122 & 7.10 & 2.31 + 0.350 & 7.10 & 1.35 & 2.59 & -8.905 & 6.51 & 2.13 + 0.400 & 4.10 & 7.47 & 1.37 & -7.193 & 6.10 & 2.11 + 
0.450 & 1.56 & 2.77 & 4.94 & -5.883 & 5.81 & 2.55 + 0.500 & 4.46 & 7.78 & 1.36 & -4.852 & 5.59 & 2.87 + 0.600 & 2.08 & 3.53 & 6.00 & -3.341 & 5.31 & 2.53 + 0.700 & 6.03 & 1.00 & 1.68 & -2.298 & 5.13 & 2.12 + 0.800 & 1.30 & 2.13 & 3.52 & -1.542 & 5.02 & 1.98 + 0.900 & 2.30 & 3.77 & 6.16 & -9.751 & 4.95 & 2.03 + 1.000 & 3.57 & 5.82 & 9.47 & -5.382 & 4.90 & 2.04 + 1.250 & 7.51 & 1.22 & 1.96 & 1.987 & 4.82 & 2.04 + 1.500 & 1.18 & 1.91 & 3.05 & 6.420 & 4.77 & 2.12 + 1.750 & 1.57 & 2.53 & 4.04 & 9.273 & 4.72 & 2.54 + 2.000 & 1.92 & 3.07 & 4.88 & 1.122 & 4.65 & 3.97 + 2.500 & 2.51 & 3.90 & 6.12 & 1.366 & 4.45 & 9.67 + 3.000 & 3.02 & 4.52 & 6.96 & 1.520 & 4.18 & 2.39 + 3.500 & 3.50 & 5.04 & 7.57 & 1.633 & 3.89 & 3.78 + 4.000 & 3.88 & 5.48 & 8.03 & 1.717 & 3.64 & 4.50 + 5.000 & 4.45 & 6.07 & 8.52 & 1.815 & 3.31 & 3.69 + 6.000 & 4.54 & 6.17 & 8.56 & 1.830 & 3.18 & 2.02 + 7.000 & 4.38 & 5.99 & 8.28 & 1.795 & 3.19 & 1.44 + 8.000 & 4.12 & 5.64 & 7.90 & 1.740 & 3.28 & 1.04 + 9.000 & 3.88 & 5.39 & 7.58 & 1.691 & 3.34 & 7.34 + 10.000 & 3.70 & 5.18 & 7.31 & 1.651 & 3.38 & 6.74 + comments : because of the small q - value ( tab .[ tab : master ] ) only the first and second excited states in are expected to contribute to the resonant reaction rate .the second excited state has been observed at an excitation energy of e kev by fynbo et al . ; from their proton energy observed in the -delayed decay spectrum ( e kev ) we find a resonance energy of e kev .the resonance energy for the astrophysically more important first excited state was recently estimated using the isobaric multiplet mass equation ( imme ) , resulting in a value of e kev .this value is consistent with the energy of the proton peak ( e kev ) observed in the -delayed decay work of axelsson et al . ; the observed energy yields a resonance energy of e kev .although we adopt this value for the resonance energy , an independent measurement would be highly desirable in view of the possibility that the observed proton peak in ref . 
may arise from the -delayed 2p - decay ( instead of single proton emission ) of .the proton and -ray partial widths of these two resonances , as well as the direct capture s - factor , are derived from shell model results .+ & & & & & & + + & & & & & & + 0.010 & 2.33 & 3.48 & 5.12 & -1.001 & 3.89 & 4.83 + 0.011 & 1.06 & 1.57 & 2.32 & -9.625 & 3.85 & 3.57 + 0.012 & 3.19 & 4.66 & 6.80 & -9.287 & 3.81 & 2.25 + 0.013 & 6.53 & 9.52 & 1.40 & -8.985 & 3.83 & 5.53 + 0.014 & 9.78 & 1.44 & 2.11 & -8.713 & 3.90 & 3.03 + 0.015 & 1.19 & 1.74 & 2.57 & -8.464 & 3.89 & 2.91 + 0.016 & 1.15 & 1.70 & 2.45 & -8.237 & 3.85 & 7.08 + 0.018 & 6.23 & 9.13 & 1.33 & -7.838 & 3.80 & 2.33 + 0.020 & 1.97 & 2.89 & 4.25 & -7.492 & 3.85 & 4.41 + 0.025 & 1.98 & 2.95 & 4.31 & -6.801 & 3.86 & 4.60 + 0.030 & 3.98 & 5.84 & 8.63 & -6.271 & 3.84 & 6.29 + 0.040 & 4.33 & 1.17 & 3.59 & -5.274 & 1.02 & 5.95 + 0.050 & 3.08 & 7.18 & 1.59 & -4.410 & 8.34 & 2.82 + 0.060 & 1.31 & 2.44 & 4.37 & -3.827 & 6.11 & 3.41 + 0.070 & 9.29 & 1.51 & 2.39 & -3.414 & 4.76 & 2.06 + 0.080 & 2.16 & 3.24 & 4.82 & -3.106 & 4.06 & 5.35 + 0.090 & 2.36 & 3.45 & 5.07 & -2.869 & 3.82 & 3.91 + 0.100 & 1.54 & 2.26 & 3.33 & -2.681 & 3.86 & 5.54 + 0.110 & 7.00 & 1.04 & 1.55 & -2.529 & 4.04 & 7.02 + 0.120 & 2.39 & 3.67 & 5.61 & -2.403 & 4.28 & 7.43 + 0.130 & 6.69 & 1.06 & 1.65 & -2.298 & 4.54 & 9.83 + 0.140 & 1.60 & 2.61 & 4.15 & -2.208 & 4.80 & 1.29 + 0.150 & 3.36 & 5.65 & 9.19 & -2.131 & 5.04 & 1.57 + 0.160 & 6.40 & 1.10 & 1.83 & -2.064 & 5.26 & 1.80 + 0.180 & 1.84 & 3.31 & 5.72 & -1.954 & 5.65 & 2.08 + 0.200 & 4.29 & 7.86 & 1.41 & -1.868 & 5.94 & 2.00 + 0.250 & 3.13 & 4.93 & 8.14 & -1.680 & 4.74 & 2.63 + 0.300 & 6.83 & 9.10 & 1.22 & -1.391 & 2.91 & 4.26 + 0.350 & 1.19 & 1.59 & 2.16 & -1.105 & 3.02 & 4.26 + 0.400 & 1.10 & 1.48 & 2.02 & -8.812 & 3.05 & 9.34 + 0.450 & 6.31 & 8.44 & 1.16 & -7.067 & 3.10 & 1.68 + 0.500 & 2.53 & 3.40 & 4.68 & -5.672 & 3.15 & 2.34 + 0.600 & 2.01 & 2.72 & 3.77 & -3.592 & 3.23 & 3.26 + 0.700 & 8.66 & 1.18 & 1.65 & -2.124 & 3.29 & 3.76 + 0.800 & 2.56 & 3.50 & 4.91 & -1.037 & 3.33 & 4.01 + 0.900 & 5.90 & 8.05 & 1.13 & -2.047 & 3.34 & 4.17 + 1.000 & 1.14 & 1.56 & 2.18 & 4.530 & 3.33 & 4.34 + 1.250 & 3.68 & 4.99 & 6.95 & 1.618 & 3.24 & 4.76 + 1.500 & 8.03 & 1.07 & 1.48 & 2.384 & 3.11 & 4.75 + 1.750 & 1.40 & 1.85 & 2.52 & 2.931 & 2.97 & 4.19 + 2.000 & 2.14 & 2.79 & 3.75 & 3.341 & 2.87 & 3.44 + 2.500 & 3.82 & 4.95 & 6.59 & 3.911 & 2.75 & 2.24 + 3.000 & 5.55 & 7.16 & 9.45 & 4.280 & 2.71 & 1.82 + 3.500 & 7.12 & 9.21 & 1.22 & 4.531 & 2.72 & 1.82 + 4.000 & 8.47 & 1.09 & 1.45 & 4.704 & 2.74 & 1.88 + 5.000 & 1.03 & 1.34 & 1.79 & 4.912 & 2.78 & 2.29 + 6.000 & 1.14 & 1.48 & 1.98 & 5.012 & 2.81 & 2.69 + 7.000 & 1.19 & 1.54 & 2.06 & 5.054 & 2.83 & 2.97 + 8.000 & 1.19 & 1.56 & 2.08 & 5.062 & 2.85 & 3.19 + 9.000 & 1.18 & 1.54 & 2.06 & 5.049 & 2.86 & 3.34 + 10.000 & 1.15 & 1.50 & 2.01 & 5.023 & 2.87 & 3.46 + + & & & & & & + + & & & & & & + 0.010 & 3.29 & 4.79 & 7.04 & -9.975 & 3.82 & 2.01 + 0.011 & 1.53 & 2.22 & 3.24 & -9.591 & 3.76 & 2.65 + 0.012 & 6.90 & 9.37 & 1.28 & -9.217 & 3.12 & 5.18 + 0.013 & 5.31 & 7.72 & 1.15 & -8.775 & 3.86 & 2.04 + 0.014 & 4.55 & 7.10 & 1.09 & -8.324 & 4.33 & 3.18 + 0.015 & 2.57 & 3.98 & 6.13 & -7.921 & 4.31 & 2.82 + 0.016 & 8.96 & 1.37 & 2.09 & -7.567 & 4.23 & 2.06 + 0.018 & 3.30 & 4.96 & 7.47 & -6.978 & 4.09 & 1.28 + 0.020 & 3.63 & 5.43 & 8.12 & -6.508 & 3.99 & 1.42 + 0.025 & 1.65 & 2.42 & 3.59 & -5.668 & 3.89 & 2.98 + 0.030 & 4.27 & 6.25 & 9.23 & -5.113 & 3.86 & 2.68 + 0.040 & 4.02 & 5.89 & 8.75 & -4.428 & 3.88 & 2.48 + 0.050 
& 2.25 & 3.33 & 4.94 & -4.024 & 3.92 & 2.20 + 0.060 & 3.13 & 4.67 & 6.93 & -3.760 & 3.96 & 2.71 + 0.070 & 1.99 & 2.98 & 4.42 & -3.575 & 3.98 & 3.02 + 0.080 & 7.82 & 1.17 & 1.74 & -3.438 & 3.98 & 3.38 + 0.090 & 2.29 & 3.37 & 5.00 & -3.332 & 3.90 & 5.47 + 0.100 & 5.72 & 8.22 & 1.19 & -3.243 & 3.66 & 1.31 + 0.110 & 1.40 & 1.91 & 2.65 & -3.158 & 3.21 & 1.07 + 0.120 & 3.62 & 4.78 & 6.33 & -3.067 & 2.83 & 5.40 + 0.130 & 1.10 & 1.43 & 1.86 & -2.958 & 2.62 & 1.35 + 0.140 & 4.59 & 5.63 & 7.09 & -2.819 & 2.21 & 7.47 + 0.150 & 2.38 & 2.85 & 3.44 & -2.658 & 1.87 & 6.37 + 0.160 & 1.26 & 1.51 & 1.81 & -2.491 & 1.84 & 2.81 + 0.180 & 2.74 & 3.28 & 3.90 & -2.184 & 1.77 & 2.51 + 0.200 & 4.05 & 4.70 & 5.45 & -1.917 & 1.50 & 4.09 + 0.250 & 8.99 & 9.97 & 1.11 & -1.382 & 1.05 & 4.75 + 0.300 & 4.42 & 4.94 & 5.53 & -9.914 & 1.11 & 1.25 + 0.350 & 7.58 & 8.51 & 9.58 & -7.068 & 1.17 & 1.30 + 0.400 & 6.37 & 7.16 & 8.09 & -4.937 & 1.20 & 1.25 + 0.450 & 3.29 & 3.71 & 4.19 & -3.293 & 1.21 & 1.20 + 0.500 & 1.21 & 1.36 & 1.54 & -1.992 & 1.22 & 1.17 + 0.600 & 8.21 & 9.28 & 1.05 & -7.347 & 1.23 & 1.12 + 0.700 & 3.12 & 3.53 & 4.00 & 1.262 & 1.24 & 1.10 + 0.800 & 8.27 & 9.35 & 1.06 & 2.238 & 1.24 & 1.08 + 0.900 & 1.73 & 1.96 & 2.22 & 2.976 & 1.25 & 1.07 + 1.000 & 3.07 & 3.47 & 3.94 & 3.550 & 1.25 & 1.06 + 1.250 & 8.19 & 9.28 & 1.05 & 4.532 & 1.25 & 1.05 + 1.500 & 1.50 & 1.70 & 1.93 & 5.138 & 1.25 & 1.04 + 1.750 & 2.23 & 2.53 & 2.87 & 5.535 & 1.26 & 1.04 + 2.000 & 2.93 & 3.32 & 3.77 & 5.808 & 1.25 & 1.04 + 2.500 & 4.10 & 4.64 & 5.26 & 6.142 & 1.25 & 1.09 + 3.000 & 4.95 & 5.59 & 6.33 & 6.328 & 1.22 & 1.20 + 3.500 & 5.58 & 6.27 & 7.08 & 6.444 & 1.18 & 1.39 + 4.000 & 6.08 & 6.79 & 7.63 & 6.524 & 1.13 & 1.64 + 5.000 & 6.88 & 7.60 & 8.44 & 6.635 & 1.02 & 2.19 + 6.000 & 7.55 & 8.25 & 9.05 & 6.717 & 9.06 & 2.51 + 7.000 & 8.12 & 8.78 & 9.55 & 6.780 & 8.17 & 2.46 + 8.000 & 8.56 & 9.21 & 9.94 & 6.827 & 7.52 & 2.06 + 9.000 & ( 9.46 ) & ( 1.02 ) & ( 1.09 ) & ( 6.925 ) & ( 7.27 ) & + 10.000 & ( 1.14 ) & ( 1.23 ) & ( 1.32 ) & ( 7.114 ) & ( 7.27 ) & + + & & & & & & + + & & & & & & + 0.010 & 9.44 & 1.49 & 2.38 & -1.055 & 4.67 & 2.75 + 0.011 & 4.98 & 7.94 & 1.26 & -1.015 & 4.66 & 2.68 + 0.012 & 1.69 & 2.69 & 4.26 & -9.802 & 4.69 & 2.35 + 0.013 & 3.94 & 6.32 & 1.01 & -9.487 & 4.71 & 2.73 + 0.014 & 6.72 & 1.07 & 1.73 & -9.203 & 4.75 & 5.46 + 0.015 & 8.80 & 1.44 & 2.29 & -8.945 & 4.75 & 5.95 + 0.016 & 9.38 & 1.52 & 2.40 & -8.709 & 4.70 & 1.93 + 0.018 & 6.22 & 9.91 & 1.59 & -8.290 & 4.74 & 4.07 + 0.020 & 2.26 & 3.60 & 5.70 & -7.931 & 4.67 & 3.87 + 0.025 & 3.10 & 4.83 & 7.78 & -7.210 & 4.68 & 8.54 + 0.030 & 7.46 & 1.19 & 1.89 & -6.660 & 4.68 & 2.67 + 0.040 & 2.25 & 3.57 & 5.70 & -5.859 & 4.71 & 3.37 + 0.050 & 6.60 & 1.05 & 1.69 & -5.291 & 4.70 & 4.91 + 0.060 & 5.02 & 8.14 & 1.31 & -4.856 & 4.79 & 2.83 + 0.070 & 1.60 & 2.60 & 4.13 & -4.510 & 4.73 & 2.05 + 0.080 & 2.76 & 4.40 & 7.05 & -4.227 & 4.67 & 1.40 + 0.090 & 3.07 & 4.98 & 7.98 & -3.985 & 4.77 & 3.26 + 0.100 & 2.48 & 3.90 & 6.25 & -3.777 & 4.68 & 6.95 + 0.110 & 1.51 & 2.41 & 3.84 & -3.596 & 4.67 & 5.66 + 0.120 & 7.58 & 1.20 & 1.90 & -3.436 & 4.72 & 5.40 + 0.130 & 3.16 & 5.08 & 8.12 & -3.291 & 4.71 & 1.63 + 0.140 & 1.14 & 1.85 & 2.94 & -3.163 & 4.73 & 2.81 + 0.150 & 3.71 & 5.91 & 9.63 & -3.045 & 4.77 & 3.01 + 0.160 & 1.10 & 1.76 & 2.80 & -2.937 & 4.73 & 2.33 + 0.180 & 7.11 & 1.15 & 1.85 & -2.749 & 4.81 & 1.37 + 0.200 & 3.76 & 6.04 & 9.66 & -2.584 & 4.76 & 3.46 + 0.250 & 9.76 & 1.56 & 2.49 & -2.258 & 4.77 & 4.04 + 0.300 & 1.18 & 1.88 & 2.99 & -2.009 & 4.68 & 3.77 + 0.350 & 8.39 & 1.35 & 2.15 & 
-1.812 & 4.76 & 3.66 + 0.400 & 4.36 & 7.00 & 1.13 & -1.647 & 4.78 & 5.73 + 0.450 & 1.72 & 2.76 & 4.37 & -1.511 & 4.72 & 3.01 + 0.500 & 5.65 & 8.99 & 1.44 & -1.392 & 4.68 & 1.87 + 0.600 & 4.03 & 6.39 & 1.02 & -1.196 & 4.69 & 3.12 + 0.700 & 1.88 & 3.00 & 4.84 & -1.041 & 4.74 & 2.53 + 0.800 & 6.69 & 1.08 & 1.73 & -9.136 & 4.76 & 4.80 + 0.900 & 1.99 & 3.14 & 5.03 & -8.063 & 4.66 & 3.14 + 1.000 & 5.00 & 7.93 & 1.27 & -7.138 & 4.73 & 3.97 + 1.250 & 3.18 & 5.05 & 8.02 & -5.287 & 4.68 & 2.23 + 1.500 & 1.29 & 2.07 & 3.31 & -3.879 & 4.72 & 1.30 + 1.750 & 3.89 & 6.25 & 1.01 & -2.772 & 4.72 & 2.65 + 2.000 & 9.89 & 1.57 & 2.51 & -1.849 & 4.70 & 1.95 + 2.500 & 4.12 & 6.48 & 1.05 & -4.224 & 4.68 & 1.11 + 3.000 & 1.22 & 1.97 & 3.13 & 6.684 & 4.69 & 4.68 + 3.500 & 2.90 & 4.65 & 7.39 & 1.534 & 4.71 & 3.77 + 4.000 & 5.90 & 9.40 & 1.49 & 2.236 & 4.66 & 2.28 + 5.000 & 1.75 & 2.78 & 4.49 & 3.330 & 4.69 & 3.38 + 6.000 & 4.06 & 6.51 & 1.04 & 4.175 & 4.73 & 3.26 + 7.000 & 7.91 & 1.27 & 2.05 & 4.843 & 4.72 & 4.20 + 8.000 & 1.34 & 2.19 & 3.46 & 5.382 & 4.74 & 4.92 + 9.000 & 2.10 & 3.38 & 5.37 & 5.820 & 4.71 & 1.98 + 10.000 & 3.05 & 5.00 & 7.84 & 6.203 & 4.78 & 1.24 + comments : no unbound states have been observed in the compound nucleus .based on the comparison to the structure of the mirror nucleus it can be concluded that the lowest - lying resonance in +p is expected to occur above an energy of 1 mev .in fact , a coulomb displacement energy calculation yields e kev , which is adopted in the present work ; we estimate an uncertainty of 100 kev , although this value should be regarded as a rough guess only .the proton and -ray partial widths are calculated by using the shell model .the total reaction rates are dominated by the direct capture process into the ground and first excited states of at all temperatures of interest .we adopt the direct capture s - factor from ref . 
, which is based on shell model spectroscopic factors .higher lying resonances are expected to make a minor contribution to the total rate .+ & & & & & & + + & & & & & & + 0.010 & 7.69 & 9.36 & 3.01 & -1.018 & 4.59 & 3.03 + 0.011 & 3.97 & 1.34 & 6.72 & -9.842 & 4.32 & 3.42 + 0.012 & 1.27 & 3.09 & 8.18 & -9.546 & 4.03 & 3.91 + 0.013 & 2.87 & 6.33 & 6.92 & -9.280 & 3.71 & 4.40 + 0.014 & 4.92 & 9.93 & 4.21 & -9.036 & 3.40 & 4.94 + 0.015 & 6.29 & 1.20 & 1.98 & -8.814 & 3.11 & 5.38 + 0.016 & 6.55 & 1.22 & 7.63 & -8.607 & 2.82 & 5.79 + 0.018 & 4.17 & 7.52 & 8.59 & -8.231 & 2.31 & 5.83 + 0.020 & 1.76 & 3.31 & 2.46 & -7.874 & 1.90 & 4.16 + 0.025 & 8.01 & 5.16 & 4.18 & -6.961 & 1.83 & 2.10 + 0.030 & 3.87 & 2.01 & 8.93 & -6.155 & 1.54 & 9.28 + 0.040 & 2.97 & 6.93 & 1.41 & -5.109 & 8.00 & 1.79 + 0.050 & 1.96 & 3.26 & 5.38 & -4.488 & 5.18 & 1.42 + 0.060 & 1.13 & 1.98 & 3.34 & -4.078 & 5.69 & 4.49 + 0.070 & 1.75 & 3.71 & 6.90 & -3.790 & 7.15 & 1.40 + 0.080 & 1.30 & 3.22 & 6.77 & -3.576 & 8.56 & 1.65 + 0.090 & 5.96 & 1.69 & 3.99 & -3.411 & 9.73 & 1.57 + 0.100 & 1.98 & 6.15 & 1.61 & -3.281 & 1.06 & 1.38 + 0.110 & 5.21 & 1.77 & 5.07 & -3.175 & 1.13 & 1.12 + 0.120 & 1.20 & 4.24 & 1.29 & -3.086 & 1.17 & 8.02 + 0.130 & 2.62 & 8.95 & 2.85 & -3.007 & 1.15 & 4.61 + 0.140 & 6.95 & 1.91 & 5.74 & -2.924 & 1.00 & 9.46 + 0.150 & 3.04 & 5.79 & 1.29 & -2.812 & 7.29 & 1.29 + 0.160 & 1.59 & 2.71 & 4.71 & -2.662 & 5.51 & 9.37 + 0.180 & 4.40 & 7.38 & 1.28 & -2.332 & 5.39 & 7.71 + 0.200 & 7.90 & 1.31 & 2.23 & -2.045 & 5.26 & 6.00 + 0.250 & 1.46 & 2.34 & 3.82 & -1.526 & 4.86 & 4.33 + 0.300 & 4.52 & 7.12 & 1.14 & -1.185 & 4.66 & 3.52 + 0.350 & 5.06 & 7.88 & 1.24 & -9.442 & 4.58 & 3.72 + 0.400 & 3.02 & 4.70 & 7.35 & -7.657 & 4.57 & 4.72 + 0.450 & 1.20 & 1.87 & 2.91 & -6.279 & 4.61 & 9.32 + 0.500 & 3.56 & 5.60 & 8.77 & -5.182 & 4.70 & 2.07 + 0.600 & 1.82 & 2.88 & 4.61 & -3.531 & 4.96 & 7.61 + 0.700 & 5.89 & 9.45 & 1.56 & -2.329 & 5.30 & 1.68 + 0.800 & 1.45 & 2.38 & 4.06 & -1.395 & 5.66 & 2.35 + 0.900 & 3.00 & 5.03 & 9.08 & -6.376 & 5.97 & 2.49 + 1.000 & 5.56 & 9.29 & 1.77 & -4.986 & 6.19 & 2.23 + 1.250 & 1.79 & 3.16 & 6.21 & 1.203 & 6.38 & 1.24 + 1.500 & 4.23 & 7.53 & 1.44 & 2.054 & 6.21 & 6.15 + 1.750 & 8.06 & 1.41 & 2.62 & 2.675 & 5.89 & 3.31 + 2.000 & 1.33 & 2.27 & 4.03 & 3.141 & 5.55 & 2.02 + 2.500 & 2.68 & 4.33 & 7.16 & 3.781 & 4.95 & 1.10 + 3.000 & 4.19 & 6.52 & 1.03 & 4.186 & 4.51 & 9.29 + 3.500 & 5.66 & 8.58 & 1.31 & 4.456 & 4.20 & 8.16 + 4.000 & 6.97 & 1.04 & 1.53 & 4.642 & 3.97 & 8.62 + 5.000 & 8.93 & 1.29 & 1.86 & 4.862 & 3.70 & 8.67 + 6.000 & 9.98 & 1.43 & 2.03 & 4.963 & 3.57 & 7.65 + 7.000 & 1.04 & 1.49 & 2.10 & 5.004 & 3.50 & 6.86 + 8.000 & 1.06 & 1.49 & 2.10 & 5.010 & 3.46 & 6.37 + 9.000 & 1.05 & 1.48 & 2.07 & 4.997 & 3.43 & 6.18 + 10.000 & 1.03 & 1.44 & 2.02 & 4.973 & 3.41 & 5.91 + comments : the total rate has contributions from the direct capture process and from 5 resonances located at e kev .the direct capture s - factor as well as the proton and -ray partial widths of the resonances are based on the shell - model .our rate does not take into account three potentially important systematic effects .first , the spin - parity assignments of the measured levels at e and 3819 kev , corresponding to the lowest - lying resonances , are not unambiguously known ; the assignments , and for these levels are based on the analogy with the analog nucleus " .second , the energies of the resonances at e and 1387 kev are not based on experimental excitation energies , but are derived from coulomb shift calculations ; the 
adopted value of 100 kev for the resonance energy uncertainty must be regarded as a rough value only .third , comparison to the structure of the mirror nucleus reveals that two more unobserved levels ( and ) are expected as resonances near e mev ; these remain at present unaccounted for in the total rate .+ & & & & & & + + & & & & & & + 0.010 & 5.15 & 1.12 & 2.45 & -9.199 & 7.90 & 3.10 + 0.011 & 7.83 & 1.62 & 3.35 & -8.702 & 7.36 & 2.64 + 0.012 & 5.09 & 1.01 & 2.00 & -8.288 & 6.92 & 2.36 + 0.013 & 1.72 & 3.29 & 6.30 & -7.939 & 6.55 & 2.49 + 0.014 & 3.52 & 6.49 & 1.20 & -7.641 & 6.25 & 2.78 + 0.015 & 4.75 & 8.51 & 1.55 & -7.384 & 5.99 & 2.92 + 0.016 & 4.60 & 8.07 & 1.43 & -7.159 & 5.77 & 3.21 + 0.018 & 1.99 & 3.37 & 5.79 & -6.786 & 5.42 & 3.80 + 0.020 & 4.00 & 6.59 & 1.11 & -6.489 & 5.14 & 4.56 + 0.025 & 9.71 & 1.51 & 2.35 & -5.946 & 4.49 & 9.20 + 0.030 & 5.10 & 1.16 & 2.77 & -5.511 & 7.72 & 4.00 + 0.040 & 2.25 & 1.82 & 6.15 & -4.809 & 1.53 & 1.86 + 0.050 & 1.19 & 3.29 & 8.17 & -4.261 & 9.13 & 3.07 + 0.060 & 2.75 & 5.15 & 9.63 & -3.751 & 6.38 & 7.55 + 0.070 & 1.85 & 3.37 & 6.33 & -3.332 & 6.16 & 1.25 + 0.080 & 4.80 & 8.36 & 1.50 & -3.011 & 5.74 & 1.21 + 0.090 & 6.09 & 1.02 & 1.75 & -2.760 & 5.31 & 1.63 + 0.100 & 4.75 & 7.62 & 1.26 & -2.559 & 4.90 & 1.94 + 0.110 & 2.66 & 4.12 & 6.48 & -2.391 & 4.49 & 1.23 + 0.120 & 1.19 & 1.80 & 2.67 & -2.244 & 4.08 & 6.27 + 0.130 & 4.67 & 6.87 & 9.83 & -2.111 & 3.68 & 2.11 + 0.140 & 1.71 & 2.42 & 3.40 & -1.984 & 3.34 & 4.69 + 0.150 & 5.84 & 8.08 & 1.11 & -1.864 & 3.09 & 5.83 + 0.160 & 1.88 & 2.56 & 3.43 & -1.749 & 2.92 & 4.99 + 0.180 & 1.60 & 2.10 & 2.72 & -1.538 & 2.64 & 1.98 + 0.200 & 1.07 & 1.35 & 1.70 & -1.352 & 2.34 & 4.48 + 0.250 & 4.98 & 5.99 & 7.16 & -9.726 & 1.84 & 1.38 + 0.300 & 8.39 & 1.00 & 1.19 & -6.908 & 1.79 & 2.42 + 0.350 & 6.93 & 8.23 & 9.81 & -4.799 & 1.77 & 4.69 + 0.400 & 3.51 & 4.14 & 4.91 & -3.182 & 1.70 & 1.01 + 0.450 & 1.27 & 1.48 & 1.74 & -1.908 & 1.61 & 1.46 + 0.500 & 3.58 & 4.15 & 4.82 & -8.784 & 1.51 & 1.78 + 0.600 & 1.75 & 1.99 & 2.27 & 6.904 & 1.32 & 2.14 + 0.700 & 5.63 & 6.29 & 7.07 & 1.842 & 1.15 & 2.17 + 0.800 & 1.40 & 1.54 & 1.70 & 2.736 & 9.98 & 2.06 + 0.900 & 2.91 & 3.18 & 3.46 & 3.459 & 8.73 & 1.68 + 1.000 & 5.37 & 5.79 & 6.26 & 4.060 & 7.74 & 1.15 + 1.250 & 1.71 & 1.81 & 1.93 & 5.201 & 6.24 & 3.80 + 1.500 & 3.83 & 4.05 & 4.29 & 6.004 & 5.62 & 3.57 + 1.750 & 6.96 & 7.33 & 7.73 & 6.597 & 5.36 & 4.63 + 2.000 & 1.10 & 1.16 & 1.22 & 7.053 & 5.19 & 6.78 + 2.500 & 2.13 & 2.23 & 2.34 & 7.711 & 4.89 & 9.76 + 3.000 & 3.38 & 3.54 & 3.70 & 8.171 & 4.61 & 9.91 + 3.500 & 4.78 & 4.99 & 5.21 & 8.515 & 4.39 & 8.64 + 4.000 & 6.25 & 6.51 & 6.79 & 8.782 & 4.23 & 8.56 + 5.000 & ( 9.20 ) & ( 9.58 ) & ( 9.97 ) & ( 9.167 ) & ( 4.03 ) & + 6.000 & ( 1.21 ) & ( 1.26 ) & ( 1.31 ) & ( 9.442 ) & ( 4.03 ) & + 7.000 & ( 1.50 ) & ( 1.56 ) & ( 1.62 ) & ( 9.653 ) & ( 4.03 ) & + 8.000 & ( 1.76 ) & ( 1.83 ) & ( 1.91 ) & ( 9.816 ) & ( 4.03 ) & + 9.000 & ( 2.01 ) & ( 2.10 ) & ( 2.18 ) & ( 9.950 ) & ( 4.03 ) & + 10.000 & ( 2.30 ) & ( 2.40 ) & ( 2.50 ) & ( 1.008 ) & ( 4.03 ) & + comments : in total , 92 resonances at energies of e kev are taken into account for calculating the total reaction rates .the direct capture s - factor is adopted from iliadis et al .measured resonance energies and strengths are from johnson , meyer and reitmann , endt and van der leun and iliadis et al . , where all the strengths are normalized by using the standard value listed in tab . 
1 of iliadis et al .when no uncertainties of resonance strengths are reported , we adopt a value of 20% .three threshold levels are reported in rpke , brenneisen and lickert at e , 8739 and 8919 kev : we assume that the latter state is identical to the e kev level , while the former two states are new , corresponding to resonance energies of e and 232 kev .+ & & & & & & + + & & & & & & + 0.010 & 1.50 & 1.76 & 8.99 & -9.889 & 2.33 & 1.72 + 0.011 & 2.16 & 2.54 & 1.30 & -9.391 & 2.33 & 1.74 + 0.012 & 1.34 & 1.59 & 8.16 & -8.978 & 2.32 & 1.75 + 0.013 & 4.39 & 5.22 & 2.66 & -8.629 & 2.32 & 1.76 + 0.014 & 8.70 & 1.03 & 5.21 & -8.331 & 2.32 & 1.76 + 0.015 & 1.13 & 1.35 & 6.80 & -8.073 & 2.31 & 1.75 + 0.016 & 1.07 & 1.28 & 6.44 & -7.848 & 2.30 & 1.72 + 0.018 & 4.46 & 5.35 & 2.68 & -7.474 & 2.26 & 1.59 + 0.020 & 8.97 & 1.04 & 5.24 & -7.173 & 2.18 & 1.33 + 0.025 & 3.37 & 2.46 & 1.08 & -6.612 & 1.82 & 7.19 + 0.030 & 3.02 & 1.70 & 6.38 & -6.185 & 1.63 & 7.66 + 0.040 & 1.30 & 9.21 & 7.02 & -5.534 & 1.92 & 8.60 + 0.050 & 6.43 & 2.92 & 1.03 & -4.971 & 1.51 & 4.03 + 0.060 & 1.01 & 5.53 & 1.69 & -4.462 & 1.53 & 1.50 + 0.070 & 4.89 & 3.49 & 1.16 & -4.052 & 1.63 & 1.42 + 0.080 & 1.40 & 8.94 & 2.92 & -3.724 & 1.56 & 1.25 + 0.090 & 2.59 & 1.23 & 3.74 & -3.454 & 1.36 & 8.30 + 0.100 & 3.58 & 1.19 & 3.10 & -3.219 & 1.14 & 5.84 + 0.110 & 3.55 & 9.84 & 2.17 & -3.007 & 9.90 & 7.35 + 0.120 & 2.50 & 6.91 & 1.50 & -2.812 & 9.62 & 6.88 + 0.130 & 1.41 & 4.13 & 9.56 & -2.633 & 1.01 & 5.24 + 0.140 & 6.43 & 2.13 & 5.26 & -2.471 & 1.07 & 5.18 + 0.150 & 2.61 & 9.38 & 2.39 & -2.324 & 1.11 & 5.79 + 0.160 & 9.57 & 3.55 & 9.26 & -2.191 & 1.13 & 6.01 + 0.180 & 10.00 & 3.49 & 9.11 & -1.960 & 1.09 & 4.73 + 0.200 & 7.71 & 2.35 & 5.81 & -1.766 & 9.92 & 2.80 + 0.250 & 4.81 & 1.06 & 2.04 & -1.382 & 7.53 & 3.53 + 0.300 & 9.65 & 2.02 & 3.70 & -1.087 & 6.83 & 4.18 + 0.350 & 9.94 & 2.08 & 4.09 & -8.514 & 6.89 & 1.84 + 0.400 & 6.77 & 1.40 & 2.81 & -6.595 & 6.86 & 1.41 + 0.450 & 3.31 & 6.79 & 1.33 & -5.014 & 6.63 & 1.39 + 0.500 & 1.30 & 2.54 & 4.80 & -3.691 & 6.27 & 1.22 + 0.600 & 1.16 & 2.04 & 3.54 & -1.594 & 5.43 & 5.63 + 0.700 & 6.19 & 9.99 & 1.60 & -2.111 & 4.67 & 2.65 + 0.800 & 2.31 & 3.45 & 5.20 & 1.248 & 4.05 & 5.16 + 0.900 & 6.68 & 9.40 & 1.35 & 2.255 & 3.55 & 1.01 + 1.000 & 1.60 & 2.15 & 2.98 & 3.084 & 3.16 & 1.46 + 1.250 & 8.20 & 1.02 & 1.32 & 4.645 & 2.41 & 2.01 + 1.500 & 2.70 & 3.20 & 3.87 & 5.779 & 1.83 & 2.13 + 1.750 & 6.99 & 7.94 & 9.16 & 6.686 & 1.38 & 1.86 + 2.000 & 1.55 & 1.71 & 1.91 & 7.449 & 1.06 & 1.19 + 2.500 & ( 5.49 ) & ( 5.93 ) & ( 6.41 ) & ( 8.688 ) & ( 7.73 ) & + 3.000 & ( 1.56 ) & ( 1.69 ) & ( 1.83 ) & ( 9.735 ) & ( 7.73 ) & + 3.500 & ( 3.70 ) & ( 3.99 ) & ( 4.32 ) & ( 1.060 ) & ( 7.73 ) & + 4.000 & ( 7.56 ) & ( 8.17 ) & ( 8.82 ) & ( 1.131 ) & ( 7.73 ) & + 5.000 & ( 2.32 ) & ( 2.51 ) & ( 2.71 ) & ( 1.243 ) & ( 7.73 ) & + 6.000 & ( 5.37 ) & ( 5.80 ) & ( 6.27 ) & ( 1.327 ) & ( 7.73 ) & + 7.000 & ( 1.03 ) & ( 1.12 ) & ( 1.21 ) & ( 1.392 ) & ( 7.73 ) & + 8.000 & ( 1.74 ) & ( 1.88 ) & ( 2.03 ) & ( 1.444 ) & ( 7.73 ) & + 9.000 & ( 2.67 ) & ( 2.89 ) & ( 3.12 ) & ( 1.488 ) & ( 7.73 ) & + 10.000 & ( 4.11 ) & ( 4.44 ) & ( 4.80 ) & ( 1.531 ) & ( 7.73 ) & + comments : in total , 95 resonances at energies of e kev are taken into account for calculating the total reaction rates .measured resonance energies and strengths are from bosnjakovic et al . , endt and van der leun and iliadis et al . . 
note that no carefully measured standard resonance strength exists for this reaction .when no uncertainties of resonance strengths are reported , we adopt a value of 25% . the s - factor describing the low - energy tails of broad , higher - lying resonancesis also taken into account ( see comments in ref .it must be emphasized that no direct measurement exists for energies below e kev and that the excitation energy region corresponding to e kev has neither been studied in proton transfer nor in coincidence measurements . for these levelswe adopt single - particle estimates for the upper limits of proton and -particle widths .three threshold levels are reported in rpke , brenneisen and lickert at e , 8739 and 8919 kev : we assume that the latter state is identical to the e kev level , while the former two states are new , corresponding to resonance energies of e and 232 kev .+ & & & & & & + + & & & & & & + 0.010 & 1.13 & 1.81 & 2.91 & -1.122 & 4.77 & 3.04 + 0.011 & 7.08 & 1.15 & 1.83 & -1.081 & 4.78 & 2.84 + 0.012 & 2.83 & 4.56 & 7.26 & -1.044 & 4.70 & 4.92 + 0.013 & 7.53 & 1.19 & 1.90 & -1.011 & 4.69 & 4.26 + 0.014 & 1.44 & 2.30 & 3.72 & -9.818 & 4.76 & 1.85 + 0.015 & 2.14 & 3.39 & 5.42 & -9.549 & 4.67 & 1.86 + 0.016 & 2.46 & 3.95 & 6.17 & -9.304 & 4.68 & 7.25 + 0.018 & 1.92 & 3.04 & 4.87 & -8.868 & 4.74 & 4.21 + 0.020 & 8.13 & 1.29 & 2.06 & -8.494 & 4.71 & 2.14 + 0.025 & 1.44 & 2.31 & 3.75 & -7.744 & 4.77 & 3.59 + 0.030 & 4.40 & 7.12 & 1.15 & -7.172 & 4.79 & 4.04 + 0.040 & 1.83 & 2.97 & 4.68 & -6.340 & 4.77 & 6.74 + 0.050 & 6.85 & 1.10 & 1.76 & -5.747 & 4.79 & 2.97 + 0.060 & 6.33 & 1.00 & 1.61 & -5.295 & 4.69 & 2.89 + 0.070 & 2.33 & 3.73 & 5.99 & -4.934 & 4.69 & 2.52 + 0.080 & 4.53 & 7.21 & 1.14 & -4.638 & 4.66 & 2.19 + 0.090 & 5.54 & 8.93 & 1.40 & -4.387 & 4.67 & 9.87 + 0.100 & 4.81 & 7.59 & 1.23 & -4.171 & 4.72 & 8.84 + 0.110 & 3.17 & 5.07 & 8.03 & -3.982 & 4.71 & 5.32 + 0.120 & 1.69 & 2.70 & 4.37 & -3.815 & 4.76 & 2.81 + 0.130 & 7.52 & 1.20 & 1.89 & -3.667 & 4.67 & 4.12 + 0.140 & 2.88 & 4.67 & 7.40 & -3.531 & 4.74 & 6.67 + 0.150 & 9.69 & 1.55 & 2.51 & -3.409 & 4.77 & 1.64 + 0.160 & 2.98 & 4.79 & 7.63 & -3.297 & 4.73 & 2.90 + 0.180 & 2.17 & 3.48 & 5.57 & -3.099 & 4.73 & 2.75 + 0.200 & 1.20 & 1.91 & 3.08 & -2.928 & 4.73 & 4.10 + 0.250 & 3.55 & 5.73 & 9.08 & -2.590 & 4.72 & 1.11 + 0.300 & 4.74 & 7.61 & 1.21 & -2.330 & 4.70 & 5.75 + 0.350 & 3.67 & 5.91 & 9.54 & -2.124 & 4.76 & 4.66 + 0.400 & 2.03 & 3.25 & 5.17 & -1.954 & 4.71 & 2.62 + 0.450 & 8.61 & 1.36 & 2.16 & -1.811 & 4.70 & 3.49 + 0.500 & 2.98 & 4.74 & 7.41 & -1.687 & 4.64 & 6.15 + 0.600 & 2.25 & 3.58 & 5.71 & -1.484 & 4.75 & 2.44 + 0.700 & 1.12 & 1.81 & 2.88 & -1.323 & 4.76 & 2.15 + 0.800 & 4.32 & 6.93 & 1.10 & -1.188 & 4.70 & 1.32 + 0.900 & 1.34 & 2.13 & 3.35 & -1.076 & 4.63 & 2.65 + 1.000 & 3.58 & 5.60 & 8.97 & -9.782 & 4.57 & 7.66 + 1.250 & 2.64 & 4.10 & 6.40 & -7.793 & 4.42 & 5.25 + 1.500 & 1.24 & 1.85 & 2.88 & -6.275 & 4.20 & 1.87 + 1.750 & 4.12 & 6.19 & 9.40 & -5.080 & 4.17 & 9.69 + 2.000 & 1.08 & 1.61 & 2.45 & -4.119 & 4.08 & 9.27 + 2.500 & 4.72 & 7.05 & 1.08 & -2.644 & 4.16 & 1.04 + 3.000 & 1.41 & 2.13 & 3.27 & -1.541 & 4.19 & 1.33 + 3.500 & 3.30 & 5.06 & 7.94 & -6.690 & 4.40 & 1.16 + 4.000 & 6.66 & 1.04 & 1.62 & 4.177 & 4.48 & 9.31 + 5.000 & 2.03 & 3.22 & 5.13 & 1.176 & 4.67 & 6.84 + 6.000 & 4.79 & 7.56 & 1.21 & 2.028 & 4.64 & 3.16 + 7.000 & 9.37 & 1.50 & 2.42 & 2.709 & 4.70 & 4.69 + 8.000 & 1.64 & 2.62 & 4.19 & 3.266 & 4.75 & 3.02 + 9.000 & 2.62 & 4.13 & 6.61 & 3.728 & 4.66 & 3.31 + 10.000 & 3.91 & 6.29 & 9.92 & 4.136 & 
4.69 & 6.31 + comments : a significantly improved q - value ( q= kev ; tab .[ tab : master ] ) is obtained by using the measured mass excess of yazidjian et al . .because of the small q - value only the first excited state ( 1/2 ) in is expected to contribute to the resonant reaction rate .we calculate a corresponding resonance energy of e kev directly from the observed energy of -delayed protons from the decay of , as reported in trinder et al .( this energy also agrees with the measured excitation energy reported by benenson et al .the proton and -ray partial width , as well as the direct capture s - factor , are adopted from the shell model calculation of ref .the total reaction rates are dominated by the direct capture process .higher lying resonances are expected to make a minor contribution to the total rate .+ & & & & & & + + & & & & & & + 0.010 & 1.45 & 2.23 & 5.04 & -1.081 & 4.20 & 1.03 + 0.011 & 8.88 & 1.38 & 5.34 & -1.041 & 3.97 & 1.05 + 0.012 & 3.54 & 5.34 & 1.17 & -1.006 & 3.75 & 1.08 + 0.013 & 9.34 & 1.42 & 2.86 & -9.747 & 3.54 & 1.08 + 0.014 & 1.76 & 2.68 & 4.93 & -9.466 & 3.35 & 1.09 + 0.015 & 2.55 & 3.86 & 7.00 & -9.210 & 3.16 & 1.09 + 0.016 & 2.97 & 4.52 & 7.97 & -8.974 & 2.98 & 1.09 + 0.018 & 2.31 & 3.48 & 5.78 & -8.556 & 2.66 & 1.08 + 0.020 & 9.71 & 1.44 & 2.35 & -8.196 & 2.38 & 1.07 + 0.025 & 1.72 & 2.61 & 4.08 & -7.469 & 1.81 & 9.66 + 0.030 & 5.41 & 7.92 & 1.22 & -6.907 & 1.48 & 8.65 + 0.040 & 2.65 & 4.56 & 3.50 & -5.999 & 1.77 & 4.29 + 0.050 & 6.37 & 1.31 & 2.42 & -5.036 & 2.75 & 1.33 + 0.060 & 4.71 & 5.45 & 5.21 & -4.218 & 2.35 & 5.66 + 0.070 & 3.23 & 2.11 & 1.06 & -3.623 & 1.76 & 1.11 + 0.080 & 4.14 & 1.78 & 5.95 & -3.179 & 1.35 & 1.72 + 0.090 & 1.75 & 5.49 & 1.33 & -2.835 & 1.04 & 2.50 + 0.100 & 3.45 & 8.28 & 1.60 & -2.562 & 8.00 & 3.12 + 0.110 & 3.84 & 7.45 & 1.24 & -2.340 & 6.25 & 2.88 + 0.120 & 2.65 & 4.55 & 7.02 & -2.156 & 5.07 & 1.65 + 0.130 & 1.33 & 2.09 & 3.13 & -2.001 & 4.44 & 6.42 + 0.140 & 5.03 & 7.71 & 1.16 & -1.869 & 4.29 & 4.49 + 0.150 & 1.53 & 2.43 & 3.65 & -1.756 & 4.48 & 8.09 + 0.160 & 3.99 & 6.60 & 1.01 & -1.657 & 4.88 & 1.55 + 0.180 & 1.88 & 3.45 & 5.58 & -1.494 & 5.87 & 2.87 + 0.200 & 6.18 & 1.27 & 2.22 & -1.366 & 6.86 & 3.28 + 0.250 & 4.86 & 1.26 & 2.61 & -1.139 & 8.88 & 2.82 + 0.300 & 1.76 & 5.46 & 1.33 & -9.929 & 1.05 & 2.96 + 0.350 & 4.64 & 1.51 & 4.01 & -8.903 & 1.11 & 1.60 + 0.400 & 9.79 & 3.22 & 9.06 & -8.110 & 1.11 & 7.00 + 0.450 & 2.11 & 6.11 & 1.71 & -7.410 & 1.03 & 1.17 + 0.500 & 4.92 & 1.15 & 2.96 & -6.723 & 8.96 & 4.09 + 0.600 & ( 2.04 ) & ( 5.00 ) & ( 1.23 ) & ( -5.297 ) & ( 8.96 ) & + 0.700 & ( 6.12 ) & ( 1.50 ) & ( 3.67 ) & ( -4.200 ) & ( 8.96 ) & + 0.800 & ( 1.43 ) & ( 3.49 ) & ( 8.56 ) & ( -3.355 ) & ( 8.96 ) & + 0.900 & ( 2.82 ) & ( 6.92 ) & ( 1.69 ) & ( -2.671 ) & ( 8.96 ) & + 1.000 & ( 4.92 ) & ( 1.21 ) & ( 2.95 ) & ( -2.116 ) & ( 8.96 ) & + 1.250 & ( 1.40 ) & ( 3.44 ) & ( 8.42 ) & ( -1.068 ) & ( 8.96 ) & + 1.500 & ( 2.89 ) & ( 7.07 ) & ( 1.73 ) & ( -3.462 ) & ( 8.96 ) & + 1.750 & ( 4.97 ) & ( 1.22 ) & ( 2.98 ) & ( 1.959 ) & ( 8.96 ) & + 2.000 & ( 7.59 ) & ( 1.86 ) & ( 4.56 ) & ( 6.203 ) & ( 8.96 ) & + 2.500 & ( 1.43 ) & ( 3.49 ) & ( 8.56 ) & ( 1.251 ) & ( 8.96 ) & + 3.000 & ( 2.25 ) & ( 5.52 ) & ( 1.35 ) & ( 1.708 ) & ( 8.96 ) & + 3.500 & ( 3.21 ) & ( 7.85 ) & ( 1.92 ) & ( 2.061 ) & ( 8.96 ) & + 4.000 & ( 4.28 ) & ( 1.05 ) & ( 2.57 ) & ( 2.349 ) & ( 8.96 ) & + 5.000 & ( 6.74 ) & ( 1.65 ) & ( 4.04 ) & ( 2.804 ) & ( 8.96 ) & + 6.000 & ( 9.53 ) & ( 2.34 ) & ( 5.72 ) & ( 3.151 ) & ( 8.96 ) & + 7.000 & ( 1.26 ) & ( 3.09 ) & ( 
7.57 ) & ( 3.431 ) & ( 8.96 ) & + 8.000 & ( 1.59 ) & ( 3.89 ) & ( 9.54 ) & ( 3.662 ) & ( 8.96 ) & + 9.000 & ( 1.92 ) & ( 4.72 ) & ( 1.16 ) & ( 3.853 ) & ( 8.96 ) & + 10.000 & ( 2.33 ) & ( 5.71 ) & ( 1.40 ) & ( 4.045 ) & ( 8.96 ) & + comments : the reaction rate , including uncertainties , is calculated from the same input information as in iliadis et al .note that the highest - lying resonance for which reliable input information is available is located at a relatively low energy of e=744 kev .consequently , the hauser - feshbach model has to be used for calculating the rates beyond a relatively low temperature .the rate uncertainties presented here do not include the systematic error introduced by 5 additionally expected resonances with e mev . according to iliadis et al . , these resonances may increase the high rate at t.5 gk by a factor of 2 .+ & & & & & & + + & & & & & & + 0.010 & 1.28 & 1.87 & 2.75 & -1.099 & 3.87 & 4.13 + 0.011 & 8.00 & 1.18 & 1.72 & -1.058 & 3.85 & 5.49 + 0.012 & 3.12 & 4.57 & 6.71 & -1.021 & 3.85 & 3.08 + 0.013 & 8.29 & 1.22 & 1.79 & -9.881 & 3.85 & 2.64 + 0.014 & 1.60 & 2.34 & 3.43 & -9.586 & 3.86 & 1.84 + 0.015 & 2.33 & 3.45 & 5.06 & -9.317 & 3.89 & 4.25 + 0.016 & 2.74 & 4.03 & 5.91 & -9.071 & 3.85 & 3.77 + 0.018 & 2.11 & 3.11 & 4.58 & -8.636 & 3.88 & 2.92 + 0.020 & 9.02 & 1.32 & 1.93 & -8.262 & 3.82 & 4.17 + 0.025 & 1.62 & 2.38 & 3.51 & -7.512 & 3.87 & 2.96 + 0.030 & 4.94 & 7.27 & 1.06 & -6.940 & 3.84 & 5.21 + 0.040 & 2.05 & 3.01 & 4.42 & -6.107 & 3.88 & 2.85 + 0.050 & 7.61 & 1.11 & 1.65 & -5.515 & 3.87 & 4.55 + 0.060 & 1.12 & 1.45 & 1.93 & -5.027 & 2.75 & 2.03 + 0.070 & 1.69 & 1.95 & 2.25 & -4.308 & 1.42 & 3.11 + 0.080 & 8.78 & 1.01 & 1.17 & -3.683 & 1.44 & 3.67 + 0.090 & 1.13 & 1.31 & 1.51 & -3.197 & 1.44 & 3.65 + 0.100 & 5.43 & 6.26 & 7.22 & -2.810 & 1.43 & 3.65 + 0.110 & 1.27 & 1.46 & 1.69 & -2.495 & 1.43 & 3.66 + 0.120 & 1.74 & 2.00 & 2.31 & -2.233 & 1.43 & 3.68 + 0.130 & 1.57 & 1.82 & 2.09 & -2.013 & 1.43 & 3.69 + 0.140 & 1.03 & 1.19 & 1.37 & -1.825 & 1.43 & 3.69 + 0.150 & 5.24 & 6.03 & 6.96 & -1.662 & 1.42 & 3.69 + 0.160 & 2.15 & 2.48 & 2.86 & -1.521 & 1.42 & 3.71 + 0.180 & 2.24 & 2.58 & 2.97 & -1.287 & 1.42 & 3.76 + 0.200 & 1.43 & 1.65 & 1.90 & -1.101 & 1.42 & 3.80 + 0.250 & 3.85 & 4.43 & 5.11 & -7.722 & 1.42 & 3.85 + 0.300 & 3.28 & 3.78 & 4.36 & -5.578 & 1.42 & 3.89 + 0.350 & 1.46 & 1.69 & 1.94 & -4.083 & 1.42 & 3.92 + 0.400 & 4.37 & 5.03 & 5.81 & -2.989 & 1.42 & 3.96 + 0.450 & 1.00 & 1.16 & 1.33 & -2.158 & 1.42 & 4.01 + 0.500 & 1.92 & 2.21 & 2.55 & -1.510 & 1.42 & 4.04 + 0.600 & 4.91 & 5.65 & 6.51 & -5.709 & 1.41 & 4.01 + 0.700 & 9.43 & 1.08 & 1.24 & 7.936 & 1.39 & 3.83 + 0.800 & 1.56 & 1.78 & 2.03 & 5.780 & 1.32 & 4.10 + 0.900 & 2.44 & 2.74 & 3.10 & 1.010 & 1.19 & 7.20 + 1.000 & 3.74 & 4.14 & 4.60 & 1.423 & 1.03 & 1.10 + 1.250 & 1.04 & 1.12 & 1.21 & 2.419 & 7.49 & 2.42 + 1.500 & 2.44 & 2.62 & 2.82 & 3.268 & 7.17 & 3.42 + 1.750 & 4.70 & 5.06 & 5.45 & 3.924 & 7.41 & 6.52 + 2.000 & 7.72 & 8.31 & 8.98 & 4.421 & 7.61 & 8.86 + 2.500 & 1.51 & 1.63 & 1.77 & 5.095 & 7.76 & 1.07 + 3.000 & 2.29 & 2.47 & 2.68 & 5.511 & 7.75 & 1.16 + 3.500 & 3.01 & 3.24 & 3.51 & 5.782 & 7.69 & 1.26 + 4.000 & 3.62 & 3.90 & 4.21 & 5.966 & 7.58 & 1.37 + 5.000 & 4.53 & 4.87 & 5.25 & 6.189 & 7.32 & 1.63 + 6.000 & 5.11 & 5.48 & 5.89 & 6.307 & 7.06 & 1.83 + 7.000 & 5.47 & 5.85 & 6.27 & 6.372 & 6.85 & 1.76 + 8.000 & ( 6.42 ) & ( 6.87 ) & ( 7.36 ) & ( 6.533 ) & ( 6.79 ) & + 9.000 & ( 7.92 ) & ( 8.47 ) & ( 9.07 ) & ( 6.742 ) & ( 6.79 ) & + 10.000 & ( 9.76 ) & ( 1.04 ) & ( 1.12 ) & ( 6.952 
) & ( 6.79 ) & + + & & & & & & + + & & & & & & + 0.010 & 1.94 & 3.11 & 4.95 & -1.163 & 4.70 & 1.82 + 0.011 & 1.42 & 2.31 & 3.66 & -1.120 & 4.69 & 1.02 + 0.012 & 6.39 & 1.03 & 1.67 & -1.082 & 4.72 & 3.81 + 0.013 & 1.92 & 3.05 & 4.88 & -1.048 & 4.67 & 4.93 + 0.014 & 4.19 & 6.69 & 1.07 & -1.017 & 4.74 & 5.10 + 0.015 & 6.76 & 1.08 & 1.69 & -9.894 & 4.69 & 1.84 + 0.016 & 8.56 & 1.38 & 2.23 & -9.638 & 4.77 & 3.42 + 0.018 & 7.80 & 1.25 & 2.01 & -9.188 & 4.75 & 2.09 + 0.020 & 3.83 & 6.17 & 9.85 & -8.799 & 4.76 & 3.88 + 0.025 & 9.26 & 1.48 & 2.35 & -8.020 & 4.72 & 2.46 + 0.030 & 3.47 & 5.53 & 8.94 & -7.427 & 4.74 & 4.55 + 0.040 & 1.96 & 3.18 & 5.10 & -6.562 & 4.76 & 2.81 + 0.050 & 9.18 & 1.48 & 2.39 & -5.947 & 4.77 & 3.32 + 0.060 & 1.00 & 1.61 & 2.55 & -5.479 & 4.75 & 3.74 + 0.070 & 4.25 & 6.77 & 1.10 & -5.104 & 4.89 & 1.29 + 0.080 & 9.40 & 1.50 & 2.45 & -4.792 & 5.59 & 4.07 + 0.090 & 1.30 & 2.18 & 3.80 & -4.518 & 8.23 & 2.71 + 0.100 & 1.39 & 2.49 & 6.86 & -4.251 & 1.29 & 4.70 + 0.110 & 1.20 & 2.48 & 3.26 & -3.982 & 1.89 & 4.07 + 0.120 & 9.00 & 3.98 & 1.06 & -3.707 & 2.35 & 2.02 + 0.130 & 6.86 & 8.71 & 2.04 & -3.438 & 2.62 & 7.07 + 0.140 & 7.22 & 1.41 & 2.62 & -3.185 & 2.70 & 1.97 + 0.150 & 8.56 & 1.60 & 2.34 & -2.955 & 2.66 & 8.27 + 0.160 & 8.29 & 1.33 & 1.59 & -2.748 & 2.55 & 8.58 + 0.180 & 3.89 & 4.50 & 3.83 & -2.399 & 2.28 & 1.37 + 0.200 & 8.29 & 7.39 & 4.87 & -2.119 & 2.02 & 1.84 + 0.250 & 1.94 & 1.08 & 4.36 & -1.620 & 1.55 & 3.19 + 0.300 & 7.03 & 2.86 & 8.26 & -1.293 & 1.24 & 4.77 + 0.350 & 8.77 & 2.85 & 6.53 & -1.062 & 1.02 & 6.25 + 0.400 & 5.66 & 1.55 & 3.05 & -8.920 & 8.67 & 7.27 + 0.450 & 2.33 & 5.57 & 1.00 & -7.618 & 7.53 & 7.61 + 0.500 & 7.10 & 1.52 & 2.58 & -6.592 & 6.69 & 7.29 + 0.600 & 3.60 & 6.62 & 1.04 & -5.091 & 5.59 & 5.52 + 0.700 & 1.08 & 1.84 & 2.79 & -4.054 & 5.00 & 3.73 + 0.800 & 2.34 & 3.84 & 5.74 & -3.303 & 4.71 & 2.65 + 0.900 & 4.18 & 6.67 & 1.00 & -2.740 & 4.58 & 2.20 + 1.000 & 6.52 & 1.02 & 1.53 & -2.306 & 4.55 & 2.14 + 1.250 & 1.37 & 2.14 & 3.19 & -1.574 & 4.66 & 2.68 + 1.500 & 2.13 & 3.33 & 5.01 & -1.133 & 4.81 & 3.32 + 1.750 & 2.82 & 4.45 & 6.75 & -8.468 & 4.90 & 3.60 + 2.000 & 3.43 & 5.45 & 8.28 & -6.461 & 4.91 & 3.47 + 2.500 & 4.67 & 7.26 & 1.09 & -3.528 & 4.61 & 2.46 + 3.000 & 6.45 & 9.58 & 1.38 & -6.219 & 4.01 & 1.02 + 3.500 & 9.41 & 1.37 & 1.91 & 2.937 & 3.65 & 5.89 + 4.000 & 1.44 & 2.05 & 2.90 & 7.135 & 3.58 & 1.29 + 5.000 & 3.31 & 4.88 & 7.27 & 1.591 & 4.03 & 7.85 + 6.000 & 7.09 & 1.10 & 1.70 & 2.399 & 4.44 & 3.72 + 7.000 & 1.37 & 2.17 & 3.38 & 3.075 & 4.57 & 7.22 + 8.000 & 2.38 & 3.77 & 6.05 & 3.633 & 4.65 & 1.36 + 9.000 & 3.85 & 6.05 & 9.62 & 4.109 & 4.66 & 3.93 + 10.000 & 5.83 & 9.22 & 1.50 & 4.529 & 4.70 & 4.85 + comments : a single resonance , located at e kev , is taken into account in the evaluation of the total rate .its energy is obtained from the measured excitation energy of the first - excited ( 2 ) state in ( e kev ) and the reaction q - value ( see tab . [tab : master ] ) that is obtained from the mass differences presented in ref .note that the new resonance energy is significantly lower than previous estimates and this strongly affects both the proton and the -ray partial width . for these quantitieswe adopt the shell model result of herndl et al . 
, but correct the published values for the new excitation energy .the direct capture s - factor is also adopted from ref .we assume uncertainties of 50% for the partial widths and the direct capture s - factor .our monte carlo rates do not take higher - lying resonances into account .their influence is presumably small since , based on the known level scheme of the mirror nucleus , they are expected to occur at much higher energies ( see discussion in ref . ) .+ & & & & & & + + & & & & & & + 0.010 & 2.14 & 3.14 & 4.63 & -1.209 & 3.86 & 4.75 + 0.011 & 1.83 & 2.71 & 3.96 & -1.164 & 3.85 & 7.24 + 0.012 & 9.55 & 1.40 & 2.05 & -1.125 & 3.84 & 2.07 + 0.013 & 3.28 & 4.78 & 7.03 & -1.090 & 3.83 & 4.49 + 0.014 & 7.71 & 1.15 & 1.68 & -1.058 & 3.91 & 3.37 + 0.015 & 1.41 & 2.05 & 3.03 & -1.029 & 3.88 & 4.45 + 0.016 & 1.98 & 2.89 & 4.23 & -1.003 & 3.84 & 1.79 + 0.018 & 2.11 & 3.14 & 4.60 & -9.557 & 3.88 & 1.11 + 0.020 & 1.18 & 1.74 & 2.56 & -9.155 & 3.88 & 2.37 + 0.025 & 3.83 & 5.56 & 8.18 & -8.348 & 3.83 & 3.68 + 0.030 & 1.78 & 2.63 & 3.85 & -7.732 & 3.85 & 4.21 + 0.040 & 1.50 & 2.17 & 3.12 & -6.830 & 3.67 & 3.43 + 0.050 & 2.42 & 4.29 & 7.90 & -5.840 & 5.89 & 1.16 + 0.060 & 1.63 & 2.72 & 4.60 & -4.965 & 5.18 & 5.02 + 0.070 & 8.75 & 1.40 & 2.21 & -4.342 & 4.64 & 3.63 + 0.080 & 9.45 & 1.46 & 2.23 & -3.877 & 4.30 & 2.40 + 0.090 & 3.52 & 5.32 & 7.96 & -3.517 & 4.10 & 2.33 + 0.100 & 6.23 & 9.33 & 1.37 & -3.231 & 3.98 & 2.43 + 0.110 & 6.44 & 9.56 & 1.40 & -2.999 & 3.91 & 3.07 + 0.120 & 4.45 & 6.58 & 9.59 & -2.806 & 3.87 & 3.86 + 0.130 & 2.27 & 3.35 & 4.86 & -2.643 & 3.84 & 4.55 + 0.140 & 9.12 & 1.34 & 1.94 & -2.504 & 3.80 & 4.27 + 0.150 & 3.06 & 4.47 & 6.44 & -2.383 & 3.75 & 3.66 + 0.160 & 8.89 & 1.29 & 1.85 & -2.277 & 3.68 & 3.41 + 0.180 & 5.52 & 7.77 & 1.09 & -2.098 & 3.45 & 4.90 + 0.200 & 2.55 & 3.46 & 4.73 & -1.948 & 3.15 & 6.68 + 0.250 & 4.95 & 6.54 & 8.70 & -1.654 & 2.84 & 2.39 + 0.300 & 4.15 & 5.62 & 7.65 & -1.439 & 3.10 & 5.08 + 0.350 & 1.99 & 2.78 & 3.90 & -1.279 & 3.37 & 7.15 + 0.400 & 6.55 & 9.36 & 1.33 & -1.158 & 3.54 & 6.64 + 0.450 & 1.65 & 2.39 & 3.43 & -1.064 & 3.66 & 6.32 + 0.500 & 3.43 & 5.02 & 7.24 & -9.901 & 3.73 & 6.16 + 0.600 & 1.00 & 1.48 & 2.16 & -8.818 & 3.82 & 6.12 + 0.700 & 2.12 & 3.14 & 4.58 & -8.068 & 3.84 & 6.27 + 0.800 & 3.79 & 5.55 & 8.02 & -7.497 & 3.73 & 7.32 + 0.900 & 6.58 & 9.20 & 1.29 & -6.985 & 3.37 & 1.50 + 1.000 & ( 1.19 ) & ( 1.58 ) & ( 2.11 ) & ( -6.449 ) & ( 2.86 ) & + 1.250 & ( 3.82 ) & ( 5.08 ) & ( 6.76 ) & ( -5.282 ) & ( 2.86 ) & + 1.500 & ( 8.76 ) & ( 1.16 ) & ( 1.55 ) & ( -4.452 ) & ( 2.86 ) & + 1.750 & ( 1.67 ) & ( 2.22 ) & ( 2.95 ) & ( -3.810 ) & ( 2.86 ) & + 2.000 & ( 2.80 ) & ( 3.72 ) & ( 4.95 ) & ( -3.291 ) & ( 2.86 ) & + 2.500 & ( 6.20 ) & ( 8.25 ) & ( 1.10 ) & ( -2.495 ) & ( 2.86 ) & + 3.000 & ( 1.13 ) & ( 1.51 ) & ( 2.00 ) & ( -1.894 ) & ( 2.86 ) & + 3.500 & ( 1.83 ) & ( 2.44 ) & ( 3.24 ) & ( -1.412 ) & ( 2.86 ) & + 4.000 & ( 2.73 ) & ( 3.63 ) & ( 4.84 ) & ( -1.012 ) & ( 2.86 ) & + 5.000 & ( 5.22 ) & ( 6.95 ) & ( 9.25 ) & ( -3.636 ) & ( 2.86 ) & + 6.000 & ( 8.65 ) & ( 1.15 ) & ( 1.53 ) & ( 1.403 ) & ( 2.86 ) & + 7.000 & ( 1.31 ) & ( 1.74 ) & ( 2.32 ) & ( 5.541 ) & ( 2.86 ) & + 8.000 & ( 1.83 ) & ( 2.44 ) & ( 3.25 ) & ( 8.921 ) & ( 2.86 ) & + 9.000 & ( 2.43 ) & ( 3.24 ) & ( 4.31 ) & ( 1.174 ) & ( 2.86 ) & + 10.000 & ( 3.23 ) & ( 4.29 ) & ( 5.71 ) & ( 1.457 ) & ( 2.86 ) & + comments : the reaction rate is calculated by using the excitation energies and a=40 mirror state assignments presented in tabs .i and ii of hansper et al .additional information , 
including the direct capture component , is adopted from iliadis et al .we specifically assume for the e=1671 , 1703 and 1797 kev levels in assignments of j=(2 ) , ( 1 ) and ( 3 ) , respectively . in total ,5 resonances with energies in the range of e=234 - 1259 kev are taken into account .the rate uncertainties presented here disregard the fact that these assignments are not unambiguous .also not included in the rate uncertainties is the unobserved 3 level which the isobaric multiplet mass equation predicts at an excitation energy of e.45 mev ( hansper et al . ) . this level may increase the total rates above t=0.8 gk by up to a factor of .interestingly , recent calculations by descouvemont using a microscopic cluster model predict total reaction rates that are larger by 12 orders of magnitude below t=0.5 gk . this strong disagreement can be traced back to the different values of spectroscopic factors used in the two approaches .for the dominant 2 level , descouvemont s model predicts values of c=1.05 and c=0.03 ( see his tab .3 ) . in contrast, we use the _ experimental _ values of c=0.014 and c=0.92 which were measured in the (d , p) neutron - transfer to the mirror state ( fink and schiffer ) . the measured ( d , p ) angular distribution ( see their fig .3 ) clearly reveals a dominant =3 transfer with a relatively small =1 component , whereas descouvemont finds that ... the =1 component is strongly dominant in the wave function ... " .we prefer using experimental rather than calculated spectroscopic factors when estimating reliable reaction rates .but we agree with descouvemont that a new measurement of spectroscopic factors in ( and perhaps ) would be helpful to clarify the situation .+ & & & & & & + + & & & & & & + 0.010 & 1.07 & 1.57 & 2.29 & -1.216 & 3.83 & 1.94 + 0.011 & 9.25 & 1.36 & 1.99 & -1.171 & 3.86 & 2.22 + 0.012 & 4.74 & 7.00 & 1.04 & -1.132 & 3.92 & 2.76 + 0.013 & 1.64 & 2.39 & 3.50 & -1.097 & 3.85 & 3.05 + 0.014 & 3.89 & 5.72 & 8.42 & -1.065 & 3.83 & 2.02 + 0.015 & 7.00 & 1.03 & 1.51 & -1.036 & 3.88 & 2.09 + 0.016 & 9.99 & 1.46 & 2.14 & -1.009 & 3.82 & 1.60 + 0.018 & 1.07 & 1.55 & 2.28 & -9.626 & 3.86 & 6.16 + 0.020 & 6.02 & 8.79 & 1.29 & -9.223 & 3.85 & 3.32 + 0.025 & 1.94 & 2.81 & 4.13 & -8.416 & 3.87 & 8.83 + 0.030 & 9.08 & 1.34 & 1.98 & -7.800 & 3.91 & 3.11 + 0.040 & 7.22 & 1.07 & 1.56 & -6.902 & 3.88 & 2.80 + 0.050 & 4.29 & 6.25 & 9.13 & -6.264 & 3.85 & 4.62 + 0.060 & 5.57 & 8.21 & 1.20 & -5.776 & 3.90 & 1.81 + 0.070 & 2.68 & 3.97 & 5.83 & -5.389 & 3.89 & 2.19 + 0.080 & 6.64 & 9.71 & 1.44 & -5.068 & 3.89 & 3.28 + 0.090 & 1.01 & 1.47 & 2.14 & -4.797 & 3.83 & 3.44 + 0.100 & 1.03 & 1.51 & 2.21 & -4.564 & 3.88 & 2.28 + 0.110 & 7.93 & 1.17 & 1.71 & -4.360 & 3.84 & 3.15 + 0.120 & 4.79 & 7.05 & 1.04 & -4.180 & 3.89 & 3.21 + 0.130 & 2.40 & 3.52 & 5.15 & -4.019 & 3.79 & 6.30 + 0.140 & 1.05 & 1.54 & 2.24 & -3.872 & 3.85 & 7.48 + 0.150 & 4.20 & 6.01 & 8.71 & -3.734 & 3.66 & 9.05 + 0.160 & 1.87 & 2.48 & 3.34 & -3.592 & 2.95 & 6.37 + 0.180 & 8.42 & 9.42 & 1.06 & -3.229 & 1.17 & 2.12 + 0.200 & 3.57 & 3.97 & 4.44 & -2.855 & 1.10 & 3.04 + 0.250 & 3.78 & 4.21 & 4.72 & -2.159 & 1.12 & 3.07 + 0.300 & 3.80 & 4.24 & 4.74 & -1.698 & 1.12 & 2.95 + 0.350 & 9.88 & 1.10 & 1.23 & -1.372 & 1.12 & 2.94 + 0.400 & 1.11 & 1.24 & 1.38 & -1.130 & 1.12 & 2.92 + 0.450 & 7.10 & 7.92 & 8.86 & -9.442 & 1.12 & 2.92 + 0.500 & 3.09 & 3.45 & 3.85 & -7.972 & 1.12 & 2.91 + 0.600 & 2.70 & 3.01 & 3.37 & -5.803 & 1.12 & 2.89 + 0.700 & 1.23 & 1.37 & 1.53 & -4.290 & 1.12 & 2.89 + 0.800 & 3.72 & 4.15 & 4.64 & -3.181 
& 1.12 & 2.89 + 0.900 & 8.62 & 9.62 & 1.08 & -2.340 & 1.12 & 2.89 + 1.000 & 1.66 & 1.85 & 2.07 & -1.684 & 1.12 & 2.89 + 1.250 & 5.18 & 5.77 & 6.45 & -5.488 & 1.11 & 2.94 + 1.500 & 1.07 & 1.19 & 1.32 & 1.716 & 1.09 & 3.16 + 1.750 & 1.79 & 1.98 & 2.20 & 6.830 & 1.05 & 3.87 + 2.000 & ( 2.81 ) & ( 3.10 ) & ( 3.42 ) & ( 1.131 ) & ( 9.78 ) & + 2.500 & ( 8.10 ) & ( 8.93 ) & ( 9.85 ) & ( 2.190 ) & ( 9.78 ) & + 3.000 & ( 1.76 ) & ( 1.94 ) & ( 2.14 ) & ( 2.965 ) & ( 9.78 ) & + 3.500 & ( 3.22 ) & ( 3.55 ) & ( 3.92 ) & ( 3.570 ) & ( 9.78 ) & + 4.000 & ( 5.28 ) & ( 5.83 ) & ( 6.42 ) & ( 4.065 ) & ( 9.78 ) & + 5.000 & ( 1.15 ) & ( 1.26 ) & ( 1.39 ) & ( 4.840 ) & ( 9.78 ) & + 6.000 & ( 2.08 ) & ( 2.29 ) & ( 2.53 ) & ( 5.435 ) & ( 9.78 ) & + 7.000 & ( 3.34 ) & ( 3.68 ) & ( 4.06 ) & ( 5.908 ) & ( 9.78 ) & + 8.000 & ( 4.94 ) & ( 5.44 ) & ( 6.00 ) & ( 6.299 ) & ( 9.78 ) & + 9.000 & ( 6.82 ) & ( 7.52 ) & ( 8.30 ) & ( 6.623 ) & ( 9.78 ) & + 10.000 & ( 8.91 ) & ( 9.83 ) & ( 1.08 ) & ( 6.890 ) & ( 9.78 ) & + a. s. adekola , ph .d. thesis , ohio university ( 2009 ) .j. h. aitken et al . , can .48 ( 1970 ) 1617 .f. ajzenberg - selove , nucl .phys . a 190 ( 1972 ) 1 .f. ajzenberg - selove , nucl .phys . a 523 ( 1991 ) 1 .r. almanza et al .phys . a 248 ( 1975 ) 214 .n. anantaraman et al .phys . a 279 ( 1977 ) 474 . c. angulo et al .phys . a 656 ( 1999 ) 3 .a. anttila , j. keinonen and m. bister , j. phys .g 3 ( 1977 ) 1241 .a. arazi et al . , phys .c 74 ( 2006 ) 025802 .m. arnould and s. goriely , nucl .phys . a 777 ( 2006 ) 157 .g. audi , a. h. wapstra and c. thibault , nucl .phys . a 729 ( 2003 ) 337 .l. axelsson et al . , nucl .phys . a 634 ( 1998 ) 475 .d. w. bardayan et al .c 62 ( 2000 ) 055804 .d. w.bardayan et al . , phys .c 63 ( 2001 ) 065802 .d. w. bardayan et al .c 65 ( 2002 ) 032801(r ) .d. w. bardayan et al .( 2002 ) 262501 .d. w. bardayan et al .c 70 ( 2004 ) 015804 .d. w. bardayan , r. l. kozub and m. s. smith , phys .c 71 ( 2005 ) 018801 .d. w. bardayan et al .c 74 ( 2006 ) 045804 .d. w. bardayan et al .c 76 ( 2007 ) 045803 .g. a. bartholomew et al . , can .33 ( 1955 ) 441 .n. bateman et al .c 63 ( 2001 ) 035803 .a. m. baxter and s. hinds , nucl .a 211 ( 1973 ) 7 .h. w. becker et al ., z. phys . a 305 ( 1982 ) 319 .h. w. becker et al ., z. phys . a 351 ( 1995 ) 453 .w. benenson et al .c 13 ( 1976 ) 1479 . w. benenson et al . , phys .c 15 ( 1977 ) 1187 .u. e. p. berg and k. wiehard , nucl .phys . a 318 ( 1979 ) 453 .r. bloch , t. knellwolf and r.e .pixley , nucl .phys . a 123 ( 1969 ) 129 .j. bommer et al . , nucl .phys . a 251 ( 1975 ) 246 .b. bosnjakovic et al . , nucl .phys . a 110 ( 1968 )l. buchmann , j. m. dauria and p. mccorquodale , astrophys . j. 324 ( 1988 ) 953 .l. buchmann et al . , phys .c 75 ( 2007 ) 012804(r ) .j. a. caggiano et al . , phys .c 64 ( 2001 ) 025802 .j. a. caggiano et al . , phys .c 65 ( 2002 ) 055801 .t. a. cary et al . , phys .c 29 ( 1984 ) 1273 . k. y. chae et al . , phys .c 74 ( 2006 ) 012801(r ) .a. chafa et al . , phys .c 75 ( 2007 ) 035810 .a. e. champagne and m. pitt , nucl .phys . a 457 ( 1986 ) 367 .a. e. champagne et al .phys . a 487 ( 1988 ) 433 .a. e. champagne , p. v. magnus and m. s. smith , nucl .phys . a 512 ( 1990 ) 317 .a. e. champagne , b. a. brown and r. sherr , nucl .phys . a 556 ( 1993 ) 123 .r. chatterjee , a. okolowicz and m. ploszajczak , nucl .phys . a 764 ( 2006 ) 528 . c. chronidou et al .j. a 6 ( 1999 ) 303 .r. r. c. clement et al .92 ( 2004 ) 172502 .a. coc et al .357 ( 2000 ) 561 .h. comisel et al . , phys .c 75 ( 2007 ) 045807 .s. g. 
cooper , j. phys .g 12 ( 1986 ) 869 .r. coszach et al .c 50 ( 1994 ) 1695 .r. g. couch et al .phys . a 175 ( 1971 ) 300 .m. couder et al .c 69 ( 2004 ) 022801 .g. cowan , _ statistical data analysis _ ( oxford university press , new york , 1998 ) .j. cseh et al . , nucl .phys . a 385 ( 1982 ) 43 .a. cunsolo et al . , phys.rev .c 24 ( 1981 ) 476 .j. c. dalouzy et al .102 ( 2009 ) 162503 .j. m. dauria et al . , phys .c 69 ( 2004 ) 065804 .b. davids et al .c 67 ( 2003 ) 065808 .f. de oliveira et al . , nucl .phys . a 587 ( 1996 ) 231 ; f. de oliveira , thesis , universit paris sud , unpublished ( 1995 ) . f. de oliveira et al .c 55 ( 1997 ) 3149 .s. dababneh et al . , phys .c 68 ( 2003 ) 025801 . c. m. deibel , ph .d. thesis , yale university ( 2008 ) .p. descouvemont , phys .c 38 ( 1988 ) 2397 .p. descouvemont , astrophys .j. 543 ( 2000 ) 425 .p. descouvemont et al . , at .data nucl .88 ( 2004 ) 203 . n. de srville et al . , nucl .a791 ( 2007 ) 251 .n. de srville et al . , phys .c 79 ( 2009 ) 015801 .w. r. dixon and r. s. storey , can .49 ( 1971 ) 1714 .w. r. dixon et al .( 1971 ) 1460 .w. r. dixon and r. s. storey , nucl .phys . a 284 ( 1977 ) 97 .p. doornenbal et al . , phys .b 647 ( 2007 ) 237 . j. p. draayer et al . , phys .b 53 ( 1974 ) 250 . h. w. drotleff et al . , astrophys .j. 414 , 735 ( 1993 ) .j. dubois , h. odelius and s. o. berglund , phys .scr . 5 ( 1972 ) 163 .m. dufour and p. descouvemont , nucl .phys . a 672 ( 2000 ) 153 .m. dufour and p. descouvemont , nucl .phys . a 730 ( 2004 ) 316 .m. dufour and p. descouvemont , nucl .phys . a 785 ( 2007 ) 381 .m. endt , at .data nucl .data tab . 19 ( 1977 ) 23 .m. endt , nucl .phys . a 521 ( 1990 ) 1 .m. endt , nucl .phys . a 633 ( 1998 ) 1 .p. m. endt and j. g. l. booten , nucl .phys . a 555 ( 1993 ) 499 .p. m. endt and c. van der leun , nucl .phys . a 310 ( 1978 ) 1 .p. m. endt and c. rolfs , nucl .phys . a 467 ( 1987 ) 261 .s. engel et al . , nucl .instr . meth . a 553 ( 2005 ) 491 .t. eronen et al . , phys .c 79 ( 2009 ) 032802(r ) .s. falahat , internal report , university of notre dame ( 2006 ) ( unpubished ) .a. j. ferguson and h. e. gove , can .37 ( 1959 ) 660 . l. k. fifield et al . , nucl .phys . a 309 ( 1978 ) 77 ; nucl .phys . a 322 ( 1979 ) 1 . c. l. fink and j. p. schiffer , nucl .phys . a 225 ( 1974 ) 93 .r. b. firestone , nucl .data sheets 103 ( 2004 ) 269 .r. b. firestone , nucl .data sheets 108 ( 2007 ) 2319 .j. l. fisker et al ., astrophys .j. 650 ( 2006 ) 332 .a. formicola et al . , j. phys .g 35 ( 2008 ) 014013 .h. t. fortune , r. sherr and b. a. brown , phys .c 61 ( 2000 ) 057303 .h. t. fortune and r. sherr , phys .c 73 ( 2006 ) 024302 . c. fox et al .c 71 ( 2005 ) 055801 .j. b. french , s. iwao and e. vogt , phys .122 ( 1961 ) 1248 .h. o. u. fynbo et al . , nucl .phys . a 677 ( 2000 ) 38 .a. gade et al . , phys .c 77 ( 2008 ) 044306 .m. gai et al . , phys .c 36 ( 1987 ) 1256 .u. giesen et al . , nucl .phys . a 561 , 95 ( 1993 ) .u. giesen et al . , nucl .phys . a 567 ( 1994 ) 146 .u. giesen , ph .d. thesis , university of notre dame ( 1994 ) .j. grres et al .phys . a 408 ( 1983 ) 372 .j. grres et al .phys . a 517 ( 1990 ) 329 .j. grres et al .phys . a 548 ( 1992 ) 414 .j. grres et al .c 62 , 055801 ( 2000 ) .t. gomi et al . , j. phys .g 31 ( 2005 ) s1517 .s. goriely , s. hilaire and a. j. koning , astron .astrophys . 487( 2008 ) 767 , and _ private communication_. s. graff et al .phys . a 510 ( 1990 ) 346 .m. b. greenfield et al . , nucl .phys . a 524 ( 1991 ) 228 .b. guo et al . , phys .c 73 ( 2006 ) 048801 .k. 
i. hahn et al . , phys .c 54 ( 1996 ) 1999 .s. e. hale et al .c 65 ( 2001 ) 015801 .s. e. hale et al .c 70 ( 2004 ) 045802 .v. y. hansper et al . , phys .c 61 ( 2000 ) 028801 .s. harissopulos et al . , eur .j. a 9 ( 2000 ) 479 .v. harms , k. l. kratz and m. wiescher , phys .c 43 , 2849 ( 1991 ) .j. j. he et al .c 76 ( 2007 ) 055802 .h. herndl et al . , phys .c 52 ( 1995 ) 1078 .h. herndl et al . , phys .c 58 ( 1998 ) 1798 .g. j. highland and t. t. thwaites , nucl .phys . a 109 ( 1968 ) 163 .r. d. hoffman et al ., astrophys .j. 521 ( 1999 ) 735 .a. j. howard et al .phys . a 152 ( 1970 ) 317 . c. iliadis ,diplom thesis , universitt mnster ( 1989 ) . c. iliadis et al .phys . a 512 ( 1990 ) 509 . c. iliadis et al .phys . a 559 ( 1993 ) 83 . c. iliadis et al .phys . a 571 ( 1994 ) 132 . c. iliadis et al .c 53 ( 1996 ) 475 . c. iliadis , nucl .a 618 ( 1997 ) 166 . c. iliadis et al ., astrophys .j. 524 ( 1999 ) 434 . c. iliadis et al ., astrophys .j. suppl . 134( 2001 ) 151 . c. iliadis , _ nuclear physics of stars _( wiley - vch , weinheim , 2007 ) . c. iliadis et al .c 77 ( 2008 ) 045802 .m. jaeger et al .87 , 202501 ( 2001 ) .m. jaeger , ph.d .thesis ( universitt stuttgart , 2001 ) .d. g. jenkins et al .92 ( 2004 ) 031101 .d. g. jenkins et al .c 73 ( 2006 ) 065802 .p. m. johnson , m. a. meyer and d. reitmann , nucl .phys . a 218 ( 1974 ) 333 .j. kalifa et al . , phys .c 17 ( 1978 ) 1961 .r. kanungo et al .c 74 ( 2006 ) 045803 .j. keinonen , m. riihonen and a. anttila , phys .c 15 ( 1977 ) 579 .w. e. kieser et al . , nucl .phys . a 327 ( 1979 ) 172 .w. e. kieser , r. e. azuma and k. p. jackson , nucl .phys . a 331 ( 1979 ) 155 .p. e. koehler , phys .c 66 , 055805 ( 2002 ) .r. l. kozub et al . , phys .c 71 ( 2005 ) 032801 .h. m. kuan , c. j. umbarger and d. g. shirk , nucl .phys . a 160 ( 1971 ) 211 ; erratum : nucl .phys . a 196 ( 1972 ) 634 .h. m. kuan and d. g. shirk , phys .c 13 ( 1976 ) 883 . s. kubono et al . , nucl .phys . a 537 ( 1992 ) 153 .s. kubono , t. kajino and s. kato , nucl .phys . a 588 ( 1995 ) 521 .m. la cognata et al . , phys .( 2008 ) 152501 .k. h. langanke et al . , astrophys .j. 301 ( 1986 ) 629 .t. k. li et al . , phys .c 13 ( 1976 ) 55 .r. longland et al ., in print ( 2009 ) . h. lorentz - wirzba et al .phys . a 313 ( 1979 ) 346 .g. lotay et al . , phys .c 77 ( 2008 ) 042802(r ) .g. lotay et al . , phys .102 ( 2009 ) 162502 .m. lugaro et al . , astrophys .j. 615 ( 2004 ) 934 .b. lyons et al .phys . a 130 ( 1969 ) 25 .z. ma et al . , phys .c 76 ( 2007 ) 015803 .j. w. maas et al . , nucl .phys . a 301 ( 1978 ) 213 .j. d. macarthur et al . , phys .c 22 ( 1980 ) 356 . h. mackh et al . , nucl .phys . a 202 ( 1973 ) 497 .p. v. magnus et al .phys . a 470 ( 1987 ) 206 .p. v. magnus et al .phys . a 506 ( 1990 ) 332 .h. b. mak et al . , nucl .phys . a 304 ( 1978 ) 210 .z. q. mao , h. t. fortune and a. g. lacaze , phys .74 ( 1995 ) 3760 .z. q. mao , h. t. fortune and a. g. lacaze , phys .c 53 ( 1996 ) 1197 .a. mayer , ph .d. thesis ( universitt stuttgart , 2001 ) .r. middleton et al .( 1968 ) 1398 .b. h. moazen et al . , phys rev .c 75 ( 2007 ) 065801 .p. mohr , phys .c 72 ( 2005 ) 035803 .j. y. moon et al .phys . a 758 ( 2005 ) 158c .a. m. mukhamedzhanov et al .c 73 ( 2006 ) 035806 .m. mukherjee et al . , phys .lett . , 93 ( 2004 ) 150801 .m. mukherjee et al . , eur .j. a 35 ( 2008 ) 37 .g. murillo et al . , nucl .phys . a 318 ( 1979 ) 352 .a. st . j. murphy et al .c 73 ( 2006 ) 034320 .a. st . j. murphy et al .c 79 ( 2009 ) 058801 .s. mythili et al . 
, phys .c 77 ( 2008 ) 035803 . c. d. nesaraja et al .c 75 ( 2007 ) 055809 .j. r. newton et al .c 75 ( 2007 ) 055808 .j. r. newton et al .c 75 ( 2007 ) 045801 .j. r. newton , r. longland and c. iliadis , phys .c 78 ( 2008 ) 025805 .j. r. newton et al .c , submitted ( 2009 ) .m. niecke et al . , nucl .phys . a 289 ( 1977 ) 408 .h. orihara , g. rudolf and ph .gorodetzky , nucl .phys . a 203 ( 1973 ) 78 .s. h. park et al .c 59 ( 1999 ) 1182 .y. parpottas et al . , phys .c 70 ( 2004 ) 065805 ; and phys .c 73 ( 2006 ) 049907(e ) .j. r. powers et al .c 4 ( 1971 ) 2030 .d. c. powell et al .phys . a 660 ( 1999 ) 349 .w. h. press , s. a. teukolsky , w. t. vetterling and b. p. flannery , numerical recipes ( cambridge university press , cambridge , 1992 ) .d. m. pringle and w. j. vermeer , nucl .phys . a 499 ( 1989 ) 117 .j. j. ramirez , r. a. blue and h. r. weller , phys .c 5 ( 1972 ) 17 .t. rauscher and f .- k .thielemann , at .data nucl .75 ( 2000 ) 1 .h. rpke , j. brenneisen and m. lickert , eur .j. a 14 ( 2002 ) 159 .d. w. o. rogers , j. h. aitken and a. e. litherland , can .50 ( 1972 ) 268 .d. w. o. rogers , r. p. beukens and w. t. diamond , can .50 ( 1972 ) 2428 .d. w. o. rogers et al ., can . j. phys . 54( 1976 ) 938 . c. rolfs et al .a 217 ( 1973 ) 29 . c. rolfs , i. berka and r. e. azuma , nucl .a 199 ( 1973 ) 306 . c. rolfs, a. m. charlesworth and r. e. azuma , nucl .a 199 ( 1973 ) 257 . c. rolfs et al .phys . a 241 ( 1975 ) 460 .j. g. ross et al . , phys .c 52 ( 1995 ) 1681 . c. rowland et al .c 65 ( 2002 ) 064609 . c. rowland et al . ,j. 615 ( 2004 ) l37 . c. ruiz et al .c 71 ( 2005 ) 025802 . c. ruiz et al .96 ( 2006 ) 252501 .p. schmalbrock et al . , nucl .phys . a 398 ( 1983 ) 279 .s. schmidt et al . , nucl .phys . a 591 ( 1995 ) 227 .h. schatz et al . , phys .79 ( 1997 ) 3845 .h. schatz et al . , phys .c 72 ( 2005 ) 065804 .d. l. sellin , h. w. newson and e. g. bilpuch , ann . of phys .51 ( 1969 ) 461 . j. c. sens , a. pape and r. armbruster , nucl .phys . a 199 ( 1973 ) 241 .s. seuthe et al . , nucl .phys . a 514 ( 1990 ) 471 .d. seweryniak et al .94 ( 2005 ) 032501 .d. seweryniak et al .c 75 ( 2007 ) 062801(r ) .h. smotrich et al . , phys .122 ( 1961 ) 232 .p. j. m. smulders , physica 31 ( 1965 ) 973 .p. j. m. smulders and p. m. endt , physica 28 ( 1962 ) 1093 .e. stech , ph .d. thesis , university of notre dame ( 2004 ) ( unpubished ) .m. a. stephens , j. amer . stat .69 ( 1974 ) 730 . f. stegmller et al . , nucl .phys . a 601 ( 1996 ) 168 .e. strandberg et al . , phys .c 77 ( 2008 ) 055801 .t. j. symons et al . , j. phys .g 4 ( 1978 ) 411 .w. p. tan et al .c 72 ( 2005 ) 041302 .w. p. tan et al .98 ( 2007 ) 242503 .n. tanner , phys .114 ( 1959 ) 1060 .t. tanabe et al . , phys .c 6 ( 1981 ) 2556 .t. tanabe et al . , nucl .phys . a 399 ( 1983 ) 241 .a. terakawa et al . , phys .c 48 ( 1993 ) 2775 .w. j. thompson and c. iliadis , nucl .phys . a 647 ( 1999 ) 259 .d. r. tilley et al . , nucl .phys . a 564 ( 1993 ) 1 .d. r. tilley et al . , nucl .phys . a , 595 ( 1995 ) 1 ; revised 21 march 2007 .d. r. tilley et al . , nucl .phys . a 636 ( 1998 ) 249 .r. timmermann et al .phys . a 477 ( 1988 ) 105 .i. tomandl et al . , phys .c 69 ( 2004 ) 014312 .h. p. trautvetter , nucl .phys . a 243 ( 1975 ) 37 .h. p. trautvetter et al . , nucl .phys . a 297 ( 1978 ) 489 .w. trinder et al .b 459 ( 1999 ) 67 . c. ugalde et al . , phys . rev .c 76 ( 2007 ) 025802 .b. y. underwood et al . , nucl .phys . a 225 ( 1974 ) 253 .s. utku et al . , phys .c 57 ( 1998 ) 2731 .g. vancraeynest et al . 
, phys .c 57 ( 1998 ) 2711 .d. w. visser et al . , phys .c 69 , ( 2004 ) 048801 .d. w. visser et al . , phys .c 76 ( 2007 ) 065803 .d. w. visser et al . , phys .c 78 ( 2008 ) 028802 .r. b. vogelaar , ph.d .thesis ( caltech , 1989 ) .r. b. vogelaar et al . , phys .c 42 ( 1990 ) 753 .r. b. vogelaar et al . , phys .c 53 ( 1996 ) 1945 .a. h. wapstra , g. audi , and c. thibault .phys . a 729 ( 2003 ) 129 .j. a .. weinman et al .133 ( 1964 ) b590 .m. wiescher et al .phys . a 349 ( 1980 ) 165 .s. wilmes et al .c 52 ( 1995 ) r2823 .s. wilmes et al .c 66 ( 2002 ) 065802 . k. wolke et al . , z. phys . a 334 , 491 ( 1989 ) .c. wrede et al . , phys .c 76 ( 2007 ) 052802(r ) . c. wrede et al .c 79 ( 2009 ) 045808 . c. wrede et al .c 79 ( 2009 ) 045803 . c. wrede , phys .c 79 ( 2009 ) 035803 .k. yagi , j. phys .japan 17 ( 1962 ) 604 . c. yazidjian et al .c 76 ( 2007 ) 024308 .h. yokota et al . , nucl .phys . a 383 ( 1982 ) 298 .k. yoneda et al . , phys .c 74 ( 2006 ) 021303(r ) .j. f. ziegler and j. p. biersack , program srim-2008 ( 2008 ) , unpublished .
numerical values of charged-particle thermonuclear reaction rates for nuclei in the a = 14 to 40 region are tabulated. the results are obtained using a method, based on monte carlo techniques, that has been described in the preceding paper of this series (paper i). we present a low rate, median rate and high rate, which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. the meaning of these quantities is in general different from that of the commonly reported, but statistically meaningless, expressions ``lower limit'', ``nominal value'' and ``upper limit'' of the total reaction rate. in addition, we approximate the monte carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the corresponding lognormal parameters at each temperature. we also provide a quantitative measure (anderson-darling test statistic) for the reliability of the lognormal approximation. the user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. for each reaction, the monte carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. our new reaction rates are appropriate for _ bare nuclei in the laboratory _. the nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this series (paper iii). in the fourth paper of this series (paper iv) we compare our new reaction rates to previous results.
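as a rough illustration of the summary statistics described above, the sketch below computes the 0.16, 0.50 and 0.84 quantiles of a set of monte carlo rate samples, fits a lognormal approximation by matching the mean and standard deviation of the log-rates, and evaluates an anderson-darling statistic for the fit. the sample array is synthetic and the fitting convention is an assumption on our part; the tabulated values in the paper come from its own monte carlo procedure.

```python
import numpy as np
from scipy import stats

def summarize_rate_samples(rates):
    """Low/median/high rates (0.16, 0.50, 0.84 quantiles), lognormal
    parameters of ln(rate), and an Anderson-Darling statistic measuring
    how well a lognormal describes the samples."""
    rates = np.asarray(rates, dtype=float)
    low, median, high = np.quantile(rates, [0.16, 0.50, 0.84])
    log_rates = np.log(rates)
    mu, sigma = log_rates.mean(), log_rates.std(ddof=1)
    # testing ln(rate) against a normal law is equivalent to testing the
    # rate itself against a lognormal law
    ad_stat = stats.anderson(log_rates, dist="norm").statistic
    return {"low": low, "median": median, "high": high,
            "log-mean": mu, "log-spread": sigma, "a-d": ad_stat}

# synthetic samples standing in for one reaction at one temperature
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=-46.0, sigma=0.35, size=10_000)
print(summarize_rate_samples(samples))
```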
large graphs and networks are natural mathematical models of interacting objects such as computers on the internet or articles in citation networks . numerous examples can be found in the biomedical context from metabolic pathways and gene regulatory networks to neural networks . the present work is dedicated to one type of such biomedical network , namely epidemic networks : such a network models the transmission of a directly transmitted infectious disease by recording individuals and their contacts , other individuals to whom they can pass infection . understandingthe dynamic of the transmission of diseases on real world networks can lead to major improvements in public health by enabling effective disease control thanks to better information about risky behavior , targeted vaccination campaigns , etc .while transmissions can be studied on artificial networks , e.g. , some specific types of random networks , such networks fail to exhibit all the characteristics observed in real social networks ( see e.g. ) .it is therefore important to get access to and to analyze large and complex real world epidemic networks .as pointed out in , the actual definition of the social network on which the propagation takes place is difficult , especially for airborne pathogens , as the probability of disease transmission depends strongly on the type of interaction between persons .this explains partially why sexually transmitted diseases ( std ) epidemic networks have been studied more frequently than other networks .we study in this paper a large hiv epidemic network that has some unique characteristics : it records almost 5400 hiv / aids cases in cuba from 1986 to 2004 ; roughly 2400 persons fall into a single connected component of the infection network .std networks studied in the literature are generally smaller and/or do not exhibit such a large connected component and/or contain a very small number of infected persons . for instance, the manitoba study ( in canada , ) covers 4544 individuals with some std , but the largest connected component covers only 82 persons . the older colorado springs study covers around 2200 persons among which 965 falls in connected component ( the full network is larger but mixes sexual contacts and social ones ; additionally , the sexual networks contains only a very small number of hiv positive persons ) .while the large size and coverage of the studied network is promising , it has also a main negative consequence : manual analysis and direct visual exploration , as done in e.g. , is not possible .we propose therefore to analyze the network with state - of - the - art graph visualization methods .we first describe the epidemic network in section [ sec : cuban - hiva - datab ] and give an example of the limited possibilities of macroscopic analysis on this dataset .then section [ sec : visual - mining ] recalls briefly the visual mining technique introduced in and shows how it leads to the discovery of two non obvious sub - networks with distinctive features .the present work studies an anonymized national dataset which lists 5389 cuban residents with hiv / aids , detected between 1986 and 2004 .each patient is described by several variables including gender , sexual orientation , age at hiv / aids detection , etc .( see for details . 
) the cuban hiv / aids program produces this global monitoring using several sources that range from systematic testing of pregnant women and all blood donations to general practitioner testing recommendations .in addition , the program conducts an extended form of infection tracing that leads to the epidemic network studied in this work .indeed , each new infected patient is interviewed by health workers and invited to list his / her sexual partners from the last two years .the primary use of this approach is to discover potentially infected persons and to offer them hiv testing .an indirect result is the construction of a network of infected patients .sexual partnerships are indeed recorded in the database for all infected persons .additionally , a probable infection date and a transmission direction are inferred from other medical information , leading to a partially oriented infection network . while this methodology is not contact tracing _stricto sensu _ as non infected patients are not included in the database ( contrarily to e.g. ) , the program records the total number of sexual partners declared for the two years period as well as a few other details , leading to an extended form of infection tracing .( see for differences between contact and infection tracing . )the 5389 patients are linked by 4073 declared sexual relations among which 2287 are oriented by transmission direction .a significant fraction of the patients ( 44 % ) belong to a giant connected component with 2386 members .the rest of the patients are either isolated ( 1627 cases ) or members of very small components ( the second largest connected component contains only 17 members ) .as the sexual behavior has a strong influence on hiv transmission , it seems important to study the relations between the network structure and sexual orientation of the patients . in the database ,female hiv / aids patients are all considered to be heterosexual as almost no hiv transmission between female has been confirmed .male patients are categorized into heterosexual man and `` man having sex with men '' ( msm ) ; the latter being men with at least one male sexual partner identified during their interview . the distributions of genders and of sexual orientations is given in table [ tab : gender : so ] : the giant component contains proportionally more msm than the full population ; this seems logical because of the higher probability of hiv transmission between men ..gender and sexual orientation distributions in the whole network and in the giant component [ cols= " > , > , > , > , > " , ]-1.2em this analysis shows that the two groups made of atypical clusters are far from each other compared to their internal distances .this is confirmed by the detection date analysis displayed on figure [ fig : groups : year ] .it appears that the epidemic in the giant component has two separated components .one mostly male homosexual component tends to dominate the recent cases ( note that even typical clusters contain at least 57 % of msm ) , while a mixed component with a large percentage of female patients was dominating the early epidemic , but tends to diminish recently .it should also be noted that this mix component is dominated by the growth of the homosexual component , but seems to decay only slightly in absolute terms . 
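the component and cluster statistics discussed in this section can be reproduced on any contact data set with standard graph tooling; the sketch below is a minimal illustration using networkx on a tiny made-up partnership graph (the anonymized cuban data are not reproduced here). it extracts the giant connected component, tabulates its composition by declared sexual orientation, and runs a greedy modularity clustering as a stand-in for the multi-level modularity algorithm used for the visual mining.

```python
import networkx as nx
from networkx.algorithms import community

# made-up stand-in for the partnership data: nodes carry a 'group'
# attribute ('f', 'm-hetero' or 'msm'), edges are declared partnerships
G = nx.Graph()
G.add_nodes_from([
    (1, {"group": "msm"}), (2, {"group": "msm"}), (3, {"group": "f"}),
    (4, {"group": "m-hetero"}), (5, {"group": "f"}), (6, {"group": "msm"}),
    (7, {"group": "msm"}),
])
G.add_edges_from([(1, 2), (2, 3), (3, 4), (2, 7), (5, 6)])

# giant connected component and its share of the whole network
giant = G.subgraph(max(nx.connected_components(G), key=len))
print(f"giant component: {giant.number_of_nodes()} of {G.number_of_nodes()} patients")

def composition(graph):
    total = graph.number_of_nodes()
    groups = [data["group"] for _, data in graph.nodes(data=True)]
    return {g: groups.count(g) / total for g in set(groups)}

print("whole network  :", composition(G))
print("giant component:", composition(giant))

# modularity-maximizing clustering of the giant component
for i, cluster in enumerate(community.greedy_modularity_communities(giant)):
    msm_share = sum(giant.nodes[n]["group"] == "msm" for n in cluster) / len(cluster)
    print(f"cluster {i}: size {len(cluster)}, msm share {msm_share:.2f}")
```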
in other words ,the reduction should be seen as an inability to control the growth homosexual epidemic rather than as a success in eradicating the heterosexual epidemic .the proposed visual mining method for graphs has been shown to provide valuable insights on the epidemic network .it is based on links between modularity and visualization and leverages recent computationally efficient modularity maximizing methods .future works include the integration of the proposed methods in graph mining tools such as and its validation on other aspects of epidemic networks analysis .clmenon , s. , de arazoza , h. , rossi , f. , tran , v.c . : hierarchical clustering for graph visualization . in : proceedings of xviiitheuropean symposium on artificial neural networks ( esann 2011 ) .bruges , belgique ( april 2011 ) , to be published kwakwa , h.a . , ghobrial , m.w . : female - to - female transmission of human immunodeficiency virus . clinical infectious diseases : an official publication of the infectious diseases society of america 36(3 ) ( february 2003 ) noack , a. , rotta , r. : multi - level algorithms for modularity clustering . in : sea 09 : proceedings of the 8th international symposium on experimental algorithms .. 257268 .springer - verlag , berlin , heidelberg ( 2009 ) rothenberg , r.b . , woodhouse , d.e ., potterat , j.j . , muth , s.q . ,darrow , w.w . ,klovdahl , a.s . : social networks in disease transmission : the colorado springs study . in : needle , r.h . , coyle , s.l ., genser , s.g ., trotter ii , r.t .social networks , drug abuse , and hiv transmission , pp .318 . no .151 in research monographs , national institute on drug abuse ( 1995 ) varghese , b. , maher , j. , peterman , t. , branson , b. , steketee , r. : reducing the risk of sexual hiv transmission : quantifying the per - act risk for hiv on the basis of choice of partner , sex act , and condom use .sexually transmitted diseases 29(1 ) , 3843 ( january 2002 )
we show how an interactive graph visualization method based on maximal modularity clustering can be used to explore a large epidemic network. the visual representation is used to display statistical test results that expose the relations between the propagation of hiv in a sexual contact network and the sexual orientation of the patients.
in this paper we study one - dimensional traveling wave solutions for the models in the form here is a constant ; and are functions whose properties will be specified later ; , ; in the following we put without loss of generality .the model is known as a particular case of the classical keller - segel models to describe chemotaxis , the movement of a population to a chemical signal ( see , e.g. , ) . in system denotes the constant diffusion coefficient ; is the chemotactic sensitivity , which can be either positive or negative ; describes production and degradation of the chemical signal ; it is customary to include also in the second equation of the diffusion term of the form which would describe diffusion of the chemical signal , but , we adopt hereafter , that , to a first approximation , can be taken zero . for biological interpretation of the solutions of werefer to the cited literature , references therein , and to section 3.2 ; macroscopic derivation of equation can be found in , e.g. , .the chemotactic models are the partial differential equations ( pdes ) with cross - diffusion terms ; these systems possess special mathematical peculiarities .such systems were used , e.g. , to model the movement of traveling bands of _ e. coli _ , amoeba clustering , insect invasion in a forest , species migration , tumor encapsulation and tissue invasion .many different spatially non - homogeneous patterns can be observed in chemotactic models , for a survey see , e.g. , and references therein .one such pattern is that of traveling waves which spread through the population .a _ traveling wave _ is a bounded solution of system having the form where and is the speed of wave propagation along -axis ; and are the wave profiles ( -profile and -profile respectively ) of solution ( ) .substituting these traveling wave forms into we obtain where primes denote differentiation with respect to . on integrating the first equation in the last system we have_ the wave system _ of : here is the constant of integration that depends on the boundary conditions for and . in various applicationsit is usually possible to determine this constant prior to analysis of the wave system .for instance , considering system as a model of chemotactic movement , where the variable plays the role of the population density and is an attractant , one usually supposes that should be finite , which implies that ( e.g. , ) . on the contrary , in our analysiswe do not specify the boundary conditions for and consider as a new parameter .each traveling wave solution of has its counterpart as a bounded orbit of for some ; in our study we elucidate the following question : for which there exist traveling wave solutions of and describe all such solutions .we also note that the case of does not exclude a model with infinite mass of if the traveling wave solution is a front ; moreover , the solutions corresponding to finite mass can be only impulses ( see below for the terminology ) .it is worth noting that due to specific form of system with cross - diffusion terms the wave system has the same dimension as the initial system , which significantly simplifies the analysis .this is one of peculiarities which distinguishes cross - diffusion pdes from those with only diffusion terms ( see also ) .we shall study possible wave profiles of and their bifurcations with changes of the parameters and by the methods of phase plane analysis and bifurcation theory . 
in this way, the problem of describing all traveling wave solutions of system is reduced to the analysis of phase curves and bifurcations of solutions of the wave system without a priory restrictions on boundary conditions for. there exists a known correspondence between the bounded traveling wave solutions of the spatial model and the orbits of the wave system ( e.g. , ) that we only list for the cases most important for our exposition .[ pr1] _ i. _ a wave front in _ _ ( _ _ or _ _ ) _ _ component corresponds to a heteroclinic orbit that connects singular points of with different _ _ ( _ _ or _ _ ) _ _ coordinates _ _ ( _ _ fig.[fig:1]a _ ) _ ; _ ii . _ a wave impulse in _ _ ( _ _ or _ _ )_ _ component corresponds to a heteroclinic orbit that connects singular points with identical _ _ ( _ _ or _ _ ) _ _ coordinates _ _ ( _ _ fig.[fig:1]b _ ) _ or to a homoclinic curve of a singular point of _ _ ( _ _ fig.[fig:1]c_)_. a front - front solution ; a front - impulse solution ; an impulse - impulse solution , title="fig:",scaledwidth=80.0% ] + hereinafter we shall adopt the following terminology : we define the type of a traveling wave solution of with a two word definition ; e.g. , a front - impulse solution means that -profile is a front , and -profile is an impulse ( the order of the terms is important ) . for system several results on the existence of one - dimensional traveling wavesare known ; see , e.g. , . in most of these referencesthe analysis is conducted using a particular model which is given in an explicit form .quite a different approach was used in where the authors consider more general model than and do not restrict themselves to analyzing a model with specific functions ; instead their aim was to understand how these functions have to be related to each other in order to result in traveling wave patterns for and .we consider a general class of models as well , and our task is to infer possible kinds of wave solutions under given restrictions on and .our main goal is as follows : we impose some constrains on the functions and study possible traveling wave solutions with increasing complexity of .special attention is paid to the families of traveling wave solutions such that the corresponding wave system possesses an infinite number of bounded orbits .we present the simplest possible models in the form that display traveling wave solutions of a specific kind . the main class of the models we deal with is defined in the following way. we shall call model the separable model if where are smooth functions for ; is smooth ; is a rational function : for ; here are real constant .the separable model will be called the reduced separable model if holds and we organize the paper as follows . 
in section [ s2 ]we present full classification of traveling wave solutions of the reduced separable models ; we also specify necessary and sufficient conditions for these models to possess specific kinds of traveling waves .section [ s3 ] is devoted to the analysis of the separable model ; we show which types of traveling waves can be expected in addition to the types described in section [ s2 ] ; we also analyze a generalized keller - segel model , which does not belong to the class of the separable models but display a number of similar properties together with essentially new ones .section [ s4 ] contains discussion and conclusions ; finally , the details of numerical computations are presented in appendix .in this section we present the full classification of possible traveling wave solutions of system that satisfies . the reason we start with the reduced separable models is twofold .first , there are models in the literature that have this particular form ( see , e.g. , ) ; second , the special form of the wave system allows the exhaustive investigation of traveling wave solutions of .the wave system of the reduced separable model reads where the first equation is independent of .we start with the case of general position .we assume that the following conditions of non - degeneracy are fulfilled ( later we will relax some of these assumptions ) : traveling wave solutions of correspond to bounded orbits of different from singular points .due to the structure of system it is impossible to have a homoclinic orbit or a limit cycle in the phase plane of , which yields that it is necessary to have at least two singular points of and a heteroclinic orbit connecting them ( see fig .[ fig:1]a , b ) to prove existence of traveling wave solutions of satisfying . in general , smoothfunctions can be written in the form the following proposition holds for neighboring roots of and .[ pr2] _ i. _ let the wave system satisfying - have singular points and , where and are neighboring roots of then one of these points is a saddle and the other one is a node . __ let the wave system satisfying - have singular points and , where and are neighboring roots of , then these points can both be saddles , nodes or one is a node and another is a saddle .let be a singular point of .the eigenvalues of this point and are real numbers ( henceforth we use prime to denote differentiation when it is clear with respect to which variable it is carried out ) .this implies that singular point of system can not be a focus or center .the claim is a simple conjecture of condition .let us consider two equilibrium points and .the eigenvalues and have opposite signs due to .consider another pair of eigenvalues and and assume that holds .if the number of roots of located between and is even ( or zero ) then the signs of these eigenvalues are the same .this implies that one of the equilibrium points is a saddle whereas the other one is a node . if the number of roots of located between and is odd than the signs of these eigenvalues are opposite which implies that both equilibriums are saddles or nodes ( one node is attracting and another is repelling ) .note , that in case ii of proposition [ pr2 ] in order to guarantee that both singular points are nodes one should have and . due to continuity arguments there exists a family of orbits of which tend to one of the nodes when and to the other node when . 
taking into account that straight lines and consist of orbits of system we obtain that the phase plane of is divided into bounded rectangular domains whose boundaries are and . we shall call these domains _ the orbit cells_. due to proposition [ pr2 ] it immediately follows that an orbit cell can be one of the following two types ( up to -degree rotation ) that are presented in fig .[ fig:2 ] .the behavior of the orbits inside a cell is completely described by the types of the singular points at the corners of the cell . moreover, any orbit inside a cell corresponds to a bounded traveling wave solution of system . summarizing the previous analysis we obtain the following theorem .[ th1 ] the system satisfying and - possesses traveling wave solutions _ i. _ of a front - front type _ _ ( _ _ fig .[ fig:1]a _ _ ) _ _ if and only if the wave system has four singular points , which are the vertexes of a bounded orbit cell and every two neighboring vertexes are a node and a saddle _ _ ( _ _ fig .[ fig:2]a _ _ ) _ _ ; _ ii . _ of a front - impulse type _ _ ( _ _ fig .[ fig:1]b _ _ ) _ _ if and only if the wave system has two neighboring nodes and , _ _ ( _ _ see fig .[ fig:2]b__)__. * remarks to theorem [ th1 ] .* \1 . in both casesthe orbits of system that correspond to traveling wave solutions of are dense in the corresponding orbit cell .system has a traveling wave solution which is a front in -component and space - homogeneous in -component if and only if the wave system has neighboring saddle and node with identical -coordinate , see singular points and in fig .[ fig:2]b .+ it is possible to write down asymptotics for and profiles ( these asymptotics can be used , e.g. , as initial conditions for numerical solutions of ) .we present these asymptotics only in the simplest case .let us assume that the wave system has the form where are constant .an explicit solution of is where are arbitrary constants .we emphasize here that even with fixed and there is a two - parameter family of wave profiles .let and .we consider non - trivial profiles ( ) .it is straightforward to show that and , hence , -profile is a front ; , and -profile is an impulse . if or then -profile remains the same and -profile becomes a front .formulas can be used as a first approximation for wave profiles of system even in the case where and do not change the sign when .it is worth noting that if then i.e. , the boundary of the profile depends on an arbitrary constant ; we deal with such solutions in the next section .considering and as bifurcation parameters we can relax some of non - degeneracy conditions - .first we note that the right - hand side of the second equation of system does not depend in a non - trivial way on and and we will not consider the case when is violated . in general , by varying the bifurcation parameters we can only achieve that either or do not hold .we shall show that in the latter case new traveling wave solutions can appear in system satisfying .first let us assume that does not hold , i.e. , function has a root of multiplicity for some , and the wave system has a complicated singular point .the system can be written in the form where .then the singular point of system is either a saddle , a node , or a saddle - node .for the first two types of critical points , the structure of the phase plane of the wave system was completely described above . 
in case of a saddle - node the line divides the plane such that in one half - plane the singular point is topologically equivalent to a node , and in the other one it is topologically equivalent to a saddle ; due to the fact that consists of solutions of this type of singular points does not yield qualitatively new bounded solutions of .therefore , violation of does not result in new types of wave solutions of satisfying .appearance or disappearance of -fronts correspond to appearance or disappearance of the roots of the function which can occur with variation of the parameters and .the simplest case of the appearance of two or three roots corresponds to the fold or cusp bifurcations respectively in the first equation of .the simple conditions for the fold and cusp bifurcations show that , under variation of the boundary conditions ( parameter ) and the wave speed ( parameter ) , appearance of traveling wave solutions of is possible .now we assume that does not hold , i.e. , the functions and have a coinciding root . in this case systemhas a line of non - isolated singularities in the phase plane .each point of the form is a non - isolated singular point ; all the points on the line are either simultaneously attracting or repelling in a transversal direction to this line .if we assume that there exists a node of such that is a root of , , and there are no other roots of between and then , due to continuity arguments , there exists a family of bounded orbits of ( fig .[ fig : fb ] ) . to describe the traveling wave solutions corresponding to this familywe define we shall say that model possesses a family of free - boundary wave fronts in -component if a _ _ ) _ _ every when ; b _ _ ) _ _ there exists an interval such that for any it is possible to find -profile with the property when .is a root of both and .the wave solutions corresponding to bounded orbits of the wave system form a free - boundary family , scaledwidth=40.0% ] summarizing we obtain [ pr3 ] the system satisfying has a traveling wave solution such that -profile is a front and -profile is a free boundary front if and only if condition is violated and there is a node of system such that there are no other singular points of between this node and the line of non - isolated singular points .the primary importance of such traveling wave solutions comes from the fact that for an arbitrary boundary condition ( from a particular interval ) for system we can find a wave solution whose -profile is a front .note that violation of and simultaneous appearance of a free - boundary component naturally occurs when the roots of are shifted under variation of and .it follows from that bifurcation of -profile occurs at such that is a simple root of . herewe present a simple example to illustrate the theoretical analysis from the previous sections .we consider model with where are non - negative parameters .the particular form of the functions obviously satisfies and .the wave system reads the system can have up to six singular points .for instance , if we fix the parameter values then system possesses six singular points ; therefore , there are two orbit cells ensuring existence of traveling wave solutions of system . a phase portrait of is shown in fig .[ fig:3 ] . from fig .[ fig:3 ] it can be seen that , with the given parameter values , there exist two qualitatively different traveling wave solutions of the initial cross - diffusion system which correspond to two cases of theorem [ th1 ] . 
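the phase portraits discussed above can be reproduced numerically by integrating the wave system from a grid of initial conditions. the sketch below does this for an illustrative system of the reduced separable type (first equation depending on the first variable only), whose right-hand sides are invented so that it has six singular points and two orbit cells; they are not the specific functions and parameter values of the example in the text, which are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative reduced-separable wave system: u' depends on u only, and the
# singular points are the six combinations of u in {0, 1, 2} and v in {0, 1},
# which gives two rectangular orbit cells
def rhs(t, y):
    u, v = y
    return [u * (1.0 - u) * (2.0 - u),
            v * (1.0 - v) * (0.5 + u)]

orbits = []
for u0 in np.linspace(0.1, 1.9, 7):
    for v0 in np.linspace(0.1, 0.9, 5):
        for sign in (+1.0, -1.0):            # trace each orbit in both directions
            sol = solve_ivp(lambda t, y: [sign * f for f in rhs(t, y)],
                            (0.0, 15.0), [u0, v0], max_step=0.05)
            orbits.append(sol.y)

print(f"traced {len(orbits)} orbit segments for the phase portrait")
# plotting sol.y[0] against sol.y[1] (e.g. with matplotlib) reproduces a
# picture of the heteroclinic connections inside the two orbit cells
```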
.,scaledwidth=60.0% ] numerical solutions of system with functions and the given parameter values are shown in fig .[ fig:4 ] ( the details of the numerical computations are presented in the appendix ) .if we change the value of to then we obtain a family of free - boundary traveling wave solutions .( panel , front - impulse solution ) or ( panel , front - front solution ) in fig .[ fig:3 ] .the solutions are shown for the time moments in equal time intervals , scaledwidth=95.0% ]in this section we study models which satisfy . the rational function can be presented in the form where do not have real roots ; ; for any .the wave system has the form by transformation of the independent variable which is smooth for any except for , , system becomes the roots of functions do not depend on parameters and , hence , we will suppose that the following conditions of non - degeneracy are fulfilled : coordinates of singular points of can be found from one of the systems : or from combination of - . to infer possible types of the singular points of we consider , where is the jacobian of evaluated at a singular point .if is a solution of then if is a solution of then if is a solution of then consequently we obtain that is a saddle or a node for the cases corresponding to and , and is a saddle , node , or saddle - node ( see ) in the case .just as for the reduced separable models there are no singular points of of center or focus type . herewe do not pursue the problem of classification of possible structures of the phase plane of and are only concerned with new types of traveling wave solutions . analyzing formulas for and and applying arguments in the line with the proof of proposition [ pr2 ] and theorem [ th1 ] we obtain [ pr4 ] let coordinates of singular points of satisfy and function have two real neighboring roots and . if the function has an odd number of roots between and and point is a node , then is a node as well _ _ ( _ _ and vice versa__)__. let coordinates of singular points of satisfy and be a root of , have two real neighboring roots and . if the function has an odd number of roots between and and point is a node , then is a node as well _ _ ( _ _ and vice versa__)__. due to the structure of system the phase plane is divided into horizontal strips , whose boundaries are given by , where is a root of or ; all singular points of are situated on these boundaries . bringing in the continuity arguments we obtain that under the conditions of proposition [ pr4 ] there is a family of bounded orbits of which correspond to the traveling wave solution of of an impulse - front type .it is worth noting that the structure of the phase plane inside a strip can be quite arbitrary , and we can only indicate asymptotic behavior of orbits in neighborhoods of singular points . as a result , under given boundary conditions ( or , equivalently , fixed ) families of traveling wave solutions may have complex shapes ( opposite to the examples presented in fig .[ fig:1 ] ) .for instance , there can be non - monotonous fronts with humps or impulses which also have multiple humps and hollows .in general , we can only state that the form of impulses and fronts can be quite arbitrary , which is illustrated in fig .[ fig : nn ] . 
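the time-dependent profiles referred to above are obtained from an explicit scheme with an upwind discretization of the taxis flux (details in the appendix). the fragment below is a compact sketch of that kind of scheme for a generic system of the form u_t = (u_x - u phi(v) v_x)_x, v_t = g(u, v) with no-flux boundaries; the sensitivity phi, the kinetics g, the grid and the initial data are illustrative choices, not the ones used for the figures.

```python
import numpy as np

# illustrative choices (not the paper's specific functions)
phi = lambda v: 1.0 / (0.5 + v)        # chemotactic sensitivity
g = lambda u, v: -u * v                # consumption of the signal

def step(u, v, dx, dt):
    """One explicit time step for u_t = (u_x - u*phi(v)*v_x)_x, v_t = g(u,v),
    with no-flux boundaries and an upwind treatment of the taxis term."""
    # drift velocity phi(v)*v_x at the cell interfaces
    vel = phi(0.5 * (v[:-1] + v[1:])) * (v[1:] - v[:-1]) / dx
    # upwind value of u at the interfaces
    u_up = np.where(vel > 0, u[:-1], u[1:])
    # total flux through the interfaces: diffusion minus taxis
    flux = (u[1:] - u[:-1]) / dx - u_up * vel
    flux = np.concatenate(([0.0], flux, [0.0]))   # no-flux boundaries
    u_new = u + dt * (flux[1:] - flux[:-1]) / dx
    v_new = v + dt * g(u, v)
    return u_new, v_new

# crude demonstration on a small grid
x = np.linspace(0.0, 50.0, 501)
dx = x[1] - x[0]
u = np.exp(-((x - 5.0) ** 2))          # initial cell density
v = np.ones_like(x)                    # initial signal
for _ in range(10_000):
    u, v = step(u, v, dx, dt=1e-3)
print("mass of u after the run:", u.sum() * dx)
```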
.the left panel shows an impulse - front solution , the right panel shows a front - impulse solution , scaledwidth=95.0% ] under variation of parameters and it is possible that function has a root for some values of the parameters ; in this case system has a line of non - isolated singular points and the analysis in this situation is similar to the analysis which led to proposition [ pr3 ] : a family of free - boundary fronts appears in component .now let us relax the condition ; we assume that there exists such that and . we can always find values of and such that . in this casethe phase plane of has a line of non - isolated singular points . after the change of the independent variable the resulting system still possesses singular point of the form , where is a root of .if this point is a node then , applying continuity arguments , we obtain that there exist a family of orbits of system such that some solutions from a neighborhood of tend to this point if ( or ) and tend to point of the form if ( or ) , where is an arbitrary constant from some interval .these solutions correspond to traveling wave solution of such that -profile is an impulse and component is a free - boundary front ( see fig .[ fig:6 ] ) .summarizing we obtain [ th2 ] the system satisfying and - can only possess traveling wave solutions of the following kinds : _ i. _ front - front solutions ; _ ii . _front - impulse solutions ; _ iii ._ impulse - front solutions ; _ iv ._ under variation of parameters and it is possible to have wave solutions where component is a front , component is a free - boundary front ; _ v. _ under the additional condition that does not hold it is possible to have wave solutions with component is an impulse and component is a free - boundary front .the classical keller - segel model has the form with , , where ( see ) . in our terminologythis model falls in the class of the separable models ; the wave system reads which , with the help of transformation , can be reduced to the form : , and ,scaledwidth=70.0% ] if then system has the only singular point . in this casethe separable model can not possess traveling wave solutions ( theorem [ th2 ] ) .hence we put .note that the last requirement is necessary if one supposes that should be finite. for system has a line of non - isolated singular points , and there is also additional degeneracy at the point ( case v. in theorem ) .this can be seen applying the second transformation of the independent variable , which leads to the system for which the origin is a topological node with the eigenvalues and .thus there exists a family of traveling wave solutions whose -profile is an impulse and -profile is a free - boundary front . in fig .[ fig:6s ] we show how the parametrization of the phase curves of the wave system change after the transformations of the independent variables .this picture can also serve as an illustration to assertion v. of theorem [ th2 ] . due to biological interpretation of the keller - segel modelit is necessary to have and .using the fact that an eigenvector corresponds to and corresponds to it is straightforward to see that to ensure the existence of non - negative travelling way solutions we should have .numerical solutions of the keller - segel model are given in fig .[ fig:6]b .originally , the keller - segel model was suggested to describe movement of bands of _ e. coli _ which were observed to travel at a constant speed when the bacteria are placed in one end of a capillary tube containing oxygen and an energy source . 
in fig .[ fig:6]b it can be seen that bacteria ( ) seek an optimal environment : the bacteria avoid low concentrations and move preferentially toward higher concentrations of some critical substrate ( ) .the stability of the traveling solutions found was studied analytically in . the phase plane of system ; the parameters are . numerical solutions of the keller - segel system with the parameters given in .the solutions are shown for the time moments in equal time intervals , scaledwidth=95.0% ] in the preceding sections we studied the systems where the functions can be represented as a product of functions that only depend on one variable .the next natural step is to assume that these functions depend on affine expressions , where are not equal to zero simultaneously . herewe present an explicit example of such a system .the example is motivated by appearance of a particular type of traveling wave solutions , which is absent in the separable models .we suppose that if then we have the keller - segel model studied in section 3.2 . the wave system for with the functions given by reads where we put parameter equal zero .after the change of the independent variable system takes the form if then system has a line on non - isolated singular points ; if then is an isolated non - hyperbolic singular point of ( i.e. , both eigenvalues of the jacobian evaluated at this point are zero ) .first we consider the case .after yet another transformation we obtain thus for and the origin is a node for system which implies that in the initial system there exist a family of bounded orbits which tent to for or .this family corresponds to a family of traveling wave solution of the system where -profile is an impulse and -profile is a free - boundary front .the picture is topologically equivalent to the phase portrait shown in fig .[ fig:6]a . the phase plane of system ; the parameters are . numerical solutions of system with the functions given by and the parameter values as in .the solutions are shown for the time moments in equal time intervals , scaledwidth=95.0% ] for the wave system has singular point possessing two elliptic sectors in its neighborhood ( see fig . [fig:7]a ) .the proof of existence of the elliptic sectors can easily be conducted with the methods given in .asymptotics of homoclinics composing the elliptic sector are ( trivial ) and , where is the biggest root of the equation the family of homoclinics in the phase plane correspond to the family of wave impulses for the system ( see fig .[ fig:1]c and fig .[ fig:7]b ) . to our knowledgesuch kind of solutions ( infinitely many traveling wave solutions of impulse - impulse type with the fixed values of the parameters ) was not previously described in the literature .the results of the numerical computations ( fig .[ fig:7]b ) indicate that the family of traveling impulses is clearly non - stationary , since its amplitude decays visibly in time . which is important ,however , is that it is possible to observe moving impulses at least at a finite time interval .in this paper we described all possible traveling wave solutions of the cross - diffusion two - component pde model satisfying , where the cross - diffusion coefficient may depend on both variables and possess singularities .such kind of models is widely used in modeling populations that can chemotactically react to an immovable signal ( attractant ) ( e.g. 
, and references therein ) .the study of traveling wave solutions of model was carried out by qualitative and bifurcation analysis of the phase portraits of the wave system that depends on parameters and . here is a speed of wave propagation and characterizes boundary conditions under given . any traveling wave solution of with given boundary conditions corresponds to a solution of wave system with specific values of parameters and ; the converse is also true .therefore , instead of trying to construct a traveling wave solution with the given boundary conditions , we study the set of all possible bounded solutions of the wave system considering and as its parameters .this approach allows identifying all boundary conditions for which model possesses traveling wave solutions .the main attention is paid to the so - called separable model , i.e. , to model that satisfies ; it is worth noting that in this case the functions and in are products of factors that depend on a single variable .we showed that for some fixed values of parameters and the solutions of the wave system compose two - parameter family ( for explicit formulas in a simple particular case see ) .one result is that -profiles of the wave solution of the reduced separable models that satisfies can be only front - front or front - impulse ; for more general case of the separable models we can additionally have impulse - front solutions . for some special relations between , and the model parameters model can have wave solutions whose -profiles are free - boundary fronts , i.e. , tends to an arbitrary constant from some interval at or at . note that traveling wave solutions of the well - known keller - segel model as well as of its generalization ( section 3.3 ) have impulse - front profiles with a free - boundary front ( sections 3.2 , 3.3 ) .we also considered a natural extension of the separable models ; namely , we gave an example of model where are products of factors that depend on affine expression of both variables ( see ) .this model can be considered as a generalization of the keller - segel model because it has two additional parameters and turns into the keller - segel model if both of these parameters are zero .if only one of the parameters vanishes , the model has a family of `` keller - segel''-type solutions , i.e. , -profile is an impulse and -profile is a frond with a free boundary .importantly , in some parameter domains this model possesses a two - parameter family of impulse - impulse solutions ( fig .[ fig:7 ] ) . to the best of our knowledge, such type of traveling wave solutions was not previously described in the literature : depending on the initial conditions traveling impulses can have quite a different form for the fixed values of the system parameters . 
taking into account the fact that such solutions are absent in the separable models , we can consider model with functions as the simplest model possessing this type of traveling wave solutions .rearrangements of traveling wave solutions of pde model , which occur with changes of the wave propagation velocity and the boundary conditions , correspond to bifurcations of its ode wave system .in particular , appearance / disappearance of front - profiles with variation of parameters and correspond to the fold or cusp bifurcations in the wave system ; rearrangement of a front to an impulse can be accompanied by appearance / disappearance of non - isolated singular points in the phase plane of the corresponding wave system ( see section 2.3 ) .existence of non - isolated singular points in the wave system may result in the existence of free - boundary fronts in model .for instance , this is the case for the keller - segel model .we emphasize that the separable model , when the values of parameters and fixed in the wave system , possesses , in general , two - parameter family of traveling wave solutions .there are infinitely many bounded orbits of that correspond to traveling wave solutions of ( see figures [ fig:2 ] , [ fig:4 ] , [ fig:6 ] , [ fig:7 ] ) .it is of particular interest that in all numerical solutions of that we conducted it is possible to observe traveling waves .we did not discuss the issue of stability of the traveling wave solutions found , but we note that it is usually true that unstable solutions can not be produced in numerical calculations .it is tempting to put forward a hypothesis that the presence of additional degrees of freedom ( two free parameters ) is the reason of producing traveling waves in numerical computations .this important question can be a subject of future research .we did numerical simulations of system for $ ] , where varied in different numerical experiments .we used no - flux boundary conditions for the spatial variable .inasmuch as we wanted to study the behavior of the traveling wave solutions in an infinite space we chose such space interval so that to avoid the influence of boundaries .we used an explicit difference scheme .the approximation of the taxis term is an `` upwind '' explicit scheme which is frequently used for cross - diffusion systems ( e.g. , ) .more precisely , where for the positive taxis ( pursuit ) ( i.e. , ) , for the negative taxis ( invasion ) : we used , .for the boundary conditions : for the initial conditions we used numerical solutions of the corresponding wave systems .fsb and gpk express their gratitude to dr .a. stevens for numerous useful discussions on the first version of the present work .the work of fsb has been supported in part by nsf grant # 634156 to howard university .s. habib , c. molina - paris , t.s .deisboeck , complex dynamics of tumors : modeling an emerging brain tumor system with coupled reaction - diffusion equations , physica a : statistical mechanics and its applications 327(3 - 4 ) ( 2003 ) 501 - 524 .m. a. tsyganov , j. brindley , a. v. holden , v. n. biktashev , soliton - like phenomena in one - dimensional cross - diffusion systems : a predator - prey pursuit and evasion example , physica d : nonlinear phenomena , 197(1 - 2 ) ( 2004 ) 18 - 33 .
an analysis of traveling wave solutions of partial differential equation (pde) systems with cross-diffusion is presented. the systems under study fall in a general class of the classical keller-segel models used to describe chemotaxis. the analysis is conducted using phase plane analysis of the corresponding wave systems without a priori restrictions on the boundary conditions of the initial pde. special attention is paid to families of traveling wave solutions. conditions for the existence of front-impulse, impulse-front, and front-front traveling wave solutions are formulated. in particular, the simplest mathematical model is presented that has an impulse-impulse solution; we also show that a non-isolated singular point in the ordinary differential equation (ode) wave system implies the existence of free-boundary fronts. the results can be used for the construction and analysis of different mathematical models describing systems with chemotaxis.
keywords: keller-segel model, traveling wave solutions, cross-diffusion
transmitting messages with perfect secrecy using physical layer techniques was first studied in on a physically degraded discrete memoryless wiretap channel model . later , this work was extended to more general broadcast channel in and gaussian channel in , respectively .wireless transmissions , being broadcast in nature , can be easily eavesdropped and hence require special attention to design modern secure wireless networks .secrecy rate and capacity of point - to - point multi - antenna wiretap channels have been reported in the literature by several authors , e.g. , . in the above works , the transceiver operates in half - duplex mode , i.e. , either it transmits or receives at any given time instant .on the other hand , full - duplex operation gives the advantage of simultaneous transmission and reception of messages .but loopback self - interference and imperfect channel state information ( csi ) are limitations .full - duplex communication without secrecy constraint has been investigated by many authors , e.g. , .full - duplex communication with secrecy constraint has been investigated in , where the achievable secrecy rate region of two - way ( i.e. , full - duplex ) gaussian and discrete memoryless wiretap channels have been characterized . in the above works , csi in all the links are assumed to be perfect . in this paper , we consider the achievable sum secrecy rate in miso _ full - duplex _ wiretap channel in the presence of a passive eavesdropper and imperfect csi .the users participating in full - duplex communication have multiple transmit antennas , and single receive antenna each .the eavesdropper is assumed to have single receive antenna .the norm of the csi errors in all the links are assumed to be bounded in their respective absolute values .in addition to a message signal , each user transmits a jamming signal in order to improve the secrecy rates .the users operate under individual power constraints . for this scenario ,we obtain the achievable perfect secrecy rate region by maximizing the worst case sum secrecy rate .we also obtain the corresponding transmit covariance matrices associated with the message signals and the jamming signals .numerical results that illustrate the impact of imperfect csi on the achievable secrecy rate region are presented .we also minimize the total transmit power ( sum of the transmit powers of users 1 and 2 ) with imperfect csi subject to receive signal - to - interference - plus - noise ratio ( sinr ) constraints at the users and eavesdropper , and individual transmit power constraints of the users .the rest of the paper is organized as follows .the system model is given in sec .[ sec2 ] . secrecy rate for perfect csi is presented in sec .[ sec3 ] . secrecy rate with imperfect csi is studied in sec .results and discussions are presented in sec .conclusions are presented in sec .[ sec7 ] . implies that is a complex matrix of dimension . and imply that is a positive semidefinite matrix and positive definite matrix , respectively .identity matrix is denoted by .^{\ast} ] denotes expectation operator . denotes 2-norm operator .trace of matrix is denoted by .we consider full - duplex communication between two users and in the presence of an eavesdropper . , are assumed to have and transmit antennas , respectively , and single receive antenna each . 
is a passive eavesdropper and it has single receive antenna .the complex channel gains on various links are as shown in fig .[ fig1 ] , where , , , , , and .has transmit antennas and single receive antenna . has transmit antennas and single receive antenna . has single receive antenna.,width=321 ] and simultaneously transmit messages and , respectively , in channel uses . and are independent and equiprobable over and , respectively . and are the information rates ( bits per channel use ) associated with and , respectively , which need to be transmitted with perfect secrecy with respect to . and map and to codewords , i.i.d . , \big ) ] , respectively , of length . in order to degrade the eavesdropper channels and improve the secrecy rates , both and inject jamming signals , i.i.d . , \big ) ] , respectively , of length . and transmit the symbols and , respectively , during the channel use , .hereafter , we will denote the symbols in , , and by , and , respectively .we also assume that all the channel gains remain static over the codeword transmit duration .let and be the transmit power budget for and , respectively .this implies that let , , and denote the received signals at , and , respectively .we have where , , and are i.i.d . receiver noise terms .in this section , we assume perfect csi in all the links . since knows the transmitted symbol , in order to detect , subtracts from the received signal , i.e. , similarly , since knows the transmitted symbol , to detect , subtracts from the received signal , i.e. , using ( [ eqn8 ] ) and ( [ eqn9 ] ) , we get the following information rates for and , respectively : using ( [ eqn3 ] ) , we get the information leakage rate at as using ( [ eqn54 ] ) , ( [ eqn53 ] ) , and ( [ eqn55 ] ) , we get the information capacities , , and , respectively , as follows : a secrecy rate pair which falls in the following region is achievable : we intend to maximize the sum secrecy rate subject to the power constraint , i.e. , this is a non - convex optimization problem , and we solve it using two - dimensional search as follows . divide the intervals ] in and small intervals , respectively , of size and where and are large integers .let and , where and . for a given pair , we minimize as follows : the maximum sum secrecy rate is given by .we solve the optimization problem ( [ eqn77 ] ) as follows .dropping the logarithm in the objective function in ( [ eqn77 ] ) , we rewrite the optimization problem ( [ eqn77 ] ) in the following equivalent form : s.t . using the kkt conditions of the above optimization problem , we analyze the ranks of the optimum solutions , , , in the appendix .further , for a given , the above problem is formulated as the following semidefinite feasibility problem : subject to the constraints in ( [ eqn80 ] ) .the minimum value of , denoted by , can be obtained using bisection method as follows .let lie in the interval ] , ] , $ ] .we assume that the magnitudes of the csi errors in all the links are equal , i.e. , .we also assume that . in fig .[ fig2 ] and fig .[ fig3 ] , region in full - duplex communication . db , , , .,width=349 ] region in full - duplex communication . 
db , , , .,width=349 ] we plot the region obtained by maximizing the sum secrecy rate for various values of .results in fig .[ fig2 ] and fig .[ fig3 ] are generated for fixed powers db and db , respectively .we observe that as the magnitude of the csi error increases the corresponding sum secrecy rate decreases which results in the shrinking of the achievable rate region . also , as the power is increased from 3 db to 6 db , the achievable secrecy rate region increases .we investigated the sum secrecy rate and the corresponding achievable secrecy rate region in miso full - duplex wiretap channel when the csi in all the links were assumed to be imperfect .we obtained the transmit covariance matrices associated with the message signals and the jamming signals which maximized the worst case sum secrecy rate .numerical results illustrated the impact of imperfect csi on the achievable secrecy rate region .we further note that transmit power optimization subject to outage constraint in a slow fading full - duplex miso wiretap channel can be carried out using the approximations by conic optimization in as future extension to this work .in this appendix , we analyze the ranks of the solutions , , , and which are obtained by solving the optimization problem ( [ eqn79 ] ) subject to the constraints in ( [ eqn80 ] ) .we take the lagrangian of the objective function subject to the constraints in ( [ eqn80 ] ) as follows : * ( a1 ) all the constraints in ( [ eqn80 ] ) , * ( a2 ) , * ( a3 ) , * ( a4 ) . since and , * ( a5 ) . since and , * ( a6 ) . since and , * ( a7 ) . since and , * ( a8 ) , * ( a9 ) , * ( a10 ) , * ( a11 ) .this implies that , * ( a12 ) , * ( a13 ) , * ( a14 ) , * ( a15 ) .we first consider the scenario when .the kkt condition ( a12 ) implies that the above expression implies that . since , this further implies that .assuming , the kkt condition ( a4 ) implies that , and the expression ( [ eqn123 ] ) implies that .this means that . with , and , we rewrite the kkt condition ( a13 ) in the following form : if , the above expression implies that .the kkt condition ( a5 ) implies that , and ( assuming ) .now , if , the kkt condition ( a8 ) implies that , i.e. , the received signal power at the eavesdropper will be zero . the expression ( [ eqn128 ] ) , and the kkt condition ( a5 ) further imply that .also , when , the kkt condition ( a2 ) implies that , i.e. , the entire power is used for the transmission .similar rank analysis holds for and when .we now consider the scenario when .assuming and are not collinear , the kkt condition ( a12 ) will be satisfied only when . with this, the expression ( [ eqn123 ] ) implies that and . the kkt condition ( a4 )further implies that the eigen vectors corresponding to the non - zero eigen values of lie in the orthogonal complement subspace of , and .further , with and , the kkt condition will be satisfied only when i.e. , .the above analysis implies that there exist a rank-1 optimum .similar rank analysis holds for and when .m. duarte and a. sabharwal , `` full - duplex wireless communications using off - the - shelf radios : feasibility and first results , '' _ conference record of the forty fourth asilomar conference on signals , systems and computers ( asilomar ) , _ pp .1558 - 1562 , nov .2010 .e. tekin and a. yener , `` the general gaussian multiple - access and two - way wiretap channels : achievable rates and cooperative jamming , '' _ ieee trans .inform . theory _54 , no . 6 , pp .2735 - 2751 , jun . 2008 .e. tekin and a. 
yener , `` correction to : `` the gaussian multiple - access wire - tap channel '' and `` the general gaussian multiple - access and two - way wiretap channels : achievable rates and cooperative jamming '' , '' _ ieee trans .inform . theory _9 , pp . 4762 - 4763 , sep .2010 .q. li and w. k. ma , `` spatially selective artificial - noise aided transmit optimization for miso multi - eves secrecy rate maximization , '' _ ieee trans .signal process .10 , pp . 2704 - 2717 , mar .2013 .wang , a. m - c .chang , w - k .ma , and c - y .chi , `` outage constrained robust transmit optimization for multiuser miso downlinks : tractable approximations by conic optimization , '' _ ieee trans .signal process_. , vol .21 , pp . 5690 - 5705 , nov .
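the rate and power optimizations above are handled by reformulating the problem, for a fixed target, as a semidefinite feasibility check and then bisecting on the target. the fragment below sketches that bisection pattern with cvxpy on a deliberately simplified single-link toy problem (made-up channel vectors, a trace power budget and two snr-style constraints); it is not the actual constraint set of eqs. (79)-(80).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_t = 4                                                    # transmit antennas
h_main = rng.normal(size=n_t) + 1j * rng.normal(size=n_t)  # legitimate link
h_eve = rng.normal(size=n_t) + 1j * rng.normal(size=n_t)   # eavesdropper link
p_max, noise = 1.0, 1.0

def feasible(t):
    """Semidefinite feasibility check for a given target t: is there a
    transmit covariance Q >= 0 with tr(Q) <= p_max whose leakage power at
    the eavesdropper is at most t*noise while the main-link power is at
    least noise/t?  (A simplified stand-in for the paper's constraints.)"""
    Q = cp.Variable((n_t, n_t), hermitian=True)
    constraints = [Q >> 0,
                   cp.real(cp.trace(Q)) <= p_max,
                   cp.real(h_eve.conj() @ Q @ h_eve) <= t * noise,
                   cp.real(h_main.conj() @ Q @ h_main) >= noise / t]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# bisection over an interval [lo, hi] known to bracket the smallest feasible t
lo, hi = 1e-3, 1e3
while hi - lo > 1e-3:
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
print("smallest feasible target is approximately", hi)
```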
in this paper, we consider the achievable sum secrecy rate in a miso (multiple-input single-output) _ full-duplex _ wiretap channel in the presence of a passive eavesdropper and imperfect channel state information (csi). we assume that the users participating in full-duplex communication have multiple transmit antennas, and that the users and the eavesdropper have a single receive antenna each. the users have individual transmit power constraints. they also transmit jamming signals to improve the secrecy rates. we obtain the achievable perfect secrecy rate region by maximizing the worst-case sum secrecy rate. we also obtain the corresponding transmit covariance matrices associated with the message signals and the jamming signals. numerical results that show the impact of imperfect csi on the achievable secrecy rate region are presented.
_ keywords: _ miso, full-duplex, physical layer security, secrecy rate, semidefinite programming.
the recent financial crisis revealed weaknesses in the financial regulatory framework when it comes to the protection against systemic events .before , it was generally accepted to measure the risk of financial institutions on a stand alone basis . in the aftermath of the financial crisis risk assessment of financial systems as well astheir impact on the real economy has become increasingly important , as is documented by a rapidly growing literature ; see e.g. or for a survey and the references therein .parts of this literature are concerned with designing appropriate risk measures for financial systems , so - called systemic risk measures .the aim of this paper is to axiomatically characterize the class of systemic risk measures which admit a decomposition of the following form : where is a state - wise aggregation function over the -dimensional random risk factors of the financial system , e.g. profits and losses at a given future time horizon , and is a univariate risk measure .the aggregation function determines how much a single risk factor contributes to the total risk of the financial system in every single state , whereas the so - called base risk measure quantifies the risk of . first introduced axioms for systemic risk measures , and showed that these admit a decomposition of type .their studies relied on a finite state space and were carried out in an unconditional framework . extend this to arbitrary probability spaces , but keep the unconditional setting .the main contributions of this paper are : * we axiomatically characterize systemic risk measures of type in a conditional framework , in particular we consider conditional aggregation functions and conditional base risk measures in .* we allow for a very general structure of the aggregation , which is flexible enough to cover examples from the literature which could not be handled in axiomatic approaches to systemic risk so far .* we work in a less restrictive axiomatic setting , which gives us the flexibility to study systemic risk measures which for instance need not necessarily be convex or quasi - convex , etc .this again provides enough flexibility to cover a vast amount of systemic risk measures applied in practice or proposed in the literature .it also allows us to identify the relation between properties of and properties of and , and in particular the mechanisms behind the transfer of properties from to and , and vice versa .this is related to the following point 4 .* we identify the underlying structure of the decomposition by defining systemic risk measures solely in terms of so called risk - consistent properties and properties on constants .in the following we will elaborate on the points 1.4 . above .[ [ a - conditional - framework - for - assessing - systemic - risk ] ] 1 . a conditional framework for assessing systemic risk+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we consider systemic risk in a conditional framework .the standard motivation for considering conditional univariate risk measures ( see e.g. and ) is the conditioning in time , and the argumentation in favor of this also carries over to multivariate risk measures .however , apart from a dynamic assessment of the risk of a financial system , it might be particularly interesting to consider conditioning in space . 
In that respect, so-called spatial risk measures for univariate risks have recently been introduced and studied. Typical examples of spatial conditioning are conditioning on events representing the whole financial system, or parts of that system such as single financial institutions, being in distress. This is done to study the impact of such a distress on (parts of) the financial system or the real economy, and thereby to identify systemically relevant structures. For instance, the conditional value at risk (CoVaR) introduced in considers the quantile of the distribution of the netted profits/losses of a financial system conditional on a crisis event of a single institution; see . More examples can be found in , , . Such risk measures fit naturally into a conditional framework; cf. and . [ [ aggregation - of - multidimensional - risk ] ] 2. Aggregation of multidimensional risk + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + A quite common aggregation rule for a multivariate risk is simply the sum; see the definition of the CoVaR in . It represents the total profit/loss after netting all single profits/losses. However, such an aggregation rule might not always be reasonable when measuring systemic risk. The major drawbacks of this aggregation function in the context of financial systems are that it implicitly allows profits to be transferred from one institution to another and that losses of a financial institution cannot trigger additional contagion effects. Those deficiencies are overcome by aggregation functions which explicitly account for contagion effects within a financial system. For instance, based on the approach in , the authors in introduce such an aggregation rule which, however, due to the more restrictive axiomatic setting, exhibits the unrealistic feature that, in case of a clearing of the system, institutions might decrease their liabilities by more than their total debt level. We will present a more realistic extension of this contagion model together with a small simulation study in . Moreover, we present reasonable aggregation functions which are not covered by the axiomatic frameworks of or . In particular, this includes the _conditional aggregation functions_ which come naturally into play in our framework; see .
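To make the role of the aggregation function concrete, the following minimal sketch contrasts the plain netting rule (summing all profits/losses) with a losses-only aggregation that forbids cross-subsidization. The function names and the toy risk vectors are purely illustrative and are not part of the axiomatic framework developed below.

```python
import numpy as np

def aggregate_sum(x):
    """Netting aggregation: total profit/loss of the system."""
    return np.sum(x)

def aggregate_losses(x):
    """Losses-only aggregation: profits cannot subsidize losses."""
    return np.sum(np.minimum(x, 0.0))

# Two toy systems with the same net position but different loss profiles.
system_a = np.array([5.0, -5.0, 1.0])   # one large loss offset by a profit
system_b = np.array([0.5, -0.5, 1.0])   # small, balanced positions

for name, x in [("A", system_a), ("B", system_b)]:
    print(name, aggregate_sum(x), aggregate_losses(x))
# Both systems net to +1, but the losses-only rule distinguishes them,
# reflecting that profits are not freely transferable between institutions.
```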
[ [ axioms - for - systemic - risk - measures ] ] 3.4 .axioms for systemic risk measures + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our aim is to identify the relation between properties of and properties of and in respectively , and in particular the mechanisms behind the transfer of properties from to and , and vice versa .we will show that this leads to two different classes of axioms for conditional systemic risk measures .one class concerns the behavior on deterministic risks , so - called properties on constants .the other class of axioms ensures a consistency between state - wise and global - in the sense of over all states - risk assessment .this latter class will be called risk - consistent properties .the risk - consistent properties ensure a consistency between local - that is -wise - risk assessment and the measured global risk .for example , _ risk - antitonicity _ is expressed by : if for given risk vectors and it holds that in almost all states , then .the naming _ risk - antitonicity _ , and analogously the naming for the other risk - consistent properties , is motivated by the fact that antitonicity is considered with respect to the order relation induced by the -wise risk comparison of two positions and not with respect to the usual order relation on the space of random vectors. note that for a univariate risk measure which is constant on constants , i.e. for all , risk - antitonicity is equivalent to the classical antitonicity with respect to the usual order relation on the underlying space of random variables . in a general multivariate settingthis equivalence does not hold anymore .however , we will show that properties on constants in conjunction with corresponding risk - consistent properties imply the classical properties on the space of risks .this makes our risk model very flexible , since we may identify systemic risk measures where for example the corresponding aggregation function in is concave , but the base risk measure is not convex .moreover , it will turn out that the properties on constants basically determine the underlying aggregation rule in the systemic risk assessment , whereas the risk - consistent properties translate to properties of the base risk measure in the decomposition .some of the risk - consistent properties , however partly under different names , also appear in the frameworks of and .for instance what we will call risk - antitonicity is called preference consistency in . in our frameworkwe emphasize the link between the risk - consistent properties ( and the properties on constants ) and the decomposition .this aspect has not been clearly worked out so far .it leads us to introducing a number of new axioms and to classifying all axioms within the mentioned classes of risk - consistent properties and properties on constants .[ [ structure - of - the - paper ] ] structure of the paper + + + + + + + + + + + + + + + + + + + + + + in section [ sec : decomp ] we introduce our notation and the main objects of this paper , that is the risk - consistent conditional systemic risk measures , the conditional aggregation functions and the conditional base risk measures as well as their various extensions . at the end of section[ sec : decomp ] we state our main decomposition result ( ) for risk - consistent conditional systemic risk measures . 
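The decomposition just mentioned can be previewed with a minimal unconditional sketch in which a systemic risk measure is built by composing a state-wise aggregation function with a univariate base risk measure. The empirical expected-shortfall-type functional and the Gaussian scenarios below are illustrative assumptions only; the paper itself works with general conditional base risk measures.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_losses(X):
    """State-wise aggregation: sum of losses in each scenario (rows = scenarios)."""
    return np.minimum(X, 0.0).sum(axis=1)

def expected_shortfall(f, alpha=0.95):
    """Empirical expected shortfall of the aggregated position f (losses negative)."""
    var = -np.quantile(f, 1.0 - alpha)   # value at risk at level alpha
    tail = -f[-f >= var]                 # losses at or beyond the VaR
    return tail.mean() if tail.size else var

def systemic_risk(X, alpha=0.95):
    """Composition: base risk measure applied to the state-wise aggregated risk."""
    return expected_shortfall(aggregate_losses(X), alpha)

# 10_000 scenarios for a toy system of 5 institutions.
X = rng.normal(loc=0.0, scale=1.0, size=(10_000, 5))
print(systemic_risk(X))
```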
moreover , reveals the connection between risk - consistent properties and properties on constants on the one hand and the classical properties of risk measures on the other hand . section [ sec : proof ] is devoted to the proofs of and . in section [ sec : ex ] we collect our examples .throughout this paper let be a probability space and be a sub--algebra of . refers to the space of -measurable , -almost surely ( a.s . ) bounded random variables and to the -fold cartesian product of . as usual , and denote the corresponding spaces of random variables / vectors modulo -a.s . equality . for -measurable random variables / vectorsanalogue notations are used . in general, upper case letters will represent random variables , where are multidimensional and are one - dimensional , and lower case letters deterministic values. we will use the usual componentwise orderings on and , i.e. for if and only if for all , and similarly if and only if a.s . for all .furthermore and denote the -dimensional vectors whose entries are all equal to 1 or all equal to 0 , respectively .when deriving our main results we will run into similar problems as one faces in the study of stochastic processes : at some point it will not be sufficient to work on equivalence classes , but we will need a specific nice realization or version of the process , for instance a version with continuous paths , etc . in the following , by a realization of a function we mean a selection of one representative in the equivalence class for each , i.e. a function where with for all .we emphasize that in the following we will always denote a realization of a function by its explicit dependence on the two arguments : . indeed, our decomposition result in will be based on the idea to break down a random variable into every single scenario and evaluating it separately .this implies working with appropriate realizations which will satisfy properties which we will denote _ risk - consistent _ properties . also for risk factors we will work both with equivalence classes of random vectors in and their corresponding representatives in , in contrast to the realizations of introduced above , here the considerations do not depend on the specific choice of the representative .hence for risk factors we will stick to usual abuse of notation of also writing for an arbitrary representative in of the corresponding equivalence class .this will become clear from the context .in particular , denotes an arbitrary representative of the corresponding equivalence class evaluated in the state .finally , we write both for real numbers and for ( equivalence classes of ) constant random variables depending on the context .the following definition introduces our main object of interest in this paper : [ defcsrm ] + a function is called a _ risk - consistent conditional systemic risk measure _ ( csrm ) , if it is _ antitone on constants : _ : : for all with we have and if there exists a realization such that the restriction has _ continuous paths _ ,i.e. is continuous in its first argument a.s . 
, and it satisfies _ risk - antitonicity : _ : : for all with a.s .we have .furthermore , we will consider the following properties of on constants : _ convexity on constants : _ : : for all constants and ] , then ; _ positive homogeneity : _ : : for all .furthermore , a function is a _ conditional aggregation function _ ( caf ) , if 1 .[ lgmb ] for all , 2 .[ lgdaf ] is a daf for all .a caf is called concave ( positive homogeneous ) if is concave ( positive homogeneous ) for all .[ prodmb ] note that , functions like cafs which are continuous in one argument and measurable in the other also appear under the name of carathodory functions in the literature on differential equations . for carathodory functions it is well known ( see e.g. lemma 8.2.6 ) that they are product measurable , i.e. every caf is -measurable. given a caf , we extend the aggregation from deterministic to random vectors in the following way ( which is well - defined due to as well as isotonicity and property ( i ) in the definition of a caf ) : [ hannes : pathwiseconcave ] notice that the aggregation of random vectors is -wise in the sense that given a certain state , in that state we aggregate the sure payoff .consequently , properties such as isotonicity , concavity or positive homogeneity of the caf translate to the extended caf .hence , always satisfies if is concave , then for all and with we have and if is positively homogeneous , then for all and with : the last yet undefined ingredient in our decomposition is the conditional base risk measure which we define next .notice that the domain of depends on the underlying aggregation given by .for example the aggregation function only considers the losses .hence , the corresponding base risk measure a priori only needs to be defined on the negative cone of , even though it in many cases allows for an extension to .we will see in that if is the image of an extended caf then is -conditionally convex , i.e. and with implies .[ def : cbrm ] + let be a -conditionally convex set .a function is a _conditional base risk measure _( cbrm ) , if it is _ antitone : _ : : implies .moreover , we will also consider cbrm s which fulfill additionally one or more of the following properties : _ constant on constants : _ : : for all ; _ quasiconvexity : _ : : for all with ; _ convexity : _ : : for all with ; _ positive homogeneity : _ : : for all with and . constructing a csrm by composing a cbrm and a caf as in, we need a property for which allows to extract the caf in order to obtain the properties on constants of .the constant on constants property serves this purpose , but we will see in that the following weaker property is also sufficient .[ def : constonaggr ] a cbrm is called _ constant on a caf _ , if for all and clearly , if is constant on constants , then it is constant on any caf with an appropriate image as is always satisfied .conditional risk measures have been widely studied in the literature , see for an overview .as already explained above the antitonicity is widely accepted as a minimal requirement for risk measures .the constant on constants property is a standard technical assumption , whereas we will only need the weaker property of constancy on an aggregation function for an cbrm .typically conditional risk measures are also required to be monetary in the sense that they satisfy some translation invariance property which we do not require in our setting , see e.g. 
.much of the literature is concerned with the study of quasiconvex or convex conditional risk measures which in our setting implies that the corresponding risk - consistent conditional systemic risk measure will satisfy risk - quasiconvexity resp .risk - convexity , see . after introducing all objects and properties of interestwe are now able to state our decomposition theorem .[ t1 ] a function is a csrm if and only if there exists a caf and a cbrm such that is constant on ( ) and where the extended caf was introduced in .the decomposition into and is unique .+ furthermore there is a one - to - one correspondence between additional properties of the cbrm and additional risk - consistent properties of the csrm : * is risk - convex iff is convex ; * is risk - quasiconvex iff is quasiconvex ; * is risk - positive homogeneous iff is positive homogeneous ; * is risk - regular iff is constant on constants .moreover , properties on constants of the csrm are related to properties of the caf : * is convex on constants iff is concave ; * is positive homogeneous on constants iff is positive homogeneous .the proof of is quite lengthy and needs some additional preparation and is thus postponed to section [ sec : proof ] .note that it follows from the proof of that the aggregation rule in is deterministic if and only if .the decomposition can also be established without requiring the csrm to be risk - antitone , but to fulfill the weaker property notice , however , if we only require , then the cbrm in ( and also itself , see below ) might not be antitone anymore .an important question is to which degree csrm s fulfill the usual ( conditional ) axioms of risk measures on ( where these axioms on are defined analogously to the ones on in ) . in the followingwe will investigate the relation between risk - consistent properties and properties on constants on the one side and properties of on on the other .[ t2 ] let be a csrm .then * risk - antitonicity together with antitonicity on constants can equivalently be replaced by antitonicity of ( implies ) together with .moreover : * is risk - positive homogeneous and positive homogeneous on constants iff is positive homogeneous ; * if is risk - convex and convex on constants , then is convex ; * if is risk - quasiconvex and convex on constants , then is quasiconvex . as for we postpone the proof to section [ sec : proof ] .we have seen in that a property on of a csrm is implied by the corresponding risk - consistent property and the property on constants .the reverse is only true for the antitonicity and positive homogeneity .to see this we give a counterexample for the convex case .suppose that and }\right) ] such that for this measurable selection we can find an such that . hence there exists an such that for the last part of the proof let , then by definition there exists an such that .thus by setting and we have that moreover , since is -measurable , we obtain by a similar argumentation as above that there exists a -measurable with and .[ lgac ] let be a conditional aggregation function .then there exists a -nullset such that if satisfy it holds that where denotes the complement of .consider the sets and for by definition is a -nullset for all , but since has only countable many elements , the same holds true for the union . + now consider such that we can always find sequences such that and for .the isotonicity of yields a.s . ,thus for all .therefore we get for all that where we have used that is continuous for every .as a.s . implies a.s . and a.s . 
, the assertion follows .note that the -nullset in is universal in the sense that it does not depend on the pair .for the rest of the proof let .+ : + suppose that is a caf with extended caf , and that is a cbrm which is constant on .moreover , define the function first we will show that is antitone ( and thus in particular antitone on constants ) : to this end , let . as is isotone for all we know from thatalso the extended caf is isotone , i.e. . by the antitonicity of we can conclude that next we will show that there exists a realization of with continuous paths and which fulfills the risk - antitonicity . from andit can be readily seen that we can always find a realization of and a universal -nullset such that for all given this realization of we consider in the following the realization of given by the function has continuous paths ( a.s . ) because has continuous paths . as for the risk - antitonicity ,let a.s . by rewriting this in terms of the decomposition , i.e. + we realize by that note that our application of relies on the fact that the nullset in does not depend on .as is equivalent to , we conclude that where we used the antitonicity of .hence , we have proved that is a csrm .next we treat the special cases when and/or satisfy some extra properties .+ _ risk - regularity _ : suppose is constant on constants. then we have and thus we obtain for the realization that for all as above implies that for all _ risk - quasiconvexity / convexity _ : suppose that is quasiconvex .we show that is risk - quasiconvex . to this end , suppose there exist and an with such that then , as above , by using , it follows that hence the quasiconvexity of yields similarly it follows that is risk - convex whenever is convex .+ _ risk - positive homogeneity _ : finally , if is positively homogeneous , then it is straightforward to see that also is risk - positively homogeneous .+ _ properties on constants : _ suppose that is concave or positive homogeneous , then it is an immediate consequence of that is convex on constants or positive homogeneous on constants , resp .: + let denote a realization of the csrm such that has continuous paths and the risk - antitonicity holds .we define the function by we show that is a daf for almost all , i.e. that it is isotone and continuous .the continuity is obvious by . for the isotonicity consider the sets and for since is antitone on constants we obtain that is a -nullset .moreover , let denote the -nullset on which has discontinuous sample paths .consider such that , and let be a sequence which converges to for .then we get for all that and thus the paths are isotone a.s .the fact that the paths are concave ( positively homogeneous ) a.s .whenever is convex on constants ( positively homogeneous on constants ) follows by a similar approximation argument on the continuous paths which are concave ( positively homogeneous ) on .given the above considerations , we choose a modification of such that , is a ( concave / positively homogeneous ) daf for all .note that for relation is only valid a.s ., that is there is a -nullset such that for all and as and thus also for all ( note that ) , we have shown that is indeed a caf .next , we will construct a cbrm such that where is the extended caf of .for we define where is given by since the existence of such is always ensured . by and we obtain the desired decomposition if is well - defined . 
in order to show the latter ,let such that which by definition of in can be rewritten as by this can be restated in terms of as now the risk - antitonicity of yields so in is indeed well - defined . next we will show that is a cbrm . for this purpose , let in the following and be such that , ._ antitonicity _ : assume .then , by for almost every hence , risk - antitonicity ensures that .but by this is equivalent to _ constancy on _ : constancy on is an immediate consequence of - , since for hence , the decomposition is proved .+ _ uniqueness _ : let be cbrm s and be caf s such that and are constant on and resp . andit holds that then it follows from the constancy on the respective caf s that for all , i.e. note that by a similar argumentation as in the proof of holds true on a universal -nullset for all . in order to show that and are not only equal on constants let .then can be approximated by simple -measurable random vectors , i.e. there exists a sequence with -a.s . and for all , where and are disjoint sets such that and .denote by the -nullset on which does not converge . then by the continuity property of a caf and we have for all that and thus for all . finally for all there is an such that and hence next we consider the cases when fulfills some additional properties .+ _ constant on constants _ : let be risk - regular .then implies that for all and hence .let now .by the definition of and we know that there exists a such that .we thus obtain by that _ quasiconvexity / convexity _ : let be risk - quasiconvex .let with and set where , and are such that , .note that since is -conditionally convex , and thus there exists a with .then thus it follows by which in conjunction with the risk - quasiconvexity of results in similarly one shows that is convex if is risk - convex ._ positive homogeneity _ : let be risk - positively homogeneous .further let , with , and let with and .then there is also a with .moreover , a.s .hence , by in conjunction with the risk - positive homogeneity we obtain that .consequently , as is risk - antitone and antitone on constants , it is obvious that also fulfills . furthermore , we already showed , based on the antitonicity on constants and continuous paths requirements , in the proof of that has almost surely antitone paths .hence , we have for all with , that and thus the risk - antitonicity yields .hence we conclude that is antitone . + for the converse implication let be antitone and let be a realization with corresponding restriction which fulfills .the antitonicity on constants is an immediate consequence of the much stronger antitonicity on of . by reconsidering the proof of, we observe that we may replace the risk - antitonicity by when extracting the aggregation function .hence , is sufficient to construct a modification of and thus of such that has surely continuous and antitone paths .therefore , suppose that is already this realization .now let with according to with as in there are such that as the paths of are antitone , it can be readily seen that on .now set . 
then and a.s .hence it follows from and the antitonicity of that this completes the proof of the first equivalence in .let be risk - positive homogeneous and positive homogeneous on constants .since all requirements of are met , we also have that is almost surely positive homogeneous .therefore we obtain for all and with that and hence the risk - positive homogeneity implies which is positive homogeneity of .+ conversely , if is positive homogeneous , then it is also positive homogeneous on constants as well as for almost all paths of the realization . hence , if there exists and with such that then the right - hand - side equals a.s . using and the positive homogeneity of we conclude that let be risk - convex and convex on constants .first we will show that risk - convexity is equivalent to the following property : if for there exists a with such that on the one hand , it is obvious that implies risk - convexity . on the other hand ,let such that we know by that there is a such that by the risk - convexity we obtain that as risk - antitonicity implies , we conclude that risk - convexity and are equivalent .next we show the convexity of . to this endlet and with .once again we can reason as in the proof of that is almost surely convex , because has continuous paths and is convex on constants .thus we have that now implies that which is the desired convexity of .the other assertion concerning risk - quasiconvexity follows in a similar way .as already mentioned in the introduction a typical aggregation function when dealing with multidimensional risks is however , such an aggregation rule might not always be reasonable when measuring systemic risk .the main reason for this is the limited transferability of profits and losses between institutions of a financial system .an alternative popular aggregation function which does not allow for a subsidization of losses by other profitable institutions is given by where ; see . obviously , both and are daf s which are additionally concave and positive homogeneous . [ex : countercyclical ] risk charges based on systemic risk measures typically will increase drastically in a distressed market situation which might even worsen the crisis further .therefore one might argue that , for instance in a recession where also the real economy is affected , the financial regulation should be relaxed in order to stabilize the real economy , cf . . 
in our setupwe can incorporate such a dynamic countercyclical regulation as follows : let be a filtered probability space , where .let be the profits / losses of the financial system , where the first components are the profits / losses from contractual obligations with the real economy and are the profits / losses from other obligations .moreover let , be the gross domestic product ( gdp ) process with , , and .suppose that the regulator sees the economy in distress at time , if the gdp process is less than for some .we assume that in those scenarios the regulator is interested to lower the regulation in order to give incentives to the financial system for the supply of additional credit to the real economy .this policy might lead to the following dynamic conditional aggregation function from the perspective of the regulator where and for .obviously , is a caf with respect to which is positive homogeneous and concave .[ tbtf ] in this example we will consider a dynamic conditional aggregation function which depends on the relative size of the interbank liabilities .for instance , find that for the brasilian banking network there is a strong connection between the size of the interbank liabilities of a financial institution and its systemic importance .this fact is often quoted as too big to fail. let be a filtered probability space , where . moreover , let denote the sum of all liabilities at time of institution to any other banks .then is the relative size of its interbank liabilities .now consider the following conditional extension of an aggregation function which was proposed in : where .firstly , this conditional aggregation function always takes losses into consideration , whereas profits of a financial institution are only accounted for if they are above a firm specific threshold .secondly , profits are weighted by the deterministic factor and the losses are weighted proportional to the liability size of the corresponding financial institution at time .therefore losses from large institutions , which are more likely to be systemically relevant , contribute more to the total risk .+ is a caf which , however , in general is neither quasiconvex nor positively homogeneous as it may be partly flat depending on .[ ex : preference ] suppose that the regulator of the financial system has certain preferences on the distribution of the total loss amongst the financial institutions .for instance he might prefer a situation when a number of financial institutions face a relatively small loss each in front of a situation in which one financial institution experiences a relatively large loss .such a preference can be incorporated by the following aggregation function where and for .that is , if the losses of firm exceed a certain threshold , e.g. 
a certain percentage of the equity value , then the losses are accounted for exponentially .[ ex : stochasticdiscount ] suppose that is some -measurable stochastic discount factor .a typical approach to define monetary risk measurement of some future risk is to consider the discounted risks .consider any ( conditional ) aggregation function , which does not discount in aggregation , such as , , or , etc .then the discounted monetary aggregated risk is .if is positively homogeneous , then which is the aggregated risk of the discounted system .however , if is not positively homogeneous - such as or - then the discounted aggregated risk can only be formulated in terms of the conditional aggregation function [ ex : covar ] in this example we will consider the covar proposed in ; see . to this end , we first recall the ( conditional ) value at risk : we denote the value at risk at level by furthermore , the conditional var at level is defined as c.f .the conditional var is positive homogeneous , antitone , and constant on constants .thus it is a cbrm which is constant on every possible caf .note that , as is well - known for the unconditional case , the conditional var is not quasiconvex . by composing with a caf obtain a csrm which is risk - positive homogeneous and risk - regular .+ now we consider the case where represents a financial system and the caf in is .moreover consider the sub--algebra of , where for a fixed .then the csrm from evaluated in the event equals which is the covar proposed in .+ as we have already pointed out in the introduction , it is more reasonable to use an aggregation function which incorporates an explicit contagion structure .we will modify the covar in this direction in . [ex : coes ] the conditional average value at risk at level is given by },\quad { { f}}\in { l_{}^\infty({\mathcal{f}})},\ ] ] where is the set of probability measures on which are absolutely continuous w.r.t . such that and a.s . is a convex and positive homogeneous cbrm .notice that the conditional average value at risk can also be written as }+\operatorname{var}_q({{f}}|{\mathcal{g}}),\ ] ] cf . , where is discussed in . as in let with for a fixed and . using ,if then }{\mathbbmss{1}}_a \nonumber \\ &\hspace{0.5cm}+ { \mathds{e}_{{\mathds{p}}}\left[\left.-{{f}}\,\right|\,\{f\leq-\operatorname{var}_q({{f}}|a^c)\}\cap a^c\right]}{\mathbbmss{1}}_{a^c}. \label{eq : avar : ex } \end{aligned}\ ] ] therefore , evaluated in the event equals }.\ ] ] in other words , is the expected loss of the financial system given that the loss of institution is below and simultaneously the loss of the system is below its covar .corresponds to the conditional expected shortfall ( coes ) proposed in .now we change the point of view and consider the losses of a financial institution given that the financial system is in distress , that is if let . by composing the daf and the cbrm } ] , where is a risk neutral measure which is equivalent to .the resulting positive homogeneous and convex csrm evaluated in is given by },\quad { { y}}\in{l_{d}^\infty({\mathcal{f}})},\ ] ] which corresponds to the dip for .since the expectation is under a risk neutral measure it can be interpreted as the premium of an aggregate excess loss reinsurance contract .[ ex : cm ] in this example we want to specify an aggregation function that explicitly models the default mechanisms in a financial system and perform a small simulation study . 
for this purposewe will assume the simplified balance sheet structure given in table [ tbl : balancesheet ] for each of the financial institutions .let be the vector of equity values of the financial institutions after some market shock on the external assets / liabilities .moreover let be the relative liability matrix of size , i.e. the entry represents the proportion of the total interbank liabilities of institution which it owes to institution .we denote the -dimensional vector of the total interbank liabilities by .we observe that with an increasing the regulator is less willing to inject capital and thus the contagion effects increase which results in a higher risk in terms of the expectation and the value at risk .moreover without a regulator on average round about one financial institution defaults due to contagion effects . in the next stepwe want to investigate the systemic importance of the single institutions . for this purposewe modify the covar in , that is , instead of the summing the losses we use the more realistic caf .thus we define for a : where the difference between and is that losses in case of a default are only taken into consideration up to the total interbank liabilities of this institution , i.e. only the losses which spread into the system are taken into account .for example consider an isolated institution in the system which has a huge exposure to the outside of the system , then in order to identify systemically relevant institution it is not meaningful to aggregate the losses from those exposures , nevertheless from the perspective of the total risk of the system those losses should also contribute as it was done in our prior study .as for it can be easily seen that is a caf .the results for this risk - consistent systemic risk measures can be found in table [ tbl : sysimportance ] .we observe that the systemic importance is always a trade - off between the possibility of high downward shocks and the ability to transmit them .for instance institution 2 can transfer losses up to 227 , but it is also the institution which is the least exposed to the market , which makes it also the least systemic important institution .contrarily institution 4 is the most exposed institution , but does not have the ability to transmit those losses which also results in a low position in the systemic importance ranking .finally institution 5 or 8 are very vulnerable to the market and have the largest total interbank liabilities and are thus identified as the most systemic institutions .cont , r. , a. moussa , and e. b. santos ( 2013 ) .network structure and systemic risk in banking systems . in j .-fouque and j. a. langsam ( eds . ) , _ handbook on systemic risk _ , chapter 13 . , pp .327368 . cambridge university press .fllmer , h. and c. klppelberg ( 2014 ) .spatial risk measures : local specification and boundary risk . in _crisan , d. , hambly , b. and zariphopoulou , t. : stochastic analysis and applications 2014 - in honour of terry lyons_. springer .
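The modified CoVaR in this last example caps the loss that an institution can transmit at its total interbank liabilities. A minimal sketch of that idea follows; the balance-sheet numbers, the conditioning on a single institution's negative equity, and the plain empirical quantile standing in for the conditional value at risk are all illustrative assumptions rather than the calibrated simulation reported in the tables.

```python
import numpy as np

rng = np.random.default_rng(1)

def capped_loss_aggregation(E, Lbar):
    """Aggregate only losses that can spread into the system: each institution's
    loss is counted at most up to its total interbank liabilities."""
    losses = np.maximum(-E, 0.0)              # positive loss amounts
    return -np.sum(np.minimum(losses, Lbar))  # aggregated (negative) systemic loss

def covar_style_measure(E_scenarios, Lbar, i, q=0.95):
    """Empirical q-quantile loss of the aggregated system, conditional on
    institution i being in distress (negative equity)."""
    distressed = E_scenarios[:, i] < 0.0
    agg = np.array([capped_loss_aggregation(e, Lbar) for e in E_scenarios[distressed]])
    return -np.quantile(agg, 1.0 - q)         # reported as a positive risk figure

# Toy system: 5 institutions, interbank liabilities and shocked equity values.
Lbar = np.array([100.0, 227.0, 50.0, 20.0, 150.0])
E = rng.normal(loc=10.0, scale=60.0, size=(20_000, 5))

for i in range(5):
    print(f"institution {i}: conditional systemic loss = {covar_style_measure(E, Lbar, i):.1f}")
```

As in the discussion above, an institution ranks as systemically important here only if it both suffers large shocks and has enough interbank liabilities to pass them on.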
We axiomatically introduce _risk-consistent conditional systemic risk measures_ defined on multidimensional risks. This class consists of those conditional systemic risk measures which can be decomposed into a state-wise conditional aggregation and a univariate conditional risk measure. Our studies extend known results for unconditional risk measures on finite state spaces. We argue in favor of a conditional framework on general probability spaces for assessing systemic risk. Mathematically, the problem reduces to selecting a realization of a random field with suitable properties. Moreover, our approach covers many prominent examples of systemic risk measures from the literature and used in practice. *Keywords:* conditional systemic risk measure, conditional aggregation, risk-consistent properties, conditional value at risk, conditional expected shortfall.
the time needed for a particle contained in a confining domain with a single small opening to exit the domain for the first time , usually referred as _narrow escape time _ problem ( net ) , finds a prominent place in many domains and fields .for instance in cellular biology , it is related to the random time needed by a particle released inside a cell to activate a given mechanism on the cell membrane ( ) . generally speakingthe net problem is part of the so called _ intermittent processes _ , which are used to explain scenarios ranging from animal search patterns ( ) , through the solutions or melts of synthetic macromolecules ( ) , to the manufacture of self - assembled mono- and multi - layers ( ) . since the work of berg and purcell ( ) , research in the net problem area has experienced a steady growth over time and motivated a great deal of work ( ) . in , we have introduced an analytical markovian model that showed the impact of geometrical parameters and the interplay between surface and boundary paths in the studied confining domain , of a discrete and rectangular shaped nature , for the perfect trapping case . with `` perfect trapping '' we refer to the particle s impossibility of return to the system , i.e. once the particle reaches the narrow opening the escape becomes certain . in that work we presented a phase diagram which showed that some combinations of the geometrical parameter and the transport mechanism were required for the existence of an optimal transport ( a global minimum in the net ) . in this workwe consider the same confining domain and the transport properties that we dealt with in ref .however we eliminate the assumption of perfect escape introducing a finite transition probability at the narrow escape window .it is well known that systems description through the `` imperfect trapping case '' ( once in the trap site capture is not certain ) are suitable whenever the surface contains ` deep traps ' , capture and re - emission from a surface that contains sites with several internal states such as the ` ladder trapping model ' , proteins with active sites deep inside its matrix , etc .( ) . under this new assumption, we have discovered some very interesting results .particularly we show that the existence of a global minimum in the net depends on the existence of an imperfection in the trapping process .the aim of this work is to study the influence of the ` imperfection ' in the passage through the escape window on the effective diffusion process , and its effect on the net problem . for that purposewe calculate the time required by the particle to escape from the system ( met ) . in order to perform our researchwe exploit dyson s theory ( ) , and the notions of _ absorption probability density _( apd ) ( ) .the outline of this paper is as follows . in the following section we introduce some general results regarding imperfect trapping process as well as our model , and provide the basic definitions and concepts .we also describe the proposed analytical approach and present the main results .section iii depicts several assorted illustrations for the met to the narrow opening for different configurations of the system through a comparison between our analytical framework and monte carlo simulations . in sectioniv we discuss our conclusions and perspectives . 
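Before turning to the formalism of Section II, a minimal Monte Carlo sketch of the imperfect-trapping setup described above may help fix ideas: a nearest-neighbour walk in a rectangular lattice domain with reflecting walls and a single escape window at which capture succeeds only with a finite probability. The lattice dimensions, trap position, release point, and capture probability are illustrative choices, not the parameter values used in the figures of this work.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_escape_time(width, height, trap, p_capture, walkers=2000):
    """Estimate the mean escape time (in steps) of a lattice walk with reflecting
    boundaries and an imperfect trap: on visiting the trap site, escape occurs
    with probability p_capture, otherwise the walk continues."""
    steps = np.empty(walkers)
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for w in range(walkers):
        x, y = width // 2, height // 2          # release the walker at the centre
        t = 0
        while True:
            if (x, y) == trap and rng.random() < p_capture:
                break                            # successful escape through the window
            dx, dy = moves[rng.integers(4)]
            x = min(max(x + dx, 0), width - 1)   # reflecting boundaries
            y = min(max(y + dy, 0), height - 1)
            t += 1
        steps[w] = t
    return steps.mean()

print(mean_escape_time(width=20, height=10, trap=(0, 5), p_capture=0.5))
```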
finally , in the appendix we present the calculations indicated in section ii .let us consider the problem of a walker making a random walk in some finite domain with a trap or sink present in the system .we will follow the walker evolution through the system considering the ` unrestricted ' conditional probability , that is , the probability that a walker is at at time given it was at at . by ` unrestricted 'we identify a situation with no traps / sinks present in the system . .] as we are interested in the _ trapping process _ ,let us define as the _ absorption ( trapping ) probability density _( apd ) through the site at time , given that the walker was at at time , i.e. , gives the trapping probability of the walker , through , between and given that it started at from .it is worth mentioning that the _ first passage time _approach is not fully applicable since an ` excursion ' to the trapping site does not necessarily ends the process .however we show in the following lines that an interesting relation could be established between latexmath:[ ] .however , as we are interested in the calculation of ( [ tdef ] ) , we only need to perform the inverse fourier transform on , i.e. we need the elements {0,m_0}\right\} ] : {0,m_0}\!\!=\!\!\frac{\eta^{m_0}+\eta^{\tilde{m}-m_0 } } { \delta(1-\eta)(1-\eta^{\tilde{m}-1})+(u - a_1(k))(1+\eta^{\tilde{m}})}\ , , \\\end{aligned}\ ] ] where , and .the inverse _ fourier _ transform on {0,m_0} ] could be found , {m , m_0}&=&\left[\mathbb{p}^0(k , u)\right]_{m , m_0}+\left[\mathbb{p}^0(k , u)\right]_{m,0}\left[\mathbb{p}^0(k , u)\right]_{0,m_0}\cdot\frac{\delta_1}{1-\delta_1\left[\mathbb{p}^0(k , u)\right]_{0,0}}\\ & & + \frac{\left[\mathbb{p}^0(k , u)\right]_{0,m0}\cdot\delta_2}{1-(\delta_1+\delta_2)\left[\mathbb{p}^0(k , u)\right]_{0,0}}% \left(\left[\mathbb{p}^0(k , u)\right]_{m,1}+\frac{\left[\mathbb{p}^0(k , u)\right]_{m,0}\left[\mathbb{p}^0(k , u)\right]_{0,1}\cdot\delta_1}{1-\delta_1\left[\mathbb{p}^0(k , u)\right]_{0,0}}\right)\end{aligned}\ ] ] where , {m , m_0}=\frac{\eta^{|m - m_0|}+\eta^{\tilde{m}-(m+m_0 ) } } { 2\gamma(1-\eta)+\tilde{u}}\ , , \\\ ] ] , and . from ( [ pmm0 ] ) , the probability that a walker is at site at time given it was at at , is derived by using the inverse _ laplace _ transform on and the inverse _ fourier _ transform on ( for the coordinate ) for each matrix element {m , m_0} ] . 
in this case expression ( [ pmm0 ] ) reduces to , {0,m_0}\!\!=\!\!\frac{\eta^{m_0}+\eta^{\tilde{m}-m_0 } } { \delta(1-\eta)(1-\eta^{\tilde{m}-1})+(u - a_1(k))(1+\eta^{\tilde{m}})}\\\end{aligned}\ ] ] the inverse _ fourier _ transform on {0,m_0}12 & 12#1212_12%12[1][0] link:\doibase 10.1103/physreve.79.051901 [ * * , ( ) ] link:\doibase 10.1073/pnas.0903293106 [ * * , ( ) ] http://stacks.iop.org/0295-5075/84/i=3/a=38003 [ * * , ( ) ] http://stacks.iop.org/1751-8121/42/i=43/a=430301 [ * * , ( ) ] \doibase doi : 10.1016/0001 - 8686(87)85003 - 0 [ * * , ( ) ] link:\doibase 10.1126/science.262.5142.2010 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.74.1795 [ * * , ( ) ] _ _ ( , ) link:\doibase 10.1016/s0006 - 3495(77)85544 - 6[ * * , ( ) ] link:\doibase 10.1021/bi00527a028 [ * * , ( ) ] link:\doibase 10.1016/s0006 - 3495(86)83463 - 4 [ * * , ( ) ] link:\doibase 10.1063/1.460427 [ * * , ( ) ] http://link.aip.org/link/?jcp/116/9574/1 [ * * , ( ) ] * * , ( ) http://dx.doi.org/10.1007/s10955-005-8027-5 [ * * , ( ) ] link:\doibase 10.1103/physreve.74.020103 [ * * , ( ) ] link:\doibase 10.1103/physreve.75.041915 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.100.168105 [ * * , ( ) ] link:\doibase 10.1063/1.3442906 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.105.150606 [ * * , ( ) ] http://dx.doi.org/10.1007/s10955-011-0138-6 [ * * , ( ) ] link:\doibase 10.1103/physreve.84.021117 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.107.156102 [ * * , ( ) ] link:\doibase 10.1103/physreve.85.051111 [ * * , ( ) ] \doibase doi : 10.1016/0370 - 1573(87)90005 - 6 [ * * , ( ) ] link:\doibase 10.1103/physreve.61.1110 [ * * , ( ) ] link:\doibase 10.1063/1.471810 [ * * , ( ) ] link:\doibase 10.1529/biophysj.104.045773 [ * * , ( ) ] link:\doibase 10.1063/1.3682243 [ * * , ( ) ] http://stacks.iop.org/0953-8984/17/i=49/a=012 [ * * , ( ) ] link:\doibase 10.1103/physreve.54.2248 [ * * , ( ) ] \doibase doi : 10.1016/j.physa.2010.04.025 [ * * , ( ) ]
We present a master equation approach to the _narrow escape time_ (NET) problem, i.e., the time needed for a particle contained in a confining domain with a single narrow opening to exit the domain for the first time. We introduce a finite transition probability at the narrow escape window, allowing the study of the imperfect trapping case. Varying this probability between its extreme values allows the study of both limits of the trapping process: that of a highly deficient capture, and situations where escape through the window is certain (the "perfect trapping" case). We have obtained analytic results for the basic quantity studied in the NET problem, the _mean escape time_ (MET), and we have studied its dependence on the transition (desorption) probability over (from) the surface boundary, on the confining domain dimensions, and on the finite transition probability at the escape window. In particular, we show that the existence of a global minimum in the NET depends on the 'imperfection' of the trapping process. In addition to our analytical approach, we have implemented Monte Carlo simulations, finding excellent agreement between the theoretical results and the simulations.
there are a number of approaches to understanding and describing processes in human mind .they belong to different levels of abstraction , ranging from neural and biochemical processes in the brain up to philosophical constructions , and study its different aspects . in the present work we focus our attention on the phenomenological ( psychological ) description of human memory dealing with it as a whole , i.e. , without reducing the corresponding mental functions to the real physiological processes implementing them . a review of advances made in this scope during the last decades can be found in a monograph by who inspired the development of the act - r concept in cognitive science , a modern theory about how human cognition works .the act - r theory operates with three types of human memory , sensory , short - term , and long - term ones and accepts , in particular , the following basic postulates . _first _ , the declarative ( long - term ) memory is organized in chunks , certain cognitive units related to some information objects . at the first approximation the learning , memorizing , and retrieval of a given object proceeds via the creation and evolution of the corresponding chunk .naturally , chunks can interact with one another , in particular , forming larger composed chunks and , finally , their hierarchical network .the notion of chunk is general , therefore , it is rather problematic to define it more precisely , for discussion and history see , e.g. , a review by . _second _ , each chunk individually is characterized by its strength which determines also the information retention , namely , the probability of successful retrieval of the corresponding information from a given chunk . since the classical experiments of a rather big data - set about the retention ability of human memory has been accumulated for time scales from several minutes up to a few weeks .it has been figured out that the memory strength decays with time according to the power - law , i.e. , exhibits the asymptotic behavior where the exponent is a certain constant .it should be noted that , in general , this dependence meets the second jost s law , the increment of the strength decay becomes weaker as time goes on ( see , e.g. , a review by ) .appealing at least to the data - set collected by and analyzed by as well as one collected and studied by the exponent seems to be rather universal and can be estimated as .the _ third _ postulate concerns the multiple - trace arrangement of human memory .it assumes that each attempt of learning and memorizing some information fragment produces a separate trace in human memory .so the corresponding cognitive unit , a chunk , is actually a collection of many memory traces and its strength is the sum of their individual activation levels evidence collected currently in physiology ( see , e.g. , work by and references therein ) partly supports this concept . its implementation in physiological terms is reduced to the multiple trace theory ( mtt ) developed by appealing to the role of the hippocampus in the encoding of new memory traces as well as the retrieval of all the previous traces , including remote ones . 
the preceding alternative of mtt is the standard model of systems consolidation ( smsc : ) .it assumes the hippocampus to `` teach '' the cortex a memory trace strengthening the connectivity between its individual elements over time and , finally , consolidating the memory .recently have proposed a competitive trace theory ( ctt ) combining elements smsc and mtt .it suggests that when a memory is reactivated by a new cue , the hippocampus acts to re - instantiate the original memory traces , recombine their elements in the episodic memory , and add or subtract individual contextual features . as a result ,a new memory trace overlapping with the original ones is created and ready to be stored in the neocortex . however , in contrast to mtt , ctt supposes that the memory traces are not stored in parallel but compete for representation in the neocortex .two relative phenomena occur here : consolidation and decontextualization .first , overlapping features in the memories should not compete for representation and thus are strengthened , i.e. , consolidated .second , non - overlapping features should compete with one another resulting in mutual inhibition and , as a result , memories become decontextualized . proposed the reactivation of memory traces to strengthen also the links between the traces too .the concept of such a multi - trace consolidation can be regarded as the _ fourth _ postulate of the act - r theory . as the _ fifth_ postulate we note the following .the hippocampus is involved in the `` reconstruction '' rather than the `` retrieval '' of the memory .so , as ctt states , new memory traces are only partially but not completely overlap with the original traces .it is due to the hippocampus capability of supporting rapid encoding of unique experiences by orthogonalizing incoming inputs such that their interference is minimized , which is termed pattern separation ; the available evidence for this feature was recently discussed by .this pattern separation together with the corresponding pattern completion via creating new memory traces endows our episodic memory system with richness , associativity , and flexibility . _ finally_ , the act - r theory accepts an important generalization about expansion .it assumes that the individual activation levels of memory traces decreases with time also according to the power law and , after formation , their individual dynamics is mutually independent .thereby the strength of the corresponding chunk is the superposition where are some constants and is the time moment when the chunk was created .it should be noted that expression does not take into account the chunk interaction .our following constructions will be based on these postulates . 
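As a quick numerical illustration of the superposition postulate, the following sketch evaluates the strength of a chunk as the sum of power-law decaying activations of its traces. The trace creation times, the unit weights, and the decay exponent below are arbitrary illustrative choices, not fitted values.

```python
import numpy as np

def trace_activation(t, t_i, d=0.5, tau0=1.0):
    """Power-law decay of the activation of a single trace created at time t_i."""
    return (1.0 + (t - t_i) / tau0) ** (-d)

def chunk_strength(t, creation_times, d=0.5, tau0=1.0):
    """Chunk strength as the superposition of the activations of its traces."""
    ts = np.asarray(creation_times, dtype=float)
    ts = ts[ts <= t]                      # only traces already created contribute
    return trace_activation(t, ts, d, tau0).sum()

creation_times = [0.0, 1.0, 3.0, 7.0]     # learning moments, arbitrary time units
for t in [1.0, 10.0, 100.0, 1000.0]:
    print(t, chunk_strength(t, creation_times))
# At long times the strength decays roughly as a power law, reflecting the
# power-law decay of the individual traces.
```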
in the present workwe will confine our consideration to the dynamics of a single chunk and ignore the effects of memory consolidation which are likely to be crucial only on relatively large time scales .mathematical description of the chunk interaction and the memory consolidation are challenging problems on their own and require individual investigation .a chunk is considered to be a collection of traces created in the working memory at time moments and stored in the declarative ( long - term ) memory .these traces will be also called slides for reasons apparent below .let us assume that the chunk as a whole contains a certain fragment of information , a pattern , so all its slides retain different fragments of this pattern .the time evolution of the chunk as a whole unit is described in terms of its strength , the measure of the relative volume of the pattern pieces that are retrievable at the current moment of time .the individual evolution of the slides is characterized by similar quantities depending on the current time and the time when the corresponding slide was created and stored .the chunk slides are assumed to be created according to the following scenario illustrated in fig .[ fig.1 ] .memory continuously looses some fragments of the pattern .so when at the current moment of time the chunk as a whole is retrieved from the declarative memory only some its fragment can be retrieved , which is characterized by the value .then addition practice or learning is necessary to reconstruct the initial pattern .therefore a new slide to be created during this action has to contain , at least , the fragment . in principle, the pattern fragment to be stored in can include other fragments of the initial pattern .so in a more general case the condition may hold , which is worthy of individual investigation . in the present work we confine our consideration to the limit casewhere [ mod:1 ] if the current learning action is enough to create the fragment containing .however , there could be a situation when the time interval of the learning process before the slide to be transfered to the declarative memory is not enough to do this .under such conditions we assume that before transferring the slide to the declarative memory it is cut off , i.e. 
, its capacity for new information is reduced and the saved pattern fragment meets the condition in both the cases it is reasonable to measure the capacity of the new slide based on the current strength of the chunk .namely , in case we set [ mod : c1 ] \left\{1-\exp\left[-\frac{w\tau}{t(f)}\right]\right\ } \end{aligned}\ ] ] is used , where is the time scale characterizing the process of learning the pattern and given by the expression here the scale characterizes the time interval required for the working memory to create one slide and the dependence of the quantity on reflects the fact that the higher the current value , the less the time necessary to learn the pattern completely , the exponents and specify this dependence .the parameter characterizes the duration of initial creation of the pattern in the working memory .if we retrieve the chunk immediately after this action its achievable pattern is the strength of the slide created and saved just now is set equal to unity , which is related directly to the assumption about the reduction of the slide capacity at the moment of its creation .as time goes on , the strength of all the slides decreases and without addition learning the strength of the chunk as whole is written as the sum of all the slides created previously equalities , , and may be treated as the bayesian approximation of the memorizing process .the given model assumes the slides created previously not to be affected by learning at the current moment of time . in other words , after creation their evolution is governed only by some internal mechanisms . keeping in mind the results to be obtainedlet us write the equation governing the evolution of a slides in the form where and are some parameters .equality is actually the initial condition imposed on the function .its solution is ^{-d } \qquad\text{for }\,,\ ] ] where we have introduced the new parameters and .the substitution of into yields ^{-d}\,.\ ] ] it should be noted that this governing equation of the individual trace dynamics is fair similar to the mathematical model for a single trace memory proposed by .expression can be regarded as a solution of the equation subject to the initial condition where the values and are treated as constants , the variable describing the attention to the subject of current learning is assumed to be a smooth function on time scales about .it enables us to represent the value of , exp ., as the cumulative result of infinitesimal increments of a certain continuous process , where this expression would lead exactly to formula if the time scale were independent of .however , ansatz has been chosen rather arbitrary keeping in mind only the basic features it should possess .so we are free to replace it by the expression stemming from model .the last equality in model formally coincides with formula except for the fact that the value has to decrease additionally due to time evolution of temporal elements created previously . however , all the `` microscopic '' time scales , in particular , and , are related directly to the interval within that a new slide is created in the working memory and , then , transfered to the declarative memory .it enables us to ignore this decrease in the value during the time interval . 
as a result expressioncan be reduced to the following integral \frac{w(t')}{t[f(t')]}\left[1+\frac{(t - t')}{\tau_0}\right]^{-d}dt'\,.\ ] ] moreover , due to the integral converging at the lower boundary for the exponent we can replace kernel by the corresponding power - law kernel ^{-d}\quad \rightarrow\quad \frac{\tau_0^d}{(t - t')^d } \,.\ ] ] after this replacement expression reads [ conmod:6 ] \frac{w(t')}{t[f(t')]}\frac{\tau_0^d}{(t - t')^d}dt'\,.\ ] ] or using ansatz ^\alpha [ 1-f(t')]^\beta w(t')\frac1{(t - t')^d}dt'\,.\ ] ] in deriving expression we have aggregated the ratio into the quantity .so , first , the integral equation contains only one microscopic time scale regarded as a certain model parameter .second , the dimensionless quantity describes the attention to the subject during the learning process . if then the given pattern can be learned completely for a time interval about . finalizing the given constructionwe will assume that the learning process was initiated at time and before it no information about the pattern was available , i.e. , for the value . in this casethe integral equation can be rewritten in the form of the following differential equation with time fractional derivative of the caputo type it is the desired governing equation for learning and forgetting processes . in order to avoid some artifacts in numerical simulation we will accept an additional assumption that it is not possible to get strictly the limit value of the chunk strength by learning a subject . indeed , the closer the chunk strength to unity , the more attention is necessary for a human to recognized which piece of information is unknown for him . as a result , we introduce a certain critical value such that , when the chunk strength exceeds it , , a human considers the success of learning to be achieved and it attention to the learned subjects disappears , i.e. , for .it is illustrated in fig .[ fig.2 ] .the characteristic features of the system dynamics were studied numerically using the explicit 2-flmm algorithm of second order for solving equation .figure [ fig.speff1 ] presents some of the obtained results .plot i ( fig .[ fig.speff1 ] ) shows the forgetting dynamics under the `` basic '' conditions matching the following hypothetical situation . at the initial moment of time , a subject starts to learn continuously an unknown for him information pattern being retained in a single chunk and at time ends this process when the chunk strength gets its limit value . as time goes on ,the chunk strength decreases , which specifies the decay of retrievable information . 
as should be expected ,the asymptotics of is of the power law and looks like a straight line on the log - log scale plot .naturally , in a certain neighborhood of the time moment this asymptotics does not hold .however , for small values of the exponent ( for in plot i ) this neighborhood is narrow and becomes actually invisible in approximating experimental data even with weak scattering .plot ii exhibits the learning dynamics under the same `` basic '' conditions .the growth of the chunk strength is visualized again in the log - log scale for various values of the parameters determining how the learning rate changes during the process ( they are given in the inset ) .as seen , the function strictly is not of the power law .however , if it is reconstructed from some set of scattered experimental points as the best approximation within a certain class of functions , a power law fit ( linear ansatz in the log - log scale ) may be accepted as a relevant model .it allows us to introduce an effective exponent of the approximation .appealing again to plot ii , we draw a conclusion that this effective exponent depends not only on the `` forgetting '' exponent but also on the other system parameters . thereby , in trying to determine the set of quantities required for characterizing human long - term memory , the `` forgetting '' and `` learning '' exponents , and , may be regarded as independent parameters .plots iiia and iiib illustrate the found results in the case mimicking the discontinuous learning process .it again assumes a subject to start learning an initially unknown information pattern being retained in one chunk during the process , however , now he does not do this continuously .instead , the learning proceeds via a sequence of trials of a fixed duration that are separated by some time gap ( spacing ) until the subject gets the success at a certain time moment .naturally , the longer the spacing , the longer the total time as well as the larger the number of trials required for this .so , in order to compare their characteristic properties let us renormalize the time scale in such a way that the learning process end at in dimensionless units , in other words , the time is measured in units of . in this case , as seen in plot iiia and iiib , the main characteristics of the shown processes become rather similar with respect to the dynamics of learning and forgetting .this result poses a question about optimizing a learning process by dividing it into rather short trials separated by relatively long time intervals .this effect is also called the distributed practice , an analysis of available experimental data can found , e.g. , in review by and as well .at least , within the framework of the present fair simple model an increase of the time spacing gives rise , on one hand , to the growth of the learning duration but , on the other hand , enables one to remember this information for a longer time without its considerable lost . 
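As a usage example of the toy solver sketched above, the snippet below mimics the distributed-practice protocol just described: attention is switched on only during trials of fixed duration separated by gaps, and the long-time retention is compared with that of a single continuous session. The trial length, spacing and time horizon are illustrative choices, not the paper's.

```python
# Spaced ("distributed practice") schedule; assumes solve() from the preceding
# sketch is in scope.  All numbers are illustrative.
dT, gap = 1.0, 4.0          # trial duration and spacing between trials

def spaced_attention(s):
    """w(t) = 1 inside a trial, 0 in the gaps between trials."""
    return 1.0 if (s % (dT + gap)) < dT else 0.0

t_sp, F_sp = solve(spaced_attention, t_max=60.0)
t_ct, F_ct = solve(lambda s: 1.0 if s < 5.0 else 0.0, t_max=60.0)

# Compare retention long after both protocols have effectively ended
print("continuous session: F(60) =", round(F_ct[-1], 3))
print("spaced trials     : F(60) =", round(F_sp[-1], 3))
```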
As far as the theoretical aspects of the present research are concerned, the results obtained suggest that the multiple-trace concept of memory architecture requires its own mathematical formalism, irreducible to the classical notions developed in physics. In particular, even at the "microscopic" level of individual slides (traces), the system dynamics is not reduced to motion in a given phase space but to the continuous generation of such phase spaces, and their interaction with one another becomes a key point of the corresponding theory. Besides, the governing equation admits the following interpretation: its left-hand side describes the "internal" evolution of human memory on its own, whereas its right-hand side plays the role of a "source" generating new elements of memory. This decomposition can aid the development of human memory theory by separating the phenomena to be addressed into distinct categories. The work was supported by JSPS Grant 245404100001 ("Grants-in-Aid for Scientific Research" program).
We propose a single-chunk model of long-term memory that combines the basic features of the ACT-R theory and the multiple-trace memory architecture. The pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of an information pattern, as affected by the fragments of this pattern already retained at the current moment of time. These constructions are justified using the available psychological and physiological data. The final equation governing the learning and forgetting processes takes the form of a differential equation with a Caputo-type fractional time derivative. Several characteristic situations of learning (continuous and discontinuous) and forgetting are studied numerically. In particular, it is demonstrated, first, that the "learning" and "forgetting" exponents of the corresponding power laws of the fractional memory dynamics should be regarded as independent system parameters. Second, concerning spacing effects, the longer a discontinuous learning process lasts, the longer the time interval within which a subject remembers the information without considerable loss; moreover, this relationship is a linear proportionality. *Keywords:* human memory; memory trace; chunk; forgetting; learning; practice; spacing effects; power law; fractional differential equations.
the current conceptual map of protein folding kinetics is dominated by the coexistence of several apparently distinct approaches .they may be categorised loosely into `` energy landscape '' ( bryngelson et al . , 1995 ;onuchik , 1995 ) , `` diffusion - collision '' ( karplus and weaver , 1976 , 1994 ) , `` nucleation - condensation''(fersht , 2003 ) and `` topomer search '' ( makarov and plaxco , 2002 ) models .each of these has its own way of visualising how the collapse of a random coil to a native globule can ever be accomplished in observable time scales , a problem pointed out long ago ( karplus , 1997 ) .each has advantages and drawbacks , but it is not clear whether each applies to a restricted subset of real cases , or whether all might have something to say about the folding of any one protein .the `` folding - funnel '' picture of the energy landscape has the advantage of visualising both guided folding and the emergence of on - pathway and off - pathway intermediate states ( dinner et al . 2000 ) .yet it is hard to escape from the deceptive simplicity of low - dimensional projections of folding funnels that appear necessarily in all graphical portrayals of it . in practice of course, the dimensionality of the folding space is enormous .even small ( residue ) proteins have a configurational space dimensionality of several hundred ( think of the bond angles along the polypeptide main chain alone ) . in such high - dimensional spaces ,qualitatively new features may arise , such as energetically - flat domains that nonetheless are extremely difficult to escape from and so behave as kinetic traps . a second feature is the potential for high cooperativity of structure in several simultaneous dimensions .this corresponds to the existence of narrow gullies in the hypersurface that are hard to find . 
in more biochemical languagethese structures might be exemplified by cooperative secondary structure formation alongside native or near - native distant contacts in -helix bundle proteins ( myers and oas , 2001 ) , or simultaneous folding and anion binding ( henkels at al 2001 ) .the `` diffusion - collision '' approach , on the other hand , is supported by strong experimental evidence that folding rates are controlled by the rate of diffusion of pieces of the open coil in their search for favourable contacts , rather than a driven collapse along some continuous energy surface ( jacob et al , 1999 ; plaxco and baker , 1998 ; goldberg and baldwin , 1995 ) .pre - formed units of secondary structure diffuse hydrodynamically and merge .larger proteins may do this in an increasingly hierarchical way .the importance of diffusive searches is unsurprising , since under biological conditions , all candidates for energetic interactions , including electrostatics , are _ locally _ screened to a few angstroms : much smaller than the dimensions over which sections of protein must move to find their native configurations .put another way , the vast majority of the space covered by the energy landscape must actually be _ flat _ ( on a scale of ) rather than funneled .simple versions of these models have indeed been able to account rather well for folding rates as a function of secondary structure formation ( myers and oas , 1999 , 2001 ) .however , it is not clear how applicable this approach is to cases in which secondary structure forms _ within _ a collapsed globule or cooperatively with it .an attempt to articulate a range of scenarios in which partial ordering of secondary and tertiary structures mutually enhance a favourable folding pathway has been presented under the label of `` nucleation - condensation '' ( dagett and fersht , 2003 ) .originally conceived as a kinetic theory in which a nucleus of native structure corresponds to the transistion - state for folding , the picture now also encompasses the hierarchical folding routes of the diffusion - collision model .a challenge faced by all these models is that the most successful search for inherent features of tertiary structure that correlate with folding rates has found that the topological measure of `` contact order '' is far more closely related than , for example , molecular weight itself in the case of `` two - state '' folders ( plaxco et al 2000 ) .rationalisation of this observation has given rise to a third view of the critical pathway of protein folding , the `` topomer search '' model ( plaxco and gross 2001 ) . the rate determining step is not the rapid formation of local secondary structure , nor the diffusion of subdomains _ per se _ , but the organisation of large pieces of secondary structure into the same topological configuration as the native state , which is thereafter is able to form rapidly .this suggests a partition of the folding space into `` rapid '' dimensions representing the local formation of secondary structure , and `` slow '' dimensions representing the topomer search .however , a quantitative relation between the topomer search space and contact order is still unclear , since no native contacts are actually required to form at a purely topologically - defined transition state at all ( although many are to be expected from the patial ordering at the secondary level at least ) . 
furthermore , information on the effect on folding rate of replacing specific residues _ via _ mutation or `` -value '' analysis ( fersht , 2000 ) needs to be taken together with correlations of contact order .these four approaches have one important aspect in common : they all effectively reduce the dimensionality of the search - space by assumption , rather than in a derived way .this is both natural and necessary , since data from kinetic experiments do just the same , but there is a danger in overlooking aspects of folding that rely essentially on the presence of many degrees of freedom .our aim in this work is to take a fresh look at the issue , embracing many simplifications but on this occasion _ not _ that of a low dimension of configurational search space .we find in the next section that quite general conclusions may be drawn about the topology of this search space if the dimensionality is kept high .some general predictions follow which we work out in more detail in the case of three - helix proteins .the approach will additionally allow us to see how the existing apparently - distinct paradigms for protein folding are related , and suggest places to look for the information content of the `` kinetic code '' within proteins that encodes the folding search path , as distinctfrom the native structure itself .we start with a very simple and abstract model for protein folding , but one that explicitely retains a very large number of degrees of freedom .the total search space is modelled as the interior of a hypersphere of dimension and radius , and the native ( target ) state as a small sphere of radius at the origin of the space .the entire configuration of the protein corresponds to a single - point particle executing a random walk in the hypersphere . the ratio of to the typical localisation on folding in the values of a degree of motional freedom .if the degree of freedom is spatial the appropriate scales are the size of a molten globule and the radius of gyration of a denatured protein .if it is angular , then they are the angle of libration of a bond fluctuating in one local minimum as a fraction of . in either casethe appopriate order of magnitude estimate is which is .bicout and szabo ( bicout and szabo 2000 ) introduced this very general framework for discussing flat and funneled landscapes , but then restricted themselves to three - dimensional spaces , a simplification that we shall try to avoid . 
to explore the timescales of the search forthe target space ( on which the diffuser will `` stick '' ) we write down the time - dependent diffusion equation for a particle , restricting ourselves to the case of a flat potential at first .the most convenient function to use is the probability density , that the system is a radial distance from the centre of the hypersphere at time , which obeys : supplemented by the absorbing boundary condition , signifying the stability of the native state .the timescale for the search steps is set by the effective diffusion constant .the mean passage time from the unfolded ensemble to the native state can be calculated by introducing a uniform current of diffusers ( representing a population of folding proteins ) on the boundary of the hypersphere at , as the other boundary condition , and finding the consequent steady state solution to ( highddiff ) .the mean time to pass from to over the ensemble of systems is then just the total number of diffusers at steady state normalised by the current , leading to expression indicates how very much _ qualitatively _ longer the mean search time is in _ high _ dimensions ( , than the low - dimensional estimation of the characteristic time , which replaces ( [ search time ] ) in and .this fundamental time is scaled up by the denatured system size ( measured in units of the target size ) to the power of the number of effective dimensions greater than 2 .an analysis of the eigenmode structure of the problem indicates why this is so : for large nearly all the diffusers exist in the lowest eigenfunction of the diffusion operator , that is in turn localised to the exponentially large surface of the hyperspherical search space .single - exponential kinetics are also a general property of such high- search spaces .the central result of ( [ search time ] ) depends on two key physical assumptions : ( 1 ) the dimensionality of the space is of realistic values for protein folding - of the order of a hundred or more , and ( 2 ) the stability of the folded state is governed by local interactions in the native state only . with these assumptions alone , the model of high dimensional diffusion we have described is inevitable , and the timescales unreasonably long .the exponentially large search times arise transparently from the factor in equation ( [ search time ] ) .this is of course a restatement of levinthal s paradox ( karplus , 1997 ) , but a helpful one , in that the two necessary assumptions for the paradox to arise are clearly seen .the first just gives the large dimensionality of hypersphere , the second the flat diffusive landscape .put this way , there are two ways of circumventing the problem. one may drop the assumption of local forces and allow the protein to `` fall '' towards the single native state down a `` funnel '' created by forces whose range permeate the entire volume . as we have remarked above , however, candidates for such long range forces do not present themselves .without recourse to a continuous funnel - shaped landscape , there is only one other possibility : _ all diffusive searches take place in low dimensional subspaces _ of the full configurational space . to see how this works ,we suppose at first that the dimensions of the full folding space are now arranged sequentially so that diffusive searches in one dimension at a time allow the protein to find `` gateways '' into the next subspace ( we will see how this may arise naturally in a physical way below ) . 
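A short numerical check of the scaling just quoted, namely that the fundamental low-dimensional time is scaled up by (R/a) raised to the number of effective dimensions beyond two. The ratio R/a = 10 is the order-of-magnitude estimate discussed above (the exact figure is not reproduced in this extraction), and the value of the low-dimensional time is an assumed illustrative figure.

```python
ratio   = 10.0      # R/a, order-of-magnitude confinement ratio from the text
tau_low = 1e-7      # seconds; an assumed "fundamental" low-dimensional search time

# Mean search time scaling quoted in the text: tau_low * (R/a)**(N-2) for N > 2
for N in (3, 10, 30, 100, 300):
    tau = tau_low * ratio ** (N - 2)
    print(f"N = {N:3d}   search time ~ {tau:.1e} s   ({tau / 3.15e7:.1e} years)")
```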
for simplicitywe assume that the kinetics of each diffusive search is single exponential with characteristic time . since the diffusion is always maintained in some low - dimensional subspace of the full folding space , for each subspace , so that rather than the exponentially larger .this clearly reduces the folding time enormously , signifying that only a tiny fraction of possible states is visited in the search ( dinner et al . ,2000 ) .how has such a remarkable reduction in folding time been achieved without the use of a `` funnel '' energy landscape ?of course energetic interactions have been implied , but these have not been of the spatially - extended `` funnel '' type . instead they have served just to keep the diffusive search within the smaller or dimensional space , once the first sub - dimensional search is over , then within a or dimensional space after the subsequent successful `` adsorption '' into the still - smaller subspace , and so on .so , when the high - dimensionality of the search space is retained , the energy landscape looks less like a funnel , and more like a series of high - dimensional _ gutters _ ( figure 1 ) . the diffusing particle ( representing of course the random search of the protein through its available conformations ) does not have to search simultaneously through both the dimensions of the figure .instead , it exploits the lower energy state of the entire dimensional subspace to reach it _ via _ a _one_-dimensional diffusion in the dimension , which it performs first . by partitioning the configurational space in this way , and by providing an attractive `` gutter '' , relying on _ local _ forces alone , to connect one diffusive subspace to the next , all the advantageous consequences of a funnel landscapemay be acquired without the requirement of long - range potentials .of course , if the high - dimensional structure is projected into a 1 or 2 dimensional diagram then the many discrete steps of potential energy that arise from the sequence of `` hypergutters '' appear artificially close , and serve to create a funnel - like projected energy landscape .the disadvantage of the projection is that the subtle origin of the directed search is obscured . 
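The corresponding estimate for a sequential search through low-dimensional gutter subspaces is sketched below, under the crude assumption that an n-dimensional subspace search costs of order tau_low (R/a)^n (prefactors and logarithmic corrections ignored); it is compared with the single high-dimensional search of the previous sketch.

```python
ratio, tau_low = 10.0, 1e-7      # same illustrative numbers as above
N = 100                          # total number of slow degrees of freedom

# Sequential search through low-dimensional "gutter" subspaces of dimension n_i.
# The per-subspace cost tau_low * (R/a)**n is an illustrative estimate, not the
# paper's prefactor-exact expression.
subspace_dims = [2] * (N // 2)                     # e.g. fifty sequential 2-d searches
tau_seq  = sum(tau_low * ratio ** n for n in subspace_dims)
tau_full = tau_low * ratio ** (N - 2)              # single high-dimensional search

print(f"sequential gutter search ~ {tau_seq:.1e} s")
print(f"full {N}-d search        ~ {tau_full:.1e} s")
```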
in detailthe folding energy landscape will look more like a series of low - dimensional _ terraces _( inset to figure 1 ) nested within the full high - dimensional search space .how big do the attractive potentials creating the gutters need to be , and what physical interactions might be enlisted to provide them ?their scale is familiar : these potential steps are just the energies required to counterbalance the entropy - loss associated with reduction of the configuration space by one dimension , or degree of freedom .the associated translational space reduces from the order of to the order of on restriction to the gutter subspace .completely reversible folding along the route connecting the gutters is produced by rendering the free energy change on entering the gutter zero .this is in turn the case if the binding energies to the gutters are of the order of the entropic free energy gain on making such a restriction to a degree of freedom : to quantify therefore needs just an estimate of the order of magnitude of the ratio , the dimensionless ratio of the sizes of space enjoyed by a degree of freedom in and out of a restricting gutter .as discussed above , a realistic order or magnitude estimate is , giving a value for of the order of a few ( 2 - 4 ) ( or of order 4 - 8 kjmol ) for realistic proteins .the energy scale of a few is highly suggestive : we note that the relatively weak , non - native like interactions between residues are candidates for these gutter - stabilising interactions , and that it is not necessary to invoke the strength of native contacts during the diffusive search .this is good news , since there is no guarantee that significant native interactions will form during at least the early phases of search , if at all , and experimental evidence of strong `` co - operativity '' is to the contrary ( flanagan et al . , 1992 ) .of course we do not assume that the energy - entropy balance is exact at each step - indeed it is the mismatches in this picture that give rise to roughness in the landscape , but matching within a few is necessary in most dimensional reductions to avoind unrealistically long folding times .furthermore , the evolutionary tailoring of non - native interactions provides additional `` design - space '' within which a pathway to the folded state may be coded , but without compromising the stability of the final , native state . for proteins containing residues ,there are of order non - native interactions that may be encountered during a diffusive search , but only of order interactions that define the native state .a second consequence of this high - dimensional viewpoint is therefore the general expectation of tuned _ _ _ _ but _ _ non - native , interactions _ _ between sections of partially structured chain that stabilise__intermediate search spaces _ _( which may or may not be identified as intermediate states , depending on their occupancy lifetime ) .we need to articulate carefully what is meant in this context by `` non - native '' , for this term is sometimes used to refer to indiscriminate interactions . 
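A quick check of the energy scale quoted here: the entropic cost of restricting one degree of freedom by the factor R/a (taken as 10, as above) is kB T ln(R/a), which at 300 K comes out near the few-kBT, 4-8 kJ/mol range stated in the text.

```python
import math

kB_T_kJmol = 8.314e-3 * 300.0          # kB*T at 300 K in kJ/mol (about 2.49)
ratio      = 10.0                      # R/a, confinement ratio for one degree of freedom

eps_in_kT    = math.log(ratio)         # entropy loss per restricted degree of freedom, in kB*T
eps_in_kJmol = eps_in_kT * kB_T_kJmol

print(f"epsilon ~ {eps_in_kT:.1f} kB*T ~ {eps_in_kJmol:.1f} kJ/mol per restricted degree of freedom")
# -> about 2.3 kB*T, i.e. roughly 5.7 kJ/mol, inside the quoted 4-8 kJ/mol range
```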
in that sense , the role of non - native interactions in determining the type and rate of folding pathways is not a new idea ( zhou and karplus , 1999 ) .but such previous studies have not introduced any specificity , or evolutionary refinement , into the non - native interactions , and find , significantly , that increasing the strength of such indiscriminate interactions actually slows folding .our suggestion is that a _ discriminating design _ of key non - native interactions may significantly speed the search for the native state .it is also likely that a significant proportion of such tailored non - native interactions that we envisge guiding the search will be increasingly near - native as the search proceeds. this will be the more likely as secondary structure forms , as we shall see by the example of a three - helix buundle below .gutter - like landscapes have appeared in the literature , and are sometimes apparent even in the 2-dimensional representations of projected folding surfaces .reference ( karplus and weaver , 1994 ) , for example , shows a fast folding route of hen lysozyme in which the early formation of -sheet structure permits the final approach to the native state to proceed in a subspace of reduced dimension . in this case the gutter - like structure survives a projection onto just two dimensions of folding space . in this casethe mutual diffusion of the helical and beta - sheet portions of the protein is the dynamical process responsible for the gutter - like feature on the reduced folding surface .this example serves also to indicate an important qualification - some dimensions clearly _ do _ possess funnel - like landscapes even without a projection onto low dimensional spaces .those involved with the formation of a local -helix or structures , for example , create subspaces that have real funnel - like features , directed towards the point in the subspace representing the formation of the complete local secondary structure .however , higher - dimensional hypergutters must already have been visited at higher levels in the regions of locally and secondary structure .we now take a much simpler fold as an example .a clean example of a `` hypergutter '' structure is furnished by the well - studied triple - helix proteins such as the b - domain of staphylococcal protein a ( bdpa ) ( myers and oas , 2001 ) ( and see figure 2 ) . in this case , the division of the folding landscape is clearly suggested by the formation of the helices ( fast `` funneled '' or `` zipper '' dimensions ( fiebig and dill , 1993 ) , and by the diffusive search of the helical domains for their native juxtaposition .note that we do not require the helix formation to be complete before the diffusive search begins - indeed the formation of native or non - specific contacts and secondary structure stability will in general be highly cooperative ( fersht , 2000 ) .all that is required is that the zipper dimensions are explored at much faster timescales than the diffusive dimensions .a very simple model has been successful in describing the kinetics of this protein ( myers and oas , 2001 ) , using the physical abstraction normal to `` diffusion - collision '' models of the real protein as spherical domains executing a spatial search .such models have recently been extended to a family of three - helix bundle proteins ( islam et al . 
, 2002 ) .however , in the light of our expectation that fast - folding proteins find their native state _ via _ a sequence of stabilised subspaces , the diffusive degrees of freedom of a three - helix bundle might be more accurately represented by angular coordinates defined at the two turns connecting the three helical sections .in fact the diffusive space of internal angles thus defined is exactly three dimensional : between helix 1 and 2 only one angle needs be specified , while between helices 2 and 3 we need two more .this construction is illustrated as in figure 2 .the three angular diffusive degrees of freedom are labelled with . since the diffusive coordinates are angles , they exhibit periodicity , and the search space is itself a periodic 3-d lattice . in practicethe continuously varying angular co - ordinate may model a more discrete set of more or less favourable packings ( chothia et al . , 1981 ) , but the coarse - grained structure of the search space will be the same . in the figure we illustrate periodicity in the dimension only . the region of configuration space in which the first two helices are both in contact with the third is shaded , and the native state is represented by the periodic lattice of small spheres . if the shaded `` helical contact '' region is enhanced by a weak attraction ( it becomes a `` gutter '' for diffusion in the coordinate ) , then the search for the native state will typically proceed by diffusion in the one - dimensional manifold of ( without contact between helices ) , followed by diffusion in the two - dimensional manifold of and ( now with helices 1 and 3 in contact ) . as calculated in the last section , the non - native binding potential of the third helix to the gutter sub manifold needs to be of the order of . providing that the gutter is as attractive as this , then the predicted mean search time ( including prefactors and a weak logarithmic term ) for the native state is\label{tau12}\]]rather than the much longer time for the full 3-d search without the gutter subspace of of experimental evidence for staged diffusive searches in simple proteins has also been observed in the case of cytochrome c , lysozyme ( bai , 2000 ) , and in the b1 domain of protein g ( park et al . , 1999 ) .the 3-helix example illustrates our general conclusion that searches within diffusional subspaces in protein folding may be accelerated by__local , but not necessarily native , interactions _ _ between sections of partially structured chain . in the context of the three - helix proteinthe necessary non - specific interactions are those that keep the third helix in contact with the other two .this permits the final diffusive search for the native state to take place in a 2-dimensional space , rather than the full 3-dimensional search configuration space of the diffusive degrees of freedom .remarkably , just this conclusion was reached very recently by experiments on the helical immunity protein im7 ( capaldi et al . 
, 2001 ) ,in which an on - pathway intermediate state was shown by careful mutation studies to be stabilised by non - native interactions between two of the helices .an additional example of tuned non - native interactions guiding a folding pathway occurs in the rather larger phage 22 tailspike protein ( robinson and king , 1997 ) , where a non - native disulphide bond controls the folding search .we remark that in both these cases , the stabilised hypergutter provides an arena in which `` diffusion - collision '' calculations can operate within a molten globule , so constituting a significant generalisation of that model to non - spatial degrees of freedom ( zhou and karplus , 1999 ) .we have identified two general predictions of this high - dimensional view of folding : ( 1 ) the sequential diffusive exploration of low - dimensional subspaces favoured by fast folding and ( 2 ) the stabilisation of these subspaces by discriminate but non - native ( or near - native ) interactions , without recourse to long - range guiding forces .but it has other things to say concerning common experimental measures of even the deceptively simple `` two - state '' folders .we derive here three further consequences : ( 3 ) early - time structure in kinetics , ( 4 ) temperature and denaturant dependence and the free - energy structure of the folding pathway , ( 5 ) non - native contributions to -values .we first take a very simple case : if the non - native gutter - stabilising interactions are perfectly balanced with the entropy changes at each stage of the dimension reduction , then the free - energy profile is itself flat , and the diffusive dimensions form an effective one dimensional path along which the folding takes place .this is not , of course , to suggest that the path is unique , since : ( i ) a large fraction of each sub - dimension may be explored , ( ii ) the path is at each stage reversible and ( iii ) the non - diffusive `` zipper '' dimensions describing the local folding of secondary structure are perpetually exploring their own configurational space rapidly and cooperatively with the slow dimensions . nonetheless , casting the high - dimensional problem into this form shows that a nave `` reaction co - ordinate '' picture can actually emerge from the concatenation of the sequentially - stabilised hypergutters .an effective 1-dimensional coordinate , arises from such concatenation of the gutter dimensions of a very high dimensional space , whose initial condition ( for a quenching experiment ) will favour the high entropy of the early dimensions : every initial state is completely disordered , and the resulting one - dimensional diffusion equation will be supplemented by the approximate initial condition .if the native state is representes by a sink for diffusers at , it is straightforward to calculate the fraction of unfolded proteins after a quench as: we plot as a solid line in figure 3 . 
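The relaxation curve plotted as the solid line in figure 3 can be sketched numerically. Since the paper's initial condition is not reproduced in this extraction, the snippet below compares the survival (unfolded) fraction for one-dimensional diffusion towards an absorbing native boundary under two assumed initial conditions: a uniform distribution of unfolded states, and all weight at the fully denatured end, which mimics the entropically dominated start of the concatenated gutter path and produces the early-time delay discussed next. The diffusion constant, interval length and mode cut-off are illustrative.

```python
import numpy as np

# Survival fraction S(t) for diffusion on [0, L] with an absorbing boundary at
# x = 0 (native state) and a reflecting boundary at x = L (fully denatured end).
D, L, n_modes = 1.0, 1.0, 2000
n   = np.arange(n_modes)
k   = (2 * n + 1) * np.pi / (2 * L)          # eigenmodes sin(k_n x)
lam = D * k ** 2

def survival_uniform(t):
    """Uniform initial distribution of unfolded configurations."""
    return np.sum(8.0 / ((2 * n + 1) ** 2 * np.pi ** 2) * np.exp(-lam * t))

def survival_far_end(t):
    """All initial weight at x = L (delta function at the denatured end)."""
    return np.sum((-1.0) ** n * 4.0 / ((2 * n + 1) * np.pi) * np.exp(-lam * t))

for t in (0.0, 0.01, 0.05, 0.2, 1.0, 3.0):
    print(f"t = {t:5.2f}   uniform: {survival_uniform(t):.3f}"
          f"   far-end: {survival_far_end(t):.3f}")
```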
for most of its trajectory , this function mimics a single exponential , but with an effective _ delay _ from the moment of quench .this arises from the time it takes for the higher subspaces to be filled - at first the native configuration is `` screened '' by virtue of being buried in a cloud of states of low entropy .the apparent delay would be noticed only in experiments able to capture the very fastest kinetics after a quench .it is very common to represent diagrams of the folding and unfolding rates of proteins plotted against the concentration of denaturant , the so - called `` chevron plot '' . a more challenging experiment is represented by the eyring plot ( with ) .these plots do not typically show the simple linear form characteristic of chemical reactions with a transition state free energy that is itself independent of temperature . in the case of protein folding reactions they are generally negatively curved , and may contain discontinuities of gradient ( oliveberg et al . , 1995 )a common interpretation is to claim that the local gradient of the eyring plot gives the activation enthalpy at that temperature , and therefore that curvature implies a change in the enthalpy of the transition state .possible causes suggested have included melting of the differently - sized hydration shells in the unfolded and transition states .such a curvature in eyring plots is , however , a natural consequence of the hypergutter model .low - dimensional diffusive subspaces of progressively lower entropy are stabilised by increasingly negative non - specific binding energies .on average the energies ( enthalpies ) of the subspaces towards the native state must have large negative values , since the folded native state has such a low entropy compared to the fully denatured state .the enthalpies of the diffusive subspace must attempt to counterbalance their increasingly large ( negative ) entropies . without any further information on the implicit optimisation of these sets of energies and entropies , but with only one overall constraint that the total free energy to fold be a fixed value ( at some temperature , zero at the folding temperature ) , we can define , as in the last section , a free energy pathway as one - dimensional random walk through the diffusive subspaces ( see figure 4 ) .the transition state is the diffusive subspace with the highest free energy . from the figure ,it is clear that this corresponds to the greatest positive excursion of the random free energy trajectory .it is also clear that , at the folding temperature ( figure 4(b ) ) this tends to occur midway through the walk , at the point least controlled by the boundary conditions at the endpoints .well into the folding regime of , however , this maximum excursion is much more likely to be near the unfolded state , .calculations of the mean excursions of a random walk of steps of an average energy - difference and constrained to a total ( negative ) free energy change , can be parameterised by the dimensionless quantity ( see appendix [ randomtsapp ] ) . 
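A Monte Carlo sketch of the construction just described: the free energy along the pathway is modelled as an n-step random walk constrained to end at -|dG| (a bridge with drift), and the "transition state" is the walk's maximum excursion. The step statistics and parameter values are illustrative assumptions; the point is only the qualitative trend, namely that the mean maximum changes height and moves towards the unfolded end as the quench deepens.

```python
import numpy as np

rng = np.random.default_rng(0)

def ts_statistics(n_steps=100, eps=1.0, dG_total=0.0, n_walks=20000):
    """Random walk of free-energy increments (std eps per step) constrained to end
    at -dG_total; returns the mean maximum excursion (transition-state free energy)
    and its mean fractional position along the pathway."""
    steps = rng.normal(0.0, eps, size=(n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)
    # impose the bridge constraint: shift each walk linearly so it ends at -dG_total
    x = np.arange(1, n_steps + 1) / n_steps
    walks += (-dG_total - walks[:, -1:]) * x
    walks = np.concatenate([np.zeros((n_walks, 1)), walks], axis=1)
    return walks.max(axis=1).mean(), walks.argmax(axis=1).mean() / n_steps

for dG in (0.0, 5.0, 20.0, 50.0):          # quench depth |dG| in units of eps
    f_ts, x_ts = ts_statistics(dG_total=dG)
    print(f"|dG| = {dG:5.1f}   <F_TS> = {f_ts:5.2f}   <x_TS> = {x_ts:.2f}")
```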
in terms of ,the expectation value of the transition state free energy is the surprisingly universal form form may be recast as an eyring plot using , together with the assumption of a linear dependence on temperature for the depth of the native state energy near the folding temperature itself , so that , with a dimensionless number .the dimensionless eyring plot for the parameter is given in figure 5 .the curvature of the plot comes from the mean shift of the transition state to more denatured diffusive subspaces along the folding pathway .alternatively , it can be interpreted as a particular prediction for the hammond shift of the transition state for high - dimensional protein folding with only the minimal requirements of overall entropy and enthalpy balance at each entry down the sequence of gutters .qualitatively , the effect is close that seen in several cases , and with a comparable magnitude ( oliveberg et al . , 1995 ) .there are , of course , several _ caveats _ attached to such a general calculation. clearly this crudest form can not pick up specific and discontinuously large shifts of the transition state , which in small proteins will often dominate particular cases ( see , for example , the two regimes of temperature for which continuously - curved eyring plots hold in the case of barnase ( oliveberg et al . , 1995 ) ) . nordoes it anticipate specifically evolved favourable departures from the random imbalances of entropy and energy assumed here , which certainly arise and roughen the landscape , nor does is account for specific behaviour from hydration shells .nonetheless , we can see that the qualitative features of temperature - dependent folding do arise without any special assumptions of these kinds .furthermore , it suggests a rather general structure for the free - energy along a folding pathway , in which successive fluctuations in entropy and energy create a sequence of intermediate states . this type of structure has been investigated theoretically ( wagner and kiefhaber , 1999 ) and evidence forits rather general emergence has arisen experimentally very recently ( pappenberger et al . , 2000 ;sanchez and kiefhaber , 2003 ) . as a final general prediction , and as an example of the specific calculations possible with the model, we examine the important question of -value analysis and its interpretation .when the mutation of a residue gives rise to a value of close to , it means that the change in folding rate arising from the mutation is consistent with comparable changes in the transition state and native state energies .this is usually interpreted to mean that the residue in question enjoys the majority of its native contacts at the transition state ( fersht , 2000 ) .however , this model suggests another physical source of positive values for , since it identifies _ non - native _ interactions as potentially crucial in establishing folding rates . for if a residue contributes _ via _ non - native interactions to the stable hypergutter _ that concludes the dominant ( longest ) diffusive search _ ,then mutations to that residue will affect the folding rate , even though it does not necessarily possess any native contacts at the transition state . in the hypergutter model ,the `` transition state '' is , by definition , the subspace following that which takes the longest time to search - the rate - determining step . 
to make this more precise, we return to the case of the three - helix bundle and calculate the dependence of the total folding rate on the non - native potential that stabilises the 2-dimensional `` gutter '' of the final search . defining a `` fugacity '' , where is the stabilising energy of the 2-d gutter and , the measure of the relative sizes of the two spaces , we expect that as is increased ( by increasing ) we take the system from the the slower 3-dimensional search to the accelerated `` 1 + 2 '' dimensional search . by adding the currents of diffusers that find the native state from the 2 and 3 dimensional spaces separately, we find an approximate cross - over formula for the folding rate ( ignoring weak logarithmic factors): and the slow rate .the expression ( [ rate crossover ] ) also contains , by implication , a prediction of the contribution of the _ non - native _ interactions to the -values of the residues that contribute to it . for, a mutation of any residue will change its contribution to the gutter potential , so is the number of residues that share the burden of providing the non - native gutter potential .we have also assumed in the derivation of ( [ gutter phis ] ) that is also the scale of a _ single _ residue s contribution to the stability of the native state - but other reasonable assumptions will only introduce an order- prefactor .the functional dependence of on the fugacity is actually a rather weakly varying function once the gutter is large enough to produce a reasonable fraction of the maximum acceleration of the folding rate , and is close to the value ( it possesses a broad maximum of this value when , see figure 6 , which shows how both folding rate and depend on the gutter potential ) .this non - native contribution to will naturally be weak in the two limits of vanishing gutter - potential ( when all searches are high - dimensional ) and very high gutter potential( when they are always low - dimensional ) .again , a rather general result emerges that may be compared with rate measurements on selectively - mutated systems .for the three - helix bundles , contributions to the stabilising potential that encourages the terminal helices to diffuse in contact with each other will arise typically from one residue per helical turn , so that ( by counting about two residues per turn on the contact face of a 5-turn helix ) . since the total predicted non - native contribution to or order ( from the dimensionless function of ( [ gutter phis ] ) plotted in figure 6 ) , this means in turn that mutating these residues might generally give non - native contributions to their apparent individual -values of order 0.1 .the inset to figure 6 displays the expected pattern of such enhanced -values against residue index .remarkably , this is precisely what is seen , again in very recent experiments , on the immunity family of helical proteins im7 and im9 : the member of the family with an on - pathway intermediate ( im7 ) also exhibits increased -values in the appropriate region of the helices 1 , 2 and 4 ( helix 3 only forms co - operatively with the native state ) by just this amount , relative to the protein without the intermediate , im9 ( friel et al . , 2003 ) . 
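The paper's crossover expression and the slow rate are not reproduced in this extraction, so the sketch below uses an assumed toy interpolation, k(E) = k_slow + (k_fast - k_slow) z/(1 + z) with fugacity z = g exp(E/kBT) and g a measure of the relative statistical weight of the gutter subspace, purely to illustrate how the logarithmic sensitivity of the folding rate to the non-native gutter energy (the quantity feeding the non-native phi contributions discussed here) rises to a broad maximum and vanishes in both the weak- and strong-gutter limits. The rate ratio and the weight g are arbitrary illustrative numbers.

```python
import numpy as np

# Toy crossover between the slow 3-d angular search and the fast "1+2"-d search
# as the non-native gutter energy E grows; not the paper's (unreproduced) formula.
k_slow, k_fast = 1.0, 10.0      # relative search rates (arbitrary units)
g = 0.1                         # assumed relative statistical weight of the gutter subspace

def k_fold(beta_E):
    z = g * np.exp(beta_E)                      # "fugacity" of the gutter subspace
    return k_slow + (k_fast - k_slow) * z / (1.0 + z)

beta_E = np.linspace(0.0, 8.0, 81)
sens   = np.gradient(np.log(k_fold(beta_E)), beta_E)   # d ln k / d(beta*E)

i = np.argmax(sens)
print(f"peak sensitivity d(ln k)/d(beta*E) ~ {sens[i]:.2f} at beta*E ~ {beta_E[i]:.1f}")
# In the text this total sensitivity is shared among the ~n residues that build the
# non-native gutter, giving small individual non-native phi contributions.
```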
the magnitude of incremental contributions to from the gutter potentialsis restricted to these low values only in very simple topologies such as the three - helix bundle .when key stabilising interactions succeed in reducing the dimensionality of the search space more drastically , much higher values can result ( from differentiation of the higher - dimensional analogues of ( [ rate crossover ] ) ) .in more complex spaces of mutual diffusion of helices and -turns , values greater than are not unexpected .this approach suggests a natural interpretation of -values greater than one , such as recorded in acylphosphatase ( chiti et al . , 1999 ) , but which do not bear an interpretation in terms of native structure ( fersht , 1999 ) .the interpretation of non - classical -values outlined here is closely - related to a recent suggestion arising from some simple lattice go - type simulations ( ozkan et al . , 2001 ) .the simulation also found that kinetic properties are more closely connected with than local `` degree of nativeness '' .it shares with the present treatment the essential departure from a one - dimensional projection of a transition state , and an identification of the number of permissable pathways , or transition entropy , in controlling the rate of folding .the model of high - dimensional diffusive hypergutters is not incompatible with the frameworks or results of the other models discussed in the introduction , but rather serves to show how the apparently alternative models are related .each emerges from the high - dimensional hypergutter picture when a _different _ projection into a low - dimensional space is applied .when the flat , diffusional , freedom are projected away onto a reaction co - ordinate pair such as and , then a folding funnel appears , and does so without the presence of any long - range forces .the difference is that , on close examination , the funnel is discrete , or terraced , rather than continuous .furthermore , it appears when the interactions generating coil - collapse are projected along an ordinate of sequential sub - spaces , rather than along a spatial co - ordinate .but when there are many sequential subspaces , an apparently continuous folding funnel appears with all of the features of intermediate states , multiple pathways _ etc ._ ascribed to it ( brygelson et al . , 1995 ; dinner et al . , 2000 ) arising in a natural way .another example of this projection is found in the master - equation approach ( zwanzig , 1995 ) , in which the smoothly funneled high dimensional energy landscape implies some shaping of the non - native contacts of the underlying model .when the projection is _ orthogonal _ to one of the later diffusional subspaces , on the other hand , then the same system will appear to map onto a diffusion - collision model . in this casethe projection concentrates on the diffusive degrees of freedom one by one , rather than projecting them away into a funnel . 
the advantage of the hypergutter approach , however , is that it identifies diffusive subspaces in cases where the standard diffusion - collision model does not .the case of diffusion in mutual angular space of helical bundles discussed above is an example , since this occurs within a globule , rather than in the collisional formation of a globule .it also recognises intermediate cases in which diffusional searches occur simultaneously in high and low - dimensional spaces , such as a partially - stabilised 2-d gutter in the three - helix case , and provides a structure for introducing tailored , rather than indiscriminate , non - native interactions .the interesting and unexpected prediction of non - native and positive -values emerges in just this case . of particular interestis the relationship of the hypergutter picture to the topomer search model .this is because the rate - determining diffusive searches will in general be completed only when a topological , as well as a spatial , constraint in the final native state is satisfied for the first time .it is also to be expected since this model also seeks to understand folding rates without recourse to their dependence on native interactions .again , the three helix bundle serves as a model example- restriction from the 3-d helical angular space to the faster 2-d space with helices 1 and 3 in contact occurs when the topological orientation of the helices is satisfied .similar conclusions would emerge if the slow - searches were between a helix and a -sheet , or between two or more -turns .retaining all the diffusional degrees of freedom leads to a close relationship between the contact order of the topomer search model , and the number of diffusional dimensions of the space of hypergutters in the rate - determining diffusive search . for we can identify the exponent in the rate expression in the topomer search model ( makarov and plaxco , 2003) that in the diffusive - search result ( [ search time ] ) above to find , formally at the level of the exponent that where here is the dimension of the largest diffusive search .we note , however , that departures from the correlation of folding time with contact order might be expected when non - native interactions are tuned to speed up folding in the way we have outlined .this is because such a strategy can reduce the effective dimension of the search without changing the topology of the final state .strong outlying behaviour in the correlation of folding time with contact order may be connected with the potential variability in the efficiency of hypergutter stability portrayed in figure 6 .we have discussed a conceptual approach to the protein landscape problem that attempts to remain faithful to the high dimensionality of the system . rather than invoking a continuous energy landscape with long - range forces giving rise to a funneled landscape , we use rather general considerations to point to a high dimensional structure of `` hypergutters '' .these structures describe the search for the native state as a sequence of relatively low - dimensional diffusive subspaces . only spatially - local interactions are required to direct the folding towards the native state in reasonable times . 
as a by - product, this procedure also draws together into a single picture the apparently divergent views of the folding funnel , collision - diffusion , nucleation - condensation and topomer search models .the rate - determining `` gutter '' dimensions lie orthogonal to other `` zipper '' dimensions describing the local formation of secondary structure , that _ are _ characterised by a continuous folding funnel. looked at another way , our structure is a more detailed examination of the sort of dynamic processes that must be occurring within the `` molten globule '' phase of protein folding .the formation of the globule itself from the denatured state corresponds in this picture to the first hypergutter in a series .it is clearer in experiment than the subsequent dimensional reductions because it is the only one that makes significant changes to the radius of gyration of the protein .we might note that such a pattern of non - specific binding in diffusive searches is a common motif in biology , appearing for example in the search of dna - binding repressors for their operons ( winter et al . , 1981 ) . in this case, the slow search for a specific binding site in is substituted for a much more rapid diffusive search in ( along the dna ) by non - specific binding of the repressor proteins . in this process too , there is strong evidence that the non - specific interactions are themselves subtly coded to further speed the search for the binding target .several general predictions follow .the first is that special tuning of non - native interactions may contribute significantly to rapid folding ; they stabilise the hypergutter - potentials that keep diffusive search dimensions under control . in the case of helical proteins ,candidates for the structure of the gutters are non - specific contacts of the helices , and the angular , rather than translational , degrees of freedom describing their mutual configurations . in proteins with more complex structures ,other candidates suggest themselves , such as the orientation of helices with respect to -sheets with which they are in contact in proteins , and the relative orientations of -turns and partially - folded -sheets in all- elements .very recently , the role of non - native interactions in stabilising an on - pathway intermediate , together with a diffusion - collision kinetic route , has been experimentally verified in the case of the immunity protein im7 ( capaldi et al . , 2002 ) .this precisely exemplifies the general mechanism we have suggested , with the additional feature that one of the hypergutters has become so stabilised ( and therefore so populated ) that it attracts the label of `` intermediate state '' .other examples of non - natively stabilised folding pathways are emerging ( robinson and king 1997 ) . perhaps the most remarkable example is the determination by fast kinetic experiments that -lactoglobulin employs an transient helical motif that is entirely non - native ( park et al . , 1999 ) . by stabilising a -turn that otherwise relies on highly non - local , and late - forming , structure for its stability, this temporary helix reduces the dimension of the search space for the non - local contacts . of course , it is not impossible to achieve the dimensional reduction we have outlined by using fully native , rather than non - native interactions .such proteins would present a highly bimodal distribution of -values , clustering closely to and .a candidate would be acylphosphatase ( paci et al . 
, 2002 ) in which the `` transition state '' strongly constraints the environment of just three residues and the immediate neighbours . in the hypergutter picture ,the nine - dimensional search space of these critical regions separately is reduced to three sequential three - dimensional searches by the long lifetimes of the native regions when the remainder of the protein is disordered .one possible advantage of using broadly distributed non - native interactions , rather than a few local native ones , to stabilise hypergutters , is that the intermediate states are thereby more tightly confined .this is turn may assist in suppressing the pathway to aggregated states , or amyloid formation ( dobson , 2002 ) .this suggestion has recently been made from observations of competing folding pathways in a -sandwich protein as both the sensitivity and time - resolution of kinetic experiments increases , finer details of the intermediate diffusional subspaces in this and other proteins should become equally transparent .another recent , theoretical , contribution has pointed out that fine - structure of a few within the transition state on a reaction pathway can accelerate folding ( wagner and kiefhaber , 1999 ) . as an example of the type of fine - structure predicted, the model contains a natural explanation of the generic curvature seen in eyring plots of the temperature dependence of folding rates .the pseudo - random differences in the enthalpy and entropy of the transitions from one diffusive subspace to the next result in the transition state energy becoming typically more and more negative as the quench depth increases .this hammond - like behaviour will contribute at least in part to the experimental signals , and suggest the gathering of a wider data sets of this type .very recent findings suggest that a structure very similar to this predicted one is indeed rather general ( sanchez and kiefhaber , 2003 ) .features of the time - dependent folding curves as functions of temperature or denaturant also follow from the model , including the possibility of an apparent delay before single exponential kinetics set in .it is also possible that `` kinetic traps '' arise not just from low - energy intermediate states , but from intermediate diffusive subspaces of higher dimension than 2 , for which the control of dimensionality has been incomplete .this is significant for the topomer search model : we might expect to find departures from the folding time / contact order correlation when , in spite of sharing the same topology of fold , one protein in a pair has an important diffusional subspace stabilised while the other does not .alternatively , our picture suggests ways of increasing folding times greatly by selective mutations that retain topology and stability of the native state , but destabilise one or more of the on - pathway diffusional subspaces so that intermediate searches are required in .the model provides an alternative interpretation of the results of protein engineering analysis , and suggests that not all contributions to measured -values at the transition state arise from native - like interactions .it suggests interpretations for -values of order 0.1 - 0.2 , but also indicates that contributions to larger values ( including the non - classical range ) may arise from non - native interactions with that residue that serve to restrict the folding space .more detailed predictions of non - native contributions to for the family of bacterial immunity proteins and their mutants are in 
accord with very recent experiments .finally it suggests that the `` kinetic code '' that informs the search for the native state may be found in evolved selection of some of the non - native interactions .the large number of these , of order times greater than the number of native interactions , make them a likely candidate for information storage , as well as their natural propensity for kinetic control .the framework and specific examples discussed here suggest useful coarse - grained models of other families of proteins that may be simulated very efficiently , or even approached analytically , as we have done with the 3-helix bundles .strong experimental evidence is currently emerging that supports all of the main predictions of the approach ; other experimental tests of the more surprising conclusions are awaited .we consider the free - energy as a directed one - dimensional walk in energy space of steps each of mean ( free ) energy .we first consider the case of return to the origin , which corresponds to the folding temperature at which . to find the typical excursion of the walk we may map the problem onto a one - dimensional polymer ( so that the free - energy walk itself has a free energy ) . at a coordinate ,corresponding to the diffusional subspace of the total , this free energy of the walk is composed of the segment before and the segment after .if is the free energy at coordinate , the free energy of the whole walk , integrating over all other possible energies for the individual subspaces is the sum of the two `` entropic elastic '' contributions from these pieces: the probability for energy excursions is proportional to , the distribution of at each point is gaussian , with mean peaks , of course , at .now taking the directed random walk , the expected maximum value of the free energy at is just the random excursion calculated above superimposed on a steady drift towards the native state at the native free energy .taking out constants with dimensions we write: the parameter is a measure of the quench depth .it is a simple matter to maximise this function over , and then to find the maximum value .the result is equation [ tsmax ] .chiti , f. , n. taddei , p.m. white , m. bucciantini , f. magherini , m. stefani , and c.m .mutational analysis of acylphosphatase suggests the importance of topology and contact order in protein folding .nature struct .6:1005 - 1009 ferscht , a. r. , transition state structure as a unifying basis in protein folding mechanisms : contact order , chain topology , stability and the extended nucleus mechanism . 2000 .usa 97:1525 - 1529 friel , c.t .capaldi , and s.e . radford .2003 . structural analysis of the rate - limiting transition states in the folding of im7 and im9 : similarities and differences in the folding of homologous proteins .326:293 - 305 kuwata , k. , m.c.r .sashtry , h. cheng , m. hoshino , c.a .batt , y. goto , and h. roder .structural and early kinetic characterisation of early folding events in -lactoglobulin , nature struct .8:151 - 155 pappenberger , g. , c. saudan , m. becker , a.e .merbach , and t. kiefhaber . 2000 .denaturant - induced movement of the transition state of protein folding revealed by high - pressure stopped - flow measurements .usa 97:17 - 22 ( 2000 ) winter , r.b . ,berg , and p.h .von hippel .diffusion - driven mechanisms of protein translocation on nucleic acids .the e. 
coli lac repressor - operator interaction : kinetic measurements and conclusions . biochemistry , 20:6961 - 6977 i would like to thank sheena radford , kevin plaxco and joan shea for helpful discussions and comments on the manuscript , and the kavli institute for theoretical physics for the hospitality of its 2002 programme `` dynamics of complex and macromolecular fluids '' , where the bulk of this work was done ( preprint number nsf - itp-02 - 55 ) . part of the -dimensional folding space containing a diffusive hypergutter projected onto dimensions . the diffusing particle ( representing the random search of the protein through its available conformations ) does not have to search simultaneously through both the dimensions of the figure . instead , it exploits the lower energy state of the entire diffusive subspace of the subspace to reach it _ via _ a one - dimensional diffusion in the dimension . the 3-helix bundle ( bdpa on the left ) is coarse - grained to a system of three rods . the three angles constituting the diffusive subspaces are labelled for . the folding space then looks like the periodic cubic lattice on the right ( only the direction is shown periodic , for clarity ) . the attractive gutter is the 2-d space spanned by and once -diffusion has brought the third helix into contact with the other two . but for small angles is a large topological barrier between the `` correct '' and `` incorrect '' sides of attachment of the third helix onto the bundle formed by the other two , and identical with the rapid diffusional subspace of and . log - linear plot of three relaxation functions . dashed is the single exponential . dotted is the decay to the native state for 1-dimensional diffusion with a uniform distribution of initial states . solid curve is the decay of an effective 1-dimensional folding path created from a high - dimensional landscape with flat free energy . the one - dimensional folding free - energy in terms of sequential diffusive subspaces . ( a ) ; the landscape is a directed random walk - the maximum excursion ( transition state ) lies towards the start of the walk . ( b ) ; the walk returns to the origin of free energy - the maximum lies near the middle of the trajectory . dimensionless eyring plot of the universal form in the version of the gutter - landscape model in which only the energies of native and denatured states are specified ( to convert to numbers on an experimental arrhenius plot , the figures on the ordinate should be multiplied by the square root of the number of diffusive dimensions , or contact order , of the protein ) . the inset shows experimental results on the protein ci2 , from . predictions of the folding rate ( solid line ) relative to the rate of the optimal 1 + 2 dimensional search path , and sum of non - natively generated -values from residues contributing to a 2-d diffusive hypergutter ( 3 helix bundle ) ( dashed line ) . the ordinate is the `` fugacity '' measure of the attractive potential . the value assumed for the spatial reduction , is . the inset contains the expected magnitude of increase in -values with residue index ( dashed lines ) in a 3-helix protein with a 2-d kinetic intermediate gutter ( im7 ) , relative to one without ( im9 ) . we expect the modifications to be concentrated onto helices 1 and 4 , whose mutual contacts stabilise the 2d search space .
we explore the consequences of very high dimensionality in the dynamical landscape of protein folding . consideration of both the typical range of stabilising interactions and folding rates themselves leads to a model of the energy hypersurface that is characterised by the structure of diffusive `` hypergutters '' as well as the familiar `` funnels '' . several general predictions result : ( 1 ) intermediate subspaces of configurations will always be visited ; ( 2 ) specific but _ non - native _ interactions are important in stabilising these low - dimensional diffusive searches on the folding pathway ; ( 3 ) sequential barriers will commonly be found , even in `` two - state '' proteins ; ( 4 ) very early times will show characteristic departures from single - exponential kinetics ; ( 5 ) contributions of non - native interactions to -values are calculable , and may be significant . the example of a three - helix bundle is treated in more detail as an illustration . the model also shows that high - dimensional structures provide conceptual relations between the `` folding funnel '' , `` diffusion - collision '' , `` nucleation - condensation '' and `` topomer search '' models of protein folding . it suggests that kinetic strategies for fast folding may be encoded rather generally in non - native , rather than native , interactions . the predictions are related to very recent findings in experiment and simulation .
matlab is the dominant interpreted programming language for implementing numerical computations and is widely used for algorithm development , simulation , data reduction , testing and system evaluation . the popularity of matlab is driven by the high productivity that users achieve because one line of matlab code can typically replace ten lines of c or fortran code . many matlab programs can benefit from faster execution on a parallel computer , but achieving this goal has been a significant challenge . there have been many previous attempts to provide an efficient mechanism for running matlab programs on parallel computers . these efforts have faced numerous challenges , such as limited support of matlab functions and data structures , or reliance on machine - specific 3rd - party parallel libraries and language extensions . in the world of parallel computing , the message passing interface ( mpi ) is the de facto standard for implementing programs on multiple processors . mpi defines c and fortran language functions for doing point - to - point communication in a parallel program . mpi has proven to be an effective model for implementing parallel programs and is used by many of the world's most demanding applications ( weather modeling , weapons simulation , aircraft design , and signal processing simulation ) . matlabmpi consists of a set of matlab scripts that implements a subset of mpi and allows any matlab program to be run on a parallel computer . the key innovation of matlabmpi is that it implements the widely used mpi `` look and feel '' on top of standard matlab file i / o , resulting in a `` pure '' matlab implementation that is exceedingly small ( lines of code ) . thus , matlabmpi will run on any combination of computers that matlab supports . the next section describes the implementation and functionality provided by matlabmpi . section three presents results on the bandwidth performance of the library from several parallel computers . section four uses an image processing application to show the scaling performance that can be achieved using matlabmpi . section five presents our conclusions and plans for future work . the central innovation of matlabmpi is its simplicity . matlabmpi exploits matlab's built - in file i / o capabilities , which allow any matlab variable ( matrices , arrays , structures , ... ) to be written and read by matlab running on any machine , and eliminates the need to write complicated buffer packing and unpacking routines , which would require ,000 lines of code to implement . the approach used in matlabmpi is illustrated in figure [ fig : filebasedcomm ] . the sender saves a matlab variable to a data file and , when the file is complete , the sender touches a lock file . the receiver continuously checks for the existence of the lock file ; when it exists , the receiver reads in the data file and then deletes both the data file and the lock file . an example of a basic send and receive matlab program is shown below :
....
mpi_init ;                              % initialize mpi .
comm = mpi_comm_world ;                 % create communicator .
comm_size = mpi_comm_size(comm) ;       % get size .
my_rank = mpi_comm_rank(comm) ;         % get rank .
source = 0 ;                            % set source .
dest = 1 ;                              % set destination .
tag = 1 ;                               % set message tag .
if(comm_size == 2)                      % check size .
  if (my_rank == source)                % if source .
    data = 1:10 ;                       % create data .
    mpi_send(dest , tag , comm , data) ;      % send data .
  end
  if (my_rank == dest)                  % if destination .
    data = mpi_recv(source , tag , comm) ;    % receive data .
  end
end
mpi_finalize ;                          % finalize matlab mpi .
exit ;                                  % exit matlab .
....
the structure of the above program is very similar to mpi programs written in c or fortran , but with the convenience of a high - level language . the first part of the program sets up the mpi world ; the second part differentiates the program according to rank and executes the communication ; the third part closes down the mpi world and exits matlab . if the above program were written in a matlab file called sendreceive.m , it would be executed by calling the following command from the matlab prompt :
....
eval ( mpi_run('sendreceive',2,machines) ) ;
....
where the machines argument can be any of the following forms :
machines = { } ;   : run on the local host .
machines = { machine1 machine2 } ;   : run on multiple processors .
machines = { machine1:dir1 machine2:dir2 } ;   : run on multiple processors and communicate using dir1 and dir2 , which must be visible to both machines .
the mpi_run command launches a matlab script on the specified machines with output redirected to a file .
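stepping back to the file - based mechanism of figure [ fig : filebasedcomm ] , the handshake described above ( save a data file , touch a lock file , poll for the lock file , load , delete ) can be summarised in the following minimal matlab sketch . this is an illustrative reconstruction only : the function names , the file - naming scheme and the polling interval are assumptions made here for clarity , and the real library additionally encodes communicator and directory information in its file names .
....
% sketch of the send side : write the variable , then signal completion .
function sketch_send(src, dest, tag, data)
  base = ['from' num2str(src) '_to' num2str(dest) '_tag' num2str(tag)];  % assumed naming
  save([base '_buffer.mat'], 'data');        % matlab file i/o packs any variable type
  fclose(fopen([base '_lock.txt'], 'w'));    % "touch" the lock file once the data file is complete
end

% sketch of the receive side : poll for the lock file , then load and clean up .
function data = sketch_recv(src, dest, tag)
  base = ['from' num2str(src) '_to' num2str(dest) '_tag' num2str(tag)];
  while ~exist([base '_lock.txt'], 'file')   % spin until the sender signals completion
    pause(0.01);                             % assumed polling interval
  end
  tmp  = load([base '_buffer.mat']);         % read the variable back in
  data = tmp.data;
  delete([base '_buffer.mat']);              % delete both the data file
  delete([base '_lock.txt']);                % and the lock file
end
....
in practice each function would live in its own m - file ; unlike mpi_send and mpi_recv above , the sketch passes both ranks explicitly instead of a communicator , purely to keep the file naming self - contained .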
returning to mpi_run : if the rank=0 process is being run on the local machine , mpi_run returns a string containing the commands to initialize matlabmpi , which allows matlabmpi to be invoked deep inside of a matlab program in a manner similar to the fork - join model employed in openmp . the sendreceive example illustrates the basic six mpi functions ( plus mpi_run ) that have been implemented in matlabmpi :
mpi_run : runs a matlab script in parallel .
mpi_init : initializes mpi .
mpi_comm_size : gets the number of processors in a communicator .
mpi_comm_rank : gets the rank of the current processor within a communicator .
mpi_send : sends a message to a processor ( non - blocking ) .
mpi_recv : receives a message from a processor ( blocking ) .
mpi_finalize : finalizes mpi .
for convenience , three additional mpi functions have also been implemented , along with two utility functions that provide important matlabmpi functionality but are outside the mpi specification :
mpi_abort : function to kill all matlab jobs started by matlabmpi .
mpi_bcast : broadcast a message ( blocking ) .
mpi_probe : returns a list of all incoming messages .
matmpi_save_messages : matlabmpi function to prevent messages from being deleted ( useful for debugging ) .
matmpi_delete_all : matlabmpi function to delete all files created by matlabmpi .
matlabmpi handles errors the same way as matlab ; however , running hundreds of copies does bring up some additional issues . if an error is encountered and the matlab script has an `` exit '' statement , then all the matlab processes will die gracefully . if a matlab job is waiting for a message that never arrives , then it needs to be killed with the mpi_abort command . in this situation , matlabmpi can leave files which need to be cleaned up with the matmpi_delete_all command . on shared memory systems , matlabmpi only requires a single matlab license , since any user is allowed to launch many matlab sessions . on a distributed memory system , matlabmpi requires one matlab license per machine . because matlabmpi uses file i / o for communication , there must also be a directory that is visible to every machine ( this is usually also required in order to install matlab ) . this directory defaults to the directory that the program is launched from . the vast majority of potential matlab applications are `` embarrassingly '' parallel and require minimal performance out of matlabmpi . these applications exploit coarse - grain parallelism and communicate rarely ( if at all ) . nevertheless , measuring the communication performance is useful for determining which applications are most suitable for matlabmpi . matlabmpi has been run on several unix platforms . it has been benchmarked and compared to the performance of c mpi on the sgi origin2000 at boston university . these results indicate that for large messages ( mbyte ) matlabmpi is able to match the performance of c mpi ( see figure [ fig : bu_o2000_bandwidth ] ) .
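the benchmark code itself is not reproduced in the text ; the fragment below is a hedged sketch of the kind of two - processor echo test that produces such bandwidth curves . the message sizes , the use of tic / toc for timing and the bandwidth formula are assumptions made here for illustration .
....
mpi_init;                              % initialize mpi .
comm = mpi_comm_world;                 % create communicator .
my_rank = mpi_comm_rank(comm);         % get rank ( the sketch assumes exactly two ranks , 0 and 1 ) .
other = 1 - my_rank;                   % the partner process .
msg_bytes = 2.^(10:2:24);              % assumed range of message sizes in bytes .
for k = 1:length(msg_bytes)
  data = zeros(1, msg_bytes(k)/8);     % 8-byte doubles .
  tag = k;                             % a fresh tag for each message size .
  tic;
  if (my_rank == 0)                    % rank 0 sends first , then waits for the echo .
    mpi_send(other, tag, comm, data);
    data = mpi_recv(other, tag, comm);
  else                                 % rank 1 echoes the message back .
    data = mpi_recv(other, tag, comm);
    mpi_send(other, tag, comm, data);
  end
  t = toc;                             % round - trip time .
  mbyte_per_sec = 2*msg_bytes(k)/t/1e6;  % the message crosses the link twice per round trip .
  disp(['bytes: ' num2str(msg_bytes(k)) '  mbyte/sec: ' num2str(mbyte_per_sec)]);
end
mpi_finalize;                          % finalize matlab mpi .
exit;
....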
for smaller messages , matlabmpi is dominated by its latency ( milliseconds ) , which is significantly larger than that of c mpi . these results have been reproduced using an sgi origin2000 and a hewlett packard workstation cluster ( connected with 100 mbit ethernet ) at ohio state university ( see figure [ fig : osu_hp_o2000_bandwidth ] ) . the above bandwidth results were all obtained using two processors engaging in bi - directional sends and receives . such a test does a good job of testing the individual links on a multi - processor system . to test the interconnect more broadly , the send / receive benchmark was run on an eight node ( 16 cpu ) linux cluster connected with gigabit ethernet ( see figure [ fig : linux_bandwidth ] ) . these results are shown for one and 16 cpus . matlabmpi is able to maintain high bandwidth even when multiple processors are communicating , by allowing each processor to have its own receive directory . by cross - mounting all the disks in a cluster , each node only sees the traffic directed to it , which allows communication contention to be kept to a minimum . to further test matlabmpi , a simple image filtering application was written . this application abstracts the key computations that are used in many of our dod sensor processing applications ( e.g. wide area synthetic aperture radar ) . the application executed repeated 2d convolutions on a large image ( 1024 x 128,000 gbytes ) . this application was run on a large shared memory parallel computer ( the sgi origin2000 at boston university ) and achieved speedups greater than 64 on 64 processors , showing the classic super - linear speedup ( due to better cache usage ) that comes from breaking a very large memory problem into many smaller problems ( see figure [ fig : bu_o2000_speedup ] ) . to further test the scalability , the image processing application was run with a constant load per processor ( 1024 x 1024 image per processor ) on a large shared / distributed memory system ( the ibm sp2 at the maui high performance computing center ) . in this test , the application achieved a speedup of on 304 cpus , as well as achieving % of the theoretical peak ( 450 gigaflops ) of the system ( see figure [ fig : mhpcc_ibmsp2_speedup ] ) . the ultimate goal of running matlab on parallel computers is to increase programmer productivity and decrease the large software cost of using hpc systems . figure [ fig : prod_vs_perf ] plots the software cost ( measured in software lines of code or slocs ) as a function of the maximum achieved performance ( measured in units of single processor peak ) for the same image filtering application implemented using several different libraries and languages ( vsipl , mpi , openmp , using c++ , c , and matlab ) . these data show that higher - level languages require fewer lines to implement the same level of functionality . obtaining increased peak performance ( i.e. exploiting more parallelism ) requires more lines of code . matlabmpi is unique in that it achieves a high peak performance using a small number of lines of code . the use of file i / o as a parallel communication mechanism is not new and is now increasingly feasible with the availability of low cost , high speed disks . the extreme examples of this approach are the now popular storage area networks ( san ) , which combine high speed routers and disks to provide server solutions . although using file i / o increases the latency of messages , it normally will not affect the bandwidth . furthermore , the use of file i / o has several additional functional advantages which make it easy to implement large buffer sizes , recordable messages , multi - casting , and one - sided messaging . finally , the matlabmpi approach is readily applied to any language ( e.g. idl , python , and perl ) . matlabmpi demonstrates that the standard approach to writing parallel programs in c and fortran ( i.e. , using mpi ) is also valid in matlab . in addition , by using matlab file i / o , it was possible to implement matlabmpi entirely within the matlab environment , making it instantly portable to all computers that matlab runs on . most potential parallel matlab applications are trivially parallel and do not require high performance . nevertheless , matlabmpi can match c mpi performance on large messages . the simplicity and performance of matlabmpi make it a very reasonable choice for programmers who want to speed up their matlab code on a parallel computer . matlabmpi provides the highest productivity parallel computing environment available . however , because it is a point - to - point messaging library , a significant amount of code must be added to any application in order to do basic parallel operations . in the test application presented here , the number of lines of matlab code increased from 35 to 70 . while a 70 - line parallel program is extremely small , it represents a significant increase over the single processor case . our future work will aim at creating higher - level objects ( e.g. , distributed matrices ) that will eliminate this parallel coding overhead ( see figure [ fig : layeredarch ] ) . the resulting `` parallel matlab toolbox '' will be built on top of the matlabmpi communication layer , and will allow a user to achieve good parallel performance without increasing the number of lines of code . the authors would like to thank the sponsorship of the dod high performance computing modernization office , and the staff at the supercomputing centers at boston university , ohio state university and the maui high performance computing center .
....
mpi_init ; % initialize mpi .comm = mpi_comm_world ; % create communicator .comm_size = mpi_comm_size(comm ) ; % get size .my_rank = mpi_comm_rank(comm ) ; % get rank .% do a synchronized start .starter_rank = 0 ; delay = 30 ; % seconds synch_start(comm , starter_rank , delay ) ; n_image_x = 2.^(10 + 1)*comm_size ; % set image size ( use powers of 2 ) .n_image_y = 2.^10 ; n_point = 100 ; % number of points to put in each sub - image .% set filter size ( use powers of 2 ) .n_filter_x = 2.^5 ; n_filter_y = 2.^5 ; n_trial = 2 ; % set the number of times to filter .% computer number of operations .total_ops = 2.*n_trial*n_filter_x*n_filter_y*n_image_x*n_image_y ; if(rem(n_image_x , comm_size ) ~= 0 ) disp('error : processors need to evenly divide image ' ) ; exit ; end disp(['my_rank : ' , num2str(my_rank ) ] ) ; % print rank .left = my_rank - 1 ; % set who is source and who is destination .if ( left < 0 ) left = comm_size - 1 ; end right = my_rank + 1 ; if ( right > = comm_size ) right = 0 ; end tag = 1 ; % create a unique tag i d for this message .start_time = zeros(n_trial ) ; % create timing matrices .end_time = start_time ; zero_clock = clock ; % get a zero clock .n_sub_image_x = n_image_x./comm_size ; % compute sub_images for each processor .n_sub_image_y = n_image_y ; % create starting image and working images ..sub_image0 = rand(n_sub_image_x , n_sub_image_y).^10 ; sub_image = sub_image0 ; work_image = zeros(n_sub_image_x+n_filter_x , n_sub_image_y+n_filter_y ) ; % create kernel . x_shape = sin(pi.*(0:(n_filter_x-1))./(n_filter_x-1)).^2 ; y_shape = sin(pi.*(0:(n_filter_y-1))./(n_filter_y-1)).^2 ; kernel = x_shape . ' * y_shape ; % create box indices .lboxw = [ 1,n_filter_x/2,1,n_sub_image_y ] ; cboxw = [ n_filter_x/2 + 1,n_filter_x/2+n_sub_image_x,1,n_sub_image_y ] ; rboxw = [ n_filter_x/2+n_sub_image_x+1,n_sub_image_x+n_filter_x,1,n_sub_image_y ] ; lboxi = [ 1,n_filter_x/2,1,n_sub_image_y ] ; rboxi = [ n_sub_image_x - n_filter_x/2 + 1,n_sub_image_x,1,n_sub_image_y ] ; start_time = etime(clock , zero_clock ) ; % set start time .% loop over each trial . for i_trial = 1:n_trial % copy center sub_image into work_image .work_image(cboxw(1):cboxw(2),cboxw(3):cboxw(4 ) ) = sub_image ; if ( comm_size > 1 ) ltag = 2.*i_trial ; % create message tag .rtag = 2.*i_trial+1 ; % send left sub - image .l_sub_image = sub_image(lboxi(1):lboxi(2),lboxi(3):lboxi(4 ) ) ; mpi_send ( left , ltag , comm , l_sub_image ) ; % receive right padding .r_pad = mpi_recv ( right , ltag , comm ) ; work_image(rboxw(1):rboxw(2),rboxw(3):rboxw(4 ) ) = r_pad ; % send right sub - image .r_sub_image = sub_image(rboxi(1):rboxi(2),rboxi(3):rboxi(4 ) ) ; mpi_send ( right , rtag , comm , r_sub_image ) ; % receive left padding .l_pad = mpi_recv ( left , rtag , comm ) ; work_image(lboxw(1):lboxw(2),lboxw(3):lboxw(4 ) ) = l_pad ; end work_image = conv2(work_image , kernel,'same ' ) ; % compute convolution .% extract sub_image .sub_image = work_image(cboxw(1):cboxw(2),cboxw(3):cboxw(4 ) ) ; end end_time = etime(clock , zero_clock ) ; % get end time for the this message .total_time = end_time - start_time % print the results .% print compute performance .gigaflops = total_ops / total_time / 1.e9 ; disp(['gigaflops : ' , num2str(gigaflops ) ] ) ; mpi_finalize ; % finalize matlab mpi .exit ; .... 99 message passing interface ( mpi ) , http://www.mpi-forum.org/ matlab*p , a. edelman , mit , http://www-math.mit.edu// a parallel linear algebra server for matlab - like environments , g. 
morrow and robert van de geijn , 1998 , supercomputing 98 http://www.supercomp.org/sc98/techpapers/sc98_fullabstracts/morrow779/index.htm automatic array alignment in parallel matlab scripts , i. milosavljevic and m. jabri , 1998 parallel matlab development for high performance computing with rtexpress , http://www.rtexpress.com/ matlab parallel example , kadin tseng , http://scv.bu.edu/scv/origin2000/matlab/matlabexample.shtml multimatlab : matlab on multiple processors a. trefethen et al , paramat , http://www.alphadata.co.uk/dsheet/paramat.html investigation of the parallelization of aew simulations written in matlab , don fabozzi 1999 , hpec99 matpar : parallel extensions to matlab , http://hpc.jpl.nasa.gov/ps/matpar/ mpi toolbox for matlab ( mpitb ) , http://atc.ugr.es/javier-bin/mpitb_eng a matlab compiler for parallel computers . m. quinn , http://www.cs.orst.edu//matlab.html cornell multitask toolbox for matlab ( cmtm ) , http://gremlin.tc.cornell.edu/er/media/2000/cmtm.html j. kepner , a multi - threaded fast convolver for dynamically parallel image filtering , 2002 , accepted journal of parallel and distributed computing
the true costs of high performance computing are currently dominated by software . addressing these costs requires shifting to high productivity languages such as matlab . matlabmpi is a matlab implementation of the message passing interface ( mpi ) standard and allows any matlab program to exploit multiple processors . matlabmpi currently implements the basic six functions that are the core of the mpi point - to - point communications standard . the key technical innovation of matlabmpi is that it implements the widely used mpi `` look and feel '' on top of standard matlab file i / o , resulting in an extremely compact ( lines of code ) and `` pure '' implementation which runs anywhere matlab runs , and on any heterogeneous combination of computers . the performance has been tested on both shared and distributed memory parallel computers ( e.g. sun , sgi , hp , ibm , linux and macosx ) . matlabmpi can match the bandwidth of c based mpi at large message sizes . a test image filtering application using matlabmpi achieved a speedup of using 304 cpus and % of the theoretical peak ( 450 gigaflops ) on an ibm sp2 at the maui high performance computing center . in addition , this entire parallel benchmark application was implemented in 70 software - lines - of - code , illustrating the high productivity of this approach . matlabmpi is available for download on the web ( www.ll.mit.edu/matlabmpi ) .
the method of science calls for the understanding of selected aspects of behaviour of a considered system , given available measurements and other relevant information .the measurements may be of the variable ( ) while the parameters that define the selected system behaviour may be , ( ) or the selected system behaviour can itself be an unknown and sought function of the known input variable vector ( ) , so that . in either case , we relate the measurements with the model of the system behaviour as in the equation or where the function is unknown .alternatively , in either case the scientist aims to solve an inverse problem in which the operator , when operated upon the data , yields the unknown(s ) .one problem that then immediately arises is the learning of the unknown function . indeed is often unknown though such is not the norm for example in applications in which the data is generated by a known projection of the model function onto the space of the measurables , identified as this known projection .thus , image inversion is an example of an inverse problem in which the data is a known function of the unknown model function or model parameter vector [ among others]jugnon , qui2008,bereto , radon_xrays , bookblind . on the other hand ,there can arise a plethora of other situations in science in which a functional relationship between the measurable and unknown ( or ) is appreciated but the exact form of this functional relationship is not known [ to cite a few]parker , tarantola_siam , andrew , andrewreview , draper , tidal , gouveia .this situation allows for a ( personal ) classification of inverse problems such that * in inverse problems of type i , is known where or , * in inverse problems of type ii , is unknown .while inverse problems of type i can be rendered difficult owing to these being ill - posed and/or ill - conditioned as well as in the quantification of the uncertainties in the estimation of the unknown(s ) , inverse problems of type ii appear to be entirely intractable in the current formulation of ( or ) , where the aim is the learning of the unknown ( or ) , given the data .in fact , conventionally , this very general scientific problem would not even be treated as an inverse problem but rather as a modelling exercise specific to the relevant scientific discipline . from the point of view of inverse problems ,these entail another layer of learning , namely , the learning of from the data to be precise , from _ training data _training_geo , caers , astro . 
here by training data we mean data that comprises values of at chosen values of ( or at chosen ) .these chosen ( and therefore known ) values of ( or ) are referred to as the design points , so that values of generated for the whole design set comprise the training data.having trained the model for using such training data , we then implement this learnt model on the available measurements or test data to learn that value of ( or ) at which the measurements are realised .it is in principle possible to generate a training data set from surveys ( as in selected social science applications ) or generate synthetic training data sets using simulation models of the system simgeo , astrosim , atmos .however , often the physics of the situation is such that is rendered characteristic of the system at hand ( as in complex physical and biological systems ) .consequently , a simulation model of the considered system is only an approximation of the true underlying physics and therefore risky in general ; after all , the basic motivation behind the learning of the unknown ( or ) is to learn the underlying system physics , and pivoting such learning on a simulation model that is of unquantifiable crudeness , may not be useful .thus , in such cases , we need to develop an alternative way of learning or if possible , learn the unknown ( or ) given the available measurements without needing to know .it may appear that such is possible in the bayesian approach in which we only need to write the posterior probability density of the unknown ( or ) , given the data .an added advantage of using the bayesian framework is that extra information is brought into the model via the priors , thus reducing the quantity of data required to achieve inference of a given quality .importantly , in this approach one can readily achieve estimation of uncertainties in the relevant parameters , as distinguished from point estimates of the same . in this paperwe present the bayesian learning of the unknown model parameters given the measurements but no training data , as no training data set is available .the presented methodology is inspired by state space modelling techniques and is elucidated using an application to astronomical data .the advantages of the bayesian framework notwithstanding , in systems in which training data is unavailable , fact remains that can not be learnt .this implies that if learning of the unknown ( or ) is attempted by modelling as a realisation from a stochastic process ( such as a gaussian process ( ) or ito process or t - process , etc . ) , then the correlation structure that underlies this process is not known .however , in this learning approach , the posterior probability of the unknowns given the data invokes such a correlation structure . only by using training datacan we learn the covariance of the process that is sampled from , leading to our formulation of the posterior of the unknowns , given the measured data as well as the training data .to take the example of modelling using a high - dimensional , it might be possible of course to impose the form of the covariance by hand ; for example , when it is safe to assume that is continuous , we could choose a stationary covariance function rasmussen , such as the popular square exponential covariance or the matern class of covariance functions materngneiting , though parameters of such a covariance ( correlation length , smoothness parameter ) being unknown , values of these will then need to be imposed . 
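for concreteness , the squared exponential covariance mentioned here has the standard stationary form ( the symbols below are the usual generic ones and are not notation defined in this paper ) : \[ \mathrm{cov}\left[f({\boldsymbol{x}}_i),f({\boldsymbol{x}}_j)\right] = \sigma^2\,\exp\left(-\frac{\parallel{\boldsymbol{x}}_i-{\boldsymbol{x}}_j\parallel^2}{2\ell^2}\right), \] where the amplitude \(\sigma^2\) and the correlation length \(\ell\) are precisely the quantities that , without training data , would have to be fixed by hand rather than learnt .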
in the presence of training data , the smoothness parameters can be learnt from the data . for systems in which the continuity assumption is misplaced , choosing an appropriate covariance function and learning the relevant parameters from the measured data , in the absence of training data , becomes even trickier . an example of this situation can in fact arise in an inverse problem of type i : the unknown physical density of the system is projected onto the space of observables , such that inversion of the available ( noisy ) image data will allow for the estimation of the unknown density , where the projection operator is known . such a density function in real systems can often have disjoint support in its domain and can also be typically characterised by sharp density contrasts , as in the material density function of real - life material samples crp . then , if we were to model this discontinuous and multimodal density function as a realisation from a gp , the covariance function of such a process will need to be non - stationary . it is possible to render a density function sampled from such a gp to be differently differentiable at different points , using for example prescriptions advanced in the literature paciorek , but in lieu of training data it is not possible to parametrise covariance kernels to ensure the representative discontinuity and multimodality of the sampled ( density ) functions . thus , the absence of training data leads to the inability to learn the correlation structure of the density function given the measured image data . a way out of this problem could be to attempt to construct a training data set by learning values of the unknown system behaviour function at those points in the domain of the density at which measured data are available ; effectively , we then have a set of data points , each generated at a learnt value of the function , i.e. this set comprises a training data set . in this data set there are measurement uncertainties as well as uncertainty of estimation on each of the learnt values of the system function . of course , learning the value of the function at identified points within the domain of the system function is in itself a difficult task . thus , in this paradigm , the domain of the unknown system function is discretised according to the set of values of , , at which the measurements are available . in other words , the discretisation of is dictated by the data distribution . over each -bin , the function is held constant , such that for in the -th bin the function takes the value , ; then we define and try to learn this vector , given the data . unless otherwise motivated , in general applications , the probability distribution of is not imposed by hand . in the bayesian framework this exercise translates to the computing of the joint posterior probability density of distribution - free parameters given the data , where the correlation between and is not invoked , . of course , framed this way , we can only estimate the value of the sought function at identified values of unless interpolation is used ; but once the training data , thus constructed , is subsequently implemented in the modelling of with a gp of appropriate dimensionality , statistical prediction at any value of may be possible . above , we dealt schematically with the difficult case of lack of training data . however , even when a training data set is available , learning using such data can be hard . in principle , can be learnt using splines or wavelets .
however , a fundamental shortcoming of this method is that splines and wavelets can fail to capture the correlation amongst the component functions of a high - dimensional .also , the numerical difficulty of the very task of learning using this technique , and particularly of inverting the learnt , only increases with dimensionality .thus it is an improvement to model such a with a high - dimensional .a high - dimensional can arise in a real - life inverse problem if the observed data is high - dimensional , eg . the data is matrix - variate cbb . measurement uncertainties or measurement noise is almost unavoidable in practical applications and therefore , any attempt at an inference on the unknown model parameter vector ( or the unknown model function ) should be capable of folding in such noise in the data .in addition to this , there could be other worries stemming from inadequacies of the available measurements the data could be too small " to allow for any meaningful inference on the unknown(s ) or too big " to allow for processing within practical time frames ; here the qualification of the size of the data is determined by the intended application as well as the constraints on the available computational resources . however , a general statement that is relevant here is the fact that in the bayesian paradigm , less data is usually required than in the frequentists approach , as motivated above .lastly , data could also be missing ; in particular , in this paper we discuss a case in which the measurable lives in a space where is the state space of the system at hand .the paper is constructed as follows . in section [ sec : intro ] , we briefly discuss the outline of state space modelling . in the following section [ sec : generic ], our new state space modelling based methodology is delineated ; in particular , we explore alternatives to the suggested method in subsection [ sec : alternative ] .the astrophysical background to the application using which our methodology is elucidated , is motivated in section [ sec : casestudy ] while the details of the modelling are presented in section [ sec : modelreal ] .we present details of our inference in section [ sec : inference ] and applications to synthetic and real data are considered in section [ sec : synthetic ] and section [ sec : real ] respectively .we round up the paper with some discussions about the ramifications of our results in section [ sec : discussions ] .understanding the evolution of the probability density function of the state space of a dynamical system , given the available data , is of broad interest to practitioners across disciplines .estimation of the parameters that affect such evolution can be performed within the framework of state space models or ssms west97,polewesthar , harveybook , carlin92 .basically , an ssm comprises an observation structure and an evolution structure . assuming the observations to be conditionally independent , the marginal distribution of any observation is dependent on a known or unknown stationary model parameter , at a given value of the state space parameter at the current time .modelling of errors of such observations within the ssm framework is of interest in different disciplines winshipecology , birdstate .the evolution of the state space parameter is on the other hand given by another set of equations , in which the uncertainty of the evolved value of the parameter is acknowledged . 
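schematically , and in generic notation that is not tied to the symbols used in this paper , such an ssm couples an observation equation to an evolution equation , \[ {\boldsymbol{y}}_t = h({\boldsymbol{x}}_t,{\boldsymbol{\theta}}) + {\boldsymbol{\epsilon}}_t , \qquad {\boldsymbol{x}}_t = g({\boldsymbol{x}}_{t-1},{\boldsymbol{\theta}}) + {\boldsymbol{\eta}}_t , \] where \({\boldsymbol{x}}_t\) is the latent state , \({\boldsymbol{\theta}}\) the static model parameter vector , and \({\boldsymbol{\epsilon}}_t\) and \({\boldsymbol{\eta}}_t\) the observation and evolution noises ; the additive form is shown only as the simplest schematic case .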
a state space representation of complex systemswill in general have to be designed to capacitate high - dimensional inference in which both the evolutionary as well as observation equations are in general non - linear and parameters and uncertainties are non - gaussian . in this paperwe present a new methodology that offers a state space representation in a situation when data is collected at only one time point and the unknown state space parameter in this treatment is replaced by the discretised version of the multivariate probability density function ( ) of the state space variable .the focus is on the learning of the static unknown model parameter vector rather than on prediction of the state space parameter at a time point different to when the observations are made .in fact , the sought model parameter vector is treated as embedded within the definition of the of the state space variable . in particular , the method that we present here pertains to a partially observed state space , i.e. the observations comprise measurements on only some but not all of the components of the state space vector .thus in this paradigm , probability of the observations conditional on the state space parameters reduces to the probability that the observed state space data have been sampled from the of the full state space variable vector , marginalised over the unobserved components . herethis includes the sought static model parameter vector in its definition .in addition to addressing missing data , the presented methodology is developed to acknowledge the measurement errors that may be non - gaussian .the presented method is applied to real and synthetic astronomical data with the aim of drawing inference on the distribution of the gravitational mass of all matter in a real and simulated galaxy , respectively .this gravitational mass density is projected to be useful in estimating the distribution of dark matter in the galactic system .here we aim to learn the unknown model parameter vector given the data , where data comprises measurements of some ( ) components of the -dimensional state space parameter vector ; thus , . here .in fact , the data set is where the -th observation is the vector .let the state space be so that .let the observable vector be .let )=f_{{\boldsymbol{x}}}({\boldsymbol{x } } , { \boldsymbol{\alpha}})d{\boldsymbol{x}} ] and ] and let the width of each -bin be .then is discretised as the unknown model parameter vector where \quad b=1,2,\ldots , n_x \label{eqn : rho_discrete}\ ] ] where . then following on from equation [ eqn : inter ] we write this is in line with equation [ eqn : f_fin ] if we identify the function of the unknown model parameter vector in the rhs of equation [ eqn : f_fin ] with the unknown gravitational mass density vector .then the of the state space variables and depends of and and .then the equivalent of equation [ eqn : nu_second ] is . then plugging this in the rhs of equation [ eqn : likeli_prelim ] , the likelihood is then to compute the likelihood and thereafter the posterior probability of given data , we will need to compute the integral in equation [ eqn : nu_galaxy ] . according to the general methodology discussed above in section [ sec : generic ] , this is performed by discretising the domain of the of the state space variable , i.e. of . 
in order to achieve this discretisation we will need to invoke the functional relationship between and .next we discuss this .we recall the physical interpretation of as the norm of the `` angular momentum '' vector , i.e. is the square of the speed of circular motion of a particle with location and velocity ; here , `` circular motion '' is motion orthogonal to the location vector , distinguished from non - circular motion that is parallel to and the speed of which is .then as these two components of motion are mutually orthogonal , square of the particle s speed is where is the magnitude of the component of that is parallel to , i.e. but we recall that energy .this implies that where in the last equation , we invoked the definition of sing equation [ eqn : ldefn ] . at this stage , to simplify things , we consciously choose to work in the coordinate system in which the vector is rotated to vector by a rotation through angle , i.e. then by definition , =0 , i.e. the projection of the vector on the =0 plane lies entirely along the -axis .this rotation does not affect the previous discussion since * the previous discussion invokes the location variable either via , * or via as within the data structure : + .having undertaken the rotation , we refer to and as and respectively .this rotation renders the cross - product in the definition of simpler ; under this choice of the coordinate system , as ^ 2 & = & { \parallel { \boldsymbol{s}}{\boldsymbol{\times}}{\boldsymbol{v}}\parallel^2}\nonumber\\ & = & { \parallel(s_2 v_3 - s_3 v_2 , s_3 v_1 , -s_2 v_1)^t\parallel^2}\nonumber\\ & = & { \parallel(r v_3 \sin\gamma - r v_2\cos\gamma , r v_1\cos\gamma , -r v_1\sin\gamma)^t\parallel^2 } \nonumber\\ & = & { r^2\left[v_1 ^ 2 + ( v_2\cos\gamma - v_3\sin\gamma)^2\right ] } \label{eqn : lcoord}\end{aligned}\ ] ] where so that , so that in this rotated coordinate system , from equation [ eqn : ljhamela ] }{2 } } \nonumber \\ { } & & + \displaystyle{\frac{v_{nc}^2}{2}}. \label{eqn : elrevamp}\end{aligned}\ ] ] also , the component of along the location vector is . from equation [ eqn : ljhamela ] it is evident that for a given value of , the highest value of is attained if ( all motion is circular motion ) .this is realised only when the radius of the circular path of the particle takes a value such that } \label{eqn : ellmax}\ ] ] the way to compute given is defined in the literature bt as the positive definite solution for in the equation } = \displaystyle{-r^3\frac{d\phi(r)}{dr } } \label{eqn : ellsoln}\ ] ] we are now ready to discretise the domain of the of the state space variable , i.e. of in line with the general methodology discussed above in section [ sec : generic ] with the aim of computing the integral in equation [ eqn : nu_galaxy ] .we discretise the domain of where this 2-dimensional domain is defined by the range of values ] , by placing a uniform 2-dimensional rectangular grid over \times[\ell_{min},\ell_{max}] ] is broken into -bins each wide and the range ] .in other words , the aforementioned and .we normalise the value of with the maximal value that can attain for a given value of ( equation [ eqn : ellmax ] ) .the maximum value that can be attained by is for ; having computed from equation [ eqn : ellsoln ] , is computed .then , as normalised by , the maximal value of is 1 .also the lowest value of is 0 , i.e. =0 . 
in light of this , we rewrite equation [ eqn : prelim_cd ] as ,\nonumber \\ \ell&\in&[(d-1)\delta_\ell , d\delta_\ell],\nonumber \\ c&= & 1,2,\ldots , n_\epsilon,\nonumber \\ d&= & 1,2,\ldots , n_\ell .\label{eqn : prelim_cd2}\end{aligned}\ ] ] the -binning and -binning are kept uniform in the application we discuss below , i.e. and are constants .there are -bins and -bins .above we saw that as the range covered by normalised values of is ] .then we set -bin width and learn number of -bins , , from the data within our inference scheme . then at any iteration , for the current value of and the current ( which leads to the current value of according to equation [ eqn : poisson ] ) , placing ] .experiments suggest that for typical galactic data sets , between 5 and 10 implies convergence in the learnt vectorised form of the gravitational mass density .this leads us to choose a discrete uniform prior over the set , for : again , the minimum and maximum values of in the data fix and respectively , so that . the radial bin width is entirely dictated by the data distribution such that there is at least 1 data vector in each radial bin .thus , and are not parameters to be learnt within the inference scheme but are directly determined by the data .following equation [ eqn : likeli_fin ] , we express the likelihood in this application in terms of the of and , marginalised over all those variables that we do not have any observed information on .then for the data vector , the marginal is + where + }{2}} ] , + with ^ 2 ] where for , the definition provides a representation for all particles in the -th -bin with given observed values of , and .it then follows from ^ 2= ( r^{(k)})^2\left[(v_1 ^ 2 + \{v_2\cos\gamma -v^{(k)}_3\sin\gamma\}^2\right] ] and semi - major axis lying in the interval } ] , where .the area of these overlapping annular regions represents the volume of the -th -grid - cell in the space of and , at the value of .thus , the first step towards writing the volume of the -th -grid - cell in terms of the unobservables , is to compute the area of these overlapping annular regions in the space of and .such an area of overlap is a function of . at the next step ,we integrate such an area over all allowed , to recover the volume of the -th -grid - cell in the space of , and , i.e. the integral on the rhs of equation [ eqn : likeli_integ ]. there can be multiple ways these annular regions overlap ; three examples of these distinct overlapping geometries are displayed in figure [ fig : overlap ] . in each such geometry , it is possible to compute the area of this region of overlap since we know the equations of the curves that bound the area .however , the number of possible geometries of overlap is in excess of 20 and identifying the particular geometry to then compute the area of overlap in each such case , is tedious to code . in place of this , we allow for a numerical computation of the area of overlap ; this method works irrespective of the particulars of the geometry of overlap .we identify the maximum and minimum values of allowed at a given value of , having known the equations to the bounding curves , and compute the area of overlap in the plane of and using numerical integration .this area of overlap in the plane defined by and is a function of since the equations of the bounding curves are expressed in terms of .the area of overlap is then integrated over all values that is permitted to take inside the -th -grid - cell .for any -grid - cell , the lowest value can take is zero . 
for ] , the maximum value of is realised ( by recalling equation [ eqn : elrevamp ] ) as the solution to the equation and , at neighbouring values of ( the circular contours in red ) and at neighbouring values of ( the elliptical contours in black).,title="fig:",width=377 ] where is the projection of along the vector ( discussed in section [ sec : relation ] ) .thus , is given by the inner product of and the unit vector parallel to : where .under our choice of coordinate system , equation [ eqn : vr ] gives using this in equation [ eqn : zeqn ] we get this implies that given the observations represented by the -th data vector , \left[\epsilon_c - \phi(r)\right ] } = & & \nonumber \\\displaystyle{{\ell_d^2 } + v_2 ^ 2 ( s_2^{(k)})^2 + ( v_3^{(k)})^2 s_3 ^ 2 + 2v_3^{(k ) } v_2 s_2^{(k ) } s_3}. & & \label{eqn : zeqn3}\end{aligned}\ ] ] the highest positive root for from equation [ eqn : zeqn3 ] as the highest value that can attain in the -th -grid - cell .thus , for the -th cell , the limits on the integration over are 0 and the solution to equation [ eqn : zeqn3 ] .so now we have the value of the integral over and and hereafter over , for the -th -grid - cell .this triple integral gives the volume of the -th -grid - cell in the space of the unobservables , i.e. of .this volume is multiplied by the value of the discretised of the state space variable in this cell and the resulting product is summed over all and , to give us the marginalised ( see equation [ eqn : likeli_integ ] ) . once the marginalised is known for a given , the product over all contributes towards the likelihood .as we see from equation [ eqn : likeli_integ ] , the marginal of and is dependent on , so this normalisation will not cancel within the implementation of metropolis - hastings to perform posterior sampling .in other words , to ensure that the value of - and therefore the likelihood - is not artificially enhanced by choosing a high , we normalise for each , by the integrated over all possible values of , and , i.e. by where the possible values of are in the interval ] and of in ] .as far as priors on the gravitational mass density are concerned , astronomical models are available bt . all such models suggest that gravitational mass density is a monotonically decreasing function of . a numerically motivated form that has been used in the astrophysical communityis referred to as the nfw density nfw , though criticism of predictions obtained with this form also exist [ among others]deblok2003 . for our purposewe suggest a uniform prior on such that i.e. is the gravitational mass density as given by the 2-parameter nfw form , for the particle radial location , .in fact , this location is summarised as , the mid - point of the -th radial bin . and are the 2 parameters of the nfw density form . in our workthese are hyperparameters and we place uniform priors on them : and , where these numbers are experimentally chosen . given the data , we use bayes rule to write down the joint posterior probability density of + .this is }\times & & \nonumber \\\displaystyle{\prod_{b=1}^{n_{x}}\left[\frac{1}{\upsilon_{hi}^{(b)}(r_s,\rho_0 ) -\upsilon_{lo}^{(b)}(r_s,\rho_0)}\right ] } \times & & \nonumber \\\displaystyle{\frac{1}{r_{max}-r_{min}}}\times\displaystyle{\frac{1}{10^{14}-10^{9 } } } \times \displaystyle{\frac{1}{5}}.&&\end{aligned}\ ] ] where we used , . 
here, the factor is a constant and therefore can be subsumed into the constant of proportionality that defines the above relation .we marginalise and out of + to achieve the joint posterior probability of , and given the data .the marginalisation involves only the term }= ] ( recalling equation [ eqn : denprior ] ) . integrating this term over a fixed interval of values of and again over a fixed interval of , result in a constant that depends on , and .thus the marginalisation only results in a constant that can be subsumed within the unknown constant of proportionality that we do not require the exact computation of , given that posterior samples are generated using adaptive metropolis - hastings haario .thus we can write down the joint posterior probability of , and given the data as : } \label{eqn : posteriorfinal}\ ] ] we discuss the implemented inference next .we intend to make inference on each component of the vector and the matrix , along with .we do this under the constraints of a gravitational mass density function that is non - increasing with and a of the state space variable that is non - increasing with .motivation for these constraints is presented in section [ sec : priors ] . in other words , and for and .also , here and .first we discuss performing inference on using adaptive metropolis - hastings haario , while maintaining this constraint of monotonicity .we define it is on the parameters that we make inference .let within our inference scheme , at the -th iteration , the current value of be .let in this iteration , a candidate value of be proposed from the folded normal density , i.e. where the choice of a folded normal folded or truncated normal proposal density is preferred over a density that achieves zero probability mass at the variable value of 0 .this is because there is a non - zero probability for the gravitational mass density to be zero in a given radial bin . here and are the mean and variance of the proposal density that is proposed from .we choose the current value of as and in this adaptive inference scheme , the variance is given by the empirical variance of the chain since the -th iteration , i.e. ^ 2}{n - n_0 } } - \displaystyle{\left[\frac{\sum_{q = n_0}^{n-1 } \delta_b^{(q)}}{n - n_0}\right]^2}\end{aligned}\ ] ] we choose the folded normal proposal density given its ease of computation : }\ ] ] it is evident that this is a symmetric proposal density .we discuss the acceptance criterion in this standard metropolis - hastings scheme , after discussing the proposal density of the components of the matrix and the parameter .if is accepted , then the updated -th component of in the -th iteration is .if the proposed candidate is rejected then resorts back to .along similar lines , we make inference directly on let in the -th iteration , the current value of be and the proposed value be where the proposed candidate is sampled from the folded normal density where the variance is again the empirical variance of the chain between the -th and the -th iteration. then the updated general element of the state space matrix in this iteration is , if the proposed value as accepted , otherwise , .thus , the proposal density that a component of the matrix is proposed from is also symmetric .we propose from the discrete uniform distribution , i.e. 
the proposed value of in the -th iteration is \ ] ] where the bounds of the interval ] .in this section we illustrate the methodology on synthetic data set simulated from a chosen models for the of .the chosen models for this are or and .these are given by : } , \end{aligned}\ ] ] where with chosen in both models for the state space to be . herethe model parameters and are assigned realistic numerical values . from these2 chosen , values of were sampled ; these 2 samples constituted the 2 synthetic data sets and .the learnt gravitational mass density parameters and discretised version of the state space are displayed in figure [ fig : syn1 ] .some of the convergence characteristics of the chains are explored in figure [ fig : syn2 ] .the trace of the joint posterior probability of the unknown parameters given the data is shown along with histograms of learnt from 3 distinct parts of the chain that is run using data .in this section we present the gravitational mass density parameters and the state space parameters learnt for the real galaxy ngc3379 using 2 data sets and which respectively have sample size 164 pns and 29 bergond .an independent test of hypothesis exercise shows that there is relatively higher support in for an isotropic of the state space variable than in .given this , some runs were performed using an isotropic model of the state space ; this was achieved by fixing the number of -bins to 1 .then identically takes the value and is rendered a constant .this effectively implies that the domain of is rendered uni - dimensional , i.e. the state space is then rendered .recalling the definition of an isotropic function from remark [ remark : isotropic ] , we realise that the modelled state space is then an isotropic function of and .results from chains run with such an isotropic state space were overplotted on results from chains run with the more relaxed version of the that allows for incorporation of anisotropy ; in such chain , is in fact learnt from the data . plotted as in red and blue against ( the value of ) , at two different , recovered from a chains that use data .the modal value of the learnt number of -bins is 7 for this run .the state space parameters recovered using data are shown in black . _middle : _ gravitational mass density parameters estimated from a chain run with are shown in magenta , over - plotted on the same obtained using the same data , from a chain in which the number of -bins , .when is fixed as 1 , it implies that is then no longer a variable and then is effectively univariate , depending on alone .such a state space is an isotropic function of and ( see remark [ remark : isotropic ] ) .the estimated from such an isotropic of the state space variable is shown here in green .the mass density parameters learnt using the data learnt from an isotropic state space shown in black . _right : _ figure showing estimates of , against . herethe parameters in magenta are obtained from the same chain that produce the parameters in the middle panel using while those in green and black are obtained using the that were represented in the middle panel in the corresponding colours.,width=415 ]in this work we focused on an inverse problem in which noisy and partially missing data on the measurable is used to make inference on the model parameter vector which is the discretisation of the unknown model function , where is an orthogonal transformation of and .the measurable and the sought function are related via an unknown function . 
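before moving on to the discussion , the adaptive metropolis - hastings update of section [ sec : inference ] can be summarised in the following minimal matlab sketch for a single non - negative parameter ( e.g. one component of the discretised gravitational mass density ) . the function handle log_post , the initial proposal variance and the burn - in index are illustrative assumptions and are not objects defined in the paper ; the full scheme updates all unknown components jointly , under the stated monotonicity constraints , and updates the number of energy bins with the discrete uniform proposal described above .
....
% minimal sketch of one chain of adaptive metropolis updates with a
% folded - normal proposal ; log_post is a placeholder for the log posterior .
function chain = adaptive_mh_sketch(log_post, x0, n_iter, n0)
  chain = zeros(n_iter, 1);
  x  = x0;                                  % current ( non - negative ) value
  lp = log_post(x);
  s2 = 1;                                   % assumed starting proposal variance
  for n = 1:n_iter
    if n > n0 + 1
      s2 = max(var(chain(n0:n-1)), 1e-12);  % empirical variance of the chain since iteration n0
    end
    x_prop  = abs(x + sqrt(s2)*randn);      % folded - normal proposal centred on the current value
    lp_prop = log_post(x_prop);
    if log(rand) < lp_prop - lp             % symmetric proposal , so only the posterior ratio enters
      x = x_prop;  lp = lp_prop;
    end
    chain(n) = x;                           % store the ( possibly repeated ) current value
  end
end
....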
given that the very physics that connects to is unknown where can not construct training data , i.e. data comprising a set of computed for a known . in the absence of training data , we are unable to learn the unknown functional relationship between data and model function , either using splines / wavelets or by modelling this unknown function with a gaussian process .we then perform the estimation of at chosen values of , i.e. discretise the range of values of and estimate the vector instead , where is the value of for in the -th -bin .we aim to write the posterior of given the data .the likelihood could be written as the product of the values of the of the state space vector achieved at each data point , but the data being missing , the is projected onto the space of and the likelihood is written in terms of these projections of the . is embedded within the definition of the domain of the of .the projection calls for identification of the mapping between this domain and the unobserved variables ; this is an application specific task .the likelihood is convolved with the error distribution and vague but proper priors are invoked , leading to the posterior probability of the unknowns given the data .inference is performed using adaptive mcmc .the method is used to learn the gravitational mass density of a simulated galaxy using synthetic data , as well as that in the real galaxy ngc3379 , using data of 2 different kinds of galactic particles . the gravitational mass density vector estimated from the 2 independent data setsare found to be distinct .the distribution of the gravitational mass in the system is indicated by the function .the discretised form of this function defines the parameters , .these are computed using the learnt value of the parameters and plotted in figure [ fig : anisotropy ] .we notice that the estimate of can depend on the model chosen for the state space ; thus , the same galaxy can be inferred to be characterised by a higher gravitational mass distribution depending on whether an isotropic state space is invoked or not . turning this result around, one can argue that in absence of priors on how isotropic the state space of a galaxy really is , the learnt gravitational mass density function might give an erroneous indication of how much gravitational mass there is in this galaxy and of corse how that mass is distributed . it may be remarked that in lieu of such prior knowledge about the topology of the system state space , it is best to consider the least constrained of models for the state space , i.e. to consider this to be dependent on both and .it is also to be noted that the estimate for the gravitational mass density in the real galaxy ngc3379 appears to depend crucially on which data set is being implemented in the estimation exercise .it is possible that the underlying of the variable is different for the sub - volume of state space that one set of data vectors are sampled from , compared to another .as these data vectors are components of and of different kinds of galactic particles , this implies that the state space that the different kinds of galactic particles relax into , are different .
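as a schematic companion to the likelihood construction summarised above: when only some components of each data vector are observed, the pdf of the state space variable can be integrated numerically over the unobserved components and the likelihood assembled from these projections. the uniform grid quadrature below is an illustrative choice, and the convolution with the measurement error distribution is omitted for brevity.

import numpy as np

def projected_pdf(pdf, observed, missing_grid):
    # project the joint pdf onto the observed coordinates by integrating out
    # the unobserved ones on a uniform grid (simple rectangle rule)
    values = np.array([pdf(observed, m) for m in missing_grid])
    return float(np.sum(values)) * (missing_grid[1] - missing_grid[0])

def log_likelihood(pdf, data, missing_grid):
    # product over data points of the projected pdf, accumulated in log form
    contributions = [projected_pdf(pdf, w, missing_grid) for w in data]
    return float(np.sum(np.log(np.maximum(contributions, 1e-300))))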
in this paper we focus on a type of inverse problem in which the data is expressed as an unknown function of the sought and unknown model function ( or its discretised representation as a model parameter vector ) . in particular , we deal with situations in which training data is not available . then we cannot model the unknown functional relationship between data and the unknown model function ( or parameter vector ) with a gaussian process of appropriate dimensionality . a bayesian method based on state space modelling is advanced instead . within this framework , the likelihood is expressed in terms of the probability density function ( pdf ) of the state space variable and the sought model parameter vector is embedded within the domain of this pdf . as the measurable vector lives only inside an identified sub - volume of the system state space , the pdf of the state space variable is projected onto the space of the measurables , and it is in terms of the projected state space density that the likelihood is written ; the final form of the likelihood is achieved after convolution with the distribution of measurement errors . application - motivated vague priors are invoked and the posterior probability density of the model parameter vectors , given the data , is computed . inference is performed by taking posterior samples with adaptive mcmc . the method is illustrated on synthetic as well as real galactic data . * keywords * : bayesian inverse problems ; state space modelling ; missing data ; dark matter in galaxies ; adaptive mcmc .
one of the roles of a mobile application platform is to help users avoid unexpected or unwanted use of their personal data .mobile platforms currently use permission systems to regulate access to sensitive resources , relying on user prompts to determine whether a third - party application should be granted or denied access to data and resources .one critical caveat in this approach , however , is that mobile platforms seek the consent of the user the first time a given application attempts to access a certain data type and then enforce the user s decision for all subsequent cases , regardless of the circumstances surrounding each access .for example , a user may grant an application access to location data because she is using location - based features , but by doing this , the application can subsequently access location data for behavioral advertising , which may violate the user s preferences .earlier versions of android ( 5.1 and below ) asked users to make privacy decisions during application installation as an all - or - nothing ultimatum ( ask - on - install ) : either all requested permissions are approved or the application is not installed .previous research showed that few people read the requested permissions at install - time and even fewer correctly understood them .furthermore , install - time permissions do not present users with the context in which those permission will be exercised , which may cause users to make suboptimal decisions not aligned with their actual preferences . for example , egelman et al . observed that when an application requests access to location data without providing context , users are just as likely to see this as a signal for desirable location - based features as they are an invasion of privacy . asking users to make permission decisions at runtime at the moment when the permission will actually be used by the application provides more context ( i.e. , what they were doing at the time that data was requested ) .however , due to the high frequency of permission requests , it is not feasible to prompt the user every time data is accessed . in ios and android m ,the user is now prompted at runtime the first time an application attempts to access one of a set of `` dangerous '' permission types ( e.g. , location , contacts , etc . ) .this _ ask - on - first - use _ ( aofu ) model is an improvement over ask - on - install ( aoi ) . prompting users the first time an application uses one of the designated permissions gives users a better sense of context : their knowledge of what they were doing when the application first tried to access the data should help them determine whether the request is appropriate . 
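the two deployed policies discussed above can be made precise in a few lines; the toy sketch below is only meant to pin down the difference (ask-on-install grants everything approved at installation, ask-on-first-use caches the first runtime decision per application and permission and replays it regardless of context), and is not drawn from any platform's actual code.

class AskOnInstall:
    # all requested permissions are approved at install time or the
    # application is not installed; later requests are allowed silently
    def __init__(self, granted_at_install):
        self.granted = set(granted_at_install)      # (app, permission) pairs

    def allow(self, app, permission, context=None):
        return (app, permission) in self.granted

class AskOnFirstUse:
    # prompt the first time an (app, permission) pair is seen, then replay
    # that decision for every subsequent request, whatever the context
    def __init__(self, prompt_user):
        self.prompt_user = prompt_user               # callback -> bool
        self.cache = {}

    def allow(self, app, permission, context=None):
        key = (app, permission)
        if key not in self.cache:
            self.cache[key] = self.prompt_user(app, permission, context)
        return self.cache[key]

the point made in the rest of this section is that the cached decision ignores the context argument entirely, which is exactly where both policies fall short.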
however , wijesekera et al .showed that aofu fails to meet user expectations over half the time , because it does not account for the varying contexts of future requests .the notion of _ contextual integrity _ suggests that many permission models fail to protect user privacy because they fail to account for the context surrounding data flows .that is , privacy violations occur when sensitive resources are used in ways that defy users expectations .we posit that more effective permission models must focus on whether resource accesses are likely to defy users expectations in a given context not simply whether the application was authorized to receive data the first time it asked for it .thus , the challenge for system designers is to correctly infer when the context surrounding a data request has changed , and whether the new context is likely to be deemed `` appropriate '' or `` inappropriate '' for the given user .dynamically regulating data access based on the context requires more user involvement to understand users contextual preferences .if users are asked to make privacy decisions too frequently , or under circumstances that are seen as low - risk , they may become habituated to future , more serious , privacy decisions . on the other hand ,if users are asked to make too few privacy decisions , they may find that the system has acted against their wishes .thus , our goal is to automatically determine _ when _ and under _ what _ circumstances to present users with runtime prompts . to this end , we collected real - world android usage data in order to explore whether we could infer users future privacy decisions based on their past privacy decisions , contextual circumstances surrounding applications data requests , and users behavioral traits .we conducted a field study where 131 participants used android phones that were instrumented to gather data over an average of 32 days per participant .also , their phones periodically prompted them to make privacy decisions when applications used sensitive permissions , and we logged their decisions .overall , participants wanted to block 60% of these requests .we found that aofu yields 84% accuracy , i.e. , its policy agrees with participants prompted responses 84% of the time .aoi achieves only 25% accuracy .we designed new techniques that use machine learning to automatically predict how users would respond to prompts , so that we can avoid prompting them in most cases , thereby reducing user burden .our classifier uses the user s past decisions in related situations to predict their response to a particular permission prompt .the classifier outputs a prediction and a confidence score ; if the classifier is sufficiently confident , we use its prediction , otherwise we prompt the user for their decision .we also incorporate information about the user s behavior in other security and privacy situations to make inferences about their preferences : whether they have a screen lock activated , how often they visit https websites , and so on .we show that our scheme achieves 95% accuracy ( a reduction in error rate over aofu ) with significantly less user involvement than the _ status quo_. the specific contributions of our work are the following : we conducted the first known large - scale study on quantifying the effectiveness of ask - on - first - use permissions . 
we show that a significant portion of the studied participants make contextual decisions on permissions using the foreground application and the visibility of the permission - requesting application .we show how a machine - learned model can incorporate context and better predict users privacy decisions . to our knowledge, we are the first to use passively observed traits to infer future privacy decisions on a case - by - case basis at runtime .there is a large body of work demonstrating that install - time prompts fail because users do not understand or pay attention to them .when using install - time prompts , users often do not understand which permission types correspond to which sensitive resources and are surprised by the ability of background applications to collect information .applications also transmit a large amount of location or other sensitive data to third parties without user consent .when possible risks associated with these requests are revealed to users , their concerns range from annoyance to wanting to seek retribution . to mitigate some of these problems , systems have been developed to track information flows across the android system or introduce finer - grained permission control into android , but many of these solutions increase user involvement significantly , which can lead to habituation .additionally , many of these proposals are useful only to the most - motivated or technically savvy users .for example , many such systems require users to configure complicated control panels , which many are unlikely to do .other approaches involve static analysis in order to better understand how applications _ could _ request information , but these say little about how applications _ actually _ use information .dynamic analysis improves upon this by allowing users to see how often this information is requested in real time , but substantial work is likely needed to present that information to average users in a meaningful way .solutions that require runtime prompts ( or other user interruptions ) need to also minimize user intervention , in order to prevent habituation .other researchers have developed recommendation systems to recommend applications based on users privacy preferences .systems have also been developed to predict what users would share on mobile social networks , which suggests that future systems could potentially infer what information users would be willing to share with third - party applications . by requiring users to self - report privacy preferences ,clustering algorithms have been used to define user privacy profiles even in the face of diverse preferences .however , researchers have found that the order in which information is requested has an impact on prediction accuracy , which could mean that such systems are only likely to be accurate when they examine actual user behavior over time ( rather than relying on one - time self - reports ) .liu et al .clustered users by privacy preferences and used ml techniques to predict whether to allow or deny an application s request for sensitive user data . however , their dataset was collected from a set of highly privacy - conscious individuals those choosing to install a permission - control mechanism .furthermore , the researchers removed `` conflicting '' user decisions , in which a user chose to deny a permission for an application , and then later chose to allow it. 
however , these conflicting decisions happen nearly 50% of the time in the real world , and accurately reflect the nuances of user privacy preferences ; they are not experimental mistakes , and therefore models need to account for them .in fact , previous work found that users commonly reassess privacy preferences after usage .liu et al.also expect users to make 10% of permission decisions manually , which , based on field study results from wijesekera et al ., would result in being prompted every three minutes .this is obviously impractical .our goal is to design a system that can automatically make decisions on behalf of users , that accurately models their preferences , while also not over - burdening them with repeated requests .closely related to this work , liu et al . performed a field study to measure the effectiveness of a privacy assistant that offers recommendations to users on privacy settings that they could adopt based on each user s privacy profile the privacy assistant predicts what the user might want based on the inferred privacy profile and static analysis of the third - party application .while this approach increased user awareness on resource usage , the recommendations are static : they do not consider each application s access to sensitive data on a case - by - case basis .such a coarse - grained approach goes against previous work suggesting that people do want to vary their decisions based on contextual circumstances . a blanket approval or denial of a permission to a given application carries a considerable risk of privacy violations or loss of desired functionality . in contrast, our work tries to infer the appropriateness of a given request by considering the surrounding contextual cues and how the user has behaved in similar situations in the past , in order to make decisions on a case - by - case basis using dynamic analysis .their dataset was collected from a set of highly privacy - conscious and considerably tech - savvy individuals , which might limit the generalization of their claims and findings , whereas we conducted a field study on a more representative sample .nissenbaum s theory of contextual integrity suggests that permission models should focus on information flows that are likely to defy user expectations .there are three main components involved in deciding the appropriateness of a flow : the context in which the resource request is made , the role played by the agent requesting the resource ( i.e. , the role played by the application under the current context ) , and the type of resource being accessed .neither previous nor currently deployed permission models take all three factors into account .this model could be used to improve permission models by automatically granting access to data when the system determines that it is appropriate , denying access when it is inappropriate , and prompting the user only when a decision can not be made automatically , thereby reducing user burden . 
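a minimal sketch of the decision loop implied by the last paragraph: a classifier predicts allow or deny from contextual features and the user's past behaviour, and the user is prompted only when the prediction is not confident enough. the logistic-regression choice, the feature encoding and the 0.8 confidence threshold are illustrative assumptions, not the model actually evaluated in this paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

class PermissionDecider:
    def __init__(self, threshold=0.8):
        self.model = LogisticRegression(max_iter=1000)
        self.threshold = threshold
        self.X, self.y = [], []            # feature vectors and past decisions

    def record(self, features, allowed):
        self.X.append(features)
        self.y.append(int(allowed))
        if len(set(self.y)) > 1:           # need both classes before fitting
            self.model.fit(np.array(self.X), np.array(self.y))

    def decide(self, features, prompt_user):
        # features might encode, e.g., foreground application, visibility of
        # the requesting application, permission type and time of day
        if len(set(self.y)) > 1:
            p_allow = self.model.predict_proba(np.array([features]))[0, 1]
            if p_allow >= self.threshold:
                return True                # confident: grant automatically
            if p_allow <= 1.0 - self.threshold:
                return False               # confident: deny automatically
        decision = bool(prompt_user())     # uncertain: fall back to a prompt
        self.record(features, decision)
        return decision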
_access control gadgets _( acgs ) were proposed as a mechanism to tie sensitive resource access to certain ui elements .authors posit that such an approach will increase user expectations since a significant portion of participants expected a ui interaction before a sensitive resource usage , giving users an implicit mechanism to control access and increasing the awareness on resource usage .the biggest caveat in this approach is tying a ui interaction to each sensitive resource access is practically impossible due to the high frequency at which these resources are accessed , and due to the fact that many legitimate resource accesses occur without user initiation .wijesekera et al .performed a field study to operationalize the notion of `` context , '' so that an operating system can differentiate between appropriate and inappropriate data requests by a single application for a single data type .they found that users decisions to allow a permission request were significantly correlated with that application s visibility : in this case , the context is using or _ not _ using the requesting application .they posit visibility of the application could be a strong contextual cue that influences users responses to permission prompts .they also observed that privacy decisions were highly nuanced , and therefore a one - size - fits - all model is unlikely to be sufficient ; a given information flow may be deemed appropriate by one user and inappropriate by another user .they recommended applying machine learning in order to infer individual users privacy preferences . to achieve this, research is needed to determine what factors affect user privacy decisions and how to use those factors to make privacy decisions on the user s behalf .while we can not automatically capture everything involved in nissenbaum s notion of context , we can try for the next - best thing : we can try to detect when context has likely changed ( insofar as to decide whether or not a different privacy decision should be made for the same application and data type ) , by seeing whether the circumstances surrounding a data request are similar to previous requests or not ..felt et al . proposed granting a select set of 12 permissions at runtime so that users have contextual information to infer why the data might be needed .our instrumentation omits the last two permission types ( internet & write_sync_settings ) and records information about the other 10 .[ cols= " < " , ]
current smartphone operating systems regulate application permissions by prompting users on an ask - on - first - use basis . prior research has shown that this method is ineffective because it fails to account for context : the circumstances under which an application first requests access to data may be vastly different from the circumstances under which it subsequently requests access . we performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources . we built a classifier to make privacy decisions on the user's behalf by detecting when context has changed and , when necessary , inferring privacy preferences based on the user's past decisions and behavior . our goal is to automatically grant appropriate resource requests without further user intervention , deny inappropriate requests , and only prompt the user when the system is uncertain of the user's preferences . we show that our approach can accurately predict users' privacy decisions 96.8% of the time , which is a four - fold reduction in error rate compared to current systems .
in this article , we characterize the extreme - strike behavior of implied volatility curves for fixed maturity for uncorrelated gaussian stochastic volatility models . this introduction contains a careful description of the problem's background and of our motivations . before going into details , we summarize some of the article's specificities ; all terminology in the next two paragraphs is referenced , defined , and/or illustrated in the remainder of this introduction . we hold calibration of volatility smiles as a principal motivator . cognizant of the fact that non - centered gaussian volatility models can be designed in a flexible and parsimonious fashion , we adopt that class of models , imposing no further conditions on the marginal distribution of the volatility process itself , beyond pathwise continuity . the spectral structure of second - wiener chaos variables allows us to work at that level of generality . we find that the first three terms in the extreme - strike implied volatility asymptotics ( which is typically amply sufficient in applications ) can be determined explicitly thanks to three parameters characterizing the top of the spectral decomposition of the integrated variance . in order to prove such a precise statement while relying on a moderate amount of technicalities , we make use of the simplifying assumption that the stochastic volatility is independent of the stock price's driving noise . when considering the trade - off between this restriction and calibration considerations , we observe that our model flexibility combined with known explicit spectral expansions and numerical tools may allow practitioners to compute the said spectral parameters in a straightforward fashion based on smile features , while also allowing them to select their favorite gaussian volatility model class . specific examples of gaussian volatility processes are non - centered brownian motion , brownian bridge , and ornstein - uhlenbeck models . this last sub - class can be particularly appealing to practitioners since it contains stationary volatilities , and includes the well - known stein - stein model . we also mention how any gaussian model specification , including long - memory ones , can be handled , thanks to the numerical ability to determine its spectral elements . we understand that the assumption of the stochastic volatility model being uncorrelated implies the symmetry of the implied volatility in the wings , which in some applications is not a desirable feature ; on the other hand , in many option markets , liquidity considerations limit the ability to calibrate using the large - strike wing ( see the calibration study on spx options in ( * ? ?
?* section 5.4 ) ) .the case of a correlated gaussian stochastic volatility model is more complicated , but constitutes an interesting mathematical challenge , which we will investigate separately from this article , since one may need to develop completely new methods and techniques .an important step toward a better understanding of the asymptotic behavior of the implied volatility in some correlated stochastic volatility models is found in the articles dfjv1,dfjv2 .another problem which is mathematically interesting and important in practice is the asymptotics for implied volatility in small or large time to maturity , on which we report in separate works .studies in quantitative finance based on the black - scholes - merton framework have shown awareness of the inadequacy of the constant volatility assumption , particularly after the crash of 1987 , when practitioners began considering that extreme events were more likely than what a log - normal model will predict .propositions to exploit this weakness in log - normal modeling systematically and quantitatively have grown ubiquitous to the point that implied volatility ( iv ) , or the volatility level that market call option prices would imply if the black - scholes model were underlying , is now a _ bona fide _ and vigorous topic of investigation , both at the theoretical and practical level .the initial evidence against constant volatility simply came from observing that iv as a function of strike prices for liquid call options exhibited non - constance , typically illustrated as a convex curve , often with a minimum near the money as for index options , hence the term ` volatility smile ' .stock price models where the volatility is a stochastic process are known as stochastic volatility models ; the term ` uncorrelated ' is added to refer to the submodel class in which the volatility process is independent of the noise term driving the stock price . in a sense , the existence of the smile for any uncorrelated stochastic volatility model was first proved mathematically by renault and touzi in .they established that the iv as a function of the strike price decreases on the interval where the call is in the money , increases on the interval where the call is out of the money , and attains its minimum where the call is at the money .note that renault and touzi did not prove that the iv is locally convex near the money , but their work still established stochastic volatility models as a main model class for studying iv ; these models continued steadily to provide inspiration for iv studies . a current emphasis , which has become fertile mathematical ground , is on iv asymptotics , such as large / small - strike , large - maturity , or small - time - to - maturity behaviors .these are helpful to understand and select models based on smile shapes .several techniques are used to derive iv asymptotics .for instance , by exploiting a method of moments and the representation of power payoffs as mixtures of a continuum of calls with varying strikes , in a rather model - free context , r. 
lee proved in that , for models with positive moment explosions , the squared iv s large strike behavior is of order the log - moneyness times a constant which depends explicitly on supremum of the order of finite moments .a similar result holds for models with negative moment explosions , where the squared iv behaves like for small values of .more general formulas describing the asymptotic behavior of the iv in the ` wings ' ( or ) were obtained in ( see also the book ) . from the standpoint of modeling , one of the advantages of lee s original result is the dependence of iv asymptotics merely on some simple statistics , namely as we mentioned , in the notation in , the maximal order of finite moments for the underlying , i.e. < \infty \right\ } .\]]this allows the author to draw appropriately strong conclusions about model calibration .a typical class in which is positive and finite is that of gaussian volatility models , which we introduce next .we consider the stock price model of the following form : , \label{mart}\]]where the short rate is constant , with an arbitrary continuous deterministic function on ] independent of , with arbitrary covariance .note that it is not assumed in ( [ e : mart ] ) that the process is a solution to a stochasic differential equation as is often assumed in classical stochastic volatility models .a well - known special example of a gaussian volatility model is the stein - stein model introduced in , in which the volatility process is the so - called mean - reverting ornstein - uhlenbeck process satisfying is the level of mean reversion , is the mean - reversion rate , and is level of uncertainty on the volatility ; here is another brownian motion , which may be correlated with . in the present paper ,we adopt an analytic technique , encountered for instance in the analysis of the uncorrelated stein - stein model by this paper s first author and e.m .stein in ( see also ) .returning to the question of the value of , for a gaussian volatility model , it can sometimes be determined by simple calculations , which we illustrate here with an elementary example .assume is a geometric brownian motion with random volatility , i.e. a model as in ( mart ) where ( abusing notation ) is taken the non - time - dependent where is a constant and is an independent unit - variance normal variate ( not dependent on ) .thus , at time , with zero discount rate , . to simplify this example to the maximum , also assume that is centered ; using the independence of and , we get that we may replace by in this example , since this does not change the law of ( i.e. 
in the uncorrelated case , s non - positivity does not violate standard practice for volatility modeling ) .then , using maturity , for any , the moment , via a simple change of variable , equals = \frac{1}{2\pi \sqrt{1+p\sigma ^{2}}}\iint_{\mathbf{r}^{2}}dx~dy~\exp \left ( -\frac{1}{2}\left ( y^{2}+w^{2}-2\frac{p\sigma } { \sqrt{1+p\sigma ^{2}}}wy\right ) \right)\]]which by an elementary computation is finite , and equal to , if and only if in the cases where the random volatility model above is non - centered and is correlated with , a similar calculation can be performed , at the essentially trivial expenses of invoking affine changes of variables , and the linear regression of one normal variate against another .the above example illustrates heuristically that , by lee s moment formula , the computation of might be the quickest path to obtain the leading term in the large - strike expansion of the iv , for more complex gaussian volatility models , namely ones where the volatility is time - dependent .however , computing is not necessarily an easy task , and appears , perhaps surprisingly , to have been performed rarely . for the stein - stein model, the value of can be computed using the sharp asymptotic formulas for the stock price density near zero and infinity , established in for the uncorrelated stein - stein model , and in for the correlated one .these two papers also provide asymptotic formulas with error estimates for the iv at extreme strikes in the stein - stein model . beyond the stein - stein model , littlewas known about the extreme strike asymptotics of general gaussian stochastic volatility models . in the present paper, we extend the above - mentioned results from and to such models . adopting the perspective that an asymptotic expansion for the iv can be helpful for model selection and calibration , our objective is to provide an expansion for the iv in a gaussian volatility model relying on a minimal number of parameters , which can then be chosen to adjust to observed smiles .the restriction of non - correlated volatility means that the stock price distribution is a mixture model of geometric brownian motions with time - dependent volatilities , whose mixing density at time is that of the square root of a variable in the second - chaos of a wiener process . 
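the toy computation above can be checked numerically. the snippet below assumes the simplest reading of the example: unit maturity, zero rate, volatility sigma*x with x a standard normal independent of the driving brownian motion; conditioning on x shows that e[(s_1/s_0)^q] is finite exactly when q(q-1)sigma^2 < 1, so lee's critical exponent, the supremum of p with e[s^(1+p)] finite, is (-1+sqrt(1+4/sigma^2))/2. scipy is used only as a sanity check of the closed form, and the cutoff values are illustrative.

import numpy as np
from scipy.integrate import quad

def moment(q, sigma):
    # e[(s_1/s_0)^q] for volatility sigma * x, x ~ n(0, 1):
    # conditioning on x gives e[exp(q (q - 1) sigma^2 x^2 / 2)]
    c = 0.5 * q * (q - 1.0) * sigma**2
    if c >= 0.5:
        return np.inf                      # the gaussian integral diverges
    integrand = lambda x: np.exp((c - 0.5) * x**2) / np.sqrt(2.0 * np.pi)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value                           # closed form: 1 / sqrt(1 - 2 c)

def lee_critical_p(sigma):
    # largest p with e[s^(1 + p)] finite, i.e. (1 + p) p sigma^2 < 1
    return 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 / sigma**2))

sigma = 0.3
p_star = lee_critical_p(sigma)
print(p_star)
print(moment(1.0 + 0.99 * p_star, sigma), moment(1.0 + 1.01 * p_star, sigma))

in the general time-dependent case this one-line conditioning is replaced by the analysis of the second-chaos variable just mentioned, which is what the theorems below handle.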
that second - chaos variable is none other than the integrated variance relying on a general hilbert - space structure theorem which applies to the second wiener chaos , we prove that , in the most general case of a non - centered gaussian stochastic volatility with a possible degeneracy in the eigenstructure of the covariance of viewed as a linear operator on \right ) ] of the orthogonal projection of the mean function on the first eigenspace of .specifically , with the iv as a function of strike , letting be the discounted log - moneyness , as , we prove the constants and depend explicitly on and , and also depends explicitly on .the details of these constants are in theorem [ t : isvm ] on page .a similar asymptotic formula is obtained in the case where , using symmetry properties of uncorrelated stochastic volatility models ( see formula ( [ e : ee ] ) on page ) .the specific case of the stein - stein model is expanded upon in some detail .numerical illustrations of how the small strike asymptotics can be used are provided in section num , in the context of calibration ; we explain some of the calibration ideas in the next subsection .the first - order constant is always strictly positive .the second - order term ( the constant ) vanishes if and only if is orthogonal to the first eigenspace of , which occurs for instance when .the third - order term vanishes if and only if the top eigenvalue has multiplicity one , which is typical .the behavior of and as functions of is determined partly by how the top eigenvalue depends on , which can be non - trivial . in the present paper ,we assume is fixed . for fixed maturity , assuming that has lead multiplicity for instance , a practitioner will have the possibility of determining a value and a value to match the specific root - log - moneyness behavior of small- or large - strike iv ; moreover in that case , choosing a constant mean function , one obtains where is the top eigenfunction of .market prices may not be sufficiently liquid at extreme strikes to distinguish between more than two parameters ; this is typical of calibration techniques for implied volatility curves for fixed maturity , such as the ` stochastic volatility inspired ' ( svi ) parametrization disseminated by j. gatheral : see ( see also and the references therein ) .our result shows that gaussian volatility models with non - zero mean are sufficient for this flexibility , and provide equivalent asymptotics irrespective of the precise mean function and covariance eigenstructure , since modulo the disappearance of the third - order term in the unit top multiplicity case , only and are relevant .moreover , our gaussian parametrization is free of arbitrage , since it is based on a semi - martingale model ( [ mart ] ) . in the case of svi parametrization, the absence of arbitrage can be non - trivial , as discussed in and gj .modelers wishing to stick to well - known classes of processes for may then adjust the value of by exploiting any available invariance properties for the desired class for example , if is standard brownian motion , or the brownian bridge , on ] and , \]]respectively . 
while such processes used in a jump - free quantitative finance context for volatility modeling will require , in addition , that be adapted to filtration of the wiener process driving the stock - price ( as in ( [ mart ] ) ) , under our simplifying assumption that and be independent , this adaptability assumption can be considered as automatically satisfied , or equivalently , as unnecessary , since the filtration of can be augmented by the natural filtration of .define the centered version of : fix a time horizon .it is not hard to see that for all .since the gaussian process is almost surely continuous , the mean function is a continuous function on ] .this is a consequence of the dudley - fernique theory of regularity , which also implies that and boast moduli of continuity bounded above by the scale ( see ) , but this can also be established by elementary means implies its continuity in probability on .hence , the process is continuous in the mean - square sense ( see , e.g. , , lemma 1 on p. 5 , or invoke the equivalence of norms on wiener chaos , see ) .mean - square continuity of implies the continuity of the mean function on ] , ^{2} ] . ] . in our analysis, it will be convenient to refer to the karhunen - love expansion of . applying the classical karhunen - love theorem to ( see , e.g. , , section 26.1 ), we obtain the existence of a non - increasing sequence of non - negative summable reals , an i.i.d . sequence of standard normal variates , and a sequence of functions which form an orthonormal system in \right ) ] as the following operator \right ) , \quad 0\leq t\leq t,\]]and , , are the corresponding eigenvalues ( counting the multiplicities ) .we always assume that the orthonormal system is rearranged so that in particular , is the top eigenvalue , and is its multiplicity . using ( [ e : kl ] ) , we obtain is worth pointing out that this expression for the integrated variance of the centered volatility is in fact the most general form of a random variable in the second wiener chaos , with mean adjusted to ensure almost - sure positivity of the integrated variance .this is established using a classical structure theorem on separable hilbert spaces , as explained in ( * ? ? ?* section 2.7.4 ) .in other words ( also see ( * ? ? ? 
* section 2.7.3 ) for additional details ) , any prescribed mean - adjusted integrated variance in the second chaos is of the form ^{2}}g\left ( s , t\right ) dz\left ( s\right ) dz\left ( t\right ) + 2\left\vert g\right\vert _{ l^{2}\left ( [ 0,t]^{2}\right ) } ^{2}\]]for some standard wiener process and some function ^{2}\right ) ] , then hence , .for instance , the previous equality holds for a centered gaussian process .equality ( [ e : kl ] ) can be rewritten as follows : + s \notag \\ & = \sum_{n=1}^{\infty } \lambda _ { n}\left [ z_{n}+\frac{\delta _ { n}}{\sqrt{\lambda _ { n}}}\right ] ^{2}+\left ( s-\sum_{n=1}^{\infty } \delta _ { n}^{2}\right ) .\label{e : kl2}\end{aligned}\]]it follows from ( [ e : kl1 ] ) and ( [ e : kl2 ] ) that if the function belongs to the image space ) ] , and the fourier coefficient of the mean function with respect to the corresponding eigenfunction .we refer to section practical for further discussion .since the processes and in ( [ e : svm ] ) are independent , the stochastic volatility model described in ( [ e : svm ] ) belongs to the class of the so - called symmetric models ( see section 9.8 in ) .it is known that for symmetric models , the next statements can be established using theorem [ t : isvm ] , corollary [ c : coro ] , and formula ( [ e : ee ] ) .[ t : isvmo ] the following formula holds for the function as : \sqrt{\log\frac{s_0e^{rt}}{k } } \notag \\ & \quad+\frac{\sqrt{2}\widetilde{b}}{t^{\frac{1}{2}}(8\widetilde{c}+t ) ^{\frac{1}{4 } } } \left[\frac{1}{\left(\sqrt{8\widetilde{c}+t}-\sqrt{t}\right ) ^{\frac{1}{2 } } } -\frac{1}{\left(\sqrt{8\widetilde{c}+t}+\sqrt{t}\right)^{\frac{1}{2 } } } \right ] \notag \\ & \quad+\frac{1-n_1}{4}\frac{\log\log\frac{s_0e^{rt}}{k } } { \sqrt{\log{\frac{s_0e^{rt}}{k } } } } + o\left(\left(\log\frac{s_0e^{rt}}{k}\right ) ^{-\frac{1}{2}}\right ) .\label{e : holdse}\end{aligned}\ ] ] the constants in ( [ e : holdse ] ) are the same as in theorem [ t : isvm ] .[ c : corol ] the following are true : ( i)if , then as , } \sqrt{\log \frac{s_{0}e^{rt}}{k } } \notag \\ & \quad + \frac{\sqrt{2}\delta _ { 1}}{t^{\frac{1}{2}}(4+\lambda _ { 1})^{\frac{1}{4}}\left [ ( \sqrt{4+\lambda _ { 1}}+\sqrt{\lambda _ { 1}})^{\frac{1}{2}}+(\sqrt{4+\lambda _ { 1}}-\sqrt{\lambda _ { 1}})^{\frac{1}{2}}\right ] } \notag \\ & \quad + o\left ( \left ( \log \frac{s_{0}e^{rt}}{k}\right ) ^{-\frac{1}{2}}\right ) .\label{e : n11}\end{aligned}\]](ii)if is a centered gaussian process , then as , } \sqrt{\log \frac{s_{0}e^{rt}}{k } } \\ & \quad + \frac{1-n_{1}}{4}\frac{\log \log \frac{s_{0}e^{rt}}{k}}{\sqrt{\log \frac{s_{0}e^{rt}}{k}}}+o\left ( \left ( \log \frac{s_{0}e^{rt}}{k}\right ) ^{-\frac{1}{2}}\right ) .\end{aligned}\]](iii)if is a centered gaussian process and , then as , } \sqrt{\log \frac{s_{0}e^{rt}}{k}}+o\left ( \left ( \log \frac{s_{0}e^{rt}}{k}\right ) ^{-\frac{1}{2}}\right ) .\ ] ]the classical stein - stein model is an important special example of a gaussian stochastic volatility model .the stein - stein model was introduced in .the volatility in the uncorrelated stein - stein model is the absolute value of an ornstein - uhlenbeck process with a constant initial condition . in this section, we also consider a generalization of the stein - stein model , in which the initial condition for the volatility process is a random variable . of our interest in the present sectionis a gaussian stochastic volatility model with the process satisfying the equation . 
here , , and .it will be assumed that the initial condition is a gaussian random variable with mean and variance , independent of the process .it is known that if , then the initial condition is equal to the constant .the mean function of the process is given by and its covariance function is as follows : therefore , the following formula holds for the variance function : and hence , if , then the process , ] .the constant in ( [ e : eqin3 ] ) is determined from for all . on the other hand ,if , then is given by ( [ e : eqin ] ) , while the functions are defined by for all and ] .option prices are derived by computing average payoffs over the paths .the details are well known , and are omitted . in the fou case , the exact same methodology is used , except that one must specify the technique used to simulate increments of the fbm process which drives : we used the circulant method , which is based on fbm s spectral properties , and was proposed by a.t .dieker in a 2002 thesis : see . given this simulated data , before embarking on the task of calibrating parameters , to ensure that our methodology is relevant in practice, it is important to discuss liquidity issues .it is known that the out - of - the - money call options market is poorly liquid , implying that the large strike asymptotics for call and iv prices are typically not visible in the data .we concentrate instead on small strike asymptotics . there , depending on the market segment , options with maturities on the scale of one or several months can be liquid with small bid - ask spread for log moneyness as far down as . convincing visual evidence of this can be found in figures 3 and 4 in which report 2011 data for spx options . based on this , we will use the value as an illustrative lower bound for : beyond this lower bound , liquidity is insufficient to measure iv .this immediately creates a problem due to the similar magitude , in this range of , of the constant term and the next term in the expansion whose order we can only guarantee to be no larger than , according to the big term in corollary [ c : corol ] part ( i ) . in spite of this , calibration of to as in ( calibration ) works well in practice , as our examples below show .we also find that the inability to neglect compared to introduces a bias in the calibration , whose magnitude , while not constant , gives a good idea of the relative size of the next term in the small strike expansion , motivating the open problem of looking for this next term , and perhaps the following one .we begin with the stationary uncorrelated stein - stein model with constant mean - reversion level , rate of mean reversion , and so called vol - vol parameter .the systems of equations needed to perform calibration here have a somewhat triangular structure . according to section [ s : uss ] , if one is to calibrate , access to is needed , if one may rely on independent knowledge of the level of mean reversion .specifically , one would solve the following system unfortunately the constant also depends on the unknowns in the following non - trivial way: is not fixed , the task of determining which value of represents the minimal solution of the first equation above , given the large number of solutions to the above system , is difficult . 
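for reference, a compact sketch of the simulation methodology described earlier in this section for the uncorrelated case: simulate the ornstein-uhlenbeck volatility path, use the independence of the volatility and the stock's driving noise to price the call as an average of black-scholes prices over the simulated integrated variance, then invert black-scholes for the implied volatility. the euler scheme, parameter values and root-finding bracket are illustrative choices and not the exact settings used for the figures.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def integrated_variance_ou(q, m, volvol, x0, T, n_steps, n_paths, rng):
    # euler scheme for dx = q (m - x) dt + volvol db; only x^2 enters the
    # integrated variance, so using |x| instead of x would change nothing
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    iv2 = np.zeros(n_paths)
    for _ in range(n_steps):
        iv2 += x**2 * dt
        x += q * (m - x) * dt + volvol * np.sqrt(dt) * rng.standard_normal(n_paths)
    return iv2                                   # int_0^T sigma_t^2 dt per path

def bs_call(s0, k, r, T, vol):
    d1 = (np.log(s0 / k) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return s0 * norm.cdf(d1) - k * np.exp(-r * T) * norm.cdf(d2)

def call_price_mixing(s0, k, r, T, iv2):
    # conditionally on the volatility path the stock is lognormal, so the
    # price is a mixture of black-scholes prices
    return float(np.mean(bs_call(s0, k, r, T, np.sqrt(iv2 / T))))

def implied_vol(price, s0, k, r, T):
    return brentq(lambda v: bs_call(s0, k, r, T, v) - price, 1e-4, 5.0)

rng = np.random.default_rng(1)
iv2 = integrated_variance_ou(q=6.0, m=0.2, volvol=0.4, x0=0.2,
                             T=1.0 / 12, n_steps=250, n_paths=20000, rng=rng)
price = call_price_mixing(100.0, 100.0, 0.0, 1.0 / 12, iv2)
print(price, implied_vol(price, 100.0, 100.0, 0.0, 1.0 / 12))

the difficulty noted just above lies in inverting the resulting smile for all of the model parameters at once; the text now explains the simpler route that is actually taken.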
we did not pursue this avenue further for this reason , and because the aforementioned bias due to liquidity means that one would not be in the right asymptotic regime .nevertheless , in the stein - stein model , our asymptotics allow us to calibrate the vol - vol parameter .the equation for finding given a measurement of and prior knowledge of is much simpler .indeed , since is assumed given , the base frequency is computed easily as the smallest positive solution of ( [ e : eqin2 ] ) .independently of this , based on , the first relation in ( [ calibration ] ) determine . then according to equation ( [ e : lambda ] ) , we obtain immediately simulated iv data for the call option with ( signifying a typical mean level of volatility of ) , ( fast mean reversion , every eight weeks or so ) , and ( high level of volatility uncertainty ) .how to estimate from the data is not unambiguous .we adopt a least - squares method , on an interval of -values of fixed length ; after experimentation , as a rule of thumb , an interval of length provides a good balance between providing a local estimate and drawing on enough datapoints .we start the interval as far to the left as possible while avoiding any range where the data representing the function may exhibit clear signs of convexity .this is in an effort to coincide with values of for which sufficient liquidity exists in practice . as a guide to assess this liquidity, we use the study reported in ( * ? ? ?* section 5.4 ) , which we mentioned in the introduction .this liquidity depends heavily on the option maturity . for a one - month option , the above calibration method reports , based on the interval ] .other calibrations , not reported here because of their similarity with these two , show that calibration accuracy increases with more liquid options , which is consistent with the intuition that being able to use intervals further to the left should allow a better match with the asymptotic regime ( [ e : onemore1 ] ) .the graphs of the data versus the asymptotic curve , showing excellent agreement for the first term , are given in fig . 1 and fig. 2 . as mentioned above, this agreement explains why this calibration for works well even though there is a clear bias between the curve ( solid lines in fig . 1 and fig .2 ) and the true ( simulated ) iv data ( blue data points in fig . 1 and fig .2 ) . one may immediately conjecture that the gap between the solid and discrete lines is representative of the term in corollary [ c : corol ] , in the sense that the next term in the expansion might be of the form for some constant , and that our simulation might give us access to the value of .this does not appear to be quite the case .when graphing the expression for \sqrt{-k} ] , with hurst parameter . in , it was shown empirically that standard statistical methods for long - memory data are inadequate for estimating .this difficulty can be attributed to the fact that the volatility process can have non - stationary increments . in addition , some of the classical methods use path regularity or self - similarity as a proxy for long memory , which can not be exploited in practice since there is a lower limit to how frequenty observations can be made without running into microstructure noise . 
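the least-squares step used above fits a straight line in sqrt(-k) over a window of log-moneyness values chosen with liquidity in mind; a short version is given below. the window length of 0.3 and the hypothetical name fit_wing_slope are illustrative, and the map from the fitted slope to the model parameters (for instance to the top eigenvalue through the paper's equation (e:lambda)) is left symbolic since those formulas are not reproduced here.

import numpy as np

def fit_wing_slope(k, iv, k_left, window=0.3):
    # fit iv ~ slope * sqrt(-k) + intercept over the window [k_left, k_left + window]
    # of negative log-moneyness values (the small-strike wing)
    mask = (k >= k_left) & (k <= k_left + window)
    slope, intercept = np.polyfit(np.sqrt(-k[mask]), iv[mask], 1)
    return slope, intercept    # the slope is then matched to the asymptotic coefficient

the same fit applies verbatim in the fractional case that follows, where the difficulty of estimating the hurst parameter directly from time series is precisely what motivates calibrating through the smile instead.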
to make matters worse ,the process is not directly observed ; in such a partial observation case , a general theoretical result was given in , by which the rate of convergence of any estimator of can not exceed an optimal -dependent rate which is always slower than , where is the number of observations .given the non - stationarity of the parameter on a monthly scale , a realistic time series at the highest observation frequency where microstructure noise can be ingored ( e.g. one stock observation every 5 minutes ) would not permit even the optimal estimators described in from pinning down a value of with any acceptable confidence level .the work in proposes a calibration technique based on a straightforward comparison of simulated and market option prices to determine .our strategy herein is similar , but based on implied volatility .we consider the fou model described above with the following parameters : , , , , with different values of the hurst parameter .as mentioned above , we simulated option prices using standard monte carlo , where the fou process is produced by a.t .dieker s circulant method . since the values of for each not known explicitly or semi - explicitly , we resorted to the method developed in by s. corlay in for optimal quantification : there , the infinite - dimensional eigenvalue problem is converted to a matrix eigenvalue problem which uses a low - order quadrature rule for approximating integrals ( a trapezoidal rule is recommended ) , after which a richardson - romberg extrapolation is used to improve accuracy .we repeat this procedure for the fou process with the above parameters , for each value of from to , with increments of .the corresponding values we obtain for in each case are collected in the following table : our illustration of the calibration method then consists of starting with simulated iv data for a fou model with fixed , then implementing the same procedure as in the stein - stein example to determine the value of by using an interval of length which is realistic in terms of liquidity constraints , and then matching that value of to the closest value in the above table , thereby concluding that the simulated data is consistent with the corresponding value of in the table .the results of this method are summarized as follows our method shows a good level of accuracy , though it does not appear capable of distinguishing between the case of no memory and a simulation with very low memory ( ) .the accuracy of the method has some sensitivity to the intervals being used , particularly for small true values of : in that case the first term in the implied volatility asymptotic curve seems to dictate a need for including data points for log - moneyness near . for higher values of , we noted greater robustness to the interval being used to estimate .for example , with a true used in simulation , calibrated values of and were obtained with the intervals ] respectively .more generally , one notes that the bias between the curve and the simulated iv data is much larger when than in the classical stein - stein case where , as illustrated in fig .4 . 
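a stripped-down version of the quadrature approach mentioned above (without the richardson-romberg extrapolation step): discretise the covariance kernel on a uniform grid, weight it with trapezoidal weights, and take the top eigenvalues of the resulting symmetric matrix. the fractional brownian motion covariance is used below only as a stand-in with a closed-form kernel; for the table above one would plug in the fou covariance evaluated numerically.

import numpy as np

def fbm_cov(s, t, hurst):
    # covariance of fractional brownian motion, used here as a simple example
    return 0.5 * (s**(2 * hurst) + t**(2 * hurst) - np.abs(s - t)**(2 * hurst))

def top_kl_eigenvalues(kernel, T, n=400, how_many=3):
    # nystrom discretisation of the covariance operator with trapezoidal
    # weights; eigenvalues of w^(1/2) k w^(1/2) approximate the kl eigenvalues
    t = np.linspace(0.0, T, n)
    w = np.full(n, T / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(t[:, None], t[None, :])
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    return np.linalg.eigvalsh(A)[::-1][:how_many]

print(top_kl_eigenvalues(np.minimum, 1.0))                    # brownian motion: ~4/pi^2, ...
print(top_kl_eigenvalues(lambda s, t: fbm_cov(s, t, 0.3), 1.0))

the comparison of the bias across values of the hurst parameter made just above relies on top eigenvalues computed in this manner, refined by the richardson-romberg step omitted here.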
this provides an even stronger call than in the case to compute the constants and in the conjecture in relation ( [ conjecture ] ) , as functions of the first few kl eigen - elements .we also computed the fou s kl eigenvalues and for each , though these are not used in our calibration .they are nevertheless instructive since they indicate at what speed the kl expansions of and of the integrated variance might converge . without reporting all the values ,we find that the ratio decreases from for to for , while for , the values are even smaller , decreasing from to over the same range of . if the values of and in our conjecture ( [ conjecture ] ) are of the same order as the ratio , this could give additional hope that computing in the fou case might be enough to implement a two - parameter calibration , with only a few values of needed to approximate . finally , by comparing fig .1 with the first graph in fig .4 , where all ou parameters remain the same except for which goes from to , one notes that the difference jumps from values on the order of to much larger values , on the order of .this indicates that the constant in formula ( [ just ] ) might have a discontinuous dependence on as approches ; this phase transition between the case of no memory ( ) to that of long memory ) is presumably due to a jump in the value of kl elements as approaches .this could lead to a method for determining whether long memory is present , but in the case of iv asymptotics it would be constrained by liquidity considerations .we will investigate all these questions in another article .s. benaim , p. k. friz , and r. lee . on black - scholes implied volatility at extreme strikes . in : r. cont ( ed . ) , _ frontiers in quantitative finance : volatility and credit risk modeling _ , wiley , hoboken ( 2009 ) , 19 - 45 .deuschel , p. k. friz , a. jacquier and s. violante .marginal density expansions for diffusions and stochastic volatility i : theoretical foundations . _communications on pure and applied mathematics _* 67 * ( 2014 ) , 40 - 82 .deuschel , p. k. friz , a. jacquier and s. violante .marginal density expansions for diffusions and stochastic volatility i : applications . _communications on pure and applied mathematics _* 67 * ( 2014 ) , 321 - 350 .j. gatheral . a parsimonious arbitrage - free implied volatility parametrization with application to the valuation of the volatility derivatives . in : _ global derivatives and risk management , _ madrid , may 26 , 2004 .a. gulisashvili .asymptotic equivalence in lee s moment formulas for the implied volatility , asset price models without moment explosions , and piterbarg s conjecture . _ international journal of theoretical and applied finance _ * 15 * ( 2012 ) , 1250020 .a. gulisashvili and e. m. stein .asymptotic behavior of the stock price distribution density and implied volatility in stochastic volatility models ._ applied mathematics and optimization _ * 61 * ( 2010 ) , 287 - 315 .d. revuz and m. yor ._ continuous martingales and brownian motion _ , springer - verlag berlin 2004 . m. rosenbaum .estimation of the volatility persistence in a discretely observed diffusion model ._ stochastic processes and their applications _ , * 118 * ( 8) ( 2008 ) , 1434 - 1462 . , , , , ,title="fig : " ] , , , , ,title="fig : " ] , , , , ,title="fig : " ] , , , , ,title="fig : " ] , , , , ,title="fig : " ]
we consider a stochastic volatility stock price model in which the volatility is a non - centered continuous gaussian process with arbitrary prescribed mean and covariance . by exhibiting a karhunen - loève expansion for the integrated variance , and using sharp estimates of the density of a general second - chaos variable , we derive asymptotics for the stock price density and implied volatility in these models in the limit of large or small strikes . our main result provides explicit expressions for the first three terms in the expansion of the implied volatility , based on three basic spectral - type statistics of the gaussian process : the top eigenvalue of its covariance operator , the multiplicity of this eigenvalue , and the norm of the projection of the mean function on the top eigenspace . strategies for using this expansion for calibration purposes are discussed . * ams 2010 classification * : 60g15 , 91g20 , 40e05 . * keywords * : stochastic volatility , implied volatility , large strike , karhunen - loève expansion , chi - squared variates .
* sharpe : * it has become apparent from many talks at this conference that chiral extrapolation is an issue of great practical importance . different approaches are being tried , and it is certainly timely to have a general discussion of the issue . in order to focus the discussion , i sent the panelists a draft list of key questions to focus their thoughts as they were preparing their remarks . these questions have evolved as a result of feedback , and my present version ( in no particular order ) is as follows .

1 . how small does the quark mass need to be to use chiral perturbation theory ?
2 . do we need to use fermions with exact chiral symmetry to reach the region where chiral perturbation theory applies ?
3 . what fit forms should we use outside the chiral region ?
4 . is the strange quark light enough to be in the chiral regime ?
5 . is it necessary to include effects in the chiral lagrangian ?
6 . can we use ( present or future ) partially quenched simulations to obtain quantitative results for physical parameters ?
7 . can we use quenched simulations to give quantitative results for physical parameters ?
8 . is it possible and/or desirable to work at ?

* bernard : * at this conference , and in the recent literature , several groups have emphasized a key point about chiral extrapolations : over the typical current range of lattice values for the light quark masses , the data for many physical quantities is quite linear . yet linear extrapolations will miss the chiral logarithms that we know are present and therefore may introduce large systematic errors into the results . jlqcd , kronfeld and ryan , and yamada's review talk here have stressed the relevance of this point for heavy - light decay constants , while the adelaide group has brought out the same point in the context of baryon physics . all these groups deserve a lot of credit for bringing this important issue to the fore . now the question is : `` what are we going to do about it ?
'' attempts to extract the logarithms directly in the current typical mass range are in my opinion doomed to failure : the extreme linearity of the data indicates , at best , that higher order terms must be contributing in addition to the logarithms , or , at worst , that we are out of the chiral regime altogether .the only real solution is to go to lower quark masses .we need to be well into the chiral regime , to see the logs and make controlled fits including this known chiral physics .my rough guess in the heavy - light decay constant case is that we need , or at best .the latter range may be reachable , with significant work , with wilson - type fermions ; while the former may require , at least in the near term , staggered fermions .the use of staggered light valence quarks in heavy - light simulations , as was suggested by wingate at _ lattice 2001 _ , should make the chiral regime for that problem accessible very soon .a different approach has been advocated by the adelaide group .they say we can take into account the chiral logarithms in the current range of masses by modeling the turn - off of chiral logarithms with a quantity - dependent cutoff that represents the `` core '' of the object under study .i have nothing against modeling _per se _ ; i think it can be an excellent tool to gain qualitative insight into the physics .what i think is wrong , or at least wrong - headed , about the adelaide approach is the suggestion that one can use it to extract reliable quantitative answers with controlled errors .extraction of such answers is after all why we are doing lattice physics in the first place .the adelaide model introduces a single parameter , the core size , to describe the very complicated real physics involving couplings to all kinds of particles s , s , _ etc . _ as one moves out of the chiral regime .the change in their results when they change the parameter by some amount or vary the functional form at the cutoff is simply not a reliable , systematically improvable error .in other words , their model is an uncontrolled approximation .suppose , however , one phrases the question in the following way : `` given some lattice data in the linear regime , are you likely to get closer to the right answer with a linear fit , or with an adelaide form that interpolates between linear behavior and the known chiral behavior at low mass ? '' phrased that way , my answer would be , `` the adelaide form . ''but the problem is that , while you are most likely closer to the right answer , you do not know the size of the errors unless you know the right answer to begin with ! in my opinion , the linear fit is a `` straw man '' alternative .the real alternative is to go to lighter masses and fit to the known chiral form .this approach , and this approach only , will produce controlled , systematically improvable errors : to improve , just go to higher order in the chiral expansion or to still lighter masses .now if we want to go to lighter masses , i would argue that the easiest way to do so is by using staggered fermions .dynamical staggered fermions are very fast , and they have an exact lattice chiral symmetry . 
however , as you know , many of the other staggered symmetries are broken at finite lattice spacing .first of all , let me talk about nomenclature .i would like to advocate here the use of the word `` taste '' to describe the 4 internal fermion types inherent in a single staggered field .taste symmetry is violated on the lattice at but becomes exact in the continuum limit .i reserve the word `` flavor '' for different staggered fields , which have an exact lattice symmetry ( in the equal mass case ) that mixes them .for example , milc is doing simulations with 3 flavors ( , , and ) with .normally each flavor would have 4 tastes , but we do the usual trick of taking the fourth root of the determinant to get a single taste per flavor .of course this is ugly and non - local , and one must test that there are no problems introduced in the continuum limit .i find it useful to think about the effects of taste symmetry breaking as just a more complicated version of `` partial quenching '' .sharpe and shoresh have taught us that , as long as a theory has the right number of sea quarks ( 3 ) , the chiral parameters are physical even if the masses of the quarks are not physical , and even if the valence and sea quark masses are different ( _ i.e. _ , even if the theory is partially quenched ) . with three staggered flavors and s ,the theory is , i believe , still in the right sector and has physical chiral parameters .but it is like a theory with 12 sea quarks , each with weight , rather than 3 normal flavors . in order to extract the physical chiral parameters from an ordinary 3-flavor partially quenched theory , we need the correct functional forms calculated in partially quenched chiral perturbation theory .similarly , in order to extract the physical chiral parameters from a theory of 3 staggered flavors with s , we need the functional forms calculated in a staggered chiral perturbation theory ( s ) .this includes the effect of the taste violations .the starting point of s is the chiral lagrangian of lee and sharpe , which is the low energy effective theory for a single staggered field , correct to . to apply it to the case of interest, one must generalize to 3 flavors ( which turns out to be non - trivial ) , calculate relevant quantities at 1-loop , and the adjust for the effect of taking s .student chris aubin and i have done this for and .( is the average quark mass . )one can fit the milc data very well with our results .we are in the process of extending this work to , and heavy - light decay constants , as well as allowing for different valence and sea quark masses . *hashimoto : * since the computational cost required to simulate dynamical quarks grows very rapidly as the sea quark mass is decreased , controlled chiral extrapolation is crucial to obtain reliable predictions for physical quantities . 
through this short presentation i would like to share our experience with chiral extrapolations obtained from the unquenched simulation being performed by the jlqcd collaboration using nonperturbatively improved wilson fermions on a relatively fine lattice, 0.1 fm. further details are presented in a parallel talk. the strategy we have in mind when we do the chiral extrapolation is to use chiral perturbation theory as a theoretical guide to control the quark mass dependence of physical quantities. for this strategy to work one has to push the sea quark mass as light as possible and test whether the lattice data are described by the one-loop formula. (the lowest order prediction usually does not have quark mass dependence.) if so, chiral extrapolation down to the physical pion mass is justified. in full qcd, chiral perturbation theory predicts the chiral logarithm with a definite coefficient depending only on the number of active flavors, which gives a non-trivial test of the unquenched lattice simulations. for example, the pcac relation for degenerate quarks of a common mass contains a chiral log term of this kind, and a similar expression holds for the pseudoscalar meson decay constant. the coefficient of the chiral log term is fixed, while the low energy constants are unknown. figure [ fig : b_vs_mpi2 ] shows the comparison of lattice data with ( [ eq : chpt_psmass ] ), and it is unfortunately clear that the lattice result does not reproduce the characteristic curvature of the chiral logarithm. the same is true for the pseudoscalar meson decay constant, and the ratio test using partially quenched chiral perturbation theory leads to the same conclusion. the most likely reason is that the dynamical quarks in our simulations are still too heavy. in fact, the corresponding pseudoscalar meson mass ranges from 550 to 1,000 mev, for which we do not naively expect that chiral perturbation theory works, especially at the high end. our analysis of the partially quenched data suggests that a meson mass as low as 300 mev is necessary to be consistent with one-loop chiral perturbation theory. let us now discuss the systematic uncertainty in the chiral extrapolation.
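before turning to that, it may help to sketch the kind of one-loop expressions being tested. the code below assumes the standard forms for n_f degenerate flavours, with chiral-log coefficients 1/n_f for the pcac ratio and -n_f/2 for the decay constant; the analytic low-energy constants (and the values of b and f) are unknowns to be fitted, and the numbers used here are purely illustrative.

```python
import numpy as np

def x_of(mq, B, f):
    """expansion parameter x = 2*B*mq / (4*pi*f)**2."""
    return 2.0 * B * mq / (4.0 * np.pi * f) ** 2

def mps2_over_mq(mq, B, f, c3, nf=2):
    """one-loop pcac ratio: m_PS^2 / m_q = 2B * [1 + (1/nf)*x*ln(x) + c3*x]."""
    x = x_of(mq, B, f)
    return 2.0 * B * (1.0 + x * np.log(x) / nf + c3 * x)

def f_ps(mq, B, f, c4, nf=2):
    """one-loop decay constant: f_PS = f * [1 - (nf/2)*x*ln(x) + c4*x]."""
    x = x_of(mq, B, f)
    return f * (1.0 - 0.5 * nf * x * np.log(x) + c4 * x)

# illustrative parameter values in GeV units, not fit results
B, f, c3, c4 = 2.5, 0.090, 1.0, 1.0
for mq in (0.010, 0.030, 0.060, 0.090):
    print(mq, round(mps2_over_mq(mq, B, f, c3), 3), round(f_ps(mq, B, f, c4), 4))
```

the curvature that these expressions develop at small quark mass, driven by the x ln x terms with their fixed coefficients, is the characteristic feature that the lattice data described above fail to reproduce.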
since we know that chiral perturbation theory is valid for small enough quark masses, the chiral extrapolation has to be consistent with the one-loop formula at least in the chiral limit. if we assume that the chiral logarithm dominates only below a scale, a possible model is to take the one-loop formula below that scale while using a conventional polynomial fit elsewhere. both functions may be connected so that their value and first derivative match at the scale. the scale is unknown, though we naively expect that it is around 300-500 mev. therefore, we should consider the dependence on it in a wider range, say 0-1,000 mev, as an indication of the systematic error in the chiral extrapolation. a plot showing these fitting curves is presented in . another possible functional form is that suggested by the adelaide-mit group. they propose using the one-loop formula calculated with a hard momentum cutoff, which amounts to replacing the chiral log term by a cutoff-regulated logarithm. it is a model in the sense that we use it above the cutoff scale. fits to the pion decay constant are shown in figure [ fig : fpi_vs_mpi2_adelaide-mit ]. the fit curves represent the model with cutoff values of 0, 300, 500, and mev. since we do not have a solid theory to choose the cutoff scale, the variation of the chiral limit should be taken as the systematic uncertainty, whose size is of the order of %. the large uncertainty associated with the chiral extrapolation as discussed above has not attracted much attention, partly because most simulations have been done in the quenched approximation, for which the chiral behavior of physical quantities is quite different. in contrast, in unquenched qcd, confirming the predictions of chiral perturbation theory gives a non-trivial test of the low energy behavior obtained from lattice calculations. for pion or kaon physics it is essential to perform the lattice simulation in a region where chiral perturbation theory is applicable, since the physics analysis often relies on it. state-of-the-art unquenched lattice simulations using wilson-type fermions are still restricted to the large sea quark mass region, for which we do not find an indication of the one-loop chiral logarithm. this means that there could be a sizable systematic uncertainty in the chiral extrapolation. i have discussed the example of the pion decay constant; a similar analysis is underway for the heavy-light decay constants and light quark masses. * pallante : * chiral extrapolation of weak matrix elements and in particular kaon matrix elements (i.e., decays, semileptonic kaon decays) is a very delicate issue. one of the most difficult tasks still remains the calculation of matrix elements, where the relevant operator is a weak four-quark effective operator at scales well below the weak scale. since elastic (soft) final state interactions (fsi) of the two pions are large especially in the total isospin zero channel (see and refs. therein), it is mandatory to overcome the _ maiani-testa no-go _ theorem and to include the bulk of fsi effects directly in the lattice measurement of kaon matrix elements, while keeping under control residual corrections through the use of chiral perturbation theory. a considerable step forward in this respect has been made in refs. , where it has been shown that the physical matrix element can be extracted from the measurement of an euclidean correlation function at _ finite volume _. the _ finite volume _ matrix element is converted to the infinite volume one via a multiplicative _ universal _ factor (denoted as ll factor in the following), i.e.
only depending on the quantum numbers of the final state .there are three main reasons why is needed , at least up to next - to - leading order ( nlo ) , in extracting the physical matrix element from a lattice euclidean correlation function : 1 ) lattice simulations are presently performed at unphysical values of light quark masses , so that is needed to parameterize mass dependences and perform the extrapolation to their physical value , provided it is applicable at the values of quark masses used on the lattice .2 ) lattice simulations may be done with unphysical choices of the kinematics , simpler than the physical one , and again is needed for the extrapolation .3 ) is an appropriate tool to monitor in a perturbative manner the size of _ systematic errors _ due to a ) ( partial ) quenching , b ) finite volume and c ) non - zero lattice spacing .it is also important to note that the possibility of computing lattice matrix elements with choices of momenta and masses different from the physical ones is a very powerful method , once we want to determine the low - energy constants ( lec ) which appear in the chiral expansion of observables at nlo . by varying momenta , and masses, we can increase the number of linear combinations of lec that can be extracted from a lattice computation .a possible strategy for the _ direct _ measurement of matrix elements has been formulated in ref . , specifically for the case ( see also ref . at this conference ) .this strategy is general , and can be applied also to the case .it is as follows : 1 ) evaluate the euclidean correlation function at _ fixed physics _, i.e. at fixed two - pion total energy at finite volume ; 2 ) divide by the appropriate source ( sink ) correlation functions at finite volume .this step produces the finite volume matrix element , and 3 ) multiply it by the universal ll factor to get the infinite volume amplitude : .4 ) if not able to apply the procedure 1 ) to 3 ) directly for the physical kinematics , then apply the procedure for an alternative choice of the kinematics that is sufficient to fully determine the physical amplitude at nlo in .two such choices for the case are the spqr kinematics , where one of the two pions carries a non - zero three - momentum , and the strategy proposed in ref . , using the combined measurements of at and , at non - zero momentum , and transition amplitudes .the second strategy is also sufficient for the case , while the spqr kinematics for is under investigation . also , the ll factor derived in is only applicable to the center - of - mass frame , while its generalization to a moving frame has not yet been derived ( see ref . for a discussion ) .unfortunately , most realistic lattice simulations are still performed in the quenched approximation or , at best , in the partially quenched approximation with two or three dynamical ( sea ) flavors .the loss of unitarity due to ( partial ) quenching of chiral group has dramatic consequences in the channel of amplitudes .loss of unitarity implies the failure of watson s theorem and lscher s quantization condition . as a consequence ,the fsi phase extracted from the quenched weak amplitude is no longer universal ( i.e. it may also depend on the weak operator ) and finite volume corrections of quenched weak matrix elements among physical states are not universal ( i.e. the universality of the ll factor does not work as in the full theory ) . 
the reason why the case is a peculiar one is that the rescattering diagram of the two final state pions ( the one producing the phase of the amplitude ) is modified by ( partial ) quenching already at one loop in .this is not the case for however , where the rescattering diagram is unaffected by quenching at least to one loop in .this guarantees the applicability of the _ direct _ strategy to the channel also in the quenched approximation , at least up to one loop in the chiral expansion . another consequence of ( partial ) quenching is the contamination of qcd - lr penguin operators , like , by new non - singlet operators which appear at leading order in the chiral expansion ( i.e. order , even enhanced respect to the order singlet operator ) .this contamination does not affect transitions , being pure . given the above picture ,a few conclusions can be drawn . at present, plays a crucial role in the extrapolation of lattice weak matrix elements to their physical value , or to the chiral limit .however , the applicability of at the lowest orders ( typically up to nlo ) in the extrapolation procedure is guaranteed only for sufficiently light values of lattice meson masses .this means that one should work in a region of quark masses sufficiently far below the first relevant resonance .the situation can be further complicated by the presence of fsi effects , especially in the channel .these effects can be either analytically resummed or the bulk of them be directly included into the finite volume lattice matrix element .most critical appears the situation in the presence of quenching , due to the lack of unitarity . for matrix elements ,strategies proposed for a _ direct _ measurement with unquenched simulations can still be used in a quenched simulation at least up to nlo in the chiral expansion .this is no longer true for matrix elements . in this case , quenching and partial quenching affect _universal _ properties of the weak amplitude already at one loop in , and in addition produce a severe contamination of qcd - lr penguin operators with new non - singlet operators . however , those problems disappear in the partially quenched case with and , where partially quenched correlation functions reproduce those of full qcd .* leinweber : * until recently , it was difficult to establish the range of quark masses that can be studied using chiral perturbation theory ( ) .now , with the advent of lattice qcd simulation results approaching the light quark mass regime , considerable light has been shed on this important question .it is now apparent that current leading - edge dynamical - fermion lattice - qcd simulation results lie well outside the applicable range of traditional dimensionally - regulated ( dim - reg ) in the baryon sector .the approach of the adelaide group is to incorporate the known or observed heavy quark behavior of the observable in question and the known nonanalytic behavior provided by within a single functional form which interpolates between these two regimes .the introduction of a finite - range regulator designed to describe the finite size of the source of the meson cloud of hadrons achieves this result .the properties of the meson - cloud source are parameterized and the values of the parameters are constrained by lattice qcd simulation results . 
without such techniques, one cannot connect experiment and current dynamical-fermion lattice-qcd simulation results for baryonic observables. the use of a finite-range regulator might be confused with modeling. however, it is already established that chiral perturbation theory can be formulated model independently using finite-range regulators such as a dipole. the coefficients of leading-nonanalytic (lna) terms are model independent and unaffected by the choice of regularization scheme. the explicit dependence on the finite-range regulation parameter is absorbed into renormalized coefficients of the chiral lagrangian. the shape of the regulator is irrelevant to the formulation of the effective field theory. however, current lattice simulation results encourage us to look for an efficient formulation which maximizes the applicable pion-mass range accessed via one- or two-loop order. an optimal regulator (perhaps motivated by phenomenology) will effectively re-sum the chiral expansion, encapsulating the physics in the first few terms of the expansion. the approach is systematically improved by simply going to higher order in the chiral expansion. our experience with dipole and monopole vertex regulators indicates that the shape of the regulator has little effect on the extrapolated results, provided lattice qcd simulation results are used to constrain the optimal regulator parameter on an observable-by-observable basis. in order to correctly describe qcd, the coefficients of nonanalytic terms must be fixed to their known model-independent values. this practice differs from current common practice within our field, where these coefficients are demoted to fit parameters and optimized using lattice simulation results which lie well beyond the applicable range of traditional dim-reg. the failure of the approach is reflected in fit parameters which differ from the established values by an order of magnitude, spoiling associated predictions. i will focus on the extrapolation of the nucleon mass as it encompasses the important features which led to subsequent developments required to extrapolate today's lattice qcd results. figure [ slide1 ] displays the results of a finite-range chiral expansion of the nucleon mass (solid curve) constrained by dynamical-fermion simulation results from ukqcd (open symbols) and cp-pacs (closed symbols). the expression for the nucleon mass, eq. ([ npexp ]), arises from the one-loop pion-nucleon self-energy of the nucleon, with the momentum integral regulated by a sharp cutoff. the lattice simulation results constrain the optimal regulator parameter to 620 mev. of course it is desirable to use more realistic regulators, such as a dipole form, when keeping only one-loop terms of the chiral expansion. for small pion mass the standard lna behavior is obtained with the correct coefficient. for large pion mass, the arctangent tends to zero and suppresses the nonanalytic behavior in accord with the large quark masses involved. the scale of the regulator has a natural explanation as the scale at which the pion compton wavelength emerges from the hadronic interior. it is the scale below which the neglected extended structure of the effective fields becomes benign. the valid regime of the truncated expansion is the regime in which the choice of regulator has no significant impact.
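a compact numerical rendering of this kind of fit form is sketched below. the sharp-cutoff self-energy is written so that its small-mass limit reproduces the model-independent leading-nonanalytic term -3 g_A^2 m_pi^3 / (32 pi f_pi^2); the overall construction, the illustrative values of g_A and f_pi, and the analytic coefficients a0, a2, a4 are assumptions made for the purpose of the sketch and should be checked against the papers listed in the references. truncated versions of the same self-energy are included to make the comparison with a dim-reg-style expansion explicit.

```python
import numpy as np

GA, FPI, LAM = 1.26, 0.093, 0.620     # illustrative values, GeV units
PREF = -3.0 * GA**2 / (16.0 * np.pi**2 * FPI**2)

def sigma_sharp(mpi, lam=LAM):
    """one-loop pi-N self-energy regulated by a sharp momentum cutoff.
    the bracket is the integral of k^4/(k^2 + mpi^2) from 0 to lam; as
    mpi -> 0 the arctangent tends to pi/2, so the m_pi^3 term carries the
    standard LNA coefficient -3*GA**2/(32*pi*FPI**2)."""
    return PREF * (lam**3 / 3.0 - mpi**2 * lam
                   + mpi**3 * np.arctan(lam / mpi))

def m_nucleon(mpi, a0, a2, a4, lam=LAM):
    """finite-range chiral fit form: analytic terms plus regulated self-energy."""
    return a0 + a2 * mpi**2 + a4 * mpi**4 + sigma_sharp(mpi, lam)

def sigma_truncated(mpi, order, lam=LAM):
    """sigma_sharp expanded in powers of mpi/lam, using
    arctan(lam/mpi) = pi/2 - mpi/lam + (mpi/lam)**3/3 - ..., and kept
    to the indicated power of mpi (a dim-reg-style truncation)."""
    terms = {0: lam**3 / 3.0,
             2: -mpi**2 * lam,
             3: np.pi / 2.0 * mpi**3,
             4: -mpi**4 / lam,
             6: mpi**6 / (3.0 * lam**3)}
    return PREF * sum(t for k, t in terms.items() if k <= order)

# illustrative coefficients (GeV units); in a real analysis a0, a2, a4
# are constrained by the lattice data, with lam ~ 0.62 GeV as quoted above
for mpi in (0.14, 0.30, 0.50, 0.70, 0.90):
    print(mpi,
          round(m_nucleon(mpi, a0=1.17, a2=0.90, a4=-0.08), 3),
          round(sigma_sharp(mpi), 3),
          [round(sigma_truncated(mpi, n), 3) for n in (3, 4, 6)])
```

for small pion masses the truncations converge quickly on the full result, while for masses comparable to or above the regulator scale they scatter badly around it; the full form instead switches off the nonanalytic behaviour smoothly as the arctangent falls like lam/mpi, and the lam-dependent analytic pieces are absorbed into renormalized a0 and a2.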
to gain further insight into the validity of the truncated expansion of traditional dim - reg , one can perform a power series expansion of the arctangent in terms of and keep terms only to a given power . the dim - reg expansion of ( [ npexp ] ) for small is provided by curves ( i ) through ( iv ) in fig . [ slide1 ]. curve ( i ) contains terms to order , and ( ii ) to order .this is the correct implementation of the lna behavior of .the behavior dramatically contrasts the common but erroneous approach discussed above .the applicable range of traditional dim - reg to lna order for the nucleon mass is merely mev .incorporation of the next analytic term extends this range to 400 mev .curve ( iv ) illustrates the effect of including the term of the expansion . within the range ,dim - reg requires three analytic - term coefficients , , and ( the coefficient of ) , to be constrained by lattice qcd simulation results .the adelaide approach optimizes , and in place of .tuning the regulator parameter is not modeling .instead , optimization of the regulator provides the promise of suppressing and higher - order terms .one can understand how this approach works through the consideration of how the regulator models the physics behind the effective field - theory , but such descriptions do not undermine the rigorous nature of the effective field theory . while the adelaide approach of ( [ npexp ] ) _ is _ , it is the current state of lattice qcd simulation results that demand the parameters of the chiral expansion be determined in other ways .the extension to generalized pad approximates , modifications of log arguments and meson - source parameterizations are methods to constrain the chiral parameters with today s existing lattice qcd simulation results .traditional dim - reg to one loop knows nothing about the extended nature of the meson cloud source .as there is no other mechanism to incorporate this physics , the expansion fails catastrophically if it is used beyond the applicable range .moreover , convergence of the dim - reg expansion is slow as large errors associated with short - distance physics in loop integrals ( not suppressed in dim - reg ) must be removed by equally large analytic terms .these points are made obvious by examining the predictions of the power series expansions ( curves ( i ) through ( iv ) ) of fig .[ slide1 ] at . curve ( ii ) incorporating terms to is particularly amusing .in contrast , the optimal finite - range regulation of the adelaide group provides an additional mechanism for incorporating finite - size meson - cloud effects beyond that contained explicitly in the leading order terms of the dim - reg expansion .the finite - size regulator effectively re - sums the chiral expansion , suppressing higher - order terms and providing improved convergence .the net effect is that a catastrophic failure of the chiral expansion is circumvented and a smooth transition to the established heavy quark behavior is made .it is time for those advocating standard chiral expansions to use them with the established model - independent coefficients and in a regime void of catastrophic failures ; a regime that can be extended using finite - size regulators .the approach of the adelaide group provides a mechanism for confidently achieving these goals with the cautious conservatism vital for the future credibility of our field .* lepage : * this is a remarkable time in the history of lattice qcd . 
for the first time we appear to have an affordable procedure for almost realistic unquenching .improved staggered quarks are so efficient that the milc collaboration has already produced thousands of configurations with small lattice spacings and three flavors of light quark : one at the strange quark mass , and the other two at masses of order or or less of the strange quark mass .for the first time we can envisage a broad range of phenomenologically relevant lattice calculations , in such areas as physics and hadronic structure , that are precise to within a few percent _ and _ that must agree with experiment .chiral extrapolations are likely to be one of the largest sources of systematic error in such high - precision work .the milc collaboration is already working at much smaller light - quark masses than have been typical in the field ; there is little doubt that these masses are small enough for a viable chiral perturbation theory . andpartial quenching provides a powerful tool for determining the needed chiral parameters .such a systematic approach is essential for high precision .as discussed by claude bernard , the most significant complication in the chiral properties of improved staggered quarks comes from their `` taste - changing '' interactions . crudely speaking these generate a non - zero effective quark mass , proportional to , even for zero bare quark mass .this effect is perturbative in qcd , and can be removed by modifying the quark action .it can also be measured directly in simulations ; we should know shortly how significant it is for typical lattice spacings .an important aspect of high - precision lattice qcd is choosing appropriate targets .high - precision work in the near future will focus on stable or nearly stable hadrons. it will be much harder to achieve errors smaller than 1020% for processes that involve unstable hadrons such as the or .one might try to extrapolate through the decay threshold , but thresholds are intrinsically nonanalytic and so extrapolation is very unreliable .hadrons very near to thresholds , such as the or the , may be more accessible , but even these will be unusually sensitive to the light - quark mass since this affects the location of the threshold. such considerations will dictate which simulations we do and how we do them .consider , for example , how we set the physics parameters in a simulation .the splittings in the or systems are ideal for determining the lattice spacing .the hadrons involved are well below the - and - thresholds .they have no valence and quarks , and couple 100 or 1000 times more weakly to than ordinary mesons .this means these splittings are almost completely insensitive to light - quark masses ( once these are small enough ) .finally , and somewhat surprisingly , the splittings are almost completely insensitive the and quark masses as well . to a pretty good approximation, the only thing these splittings depend upon is .bad choices for setting would be the mass or even the - splitting , since the is only 40mev away from a threshold .another example concerns setting the strange quark mass .obvious choices for this are the splittings and .these involve no valence and quarks , and so require much less chiral extrapolation than say .also they are , by design , approximately independent of the heavy quark masses as well . 
andeach of the hadrons is far from thresholds .to a pretty good approximation , these splittings depend only upon .the cleo - c experiment presents a particularly exciting opportunity for lattice qcd , as discussed by rich galik at this meeting . within about 18 monthscleo - c will start to release few percent accurate results for , , .a challenge for lattice qcd is to _ predict _ these results with comparable precision .this would provide much needed credibility for high - precision lattice qcd , substantially increasing its impact on heavy - quark physics generally .it would also be a most fitting way to celebrate lattice qcd s anniversary .* wittig : * further to the issues discussed in my plenary talk , i would like to focus on two questions , namely * how can we gain information on physical quantities in the _ most reliable _ way ? * how can we check the validity of ? as an example let me come back to the masses of the light quarks .their absolute values are not accessible in , but quark mass ratios have been determined at nlo , using values for the low - energy constants ( lec s ) that were estimated from phenomenology in conjunction with theoretical assumptions .the results are individual values can thus be obtained if one succeeds in computing the absolute normalization in a lattice simulation .the most easily accessible quark mass on the lattice is surely , for which an extrapolation to the chiral regime is not required .the combination of the lattice estimate for with the ratios in eq.([eq_qmratios ] ) then yields the values of without chiral extrapolations of lattice data .this is a reliable procedure , provided that the theoretical assumptions , which are used to determine some of the lec s that are needed for the results in ( [ eq_qmratios ] ) , are justified .whether this is the case can be studied in lattice qcd , either by computing ratios like or directly on the lattice , or by determining the lec s themselves in a simulation .the apparent advantage of the latter is that only moderately light quark masses are required .furthermore , it is difficult though not impossible to distinguish between and in lattice simulations .can we trust the current lattice estimates for the low - energy constants ?alpha and ukqcd have extracted them by studying the quark mass behaviour in the range in order to check whether lattice estimates for the low - energy constants make sense phenomenologically , we can use the results for to predict the ratio of decay constants , whose experimental value is .ukqcd have simulated flavours of dynamical quarks . forthe sake of argument , let us assume that the quark mass dependence is not significantly different in the physical 3-flavour case .the data can then be fitted using the expressions in partially quenched for . in this wayone obtains where the first error is statistical , the second is systematic , and the inverted commas remind us that this is not really the 3-flavour case . after inserting this estimate into the expression for in `` full '' qcd obtains which is consistent with the experimental result .this is an indication that the quark mass dependence of pseudoscalar decay constants in the physical 3-flavour case is not substantially different from the simulated 2-flavour theory .it is interesting to note that the estimate in eq.([eq_fkfpi ] ) decreases by 15% if the chiral logs are neglected in the expression for , i.e. 
this example then demonstrates that the inclusion of chiral logarithms can significantly alter predictions for su(3)-flavour breaking ratios such as .this observation was also made recently by kronfeld & ryan in the context of the corresponding ratio for b - meson decay constants , i.e. . unlike the situation for , however , there is no experimental value to compare with . to summarize : these examples serve to show that estimates for light quark masses can be obtained in a reliable way by combining lattice simulations with , whose strengths and weaknesses are largely complementary .in order to arrive at mass values for the up- and down quarks , the `` indirect '' approach via the determination of low - energy constants offers clear advantages over attempts to compute these masses directly in simulations .* bernard : * the aim of lattice qcd ( lqcd ) is to predict numbers in a controlled way .the problem introduced by the adelaide approach is that nearly any functional form will fit across the linear portion of the data but model - dependent constants are being introduced that can change the extrapolated answer by an unknown amount .although changes in the chiral regulator are ordinarily thought of as harmless , since they can be absorbed into changes in the analytic terms , that is not true when theory is used to fit data in the regime above the cutoff , in the linear regime .the detailed form of the cutoff is then important , and there is no universality .we need to _ fit _ the chiral logs in a controlled manner .indeed , we can now do this with the improved staggered fermion data , which extends down to , as long as we use the appropriate chiral lagrangian .another approach that should be pursued is using chirally improved or fixed - point fermions for valence quarks on dynamical configurations generated with improved staggered quarks .this would provide important tests of staggered results . *hashimoto : * first , i agree with claude bernard about the importance of distinguishing rigorous lattice calculations from those with model dependence .nevertheless , i think that models are a useful way of estimating systematic uncertainties .my second remark concerns the extraction of low energy constants ( lec ) in the chiral lagrangian from fits to lattice data . to do this, we must first check that fits the lattice data .when the sea quark mass is too large , the formulae will not work .we find that they do not work for jlqcd s data , and thus do not quote results for lec .the situation might , however , be better with the staggered fermion data .third , staggered fermions have the advantage of allowing one to push sea - quark masses closer to the chiral limit , but wilson fermions are useful for their simplicity and should be used as a cross - check . * pallante : * present chiral extrapolations for weak matrix element calculations use , and this is too high to trust .we need to bring the mass down to . it may be that to work reliably in this regime requires chirally symmetric fermions . in this regard ,the approach mentioned by giusti is very interesting : matching results for correlation functions in small volumes to the predictions of chiral perturbation theory in order to calculate lecs .the use of small volumes may allow one to work with dynamical chirally symmetric fermions . 
at the very least , this should be pursued as a complementary approach to allow comparisons .i also agree with claude bernard about the dangers of modeling .i think we have enough theoretical tools given the lightness of light quarks and the heavy quark expansion that we can control systematic errors , which we can not do in a model .* leinweber : * we in adelaide are not interested in modeling , either .if we were modeling , we would fix the value of the regulator parameter from phenomenology .instead , we determine it , quantity by quantity , by fitting lattice data in the region where the regulator sets in .our aim is to provide a simple analytic _ parameterization _ , incorporating known physics .systematic errors can be estimated by varying the parameterization .for example , our studies of the meson indicate a systematic error after chiral extrapolation of about 50 mev. the lattice community needs to do better than making linear fits just because the data looks linear .we _ know _ that there is nonanalytic behavior at small .ideally we should calculate at much smaller where we can use any regulator including dimensionally regulated , but until we can do this we need alternative parameterizations which extend to higher .finally , i would like to advocate setting the scale using the static quark potential and not .the former is insensitive to light quark masses , whereas the latter clearly is .* lepage : * first , let me note that if we use the potential to set the scale , then we should use and not as is usually done . the traditional value for comes from models , not from rigorous calculations .we can , however , infer the correct value from other determinations of the lattice spacing .second , let me address the issue of modeling versus using .much of what the adelaide group does can be interpreted as an implementation of that uses a momentum cut - off , rather than dimensional regularization , to control ultraviolet divergences .momentum cutoffs , with , have been quite useful in applications of to low - energy nuclear physics .typically such cutoffs make it easier to guess the approximate sizes of coupling constants that havent been determined yet. 
it would be quite interesting for the adelaide group or someone else to explore whether momentum cutoffs lead to benefits in non - nuclear problems .the use of a momentum cut - off does not , however , extend the reach of chiral perturbation theory to higher energies ; ultimately the physics is the same , no matter what the uv regulator .i see no problem with the adelaide approach in so far as it is equivalent to with a momentum cutoff , but this entails a more systematic approach to the enumeration and setting of parameters .finally , i would like to reemphasize the importance of using small quark masses , and the significance of the fact that milc simulations are entering this regime .this is a new world one in which we can control all systematic errors .* wittig : * let me first comment on the `` catastrophic failure '' of when extended too far observed by jlqcd .ukqcd does not see such a failure , and it is important that the two groups discuss this point and attempt common methods of analysis .concerning models , let me reiterate that i think modeling is a dangerous path to follow .models are usually based on one particular mechanism .it is then unclear to what extent they are able to capture this aspect at the quantitative level , and whether they are general enough to describe other related phenomena correctly .one particular concern is whether a given model can be falsified .is it possible to choose the parameters to make the results come out correct for some quantities , but wrong for others ?* stamatescu : * i have heard advocates of different fermion actions : staggered , wilson and others .i was hoping to hear more than simple advocacy , and think it would be useful to have a comparison of the uses of each type of fermion .* shoresh : * concerning the need to take the fourth - root of the determinant when using staggered fermions it has been stated that there is no evidence that it is wrong , but that there is no proof that it is correct .this seems to come under the heading of uncontrolled errors .are any of the panelists uncomfortable with staggered fermions ? pursuing this point , let me note that chirally symmetric fermions can be simulated for lower quark masses and this has been done in quenched simulations . what is the feasibility of doing dynamical simulations with , say , overlap fermions ? 
*bernard : * the issue of taking roots of the determinant is certainly an important concern for those of us using staggered fermions .one way to study this issue is to compare results from simulations to the theoretical predictions of chiral perturbation theory including `` taste '' violations .if successful , this will show not only that staggered fermions have the correct chiral behavior in the continuum limit but also that we understand and can control the approach to that limit .that should go a long way towards reassuring those who are skeptical of staggered quarks .* golterman : * i would like to emphasize the importance of the issue of whether the strange quark is light enough to be in the chiral regime .this is very important for present calculations of kaon weak matrix elements , which all rely on , and actually extract lecs , rather than physical decay amplitudes .i am supportive of the use of improved staggered fermions .we need unquenched results for phenomenological applications .i am concerned that the numbers coming out of the lattice data group working groups will be coming primarily from quenched simulations .* rajagopal : * it is possible to generalize the first of the questions posed by steve sharpe about how small is small enough . in thermodynamics with 2 quarksthere is a second - order phase transition as the mass goes to zero .when is small but non - zero there is a well - defined scaling function that can be used to gauge how small an is small enough . in this context , as in the context of sharpe s question as posed , it may turn out that small enough means pion masses of order or smaller than in nature .can these two ways of gauging what is small enough be related ?i would also like to hear the reaction to my take on the adelaide approach . if a calculation of an observable is linearly extrapolated and it misses , what can i learn from this ?i think that a model can make plausible that qcd is not wrong .i hear derek leinweber fighting the urge to use linear fits where we know that the data should not be linear .but , in order to calculate an observable quantitatively from qcd , say at the few percent level with controlled errors , we must have lattice data , not a model .the value of models is that they can yield qualitative understanding , for example of what physics is being missed by linear extrapolation .* leinweber : * i agree that to get an answer at the 1% level , we need new lattice results at light quark masses .but we should nt throw out the parameterization of the regulator .i encourage everyone here to do the extrapolations with a variety of regulator parameterizations and verify the uncertainties for themselves .we do need more light - quark lattice results , but i think that we can use the adelaide approach now to obtain results at the 5% systematic uncertainty level . 
*giusti : * with regard to lepage s comments on competing with cleo , let me make the following comments .first , simulations with overlap fermions have developed very quickly , and are becoming competitive , and we should not stop these and start up with improved staggered fermions .second , in the last 15 - 20 years the errors on and have approximately halved .how can you expect the errors to go down by a factor of five in one year , which is what is needed to attain the 1 - 2% errors you are aiming for ?* lepage : * the errors on would have been reduced by far more than half had it not been for the uncontrolled systematic error due to quenching .the quenching errors dominated all others because decay constants are very sensitive to unquenching .given realistic unquenching , with improved staggered quarks , the dominant errors now are in the perturbative matching to the continuum , and we know how to remove them ( and are doing it ) .again , we are in a new world . *mawhinney : * let me note that dynamical simulations with 2 flavors of domain - wall fermions using an exact algorithm are _already underway_. the parameters are , , and a fifth dimension of .the residual mass is .thus , although domain - wall fermions are certainly numerically more intensive than staggered , wilson , etc . , they are not so far from simulating qcd . [ more serious response , added after maarten golterman and michael ogilvie explained the question and the answer to me : on cp-2 , the connection between staggered fermions and naive fermions is lost , so that , although the staggered theory does exist , it has no relation to a theory of particles of spin 1/2 .equivalently , momentum space on cp-2 is different from what we are used to , so one can not make the usual construction a continuum spin 1/2 field out of the staggered field at the corners of the brillouin zone . ] * soni : * i want to stress two related points .first , that we do nt necessarily need experimental data to test our methods we can compare results using different discretizations and methods . is a good example of such cross - checking the comparison of results obtained using staggered and chirally symmetric fermions will provide a detailed test of our methodology .second , regarding cleo - c , i am worried about trying to guide experimental efforts , which are enormously costly , toward the fantasies of theorists .i am worried about telling them what to measure based on what quantities we are able to calculate .if you think that staggered fermions can calculate quantities so precisely , then why not go to the particle data book .there are quite a lot of quantities that have _ already been measured very precisely _ , such as the mass difference ( known to ) . 
* lepage : * even without the lattice, experimentalists should be measuring these quantities. they are important to test heavy quark effective theory, and as inputs into studies of b physics. the cleo-c measurements are important to lattice qcd because they test the right things, not just the spectrum. we use a large, complex collection of techniques; we need a large number and variety of tests in order to calibrate all of our methods to the level of a few %. cleo-c is uniquely useful for such tests because it will accurately measure the analogues of precisely the quantities most important to high-precision physics. we have promised the government that we can do calculations well, and it is about time that we came through. if we can't calculate to better than 15%, then why are they paying for us to have computer time? maybe we will be humiliated by cleo if our predictions fail, but since when is that a reason not to try? * creutz : * while we can't calculate at the physical mass, i have always thought it was fun to play and change the mass. in particular, i am fascinated by the prediction that if one has an odd number of negative mass fermions then cp is spontaneously violated. i think it would be cool if we could simulate such a theory on the lattice. but staggered fermions always generate fermions in pairs. do you have any ideas or comments about this? (this is a subtle criticism of staggered fermions.) * lepage : * yes. somehow whenever i am talking to somebody about staggered fermions, they always manage to bring up the one situation that we are absolutely sure that we cannot solve with staggered fermions. * brower : * back to that fourth root of the determinant. how does this work if, before one takes the root, there is a different mass for each taste? surely, one wants the chiral logs to characterize the splitting as it occurs on the lattice. * bernard : * this can be done by using chiral perturbation theory including taste-violating terms and making the connection between diagrams and `` quark-flow diagrams ''. one can determine which of the meson diagrams correspond to virtual quark loops, and then multiply each such diagram by the appropriate power of 1/4. essentially, one is putting in the fourth root by hand. * leinweber : * for baryons with a sharp cut-off, , and this factor of two is very important in practice. in the meson sector there is some question as to whether one needs to introduce a finite-range form-factor style regulator. it is important to remember that this second scale is not a regulator scale in traditional dimensionally-regulated chiral perturbation theory. with dimensional regularization, the pion mass sets the scale of physics associated with loop integrals. as the pion mass becomes large, short-distance physics dominates and the effective field theory undergoes a catastrophic failure. there was an earlier comment suggesting that the parameterization of the regulator introduces model-dependent constants that don't go away. these constants do go away, as they may be absorbed into a renormalization of the chiral lagrangian coefficients.
young, d.b. leinweber, a.w. thomas and s.v. wright, hep-lat/0111041.
leinweber, a.w. thomas, k. tsushima and s.v. wright, (2000) 074502.
leinweber, a.w. thomas, k. tsushima and s.v. wright, (2001) 094502.
m. wingate, .
golterman and e. pallante, jhep 0110 (2001) 037; and (hep-lat/0208069).
t. hatsuda, (1990) 543.
leinweber, d.h. lu and a.w. thomas, (1999) 034014.
hackett-jones, d.b. leinweber and a.w. thomas, (2000) 143.
leinweber and a.w. thomas, (2000) 074505.
hackett-jones, d.b. leinweber and a.w. thomas, (2000) 89.
donoghue, b.r. holstein and b. borasoy, (1999) 036002.
this is an approximate reconstruction of the panel discussion on chiral extrapolation of physical observables . the session consisted of brief presentations from panelists , followed by responses from the panel , and concluded with questions and comments from the floor with answers from panelists . in the following , the panelists have summarized their statements , and the ensuing discussion has been approximately reconstructed from notes .
in newtonian physics, physical processes are understood with respect to a fixed spatial coordinate system and a time parameter which is absolute and ever increasing. predictions are entirely deterministic. quantum theory and general relativity depart from this classical picture in opposing manners. quantum theory gives probabilistic predictions as to the outcomes of measurements, but retains fixed space and time coordinates. on the other hand, general relativity is deterministic, but shows that space and time form a dynamical structure. reconciling these fundamental philosophical differences is one of the many challenges one is faced with in trying to construct a theory of quantum gravity. there have been many different approaches to this problem, with many different results. one way of moving forward is to dismiss classical assumptions and create a probabilistic theory that has a dynamic causal structure. however, what results is indefinite causal structure. this is more radical than either probabilistic predictions or dynamical space-time structure. in general relativity, a separation between space-time locations is either space-like or time-like. an indefinite causal structure would allow a separation between space-time locations to be something like a quantum superposition of a space-like and a time-like separation. while we may be uncertain of the causal structure of the path between measurements, we know where in space-time we make measurements, what measurements we have made, and what outcomes we get. with this data, we can examine probabilistic correlations for information. the causaloid framework provides us with the necessary structure; we will outline the essentials of this framework in section 2. it is natural in discussions of causal structure to raise the question of entropy. the second law of thermodynamics tells us that in an isolated system, entropy can increase or remain the same, but it can never decrease. in information theory, entropy is viewed as a measure of uncertainty before we measure a state or, equivalently, the amount of information gained upon learning the state of a system. inherent in both concepts of entropy is an assumed causal structure, specifically that there exists a background time. the standard definition of entropy is in the context of a definite causal structure with reference to absolute time. in order to make sense of entropy in an indefinite causal structure, a clear definition must be established. to do so requires consideration of the following questions: what are the concepts from the usual picture of entropy in a definite causal structure that are necessary to define entropy? and what are the analogues to these concepts in a picture with indefinite causal structure?
using the formalism introduced in the causaloid framework, we are able to provide answers to these questions and then define a causally-unbiased entropy. in section 2, we will review the relevant aspects of the causaloid framework. we then proceed with the new developments. in section 3, we define a new type of product that is utilized in the work on entropy. the definition of causally-unbiased entropy and its resulting features are developed in section 4. every experiment results in a set of data from making measurements on a system. each piece of data could be thought of as a card with three pieces of information on it: where the measurement is made in space-time, what is measured, and what the result of the measurement is. we will represent each card (or piece of data) as a triple whose entries denote, respectively, the space-time information, the information pertinent to a choice of measurement or action, and the information regarding an observation or outcome of a measurement. the set of all possible cards (i.e. all possible measurements with all possible outcomes in every space-time configuration) is denoted . we can imagine running an experiment an infinite number of times so as to be able to obtain relative frequencies. in order for the cards to tell us the relative frequencies, we must systematically sort them. each distinct value of the space-time information is defined as an _ elementary region _ of space-time. a _ composite region _ is a set of elementary regions. (note: these definitions of `` elementary region '' and `` composite region '' differ from those in ref .) therefore, these cards can be sorted according to their associated space-time region. the set of all possible cards with the same space-time information written on them is the _ measurement information _ for that elementary region. the measurement information for a composite region is the union of all sets of measurement information for the elementary regions contained within the composite region. we can further sort the measurement information in a region. the _ procedure in a region _ is the set of all distinct choices of measurement recorded for the region, with a corresponding definition for composite regions. similarly, the _ outcome set in a region _ is defined to be the set of all distinct outcomes of a measurement recorded for the region, again with a corresponding definition for composite regions. the composite structure we expect of our space-time regions is reflected in the structure of these sets. a set of cards with the measurement information for a region has no more or less structure than an elementary region of space-time. therefore, without adding structure or losing generality, we can take these sets to be elementary regions, at least for the purposes of this paper. from this point forward, the term _ region _ will be used interchangeably to refer to a space-time region and to the corresponding set of measurement information. notice that the set of all cards can be viewed as all the cards from all (elementary) regions.
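a minimal data-structure sketch of these card and region definitions, with hypothetical field and function names chosen purely for illustration, could look as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    where: str     # space-time information on the card
    what: str      # choice of measurement or action
    outcome: str   # observed result of the measurement

# an illustrative pack of cards
deck = {
    Card("x1", "measure spin-z", "up"),
    Card("x1", "measure spin-z", "down"),
    Card("x1", "measure spin-x", "up"),
    Card("x2", "measure spin-z", "up"),
}

def elementary_region(deck, where):
    """all cards carrying the same space-time information."""
    return {c for c in deck if c.where == where}

def measurement_information(deck, wheres):
    """composite region: union of the elementary regions it contains."""
    return set().union(*(elementary_region(deck, w) for w in wheres))

def procedure(region_cards):
    """distinct choices of measurement recorded for an elementary region."""
    return {c.what for c in region_cards}

def outcome_set(region_cards):
    """distinct outcomes recorded for an elementary region."""
    return {c.outcome for c in region_cards}

r1 = elementary_region(deck, "x1")
print(procedure(r1), outcome_set(r1))
print(len(measurement_information(deck, ["x1", "x2"])))
```

sorting an (imagined) infinite pack of cards in this way is what makes the relative frequencies, and hence the probabilities discussed next, well organized by region.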
so is the largest of all regions that can be considered .these definitions provide a firm foundation on which the causaloid framework rests both mathematically and conceptually .the most basic quantity that we would want to be able to calculate is the probability that a certain ( set of ) outcome(s ) is observed given that a certain ( set of ) measurement(s ) has been performed at a ( set of ) location(s ) in space and time .suppose that the set of locations we are interested in is .the set of all the cards corresponding to these locations called .we write pairings of measurements and corresponding outcomes in as . a specific outcome and measurement pairis denoted as ( or equivalently , ) .the set of all in region is .the set comprised of all the cards not in is .we call the _ generalized preparation _ because it is the information that surrounds not only from the immediate past , but from the future and the rest of space - time as well . by the choices we make in setting up the experiment, we can put conditions on the generalized preparation such that is well - defined .( see ref. for details . )then we can write for a specific pair , we can write this probability as we use the short - hand to denote the probability defined in eq.([p_alpha1 w gp ] ) . one way to specify the state of a system is to list all the possible for elements of . however , this over - specifies the state .we do not usually need to know the probability of every outcome of every measurement in order to determine what the complete state of the system is .physical theories tell us what relationships exist between variables and what constraints those relationships place on the variables of the system . these relationships and constraintscan be used to determine a reduced set of probabilities from which all other probabilities can be represented .the reduced set of probabilities is defined such that any probability can be written as a linear combination of the probabilities in the reduced set .let us denote the reduced or fiducial set in as .this process of going from the set of all the probabilities to the smallest essential set we call _ first level physical compression_. this can be expressed as such that where encodes the physical compression and therefore , is determined by the details of the physical theory .we can define a _ decompression matrix _, such that where means the component of .let us consider two distinct regions . in a similar fashion to the single region case, we specify the state of the system by listing all . where is the cartesian product .it can be shown that which implies that the following list of probabilities is sufficient . this is effectively first level compression on each index .but if a physical theory has some connection between the two regions , may no longer be the smallest set that is sufficient to represent all possible states .then _ second level physical compression _ is possible .it is defined to be such that when , second level compression is trivial .but it is proven in that it is possible that .now we can define a second level decompression matrix . 
by comparing eq.([1st level ] ) and eq.([2nd level ] ) , we infer that where which is the desired second level decompression matrix .this matrix encodes how we move from s to s .using the definition of the first level decompression matrix , eq.([2nd level matrix ] ) becomes this defines the _ causaloid product _ , denoted which unifies the different causal structure - specific products .explicitly , it is this product that allows us to look at the probabilistic correlations between arbitrary locations in space - time without specifying the causal relationship .we have shown second level compression for the case where we have two regions .this is easily generalized for any number of regions .the object that would encode the compression for three regions would be , for four regions would be , etc .after second level compression over multiple regions , we have there is a third level of physical compression that compresses these multi - region -matrices to give the _ causaloid _ , , which is defined as where is determined by the rules of the physical theory ( for detailed discussion of how this works see [ 2 ] ) .by decompressing the set , we can obtain the -matrix for any set of regions .this means that the causaloid gives us the ability to perform any calculation that the physical theory allows for . up to this pointwe have exclusively dealt with probabilities conditioned on procedures .it is more useful to also be able to condition on outcomes .specifically , we d like an expression for the following : using bayes theorem , this becomes where denotes that the sum is over all possible outcomes corresponding to the measurement ( in ) .( for simplicity , we have suppressed the part of the notation denoting the generalized preparation . ) in the causaloid framework , this becomes where .( the sum being over in this notation has the same meaning as the sum being over all outcomes consistent with . ) in order for this probability to be considered well - defined , the right hand side can not depend on .since and are determined exclusively by the physical theory , neither has any dependence on .however , does depend on .this implies that in order for the probability eq.([probdef ] ) to be well defined ( i.e. not depend on ) , it must vary with .the dependence on can be removed altogether by requiring that be parallel to .therefore , the above probability is well defined if and only if with this condition , we get two distinct regions ; and . by definition we wanted to take the dot product between two vectors of the above form . using decompression matrices , we can write where , , and . notice that we can write as \ ] ] similarly , \ ] ] define and , similarly , using this , eq.([devdot ] ) becomes where and .this suggests that the essence of is a relationship between and mediated by matrices that depend on .therefore , we can view eq.([devdot ] ) as kind of product of and .dot products of this form come up frequently enough that we will define this as the -dot product and denote it as we will make use of this product later in the paper .standard definitions of entropy assume fixed causal structure . 
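the compression steps just described are linear-algebraic at heart. the sketch below illustrates first-level compression only (the higher levels follow the same pattern with product indices): given the probabilities for many states, it picks a fiducial subset of outcome/measurement pairs and builds a decompression matrix by least squares. the data are random and low-rank purely for illustration and are not tied to any particular physical theory.

```python
import numpy as np

# rows = possible states (generalized preparations); columns = probabilities
# for every outcome/measurement pair in the region.  a rank-3 random matrix
# stands in for the constraints a real physical theory would supply.
rng = np.random.default_rng(0)
P = rng.random((20, 3)) @ rng.random((3, 7))

def first_level_compression(P, tol=1e-10):
    """choose a fiducial subset of columns and a matrix lam such that
    P[:, all] is reproduced by P[:, fiducial] @ lam."""
    fiducial = []
    for j in range(P.shape[1]):
        trial = fiducial + [j]
        # keep column j only if it is linearly independent of those kept so far
        if np.linalg.matrix_rank(P[:, trial], tol=tol) == len(trial):
            fiducial = trial
    lam, *_ = np.linalg.lstsq(P[:, fiducial], P, rcond=None)
    return fiducial, lam

fid, lam = first_level_compression(P)
print("fiducial set:", fid)
print("max reconstruction error:", np.abs(P[:, fid] @ lam - P).max())
```

the fiducial columns play the role of the reduced set of probabilities, and lam is the analogue of the first-level decompression matrix.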
herewe develop a causally - unbiased definition of entropy in the causaloid formalism .shannon entropy for a classical state is defined as the definition of used in this equation requires that the structure of space - time be organized with the following features : * a region of interest , * an immediate past space - time region , * sufficient data about what happened in * a measurement * a set of outcomes , , corresponding to this allows us to write removing all time bias from these features of space - time structure , we get * a region of interest , * a reference region * an outcome / measurement pair in , * a measurement * a set of outcomes , , corresponding to the reference region can be thought of as a kind of preparation region that is not limited to being in the causal past .in fact , the choice of reference region is arbitrary as illustrated in fig .[ indefinitecs ] .the definition of in a causally - unbiased structure is ( since is arbitrary , we should technically say ` with respect to the reference region ' .however , for the sake of brevity , we will assume that ` with respect to ' is implied much as ` with respect to the past ' is taken as implied in the causally - biased situation . ) using the above definition of , we define the entropy relative to the reference data as notice that this reduces to the causally - biased definition of entropy when is the past ; measures the microstate in the classical case or measures in the basis where is diagonal in the quantum case .taking the probability to be well - defined , eq.([parallel r s ] ) and eq.([theo s ] ) give the following definition of entropy : of course , this equation requires that . loosening this condition slightly, we can consider what happens when is nearly parallel to , using the definition of the probability from eq.([probdef ] ) .the entropy associated with this is it becomes necessary to shorten the notation for the following work so will be denoted as ( where the index is represented by ) and will be denoted as . as with any vector, can be decomposed into a component parallel to and a component perpendicular to ( i.e. components in and , respectively ) .that is , using the unit vectors as defined , can be decomposed as where is the component of that is perpendicular to the plane defined by and .the probability of interest , , then becomes where .notice that the first term is equivalent to a well - defined probability ( eq . [ parallel r s ] ) .we require the second term to be small since the deviation from well - defined should be small .since we have already required that be small , we need only place restrictions on . for the purposes of this subsection, we will work in the plane defined by and .define the angle between and the projection of into the plane to be .define the length of the projection of into the plane to be . using basic trigonometry, we get therefore , k can be written in a form that is dependent on only one variable , as follows : as tends towards , tends to infinity .therefore , to ensure that the second term of ( [ notwdp ] ) is small , we require that be finite . assume it to be a property of the state space for that there exists some . clearly , in order for to be finite .so is bounded as follows : the corresponding to will be denoted as .further bounds can be placed on by the state space of the physical theory . for our purposes , it is sufficient that is finite . 
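Because the displayed equations in this passage did not survive extraction, the following LaTeX restatement is offered as a hedged reconstruction: the first line is the standard Shannon entropy, and the second is the form the causally-unbiased entropy plausibly takes once conditioning on the immediate past is replaced by conditioning on data from an arbitrary reference region. The region labels and conditioning symbols are assumptions, not the paper's notation.

```latex
% Standard (causally biased) Shannon entropy of a measurement with outcomes a:
H \;=\; -\sum_{a} p(a)\,\log_2 p(a)

% Hedged reconstruction of the causally-unbiased analogue: R_1 is the region
% of interest, R_2 the arbitrary reference region, \beta_2 the outcome /
% measurement pair observed there, and F_1, F_2 the measurement choices.
H_{R_1 \mid R_2} \;=\; -\sum_{\alpha_1}
   p\!\left(\alpha_1 \mid \beta_2,\, F_1,\, F_2\right)\,
   \log_2 p\!\left(\alpha_1 \mid \beta_2,\, F_1,\, F_2\right)
```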
in light of ( [ notwdp ] ) , entropy , as defined in ( [ generalentropy ] ) , becomes \label{deventropy0}\end{aligned}\ ] ] since is very small ( as is implied by the fact that and are nearly parallel ) and is finite , we can take a taylor expansion ( to leading order ) of the first term .doing this gives \nonumber \\& = & -\sum_i \left(\frac{v_i^{\parallel}}{u}\right ) \log_2 \left(\frac{v_i^{\parallel}}{u}\right ) + k\left(\frac{{v_i^{\perp}}}{u}\right ) \log_2 \left(e\frac{v_i^{\parallel}}{u}\right ) + { \cal o}\left({v_i^{\perp}}^2\right)\end{aligned}\ ] ] notice that the first term is equivalent to the definition of entropy where and that reduces to this definition when . that is ,when ( or equivalently , ) for , we will define using as defined in the previous section , we can regard as a kind of correction to the causally - biased entropy .then , to leading order is an entirely new quantity with no direct classical analogue so understanding its physical interpretation is a non - trivial matter .if we consider entropy as a measure of uncertainty , then is the measure of our uncertainty that the measurement in region will yield the specific outcome , given the data we have from the reference region .since our reference region is arbitrary , one way to view is that it measures how completely the region prepares " region . in this sense, preparation influences our uncertainty . in a definite causal structure ,an immediate past region would completely prepare our region of interest and would be zero .however , in the causally - indefinite picture , we can not require a priori if the reference region that we have chosen will completely prepare our region of interest .if there are no influences on our uncertainty from outside region , then the probability will be well - defined and will be zero .but if there are influences on our uncertainty from outside region , then the magnitude of will reflect that . for the sake of completenessthe s and s must be translated into s and s .notice that substituting for and for gives using the -dot product the above equations simplify to this allows us to completely specify the entropy of relative to a preparation in the causaloid framework .it is straightforward to generalize this to define the joint entropy of and with reference to a preparation " .simply redefine and as where and using the same procedure as for one region , we get in this manner , we can define causally - unbiased entropy in the causaloid framework for any number of regions .in a definite causal structure , the only thing required for a definition of entropy that is not in an indefinite causal structure is an immediate past region .since there is no reason in an indefinite causal structure to choose any reference region over any other , we simply choose an arbitrary region .this ensures that we do not hold on to any pre - conceived notions of space - time and its connection to causality .the definition of the causally - unbiased entropy resulted in a correction to the causally - biased definition of entropy . in a sense ,the q factor gives us an emergent idea of causality .it is a measure of the extent to which our region of interest is causally connected to our reference ( or preparation " ) region .if it is zero , the traditional ideas of causality are recovered .the next step would be determining how the q factor could potentially be physically observed . to do somay require us to know more of the theoretical and mathematical properties of q. 
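The leading-order expansion just quoted can be sanity-checked numerically. The sketch below (Python, illustrative only) perturbs a probability vector by a small amount and compares the exact Shannon entropy with its first-order Taylor expansion; the paper's normalisation factors u and k and its sign conventions are not reproduced, so this checks the expansion itself rather than the exact form of the Q factor.

```python
# Numerical check of the leading-order Taylor expansion of Shannon entropy,
#   H(p + delta) ~ H(p) - sum_i delta_i * log2(e * p_i),
# which is the expansion underlying the Q-factor correction discussed above.
# Illustrative only: the paper's u, k and v-perp normalisations are not used.
import numpy as np

def shannon(p):
    return -np.sum(p * np.log2(p))

p = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # a "well-defined" distribution
rng = np.random.default_rng(0)
delta = 1e-4 * rng.normal(size=p.size)
delta -= delta.mean()                          # keep p + delta normalised

exact = shannon(p + delta)
first_order = shannon(p) - np.sum(delta * np.log2(np.e * p))
print(exact, first_order, exact - first_order)  # difference is O(|delta|^2)
```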
which mathematical properties of shannon entropy hold for causally - unbiased entropy ? what is the status of the second law of thermodynamics in an indefinite causal structure ? to go about answering this , we could consider how `` evolves '' along tubes through indefinite space - times .these questions will be the subjects of continuing work in the near future .this work was supported by ogs .research at perimeter institute for theoretical physics is supported in part by the government of canada through nserc and by the province of ontario through mri .r. penrose and m. a. h. maccallum , `` twistor theory : an approach to the quantization of fields and space - time , '' phys .* 6 * , 241 ( 1972 ) .s. w. hawking , `` quantum gravity and path integrals , '' phys .d * 18 * , 1747 ( 1978 ) . c. rovelli and l. smolin , `` loop space representation of quantum general relativity , '' nucl .b * 331 * , 80 ( 1990 ) .r. d. sorkin , `` on the role of time in the sum over histories framework for gravity , '' int .j. theor .phys . * 33 * , 523 ( 1994 ) .j. ambjorn , j. jurkiewicz and r. loll , `` quantum gravity , or the art of building spacetime , '' hep - th/0604212 .l. hardy , `` probability theories with dynamic causal structure : a new framework for quantum gravity , '' gr - qc/0509120 .l. hardy , `` towards quantum gravity : a framework for probabilistic theories with non - fixed causal structure , '' j. phys . a * 40 * , 3081 ( 2007 ) , gr - qc/0608043 .l. hardy , `` formalism locality in quantum theory and quantum gravity , '' gr - qc/0804.0054 .l. d. landau and e. m. lifshitz , _ course of theoretical physics vol . 5 : statistal physics pt . 1 _ 3rd ed .( butterworth heinemann , oxford , 1980 ) .
Entropy is a concept that has traditionally relied on a definite notion of causality. Without such a notion, however, the concept is not entirely lost. Indefinite causal structure arises when probabilistic predictions are combined with a dynamical space-time; combining the probabilistic nature of quantum theory with the dynamical treatment of space-time from general relativity is one approach to the problem of quantum gravity. The causaloid framework lays the mathematical groundwork needed to treat indefinite causal structure. In this paper, we build on the causaloid mathematics and define a causally-unbiased entropy for an indefinite causal structure. In defining this entropy, an emergent notion of causality appears in the form of a measure of causal connectedness, termed the Q factor.
this paper investigates a poor behavior of markov chain monte carlo ( mcmc ) method .there have a vast literature related to the sufficient conditions for a good behavior , ergodicity : see reviews and and textbooks such as and .the markov probability transition kernel of mcmc is harris recurrent under fairly general assumptions .moreover , it is sometimes geometrically ergodic . in practice ,however the performance can be bad even if it is geometrically ergodic .in we introduced a framework for the analysis of monte carlo procedure .monte carlo procedure is defined as a pair of underlying probability structure and a sequence of `` estimator '' for the target probability distribution . using the framework we constructed consistency , which is a good behavior of monte carlo procedure .current study , we apply the framework to study bad behavior .there are several bad behaviors for monte carlo procedure .two extreme cases are , a ) the sequence generated by monte carlo procedure has very poor mixing property , and b ) the sequence goes out to infinity .we call a ) degeneracy and the paper is devoted to the study of the property .we focus on a ) in this paper . for b ) , see examples 3.2 and 3.3 of .to describe degeneracy more precisely , we consider a numerical simulation for the following simple model : where is a -valued explanatory variable and is a parameter and is a cumulative distribution function of the normal distribution ( see section [ normal ] ) . explanatory variable is generated from uniformly distribution on .we define two gibbs sampler and .assume we have observation and and prior is set to be standard normal distribution .there are two ways for construction of the gibbs sampler .one way is to prepare latent variable from and set then gibbs sampler is generated by iterating the following procedure : for given and for , generate from truncated to ] if and truncated to if .then update from normal distribution with mean and variance defined by write for this gibbs sampler .we obtain two gibbs samplers and .although the constructions are similar and both of which have geometric ergodicity , the performances are different .( upper ) and ( lower ) .solid line is for and dashed lines is for .[figure1],width=453 ] figure [ figure1 ] is a trajectory of the gibbs sampler sequence for iteration and sample size ( upper ) and ( lower ) . for each sample size , by ergodicity , empirical distributions tend to the same posterior distribution of for two gibbs samplers as . however the solid line has poor mixing property than .therefore it may produces a poor estimation of the posterior distribution .the difference becomes larger when the sample size in figure [ figure1 ] ( lower ) .the trajectory from ( solid line ) is almost constant . for both simulations ,the true value is and the initial value is set to .even though has geometric ergodicity , it has poor mixing property .we would like to say is degenerate .later we will prove that it is degenerate after certain localization . on the other hand consistent under the same scaling by theorem 6.4 of .we study such a poor behavior , degeneracy , in this paper .the analysis may seem to be just a formalization of obvious facts .however , sometimes degeneracy can not be directly visible and it produces non - intuitive results . in this paper, we obtain the following results for ( markov chain ) monte carlo methods . 1 .degeneracy and local degeneracy of monte carlo procedure are defined and analyzed .2 . 
as an example, we studied cumulative link model .marginal augmentation method is known to work at least as good as the original gibbs sampler .we show that in some cases , marginal augmentation really improves the asymptotic property , and the rest of the cases , surprisingly , we show that both of the mcmc methods does not have local consistency .the paper is organized as follows .section [ dmc ] is devoted to a study of degeneracy of monte carlo procedure in general . in subsection [ clc ]we briefly review consistency of monte carlo procedure , and after that we define degeneracy and apply it to markov chain monte carlo procedure .next we examine the degeneracy for an example , cumulative link model .we prepare section [ apc ] for the asymptotic property of cumulative link model itself .there is no monte carlo procedure in this section . in section [ ags ]we apply degeneracy to the model and obtain asymptotic properties of markov chain monte carlo methods for cumulative link model .let and .we write the integer part of by ] such that 1 . is a probability measure on for . is -measurable for any .we may write instead of . if is -finite measure instead of probability measure , we call a transition kernel . write for a probability distribution function of and write . for and -positive definite matrix , a function is a probability distribution function of where is a determinant of and is a transpose of a vector . for a probability measure on ,a central value is a point satisfying element of is denoted by . for a probability measure on , let be for . for ,we call central value if each is a central value of . there is no practical reason for the use of the central value for markov chain monte carlo procedure as is used in this paper .we use it because of its existence and continuity .that is , ( a ) for the posterior distribution , its mean does not always exist but the central value does and moreover , it is unique and ( b ) if , then the central value of tends to that of .see .in this section , we introduce a notion of degeneracy and local degeneracy of monte carlo procedure .we use the same framework as to describe local degeneracy . in their approach ,monte carlo procedure is considered to be a pair of random probability measure and transition kernels .we briefly review their framework in subsection [ clc ] . in this subsection , we prepare a quick review of the framework of .let be a measurable space .let be a countable product of .each element of is denoted by and its first subsequence is denoted by .let be a complete separable metric space equipped with borel -algebra .we define non - random monte carlo procedure .the meaning of `` non - random '' will be clear after we define `` random '' mote carlo procedure in definition [ mcp ] and standard gibbs sampler in definition [ ssg ] .a pair is said to be non - random monte carlo procedure on where is a probability measure on and is a sequence of probability transition kernels from to .a simplest example of non - random monte carlo procedure is a non - random crude monte carlo procedure .if we want to calculate an integral for probability measure and measurable function , one approach is to generate i.i.d .sequence from and calculate . in this case and we write and instead of and .this simple monte carlo method is sometimes called a crude monte carlo method .we can describe it as a non - random monte carlo procedure .let be a countable product of a probability measure on ( that is , ) and be where is a dirac measure . 
then .we call a crude monte carlo procedure .accept - reject method generate i.i.d .sequence from on from another probability measure .assume that is absolutely continuous with respect to and for some , generate i.i.d .sequence from and from the uniform distribution ] ( that is , ) and be as ( [ ar ] ) where and .we call accept - reject procedure for .now we consider a random monte carlo procedure .let be a probability space .[ mcp ] a pair is said to be monte carlo procedure defined on on where is a probability transition kenel from to , that is 1 . is a probability measure on . is -measurable for any .and is a sequence of a probability transition kernel from to .we call stationary if is ( strictly ) stationary for -a.s. .stationarity plays an important role for the asymptotic behavior of monte carlo procedure .markov chain monte carlo procedure is a class of monte carlo procedure .let be a probability transition kernel from to and be a probability transition kernel from to .we call a probability transition kernel from to random markov measure generated by if is a markov measure having initial probability distribution and a probability transition kernel , that is , monte carlo procedure has as a random markov measure , we call markov chain monte carlo procedure defined on on . as a measure of efficiency , we define consistency and local consistency for a sequence of monte carlo procedures. let be a probability space , be a measurable space and be a complete separable metric space equipped with borel -algebra for .let where be a monte carlo procedure on on for .the purpose of the monte carlo procedure is to approximate a sequence of probability transition kernels from to for each .let be a bounded lipshitz metric on defined by .then measures a loss of the approximation of by . is an average loss with respect to .we define a risk of the use of the monte carlo procedure up to for an approximation of by a sequence of monte carlo procedure is said to be consistent to if for any .when tends to a point mass , the above consistency does not provide good information .in such a case , we consider local consistency .let and a centering be measurable .we consider a scaling .let let for .if is consistent to , is said to be local consistent to .this scaling is just one example .for other cases , such as mixture model considered in , where for some . moreover ,the scaling factor ( or in the above example ) may depend on the observation .however , for the current paper , it is sufficient to consider the above scaling . in the end of the subsection, we briefly review the definition of standard gibbs sampler and extend it to non - i.i.d . structure .let be a complete separable metric space equipped with borel -algebra .let and be measurable spaces .assume the existence of probability transition kernels with probability measures assume we have relations where and are marginal distributions of .moreover , we assume where is also a marginal distribution . let for and .let be where and .[ ssg ] set as a random markov measure generated by .then is called a sequence of standard gibbs sampler defined on on . using the abbreviation defined in the next subsection, we can write .the framework described in the previous subsection is useful as a formal definition for monte carlo procedures .however , it is sometimes inconvenient to write down for every time . 
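(The abbreviations for these objects are introduced next; before that, the two elementary non-random procedures defined above are easy to write out.) The sketch below, in Python, implements the crude Monte Carlo estimator of an integral of f against the target and the accept-reject sampler with an envelope constant c such that p(x) <= c q(x); the target and proposal in the example are illustrative choices, not taken from the paper.

```python
# Minimal sketches of the two procedures defined above (illustrative densities).
import numpy as np
rng = np.random.default_rng(0)

def crude_mc(f, sampler, m):
    """Crude Monte Carlo: average f over m i.i.d. draws from the target."""
    x = sampler(m)
    return np.mean(f(x))

def accept_reject(log_p, log_q, q_sampler, c, m):
    """Accept-reject: a draw x from q is kept with probability p(x)/(c q(x))."""
    out = []
    while len(out) < m:
        x = q_sampler(1)[0]
        if np.log(rng.uniform()) <= log_p(x) - np.log(c) - log_q(x):
            out.append(x)
    return np.array(out)

# Example: estimate E[X^2] under N(0,1) directly, and sample N(0,1) by
# accept-reject from a Cauchy proposal (c = sqrt(2*pi/e) is a valid envelope).
f = lambda x: x**2
print(crude_mc(f, lambda m: rng.normal(size=m), 10_000))

log_p = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)   # standard normal
log_q = lambda x: -np.log(np.pi * (1.0 + x**2))           # standard Cauchy
draws = accept_reject(log_p, log_q, lambda m: rng.standard_cauchy(m),
                      np.sqrt(2 * np.pi / np.e), 5_000)
print(np.mean(draws**2))
```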
in this paperwe use two abbreviations to denote monte carlo procedure .first one is abbreviation for a class of empirical distribution .all examples of in the rest of the aper has the following form where and .then we write for .we also use a notation .for example , if and , then is denoted by . if and is the identity map , we write .the second abbreviation is about transformation .let where is a polish space .for a probability transition kernel , we define similarly , we define for .set and .then is a monte carlo procedure defined on on and we call a transform of . for example , if , then .now we consider a localization of a transform .let and be open sets . for , we define a scaling where . we write and for the scaling of and with respectively , that is , where .we say is locally consistent to if is consistent to .the following lemma states that is locally consistent if is .[ coneq ] let be map except a compact set of . assume and both the law of and are tight .then is local consistent to if is local consistent to .let .fix and such that .we first remark that for any , there exists such that for , there exists a compact set such that second we consider then by assumption , both and tends to . by the differentiability of , we have andhence we can replace of ( [ fs ] ) by if .using this replacement , we have which means local consistency of to . roughly speaking, this lemma says that , if is `` equivalent '' to for some and is locally consistent , then is also locally consistent .we define minimal representation and equivalence of monte carlo procedure .let be a monte carlo procedure for .for a realization of , we write , for the law of , that is , then we call a minimal representation of . if two monte carlo procedures have the same minimal representation , we call equivalent . note that even if is markov chain monte carlo procedure , a minimal representation may lose markov property of original monte carlo procedure .we define degeneracy of monte carlo procedure .let and note that for bounded lipshitz metric for probability measures on a metric space , where is a dirac measure .a sequence of monte carlo procedure on on is said to be degenerate if for any . if is degenerate , we call locally degenerate .in fact , as a measure of poor behavior , degeneracy is sometimes too wide .roughly speaking , among degenerate monte carlo procedures , there are relatively good one and bad one .even if monte carlo procedure is degenerate , sometimes it tends to in a slower rate .this convergence property is called a weak consistency by although the terminology in that paper is slightly different from the current one .we can distinguish degenerate monte carlo procedures by the rate .the following is an example for non - random markov chain monte carlo procedure .let .let .let be a non - random markov chain monte carlo procedure on where is generated by .let be a probability transition kernel from to itself and ( open interval ) be a measurable map .assume .we can show that if is tight and ( a ) if for any compact set , or ( b ) if for any compact set and , then is degenerate . to show ( a ) , fix and . by assumption, there exists a compact set such that let which is an event without any jump in first steps .then on the event , where .hence which proves the first claim . 
to show ( b ) , as above , fix and .let be the event which does not move far from initial point in first steps .by assumption , there exists a compact set such that where .then and on the event , hence which proves the second claim .a sequence of consistent monte carlo procedures can be degenerate .however it is very spacial case . in particular, we have the following proposition .we call a sequence of probability transition kernel from to degenerate if there exists a measurable map such that if its localization is degenerate , we call locally degenerate .[ degpro2 ] let and let be consistent to and also degenerate .then is degenerate .by degeneracy , there exists such that tends to .then since the left hand side is bounded by where both two terms tend to .write marginal distribution of on by , that is , . then the above convergence can be rewritten by by triangular inequality , is bounded by plus .hence we have for each , we can find to be and measurable .therefore we have . hence by triangular inequality, we can replace in ( [ degenpi ] ) by which completes the proof . for stationary case , the following proposition is useful to prove degeneracy .[ degpro1 ] let .let be stationary .then is degenerate if and only if since , the sufficiency is obvious by applying for the definition of degeneracy . on the other hand ,if ( [ degenstatio ] ) holds , take . for fixed , by stationarity , and by ( [ degenstatio ] ) . by triangular inequality , on the event , where is bounded by hence which proves the claim .we consider local degeneracy of the sequence of standard gibbs sampler defined in section 6.1 of .[ onestepgibbs ] a sequence of a standard gibbs sampler is degenerate if and only if there exists a measurable function such that moreover , if , is locally degenerate under the scaling if and only if there exists where is the localization of .assume that is degenerate .then by proposition [ degpro1 ] and ( [ blfact ] ) , tends to .then as in the proof of proposition [ degpro2 ] , there exists a measurable function such that tends to .this proves ( [ degeq1 ] ) by ( [ blfact ] ) . on the other hand , if ( [ degeq1 ] ) holds .then by triangular inequality , and by stationarity , the two terms on the right hand side have the same law .we have and the integral of the right hand side by tends to .hence and degeneracy follows by proposition [ degpro1 ] .the proof for local degeneracy is the same replacing sequence by where . by the proposition ,it is easy to show that when standard gibbs sampler is ( locally ) degenerate , then standard multi - step gibbs sampler ( not defined here ) is also ( locally ) degenerate .this is another validation for the ordering of transition kernels of .we could not establish similar relation for local consistency . for local degeneracy , we have the following .we omit the proof since it is the same for local consistency .[ degeq ] let be map except a compact set of .assume and the law of is tight .then is local degenerate if is local degenerate .we consider a cumulative link model .probability space is defined by and with probability measure having a compact support . for , and .let be a cumulative distribution function on .when , a parameter is constructed by such that and . when , .the model is with dummy parameters and .the parameter space is this cumulative link model is useful for the analysis of ordered categorical data .see monographs such as and .the analysis for gibbs sampler for the model will be studied in the next section . 
before that , in this section , we show the regularity of the model .first we check quadratic mean differentiability .we recall the definition of quadratic mean differentiability .let be a measurable space and be a parametric family on the space where be an open set .assume the existence of a -finite measure on having for a -measurable function for any fixed . is called quadratic mean differentiable at if there exists a -valued -measurable function such that for any such that . when is quadratic mean differentiable at , a lot of properties such as local asymptotic normality of the likelihood ratio hold with minimal assumptions .see monographs such as and .consider our model ( [ cummod ] ) .the measurable space is for our model and -finite ( in fact , finite ) measure is defined by for the choice of , satisfying is .we assume the following bit strong regularity condition . for , write the law of by .[ assqmd ] 1 .[ assqmdfof ] for a continuous strictly positive measurable function .2 . the support of is compact , which is not included in any subspace of dimension strictly lower than . under assumption[ assqmd ] , is quadratic mean differentiable at any . take -valued measurable function to be which is well defined by assumption [ assqmd ] and set by by theorem 12.2.2 of , if is continuous , is quadratic mean differentiable . since is a finite measure , it is sufficient to show the existence of for any bounded open set , take an open set to be its closure is compact . take such that then there exists such that and for the choice of , by continuity and positivity of , there exists constants such that ).\ ] ] then for , for , choosing to be small enough , and are satisfied .hence the denominator of the right hand side of ( [ derivxy ] ) is uniformly bounded for . for the numerator of ( [ derivxy ] ) , we have and the absolute values of the above two terms are uniformly bounded by for .hence ( [ derivxy ] ) is uniformly bounded and the claim follows by theorem 12.2.2 of by the bounded convergence theorem . for and , set where is defined by ( [ derivxy ] ) .this function is called a normalized score function . by quadratic mean differentiability, the law of tends to .moreover , if there exists a uniformly consistent , the posterior distribution tends to a normal distribution . in the next subsection, we show the existence of the test .we prepare notations for the large sample setting .let and write its element where , .for and , a sequence of measurable functions ] for some and a compact set such that then the existence of uniformly consistent test for against follows for any .this fact comes from quadratic mean differentiability of the model and continuity of in proholov metric .[ unilem2 ] under assumption [ assqmd ] with , there exists a uniformly consistent test for against for any .we take three steps to construct a uniformly consistent test . in the first step ,we divide into subsets . in the second step ,we construct a uniformly consistent test for each parametric family . in the last step we set which will be a uniformly consistent test . for the first step , construct .choose from to be .then there exists such that for any having , there exists such that by , and for , there exists such that therefore , if we take then . to be disjoint , set and for . 
in the second step ,we construct a uniformly consistent test for the parametric family for each .we show that we can construct a test on which satisfies ( [ c2mod ] ) .write and .the test is where will be defined later .note that since , $ ] . by definition when , this value is bounded by which equals to .if we take , by definition of , tends to or for and hence ( [ test ] ) tends to hence if we take slightly larger than to be , there exists a compact set such that ( [ c2mod ] ) holds .hence we can find a uniformly consistent test for against .in the last step , we take where each is uniformly consistent test for against .then by construction is uniformly consistent test for against .now we extend this test for the model ( [ cummod ] ) for . by lemma [ unilem ]it is sufficient to construct the test for against for each .we apply the test constructed in lemma [ unilem2 ] for each .let and and define a map to be and to be its obvious generalization .when , the law of only depends on and defined by therefore it is a model ( [ cummod ] ) for with explanatory variable and parameter .write above model by .for the parametric family , by lemma [ unilem2 ] , we can construct a uniformly consistent test for against .then defines a uniformly consistent test for against .hence is uniformly consistent test for against . as a summarywe obtain the following . [ unipro ] for the model ( [ cummod ] ) under assumption [ assqmd ] , there exists a uniformly consistent test for against for any .if there exists a uniformly consistent test , the posterior distribution has consistency under regularity condition on the prior distribution .let be a prior distribution where denote the lebesgue measure .let assume the existence of such that write for the fisher information matrix of and write for the central value of .the following is a consequence of bernstein - von mises s theorem .let be the total variation distance between probability measures and on , [ bvmthm ] assume is continuous and strictly positive . under assumption[ assqmd ] , we will denote for .we also denote for its scaling by . by the above corollary, the total variation distance between and tends to .we consider asymptotic properties of the gibbs sampler for cumulative link model .let be a probability space defined in section [ apc ] and let be a parametric family defined in ( [ cummod ] ) . under the same settings as subsection [ uct ] , we construct markov chain monte carlo methods on the model ( [ cummod ] ) and examine its efficiency . to construct gibbs sampler , we introduce a hidden variable .there are several possibilities for the choice of the structure .we consider two choices among them .we refer the former , `` null - conditional update '' and `` -conditional update '' for the latter : \\ x\sim p_x(dx ) & z\sim f(z+\beta^tx)dz & y = j\ \mathrm{if}\ z\in ( \alpha^{j-1},\alpha^j ] \end{array}\right.\ ] ] above update defines .for example , for -conditional update }(z)dz\delta_j(dy).\ ] ] for each construction is equal to the parametric family defined in ( [ cummod ] ) . as definition [ ssg ], we can construct a sequence of standard gibbs sampler on on where . 
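As a concrete companion to the two constructions just given, the sketch below (Python, illustrative) implements the binary two-category special case, i.e. the probit example from the introduction, as a data-augmentation Gibbs sampler with a standard normal prior on beta. The paper's exact conditional mean and variance for beta were lost from this text, so the update used here is the textbook Albert-Chib form and is only assumed to correspond to the x-conditional construction; the truncation convention for the latent z is likewise the usual one.

```python
# Data-augmentation Gibbs sampler for the binary probit model
#   y_i ~ Bernoulli(Phi(beta * x_i)),  prior beta ~ N(0, 1).
# The conditional for beta is the standard Albert-Chib form; it is assumed,
# not taken from the paper, whose displayed formulas were lost in extraction.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

def gibbs_probit(x, y, n_iter=2000, beta0=0.0):
    beta = beta0
    trace = np.empty(n_iter)
    prec = 1.0 + np.sum(x * x)            # prior precision 1 plus sum x_i^2
    for t in range(n_iter):
        # z_i | beta, y_i : N(beta*x_i, 1), truncated to (0, inf) if y_i = 1
        # and to (-inf, 0] if y_i = 0 (usual sign convention assumed).
        mu = beta * x
        lo = np.where(y == 1, 0.0, -np.inf)
        hi = np.where(y == 1, np.inf, 0.0)
        z = truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0, random_state=rng)
        # beta | z : N( sum(x_i z_i)/prec , 1/prec )
        beta = rng.normal(np.sum(x * z) / prec, 1.0 / np.sqrt(prec))
        trace[t] = beta
    return trace

# toy data: x uniform on [-1, 1], true beta = 2
n = 50
x = rng.uniform(-1.0, 1.0, n)
y = (rng.normal(2.0 * x, 1.0) > 0).astype(int)
print(gibbs_probit(x, y)[-5:])
```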
the gibbs sampler is known to work poorly except with -conditional update .this phenomena can be explained by our approach .when is the central value of , a scaling can be defined .we will show that the sequence of a standard gibbs sampler is not locally consistent because the model does not satisfy the regularity condition of theorem 6.4 of except the case with -conditional update .the detail will be discussed later . on the other hand ,there are some markov chain monte carlo methods which works better than above gibbs sampler .we consider a marginal augmentation method introduced by ( see also closely related algorithm , parameter expansion method by ) . in the method, we introduce a new parameter with prior and write for the new parameter set with new prior distribution the new model with new parameter set is defined by \\ x\sim p_x(dx ) & z\sim f(g(z+\beta^tx))gdz & y = j\ \mathrm{if}\ z\in ( \alpha^{j-1},\alpha^j ] \end{array}\right.\ ] ] where we refer the former , `` null - conditional update with marginal augmentation '' and `` -conditional update with marginal augmentation '' for the latter .write the above parametric family .the original model corresponds to .some important properties are summarized as follows : 1 .its marginal is written by the original model : for , where the parametric family in the right hand side is ( [ cummod ] ) .the -marginal of prior and posterior distribution for parameter expanded model are the same as those without expansion , that is 3 .the probability distribution is well defined in the following sense : .[nopx]asymptotic properties of gibbs sampler with and without marginal augmentation ( ma ) .the letter o means local consistency and x means local non - consistency .p means local consistency for . [cols="^,^,^,^,^",options="header " , ] we prepare notation for the a ) fisher information matrix , b ) the central value and c ) the normalized score function for two models a ) and b ) for fixed . for fixed ,we write if tends in -probability to . 1 .write fisher information matrices by for models a ) and b ) with respectively .we write .2 . write central values by for and with respectively .note that .we denote and for fisher information matrices and at .3 . write a normalized score function of a ) by see ( [ nor ] ) for the definition of normalized score function .now we are going to construct an approximation of .since is almost fixed parameter ( just a formal sense ) where as an update of , is a transition kenel of a standard gibbs sampler for parametric family b ) . with a regularity conditions, we can directly apply theorem 6.4 to the model b ) which yields normal approximation of here we used .we denote for this approximated probability transitoin kernel .we can rewrite using . under , then by a simple algebra , this yields an approximation of by normal distribution with mean with variance .we denote this normal approximation by .hence we obtain approximation of . 1 . has the form ( [ prior ] ) and and for lebesgue measure and where are continuous and strictly positive .2 . has a derivative which is continuous and with the above assumption , we can show that null - conditional update produces local degenerate gibbs sampler for a map . 
for probability transition kernels and , we denote [ nullem ] under assumptions [ assqmd ] and [ asspri ] , for null - conditional update construction with marginal augmentation , the following value tends to : for -conditional update with marginal augmentation , tends to .we only show the former since proof for the latter is almost the same .first we show tightness of .we have and the both terms in the right hand side have the same law defined after corollary [ bvmthm ] under . hence the tightness for follows by corollary [ bvmthm ] . for any , fix to be the probability of the event is lower than in the limit . in the following ,we only consider under the event . as the comment before proposition [ unipro ], we consider simpler models .it is sufficient to show the convergence of for for . for each , for , since comes from ( see ( [ xfullmodel ] ) ) . by simple algebra , for each fixed , assume that is in the support of and set . by the above inequality, we have where now we show that the probabilities of events and are negligible . since the proof is the same , we only show for .the event is where is note that .when we write for the probability of the event with respect to , we have this value tends to if and in fact equals to where the limit is strictly positive .hence ( [ pros ] ) tends to for each .its integration by also tends to by the bounded convergence theorem .hence tends in probability to . for each conditional update ,the quadratic mean differentiability of the parametric family for fixed comes from the continuity of the corresponding fisher information matrices : for null - conditional update and -conditional update , the matrices are with respectively .the condition for the support is clear .we show the existence of uniformly consistent test . for null - conditional update ,write for the fixed parameter .consider a submodel of original model .then by subsection [ uct ] , this submodel has uniformly consistent test .now we consider a re - parametrization .since is continuous , re - paramezrized model , which is in fact has also uniformly consistent test . for the null - conditional update wtih marginal augmentation, is locally degenerate by lemma [ nullem ] and proposition [ degpro1 ] .therefore , by , is locally degenerate by lemma [ degeq ] . on the other hand ,if is locally consistent , by mapping , should be locally consistent by lemma [ coneq ] . since is not degenerate with the scaling with map for or , it is impossible by proposition [ degpro2 ] .hence is not locally consistent .it is quite similar for -conditional update . for this case, is locally degenerate and hence is also locally degenerate by a map . on the other hand ,if is locally consistent , then should be locally consistent since for a map . since is not degenerate with the scaling with map for , it is impossible .hence is not locally consistent .for null conditional update case , consider where .the probability transition kernel of its minimal representation is we show that we can replace by in the above transition kernel .write for the central value of .first we apply bernstein von - mises s theorem for for the approximation . by this approximation, we can approximate by defined by for some continuous function , uniformly in , hence by tightness of and convergence of to in probability , we can replace of by ( see the proof of theorem 6.4 of ) . 
then using bernstein - von mises s theorem again, it is validated to replace in in the sense of where is the transition kernel after replacement of by .we already have an approximation of .therefore is locally consistent by the convergence of total variation by the same argument in the proof of theorem 6.4 of . by the similar argument , for -conditional update for , or locally consistent with respectively .therefore is locally consistent by a map for the former and for the latter .
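The practical face of the local degeneracy established above, namely a chain whose probability of moving at all collapses as the posterior concentrates, is easy to reproduce with a toy example. The sketch below (Python, illustrative; not one of the samplers analysed in this paper) runs random-walk Metropolis with a fixed proposal scale against an N(0, 1/n) target and shows the acceptance rate vanishing as n grows, so the empirical measure of a fixed number of steps stays close to a point mass at the starting value.

```python
# Toy illustration of degeneracy: random-walk Metropolis with a fixed proposal
# width targeting N(0, 1/n).  As n grows the target concentrates, nearly all
# proposals are rejected, and the chain barely moves from its starting point.
import numpy as np
rng = np.random.default_rng(0)

def rw_metropolis(n, m=5000, step=1.0, x0=0.3):
    x, accepted = x0, 0
    chain = np.empty(m)
    for t in range(m):
        prop = x + step * rng.normal()
        # log acceptance ratio for the N(0, 1/n) target
        if np.log(rng.uniform()) <= -0.5 * n * (prop**2 - x**2):
            x, accepted = prop, accepted + 1
        chain[t] = x
    return chain, accepted / m

for n in (10, 1_000, 100_000):
    chain, acc = rw_metropolis(n)
    print(f"n={n:>7}  acceptance rate={acc:.4f}  chain std={chain.std():.4f}")
```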
We study the asymptotic behavior of Monte Carlo methods. Local consistency is an ideal property of a Monte Carlo method, but it can fail to hold for several reasons, and in practice it is at least as important to study such non-ideal behavior. We call one such non-ideal behavior of Monte Carlo methods local degeneracy, and we establish several equivalent conditions for it. As an application we study a Gibbs sampler (data augmentation) for the cumulative link model, with and without marginal augmentation. It is well known that the natural Gibbs sampler does not work well for this model. In the sense of local consistency and degeneracy, marginal augmentation is shown to improve the asymptotic properties; however, when the number of categories is large, neither method is locally consistent.
neutron stars ( nss ) can be powered by different sources of energy : rotation , accretion , residual heat and magnetic field . up to now ,more than twenty x - ray pulsating sources are identified as isolated , slowly rotating ( ) neutron stars , whose large x - ray luminosity / s can not usually be explained in terms of rotational energy , at variance with rotation - powered pulsars ( rpps ) .historically , they are classified as anomalous x - ray pulsars ( axps ) or soft gamma repeaters ( sgrs ) , and they are recognized as magnetars . in these objects , mf decay and evolution is the dominant energy source and crustal deformations produced by the huge internal magnetic stresses are the likely cause of bursts and episodic giant flares . in most cases ,the spin - down inferred external magnetic field turns out to be super - strong . recently , increasing evidence gathered in favor of a unified scenario for isolated neutron stars , their different observed manifestations being mainly due to a different topology and strength of the internal mf , and to evolution ( ) .the typical soft x - ray ( ) spectrum of magnetars is given by a thermal component ( ) , plus a hard tail phenomenologically described by a power law with a photon index .this hard tail is believed to be produced by resonant compton ( up)scattering ( rcs ) of photons emitted by the star surface onto mildly relativistic particles flowing in magnetosphere . in order to take into account for this effect, several codes have been developed in recent years .the two basic inputs are the external mf and a prescription for the particle space and velocity distribution . with these ingredients , and with a given energy distribution for the incoming surface photons ( the seed spectrum ) , the mc simulations of rcs provide the emerging , reprocessed spectrum , which must be compared with observational data. magnetars also exhibit a high - energy ( kev ) power - law tail , the origin of which is still under debate .one of the possible mechanisms is again rcs , whose efficiency at high energy is very sensitive to mf and plasma velocity . in this paperwe discuss the effect of the geometry of the mf , which is obtained from new calculations of magnetostatic , force - free equilibria .the shape of the internal mf of nss is largely unknown , but it is thought to be complex , with toroidal and poloidal components of similar strength , and with significant deviations from a dipolar geometry . in the crust ,a large toroidal field is a necessary ingredient to explain the observed thermal and bursting properties of magnetars , even in the case in which they show a low value of the inferred external dipolar field .the complex internal topology must be smoothly matched to an external solution , so that the internal mf geometry at the star surface has a strong effect on the external mf . in this framework, the rpp phenomenolgy is expected to be better explained by the presence of a ns with a simple , nearly potential , dipolar field , while magnetars activity is compatible with the presence of a complex external field , as a result of the transfer of magnetic helicity from the internal to the external field .a non - potential mf needs supporting currents , and charges can be either extracted from the surface or produced by one - photon pair creation in the strong mf . 
in the inner magnetospheric regionmf lines corotate with the star , with footprints connected with the crustal field , which evolves on long timescales ( years ) .furthermore , resistive processes in the magnetosphere act on a typical timescale of years , much longer than the typical response of the tenuous plasma , whose alfvn velocity is near to the velocity of light . for these reasons ,a reasonable approximation is to build stationary , force - free magnetospheric solutions , assuming that the footprints are anchored in the crust and neglecting resistivity .then , the ideal mhd force - free equation is a simple balance between electromagnetic forces , provided that inertial , pressure , and gravity terms are negligible where we have introduced the rotationally induced electric field , the related charge density and the current density .the electric field gives corrections of order .neglecting such contributions is safe if we consider the region closest to a magnetar surface ( ) , and remembering that the light cylinder for slow rotators lies typically at a few thousands stellar radii ( and magnetar periods are in the 110 s range ) .therefore , condition ( [ eq_electromagnetic_forces ] ) reduces simply to : the particles can flow only along mf lines . even if the pulsar equation simplifies in this limit, it still contains an arbitrary function , which basically describes the toroidal field as a function of the poloidal one ( see for the mathematical formulation ) . in the literature ,only the simplest ( semi-)analytical solutions have been derived and later employed in applications . in particular, in the context of magnetars , the self - similar twisted dipole family of solutions is the most popular choice .proposed for solar corona studies , and later extended to the magnetar scenario , in self - similar models the radial dependence of the mf components is described by a simple power law , .these solutions are semi - analytical and described by only one parameter , the radial index ( see for globally twisted multipolar fields ) .however , globally twisted solutions necessary evolve in time and magnetic configurations in which the twist affects only a limited bundle of field lines are likely required by observations . as an alternative to self - similar models , in we presented a numerical method to build general solutions .the force - free solutions are obtained by iteration starting from an arbitrary , non - force - free poloidal plus toroidal field . by the introduction of a fictitious velocity field proportional to the lorentz force, the code is designed to dissipate only the part of the current not parallel to the mf .the boundary conditions are an arbitrary , fixed radial mf at the surface , , and the requirement of a continuous matching with a potential dipole at large distances .this method is able to produce general configurations , whose features may be radically different from the self - similar solutions . in this work ,we try to reproduce some -bundle like structure , aiming at providing input for mc simulations .in particular , we focus on two models , whose mf and current distribution are shown in fig .[ fig : model12 ] . the first one is obtained fixing a dipolar form for radial mf , while the second one has a concentration of field lines near the south pole . 
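For reference, since the corresponding displayed equations were lost in extraction, the force-free condition discussed above and the radial scaling characterising the self-similar twisted-dipole family are restated below in standard form; the notation, and the convention relating the power-law exponent to the radial index p, is ours and may differ from the paper's.

```latex
% Force-free limit: the Lorentz force vanishes, so currents flow along B,
\left(\nabla \times \mathbf{B}\right) \times \mathbf{B} = 0
\quad\Longleftrightarrow\quad
\nabla \times \mathbf{B} = \alpha(\mathbf{r})\,\mathbf{B},
\qquad \mathbf{B}\cdot\nabla\alpha = 0 .

% Self-similar (globally twisted dipole) solutions: all field components
% share a single power-law radial dependence, with p the radial index,
B_i(r,\theta) \;\propto\; r^{-(2+p)},
\qquad p = 1 \ \text{recovering the potential dipole } (B \propto r^{-3}).
```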
to get these profiles ,we have chosen the component of the vector potential as : \end{aligned}\ ] ] from which we recover ( is the value of at north pole ) .both models have high helicity , but it is not homogeneously distributed like in self - similar models .model 1 is characterized by an equatorial asymmetry , with current and twist concentrated in a closed bundle near the equatorial region and a near the southern semi - axis .model 2 is more extreme : its helicity is 25 times larger than for model 1 .both are supposed to be more realistic than the self - similar models , since currents are more concentrated near the axis .( grey logarithmic scale ) with scattering surfaces for photons of 1 , 3 and ( here , ; see text ) ; second panel : current intensity ( grey linear scale ) , in units of .,title="fig : " ] ( grey logarithmic scale ) with scattering surfaces for photons of 1 , 3 and ( here , ; see text ) ; second panel : current intensity ( grey linear scale ) , in units of .,title="fig : " ] ( grey logarithmic scale ) with scattering surfaces for photons of 1 , 3 and ( here , ; see text ) ; second panel : current intensity ( grey linear scale ) , in units of .,title="fig : " ] ( grey logarithmic scale ) with scattering surfaces for photons of 1 , 3 and ( here , ; see text ) ; second panel : current intensity ( grey linear scale ) , in units of .,title="fig : " ]the mc code we employ is presented in and we refer to these works for further details . herewe just summarize some basic concepts .it is assumed that currents flow in a non - potential magnetosphere and that the electron density is large enough for the medium to be optically thick to resonant scattering : photons produced by the cooling star surface can be then up - scattered by rcs , populating the hard tail of the spectrum . in the stellar frame, the resonant energy corresponds to the doppler - shifted gyration frequency of the electron where is the local intensity of mf , the electron velocity , the lorentz factor , and the cosine of the angle between the photon direction and the electron momentum .the non - resonant contributions are negligible , since the resonant cross - section is up to five orders of magnitude larger than the thomson cross - section . figs . [fig : model12 ] show the scattering surfaces in the two models , for photons of 1 , 3 and , assuming and .the surfaces are far from being spherically symmetric . in model 2 , which has the strongest helicity ,the scattering surfaces of soft x - ray photons lie tens of stellar radii away from the surface , especially near the axis .the scattering optical depth for a given seed photon energy depends on the local particle density , the velocity of the scatterer and the local value of the mf .since photons can be pushed to energies of some hundreds kevs , non - conservative scattering and the fully relativistic qed cross sections are used .the velocity distribution is assumed to be a one - dimensional ( along the field line ) , relativistic maxwellian distribution with a plasma temperature and a bulk velocity , as described in .the strong simplification in this approach is that the velocity distribution does not depend on position , which reduces the microphysical inputs to two parameters : the temperature of the plasma and . 
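The shape of the scattering surfaces can be estimated with a back-of-the-envelope calculation. The sketch below (Python) solves the resonance condition quoted in the text for a pure dipole field, which is only an approximation to the numerical force-free models used here, taking the standard electron cyclotron energy of about 11.6 keV at 10^12 G; the bulk velocity and pitch-angle factor gamma(1 - beta mu) enter only as an overall Doppler factor.

```python
# Rough scattering-surface radii for resonant Compton scattering in a dipole
# field, solving  E = E_cyc(B(r, theta)) / [gamma (1 - beta mu)].
# Dipole only: an approximation to the force-free numerical models above.
import numpy as np

B_POLE = 1e14          # polar field strength [G]
E_CYC_12 = 11.6        # electron cyclotron energy [keV] at B = 1e12 G

def resonant_radius(E_keV, theta, beta=0.5, mu=0.0):
    """Radius (in stellar radii) where a photon of energy E_keV resonates."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    doppler = gamma * (1.0 - beta * mu)
    # dipole: B(r, theta) = (B_pole/2) (r/R)^-3 sqrt(1 + 3 cos^2 theta)
    b_needed = doppler * E_keV / E_CYC_12 * 1e12          # required |B| [G]
    b_surface = 0.5 * B_POLE * np.sqrt(1.0 + 3.0 * np.cos(theta)**2)
    return (b_surface / b_needed) ** (1.0 / 3.0)

for E in (1.0, 3.0, 10.0):
    r_eq = resonant_radius(E, np.pi / 2)
    r_pole = resonant_radius(E, 0.0)
    print(f"E = {E:4.1f} keV : r_res ~ {r_eq:5.1f} R (equator), {r_pole:5.1f} R (pole)")
```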
only recently ,a more consistent treatment of the charges space and velocity distribution was attempted .these new results show that the velocity distribution is non - trivial and strongly depends on position , which opens a new range of possibilities , so far unexplored . given a seed spectrum for the photons emitted from the surface ( assumed as a blackbody with temperature ), the mc code follows the photon propagation through the scattering magnetosphere .when a photon enters in a parameter region where no more resonant scatterings are possible , the photon escapes to infinity , where its energy and direction are stored .the sky at infinity is divided into patches , so that viewing effects can be accounted for ( e.g. if line - of - sight is along the angles , , only photons collected in the patch which contains that pair of angles are analyzed ) .the angle - averaged spectrum is obtained simply by averaging over all patches .in previous works , self - similar magnetospheric models were used to obtain synthetic spectra that were satisfactorily fitted to x - ray observations , giving typical values of ] . in this paperwe explore the dependence of the spectrum on the magnetosphere model by comparing results obtained with self - similar models and with our new numerical solutions for force - free magnetospheres .we present results comparing models with a magnetic field intensity at the pole of g. the seed spectrum is assumed to be a planckian isotropic distribution for both ordinary and extraordinary photons . in figs .[ fig : beta_bfinal1 ] and [ fig : beta09 ] we plot the angle - averaged spectra , i.e. , the distribution of all photons escaped to infinity , independently on the final direction .[ fig : beta_bfinal1 ] shows the effects produced by changes in the bulk velocity , keeping fixed the mf configuration ( model 1 ) , and the electron temperature ( ) .the main fact is the the spectrum in the region becomes harder as is increased , while it is depleted of photons of energies in the range .a similar , but less pronounced effect , is obtained by increasing the electron temperature . in fig .[ fig : beta09 ] we compare spectra obtained with fixed values of and , varying only the mf topology .the three lines correspond to model 1 ( solid ) , model 2 ( dashes ) and a self - similar model ( dash - dotted line ) with , which approximately has the same total helicity of model 1 .we have chosen a high value of to show a case where the effects are larger , but our conclusions do not change qualitatively for lower values of .the high - energy cutoff produced by electron recoil is also evident .we see that the major differences arising from changes in mf topology appear in the hard tail of the spectra , which is clearly harder for the self - similar model . however , the thermal part of the spectrum is more depleted , especially for model 2 .kev , , changing mf model . ]kev , , changing mf model .] however , in the angle - averaged spectrum , the differences due to the mf topology are partially smeared out , and the comparison between synthetic spectra as seen from different viewing angles ( first three panles in fig . [fig : dir ] ) is more interesting .the comptonization degree reflects the inhomogeneous particle density distribution and therefore the particular current distribution produces important differences between different viewing angles . 
since neutron stars rotate , the study of pulse profiles and phase - resolved spectra can trace the geometric features of the scattering region ( tens of stellar radii ) . in the lower right panelwe show the light curves for the three mf models , in bands 0.510 kev and 20200 kev , for an oblique rotator with a certain line of sight ( in ) . for self - similar models ( lower left panel ) ,the thermal part of the spectrum shows a smooth dependence with the viewing angle , which in turn translates into relatively regular light curves . in models 1 and 2 , differences in the spectra induced by viewing angles are more pronounced . in these cases ,the spectrum is much more irregular and asymmetries are larger . in particular , model 1 ( upper left ) shows a softer spectrum when seen from northern latitudes , with important spectral differences .the pulsed fraction of model 1 is very high in the hard x - ray band , while it is comparable with the self - similar model in the soft range .model 2 ( upper right ) has a more symmetric distribution of currents , and the comptonization degree at different angles depends on the energy band in a non trivial way .this results in large pulsed fraction for both energy ranges and in notable differences between their pulse profiles .( see text).,title="fig : " ] ( see text).,title="fig : " ] + ( see text).,title="fig : " ] ( see text).,title="fig : " ] the high energy tail , seen at different colatitudes , can vary by one order of magnitude or more . investigating the geometry can help to recognize the different components seen in the hard tails via pulse phase spectroscopy . on the other hand ,the macrophysical approach used in this work to obtain the mf topology should be accompanied by the corresponding microphysical description to determine the velocity distribution of particles , here simply taken as a model parameter . a fully consistent description of both mf geometry and the spatial and velocity distribution of particles is needed to advance our interpretation of magnetar spectra .9 beloborodov a m and thompson c 2007 _ apj _ * 657 * 967 beloborodov a m 2009 _ apj _ * 703 * 1044 beloborodov a m 2011 _ high - energy emission from pulsars and their systems _ ed torres d f and rea n p 299 fernndez r and thompson c 2007 _ apj _ * 660 * 615 fernndez r and davis s w 2011 _ apj _ * 730 * 131 low b c and lou y q 1990 _ apj _ * 352 * 343 mereghetti s 2008 _ a&arv _ * 15 * 225 nobili l , turolla r and zane s 2008 _ mnras _ * 386 *1527 nobili l , turolla r and zane s 2008 _ mnras _ * 389 * 989 pavan l , turolla r , zanes and nobili l 2009 _ mnras _ * 395 * 753 pons j a and perna r 2011 _ apj _ * 741 * 123 thompson c , lyutikov m and kulkarni s r 2002 _ apj _ * 574 * 332 turolla r , zane s , pons j a , esposito p and rea n 2011 _ apj _ * 740 * 105 vigan d , pons j and miralles j a 2011 _ a&a _ * 533 * a125 zane s , rea n , turolla r and nobili l 2009 _ mnras _ * 398 * 1403
nowadays, the analysis of the x-ray spectra of magnetically powered neutron stars, or _ magnetars _, is one of the most valuable tools to gain insight into the physical processes occurring in their interiors and magnetospheres. in particular, the magnetospheric plasma leaves a strong imprint on the observed x-ray spectrum by means of compton up-scattering of the thermal radiation coming from the star surface. motivated by the increased quality of the observational data, much theoretical work has been devoted to developing monte carlo (mc) codes that incorporate the effects of resonant compton scattering (rcs) in the modeling of radiative transfer of photons through the magnetosphere. the two key ingredients in these simulations are the kinetic plasma properties and the magnetic field (mf) configuration. the mf geometry is expected to be complex, but up to now only mathematically simple (self-similar) solutions have been employed. in this work, we discuss the effects of new, more realistic mf geometries on synthetic spectra. we use new force-free solutions in a previously developed mc code to assess the influence of the mf geometry on the emerging spectra. our main result is that the shape of the final spectrum is mostly sensitive to uncertain parameters of the magnetospheric plasma, but the mf geometry plays an important role in the angle-dependence of the spectra.
the enormous potential of stellar photometry from space is finally being realised by the most ( microvariability & oscillations of stars ) mission , a low - cost canadian space agency ( csa ) microsatellite which was launched in june 2003 ( ; ) . this potential particularly for stellar seismology had been recognised for almost two decades ( e.g. , ; ) , and many dedicated space missions have been proposed .missions which reached a fairly advanced state of development but were not ultimately funded include : prisma , ppm , stars , spex , and mons .the status of the funding for the esa mision eddington is still not entirely clear .the first instrument designed and built for stellar seismology through photometry from space was evris , a 10-cm telescope feeding a photomultiplier tube detector , mounted aboard the mars-96 probe .unfortunately , mars-96 failed to achieve orbit .evris was intended to be the precursor to corot , the cnes - funded mission due for launch in 2006 , to explore stellar structure through seismology and search for planets through photometric transits .expected to join corot in space in 2008 is kepler , whose primary goal is detection of earth - sized planets via transits , but it will also obtain photometry uniquely powerful for stellar astrophysics . another useful tool for space photometry has turned out to be nasa s wire satellite , whose primary scientific mission of infrared mapping failed . however , the satellite has proved to be a stable functioning platform for its 5-cm startracker telescope and ccd , which has been exploited successfully for stellar photometric studies ( e.g. , ) .most , wire , corot , kepler and eddington are all ccd - based photometric experiments , and the first three are low - earth - orbit ( leo ) missions with similar orbital environments ( radiation and scattered earthshine ) .all these missions have a common need to extract information on stellar variability from data cubes consisting of upwards of hundreds of thousands of two - dimensional ccd frames ( or sub - rasters ) containing from hundreds to millions of pixels each .the modes of observation range from in - focus ( kepler ) and defocussed imaging ( corot , and most in its direct imaging mode ) of fields with many targets to fabry imaging of the instrument pupil illuminated by a single target ( most in its principal operation mode ) .in addition to its scientific value , most is a superb testbed for leo space photometric techniques which can be applied to other missions .we present here a comprehensive approach for handling and reducing techniques which are relevant for other leo space photometry missions and ground - based ccd photometry of cluster fields .because most is the first fully operational ccd space photometer , we use its archive to describe our reduction technique .this justifies a brief description of the mission .the most instrument is a 15-cm rumak - maksutov optical telescope feeding twin ccd detectors ( one dedicated to stellar photometry , the other for guiding ) through a single broadband filter .most was designed to obtain rapid photometry ( at least one exposure per minute ) of bright stars ( ) with long time coverage ( up to 2 months ) with high duty cycle .its goal is to achieve photometric precision down to a few micromagnitudes ( ) in a fourier amplitude spectrum at frequencies down to about 1 mhz .it has achieved this goal for targets such as procyon and boo .most is a microsatellite with a mass of only 54 kg and hence little inertia .it is able to 
perform optical photometry of point sources thanks to a stabilisation system or attitude control system ( acs ) developed by dynacon , inc .( ; ) .previous microsats could not achieve pointing stability better than about to , essentially useless for optical astronomical imaging .the requirement for the most acs performace was ; the goal was .the actual acs performance was improved during the commissioning and early science operations of most through software upgrades and refined use of the acs reaction wheels , so that pointing precision is now about rms .anticipating image wander across the most science ccd as large as , and the lack of an on - board flatfielding calibration system ( due to power and cost limitations ) , the most design relies on producing a pupil image illuminated by target starlight .this extended image is produced by a fabry microlens ( similar in concept to the fabry lens common to photoelectric photometers ) and moves by no more than 0.1 pixel if the star beam moves by from optimum pointing .( the resolution of the ccd is per physical pixel . ) to provide simultaneous sky background measurements , and for redundancy against fabry lens and/or ccd defects , most is equipped with a array of microlenses , each of which images the telescope entrance pupil . centred on each microlens is a field stop , 1 arcmin in diameter , etched in a chromium mask .we refer to , for a detailed description of the focal plane arrangement and in particular to figs.7 and 8 of that paper .the primary science target is typically centred on one of these microlenses .a binned mean profile of the resulting fabry images is shown in fig.[f : meanfabryimage ] .each fabry image is an annulus with an outer diameter of 44 ( unbinned ) pixels .the pixels outside the annulus are sampled in a square subraster which gives an indicator for light scattered away from the pupil image or entering the focal plane independently of the telescope optics .the principal data format for most photometry referred to as science data stream 2 ( sds2 ) consists of a resolved image of the target fabry image , binned ( to satisfy the limited most downlink rates ) , and 7 more heavily binned adjacent sky background fabry images .there are also subrasters of the ccd shielded from light which provide dark and bias measurements .each sds2 data file also contains a complete set of exposure parameters ( exposure start time , integration time , ccd gain , etc . ) and extensive satellite telemetry ( e.g. , ccd temperature , acs errors , etc . ) . for longer - term data backup or to increase the sampling rate without exceeding the downlink limits , the most subraster images can be processed on - board , to produce science data stream 1 ( sds1 ) .fabry image windows are integrated to give total intensities , backgrounds are generated from the four corners of each window , and sums of the pixel intensities binned in columns and rows are generated .sds1 data can be sent to earth at a rate 10 times higher than sds2 , but do not allow as flexible reduction and analysis .most also obtains direct imaging photometry of stars , as in a cluster field like m67 .] , based on defocussed images ( fwhm 2.2 pixels ) in portions of the science ccd not covered by the fabry microlens field stop mask .processing of these data is described by .the acs ccd is now also used to obtain photometry of about guide stars , to be described by . 
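as a concrete illustration of the on-board sds1 processing described above, the sketch below collapses an sds2-style fabry subraster into the corresponding sds1 products: an integrated window intensity, a background generated from the four corners, and column and row sums. the corner patch size and the way the corner background is scaled to the full window are assumptions made for this sketch, not the actual on-board algorithm.

```python
import numpy as np

def sds1_products(subraster, corner=2):
    """collapse an sds2-style fabry subraster into sds1-style quantities.

    subraster : 2-d array of (virtual) pixel intensities of the fabry window.
    corner    : edge length of the corner patches used as background estimate
                (a free parameter of this sketch)."""
    total = subraster.sum()                              # integrated window intensity
    corners = np.concatenate([
        subraster[:corner, :corner].ravel(),             # four corner patches
        subraster[:corner, -corner:].ravel(),
        subraster[-corner:, :corner].ravel(),
        subraster[-corner:, -corner:].ravel(),
    ])
    background = corners.mean() * subraster.size         # scaled to the full window
    column_sums = subraster.sum(axis=0)                  # intensities binned in columns
    row_sums = subraster.sum(axis=1)                     # intensities binned in rows
    return total, background, column_sums, row_sums
```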
in this paper , we are primarily concerned with most photometry in the sds2 format , although we do address sds1 in section [ sec : sds1 ] .the most fabry imaging approach is quite insensitive to telescope pointing errors , by keeping an extended nearly fixed pupil image on the same ccd pixels ( to within 0.1 physical pixel if pointing deviations are less than ) .however , movements of the incoming beam of starlight on the fabry microlens will introduce small photometric errors due to slightly different throughputs with changes in the light path through the telescope and camera optics .the greatest sensitivity may be due to tiny inhomogeneities in the fabry microlens itself .the microlens array was etched into the bk-7 glass window above the ccd , and each element was tested in the lab for optical quality with artificial star images .defects in certain microlenses in the array were identified and mapped in advance of launch , but there is always the possibility of subtle damage in orbit .one would expect some correlation between photometric errors and attitude control system ( acs ) errors , but the relationship is complicated by the fact that the acs system updates its guiding information typically once per second , whereas science exposures last for tens of seconds for the majority of targets .thus , the star beam has moved across some complex path during the exposure . in an observing run , occasionally there is a pointing error which places the star too close to the edge of the field stop during part or all of an exposure , and significant signal from the stellar point spread function ( psf ) is lost . pointing errorscan also bring other sources ( stars , galaxies and nebulae ) near the primary science target in and out of the field stop , introducing photometric errors .this was explored extensively in most mission planning , and there are no sources close to the most fabry imaging targets bright enough to affect the intended photometric precision . in any event ,most pointing has improved to the level where these factors are now relatively unimportant , but some of them did manifest themselves in the commissioning science data ( e.g. , cet ; see ) and in some of the early primary science data .the reduction procedure takes acs output into account to automatically reject images where at least one error value exceeds a pre - defined limit ( see [ ssec : rejection ] ) , indicating a target position being outside the nominal fabry lens area at least during part of the integration .cet , october 2004 ) . for transitions through the south atlantic anomaly , where the terrestrial magnetic field is weak , the impact rates rise significantly . 
]cosmic ray subtraction is important in ground - based ccd imaging , and it is generally even more critical for space - based photometry .the effects of energetic particle hits on the most science ccd have been mitigated in the mission design in several ways : 1 .the ccds are shielded by the camera housing , equivalent to a spherical barrier of aluminium 5 mm thick .most is in a low earth orbit ( altitude 820 km ) in which radiation fluxes are relatively low .most fabry imaging photometry employs only a small effective area of the science ccd , rather than the entire chip , presenting a relatively small target for incoming particles .most s orbit does carry it through the south atlantic anomaly ( saa , fig.[f : saa ] ) , a dip in the earth s magnetosphere which allows higher cosmic ray fluxes at lower altitudes .passages through the saa mean increased risk of an on - board computer crash due to a single event upset ( seu ) or `` latch - up '' .most has experienced such particle - hit - induced crashes at an average rate of about once every two months in the 18 months of normal science operations .otherwise , the increased particle fluxes during saa passages do not have a significant effect on either acs accuracy or stellar photometry .for those particles which do reach the relevant portions of the most science ccd , there are two factors which make them easier to treat in the photometric reduction : 1 .cosmic ray hits are statistically independent in time ( poisson statistics , see [ sssec : coscand ] ) .the most likely situation is that a particle will strike only a single pixel , making most hits easy to distinguish from other artefacts ( see [ sssec : cosaffect ] ) .equ data ) .the inner circle centred on the north pole is void of information , since the maximum geographic latitude to be reached by most is , not .spacecraft passages over greenland are clearly associated with higher stray - light intensity , which introduces aliases to the fourier spectra . ]any off - axis bright source such as the earth will produce bright diffraction rims around the primary and secondary mirror even for perfectly baffled optics .these rims are clearly visible in figs .[ f : rmsfabryimage ] and [ f : rms_fourier_noise ] .since stray - light effects are imposed on us by nature and not peculiar to most , any follow - up space project will have to deal with stray - light contamination , and it is essential to find reliable correction techniques . the most orbit plane inclination of 98.72 results in a dusk dawn polar orbit because of the orbit plane precession due to the non - spherical terrestrial mass distribution .this precession compensates for the annual movement of our sun along the celestial equator .the deviation of the orbit plane from orthogonality to the terrestrial equator together with the obliquity of the ecliptic of aproximately 23.5 results in passages of most over the illuminated north polar region during summer , which causes increased stray light .in extreme cases , the excessive stray light at certain most orbit phases can reduce the duty cycle from nearly to . 
for space imaging or photometry missions which need to point to fields at low solar elongations , scattered light from the sun directly into the telescope or via the zodiacal light background would be the biggest challenge .the most mission was deliberately designed so that target fields are always in the opposite part of the sky from the sun .the lowest solar elongation possible for most observations is about .direct solar light rejection is not a concern for most photometry . for any leo astronomical photometry satellite , scattered light from the illuminated portion of the earthis a major source of sky background , and stray - light rejection is a major consideration in mission design and planning .due to restrictions on the payload envelope in the most design , the instrument could not have a large external baffle .three approaches were adopted to reduce the influence of scattered earthshine on most photometry : 1 .the sun - synchronous dawn - dusk orbit , combined with the choice of target fields in a continuous viewing zone ( cvz ) pointing away from the sun , means that the most telescope tends to look out over the shadowed limb of the earth .the telescope and camera are equipped with internal baffles and anti - reflective coatings .the spacecraft bus was intended to be a light - tight housing allowing light to enter only through the instrument aperture .none of these approaches is perfect . 1 .as was mentioned in sec.[sss : orbit ] , the most orbit is sun - synchronous . since the obliquity of the ecliptic is about 23.5 , most orbits over the earth s terminator only at times close to the equinoxes . during the solstices ,a significant fraction of the bright earth limb is visible to the most telescope .2 . the internal baffles and coatings appear to be working properly . butsome light is scattered into the optics by parts of the the external door mechanism ( which protected the most optics during launch and release into orbit and acts as a safeguard if there is a danger of the telescope pointing directly at the sun ) , whose coatings may have been damaged on launch .one of the most spacecraft s external panels was `` shimmed '' when mounted to the bus ( i.e. , a thin piece of metal was inserted to compensate for a misalignment of mounting holes ) .this gap in the spacecraft structure allows light to enter the instrument from an unanticipated direction . as a result ,the major component of sky background in most photometry is earthshine , modulated with the satellite orbital period of 101.4 min ( frequency = 14.2d = 165 ) and , to a lesser extent , with a period of approximately 1 day as most s orbit returns it to a similar position over the earth ( and hence , a similar reflection feature in the terrestrial albedo ) on a daily cycle .an example of the presence of stray light in raw most photometry is shown in fig.[fig : decorsteps ] .the amplitude and shape of the stray - light modulation is highly dependent on the season of observations , the `` roll '' of the spacecraft ( i.e. 
, the position angle of the telescope about its optical axis ) , and the location of the target field within the cvz .the relative contribution of maximum stray light also depends on the brightness of the target star .for procyon , even though the observations are obtained near winter solstice and hence near the worst geometry for light scattered from the bright earth limb , the stray light never exceeds of the stellar signal ( ) .for the star equ ( ) , observed near the summer solstice , the stray light reaches a maximum of about the stellar signal ( see fig.[fig : decorsteps ] ) .fig.[f : stray light ] represents the variations of stray light as a function of most position over the earth for many orbits during the equ run .the stray light reaches maxima over the north pole and arctic regions a combination of the high albedo of those areas which are continuously exposed to sunlight near summer solstice .( the antarctic is in shadow , and does not dominate the stray light until times closer to the winter solstice . ) in extreme cases , excessive stray light in most photometry can reduce its duty cycle from nearly to about per orbit .such periodic gaps introduce aliases at a spacing of 14.2d or 165 . for comparison, the wire startracker ( buzasi et al . 2000 ) typically achieves an orbital duty cycle of about , despite being in a similar orbit to most .this is because the wire platform must switch directions each half - orbit , to keep its batteries at the proper operating temperature , and the startracker focal plane receives even more severe scattered earthshine .( keep in mind that this instrument was designed as an acs startracker , not a photometric instrument . )measurements on march 6 , at a target position only from the full moon caused massive contamination by direct stray light . ] because most targets are concentrated in a cvz which is roughly equatorial ( ) , the moon can occasionally come very close to a most target field during observation . and because most observations are made in the anti - sun direction , the phase of the moon is close to full when this happens .two examples of a close passage of the moon during most photometry occurred for the commissioning science target , cet , and the early primary science target , vir . in march 2004 ,the centre of the moon came within of vir during most monitoring of that star .the effects on the mean background are shown in fig.[f : betvirmoon ] .the most stray - light rejection system was never designed to cope with such a close passage of such a bright source .fortunately , lunar encounters like this are relatively rare and short - lived . during observations of ceti in october 2004 ,the moon came to within 20 of the star , and as it approached , it underwent a total lunar eclipse .this was an excellent opportunity to gauge the contribution of moonshine to most stray light at lunar elongations which would be somewhat more common in most observations .the mean background intensities for this encounter are shown in fig.[f : lunareclipse ] .this and other tests of most photometry with respect to lunar phase and elongation indicate that the moon does not have a noticeable influence on most background measurements until it approaches within about 28 of the most target field . 
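a simple way to exploit the empirical limit quoted above is to flag exposures by their lunar elongation. the sketch below assumes that per-exposure lunar coordinates are available from an ephemeris (they are not part of the photometric data themselves) and uses the standard great-circle separation; the 28 degree default mirrors the threshold found above.

```python
import numpy as np

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """great-circle separation between two sky positions (all angles in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def flag_lunar_contamination(target_ra, target_dec, moon_ra, moon_dec, limit_deg=28.0):
    """boolean mask of exposures whose lunar elongation falls below the limit.

    moon_ra / moon_dec : per-exposure lunar coordinates from an ephemeris
    (an external input, assumed here); limit_deg follows the empirical value."""
    separation = angular_separation_deg(target_ra, target_dec, moon_ra, moon_dec)
    return separation < limit_deg
```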
generally , the moon will affect the background at the level of a few percent .although a typical most observing run is about a month long ( covering only about one lunar synodic period ) , there is no evidence to date that the moon introduces a periodic component to the most photometric background .cet . at an angular separation of 28 degrees , during an orbit phase of rather low earthshine , lunar stray light begins to show up in the data . on oct .27 , 2004 , a lunar eclipse occurred during two orbits , producing a decay of stray - light intensity by at an angular separation of 20 degrees .the background intensity keeps increasing for two orbits after the eclipse , until the angle between moon and target attains a minimum . ]the most ccds are passively cooled to a temperature around and maintained at a given operating temperature by a trim heater with an accuracy of . without regulation ,the temperature would be modulated with the orbital period with a peak - to - peak amplitude of about . at temperatures below about , the dark noise remains below electrons / pixel / sec and does not affect the fabry imaging photometry .the ccd focal plane temperature is monitored continuously to an accuracy of .all most photometry is checked for any correlation with ccd temperature and other operational parameters .no system is optically perfect , and that is true for the fabry microlenses as well . if the star beam passes through a small flaw in the microlens , a distorted image of the entrance pupil will result .the deviation of an individual target image from the mean doughnut shape is used as a rejection criterion ( see [ ssec : shape ] ) .this also is an indicator of when the pointing has moved too close to the edge of the fabry field stop and some of the starlight is obstructed .pixel intensities are considered to form a 3dimensional array , a data cube , where the index shall consistently refer to the time axis , and and represent the horizontal and vertical coordinates on the exposure . the number of exposures , i.e. the maximum value for the time index , is denoted , the upper limits for spatial indices are ( width ) and ( height ) . in our case . in this context , indexing refers to virtual pixels , each virtual pixel intensity representing a bin of physical pixel intensities .statistical moments are denoted by an operator , where only indices inside the bracket are used for summation , and the order of the moment is given as an exponent .e.g. , the temporal variance of intensity for the pixel is denoted .multivariate moments are written in the same sense , e.g. denotes the covariance of pixel with all other pixels on the image in the time series of exposures .a special case is the computation of moments through summation over all target pixels . in this casethe abbreviation is used instead of using two indices .correspondingly , denotes the intensity summed over all background pixels .the idea of identifying and correcting outliers in all our applications is based on data smoothing by a distance - weighted average ( dwa ) , similar to the whittaker filter or hodrick - prescott filter .the dwa will consistently be applied if data are to be smoothed in the time direction , according to as well as for image coordinates , where we write with ^{-\frac{\lambda}{2}}\ , .\ ] ] for these filters , we permit , to be positive semi - definite , real numbers , which help to adjust the degree of data smoothing . 
if these parameters are zero , the filter functions reduce to unweighted arithmetic means , respectively . in our application , the dwa is used in the time direction to identify and correct pixel intensities contaminated by cosmic ray impacts ( [ sssec : coscand ] , [ sssec : cosaffect ] )the residual correlation between the integrated light curve and spacecraft position ( [ sssec : satpos ] ) is performed using a dwa in terms of sub - satellite geographic latitude .the spatial filter ( in terms of image coordinates ) was used in context with the local dispersion model ( [ ssec : ldm ] ) , which did not prevail in the final reduction procedure and is mentioned here for completeness .the first step of the data reduction procedure is the assignment of each pixel to either target or background , hence defining an aperture to be used for photometry .this procedure is performed interactively .the acs provides a set of -errors for each image .these error values are defined on the interval $ ] in discrete steps of .if the true acs error exceeds this range , the maximum value is returned , indicating that the target was outside the nominal fabry lens area at least during some fraction of the integration time .consequently , readings with extreme acs error values like these are automatically identified and rejected by the software .both pointing problems and inhomogeneities of the fabry lens may cause the image to be deformed .a three - step procedure has been chosen to correctly identify images with a deviating shape . 1 . [ ssec : shape : step1 ]all intensities are normalised according to where it is sufficient to consider target pixels only . 2 .[ ssec : shape : step2 ] the mean normalised fabry image , , is computed .fig.[f : meanfabryimage ] displays the mean fabry image for .[ ssec : shape : step3 ] then the condition to keep the image is denoting the number of target pixels .all exposures not satisfying this condition are rejected .the reduction procedure actually applied to the most observations uniquely uses , consistent with a criterion .this choice is a result of pure practical experience .since the rejection of images in step [ ssec : shape : step3 ] influences the mean normalised fabry image , steps [ ssec : shape : step2 ] and [ ssec : shape : step3 ] are performed as a loop and terminated if no new rejection is performed in step [ ssec : shape : step3 ] .fig.[f : rmsfabryimage ] represents the rms deviations , , for in the first iteration .the inhomogeneous distribution of deviations on the subraster illustrates the importance of the pixel - resolved statistics described above .cosmic ray impacts are considered statistically independent and hence uncorrelated along the time axis . a cosmic ray hitting a given pixel does not influence subsequent readings of the same pixel .the principle of detecting cosmic - ray candidates is to compare each pixel intensity to one computed using a temporal dwa , which is conducted in three steps . 1 . for a given pixel intensity at the time ,a smoothed intensity is computed .the temporal rms deviation of the pixel intensities , , is evaluated .3 . the pixel is flagged as cosmic - ray candidate at the time , if this procedure is conducted as a loop , only taking into account those readings of a pixel which have not been flagged as cosmic - ray candidates in the previous iterations .the loop terminates , if no new cosmic - ray candidates are found .our application to most photometry consistently uses . 
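the two ingredients just described, the temporal dwa smoother and the iterative flagging of cosmic-ray candidates, can be sketched as follows. the weight function is written as [1 + (dt/delta)**2]**(-lambda/2), an assumed form that is only meant to be consistent with the -lambda/2 exponent appearing above, and the n-sigma clipping factor stands in for the fixed constant used in the actual reduction.

```python
import numpy as np

def dwa_smooth(values, times, delta=1.0, lam=2.0, mask=None):
    """distance-weighted average of a pixel light curve in the time direction.

    the weight of a neighbouring reading is [1 + (dt/delta)**2]**(-lam/2); for
    lam = 0 this reduces to an unweighted arithmetic mean, as noted above.
    mask marks the readings that may be used (e.g. not yet flagged)."""
    if mask is None:
        mask = np.ones(values.shape, dtype=bool)
    smoothed = np.empty(values.shape, dtype=float)
    for k, t0 in enumerate(times):
        w = (1.0 + ((times - t0) / delta) ** 2) ** (-lam / 2.0)
        w = np.where(mask, w, 0.0)
        smoothed[k] = np.sum(w * values) / np.sum(w)
    return smoothed

def flag_cosmic_candidates(pixel_series, times, n_sigma=5.0, **dwa_kwargs):
    """iterative flagging of cosmic-ray candidates in one pixel light curve.

    a reading is flagged when it deviates from the temporal dwa by more than
    n_sigma times the rms of the unflagged residuals; flagged readings are
    excluded and the loop repeats until no new candidate appears."""
    flags = np.zeros(pixel_series.shape, dtype=bool)
    while True:
        smooth = dwa_smooth(pixel_series, times, mask=~flags, **dwa_kwargs)
        residuals = pixel_series - smooth
        rms = np.std(residuals[~flags])
        new_flags = np.abs(residuals) > n_sigma * rms
        if not np.any(new_flags & ~flags):
            return flags, smooth
        flags |= new_flags
```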
among the previously flagged cosmic - ray candidates ,spatial information is used to decide whether a pixel represents a cosmic ray or not .the separation of cosmic rays and short - term stray - light variations is performed using the adjacent pixels on the same exposure .a cosmic - ray candidate is confirmed , if not more than adjacent pixels are cosmic - ray candidates , too .fig.[f : cosmics ] displays the detection rates of cosmic - ray candidates and cosmic rays ( boo , april 2005 ) . whereas the candidate detection rates form a doughnut shape ( _ left _ ) , which is due to the misidentification of short - term stray - light artefacts as cosmic - ray candidates , the rates of confirmed impacts ( _ right _ ) are nearly uniformly distributed over the fabry image .a value of turned out useful for practical application . only confirmed pixels are corrected .boo , april 2005 ) .the distribution of candidates reflects a misinterpretation of short - term stray - light effects , displaying a doughnut - shaped profile . as a significant improvement, the distribution of confirmed impacts is close to uniform , in agreement with the theoretical expectation for cosmic ray impacts . ]intensities of pixels confirmed as cosmic rays are substituted by the value of the temporal dwa computed in [ sssec : coscand ] .evidently , each cosmic ray correction introduces additional errors to the data point referring to a considered frame .hence , if the impact rate on a single frame exceeds a preselected threshold , the entire image is rejected . from our experience with most data ,this threshold is set to 10 .the principle is to consider the stray - light sources as superposition of point sources .each point source would produce a characteristic response function on the ccd image , and the combined stray - light contamination may be described as the integral of all these response functions .the resulting technique applied within our data reduction relies on consecutively resolving linear correlations between intensities of target and background pixels and will subsequently be called _decorrelation_. for a single point source ,the shape of the related response function should be invariant to changing stray - light intensity , and the changes of pixel intensities are proportional to the stray light .the stellar signal will not influence the decorrelation procedure systematically unless its period and phase are too close to the orbit . in this casethe period would be interpreted as stray light and removed .depending on the brightness of the target as well as stray light and stellar amplitudes , this has massive implications on the detection of intrinsic stellar variability with a period comparable to the most orbit or a harmonic . , but with background pixel ( 14/1 ) .colour coding refers to orbital phase , illustrating that the loop is associated with the motion of the spacecraft over one hemisphere and that ( in this case ) the satellite is susceptible to stray light depending on lateral illumination .the _ solid line _ refers to a modelled solution incorporating a stray - light pattern that moves across the detector , as described in [ sssec : lag ] . ] plotting the intensity of a target or background pixel versus that of a background pixel ( where no stellar signal is present ) yields a linear relation ( fig.[fig : corr ] ) . 
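before turning to the stray-light decorrelation, here is a minimal sketch of the spatial confirmation and correction steps described above. the maximum number of flagged neighbours is a placeholder for the value found useful in practice (elided above), the per-frame rejection threshold of 10 follows the text, and the smoothed cube is assumed to hold the temporal dwa values from the candidate-detection step.

```python
import numpy as np

def confirm_cosmics(candidates, max_adjacent=1):
    """spatial confirmation of cosmic-ray candidates.

    candidates : boolean array of shape (n_frames, ny, nx). a candidate is
    confirmed only if at most max_adjacent of its 8 neighbours on the same
    exposure are candidates too, separating genuine (mostly single-pixel)
    hits from short-term stray-light structures."""
    confirmed = np.zeros_like(candidates)
    ny, nx = candidates.shape[1:]
    for k in range(candidates.shape[0]):
        frame = candidates[k]
        padded = np.pad(frame, 1)                        # zero (false) border
        neighbours = sum(
            padded[1 + dy:1 + dy + ny, 1 + dx:1 + dx + nx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
        )
        confirmed[k] = frame & (neighbours <= max_adjacent)
    return confirmed

def apply_cosmic_correction(data_cube, confirmed, smoothed_cube, max_hits=10):
    """substitute confirmed hits by the temporal dwa value and flag frames
    with more than max_hits corrections for rejection."""
    corrected = np.where(confirmed, smoothed_cube, data_cube)
    keep = confirmed.reshape(confirmed.shape[0], -1).sum(axis=1) <= max_hits
    return corrected, keep
```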
in case of one or more _ extended _ stray - light sources , the intensity - intensity diagram for two individual pixels may display a loop ( fig.[fig : corrloop ] ) , indicating an influence from several point sources , but dominating at different orbital phases , according to the possibilities of light being scattered through the aperture or lateral light leaks , and referring to the discussions of stray - light sources in [ ssec : stray light ] and [ sssec : lag ] . hence , the contamination by stray light may be corrected by consecutively decorrelating a given pixel with respect to all background pixels .this is achieved by computing the slope of the linear regression in the intensity - intensity diagram .the correction of the linear trend is performed through preservation of the mean intensity .fig.[fig : decorsteps ] illustrates how the background contamination due to stray light is being significantly reduced when correlating target pixels with ( in the case of this figure , up to 95 ) background pixels .the portion of the stray light in the intensity - intensity diagram which can not be removed entirely in one decorrelation step i.e. systematic deviations from the trend line , for example a loop is likely to re - appear as a linear trend when a different background pixel is used in a subsequent step .this property provides the chance to correct for most of the stray light without explicitly introducing the orbital period or phase .the present method turns out to be considerably less invasive towards the characteristics of spectral noise than a frequently used method based on the residual intensity determined from a phase plot with the orbital period .finally , to take long - term stray - light changes into account , the complete dataset is divided into a sequence of subsets ( e.g. , three days long in the case of equ , figs.[fig : corr ] and [ fig : corrloop ] ) , and the stray - light correction is performed for these subsets individually . in the optimum case ,all of the stray light will be removed , but each decorrelation step necessarily reduces the signal in the target pixels .signals of 200ppm amplitude at 4 different frequencies were additively synthesized in each of the target pixels .the decay of these amplitudes for the sum of all target pixels as a function of number of decorrelation steps is given in fig.[fig : lossosignal ] .obviously , the first decorrelation steps dramatically reduce the stray - light amplitude and after about 100 decorrelations there is a constant , but less significant decrease of the stray - light amplitude for each new decorrelation . butalso the `` intrinsic '' signal is reduced in amplitude , independent of frequency .the amplitude decrease per decorrelation step is approximately constant for all decorrelation steps and corresponds to that for the stray light after many steps .this property allows one to reconstruct the initial amplitude even after a large number of decorrelations .this decrease in amplitude is due to the fact that in the case of pure noise the slope of a linear regression in the intensity - intensity diagram will scatter about zero , and that a trend correction will work for both positive and negative deviations . 
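a single decorrelation step and the cascade over background pixels can be sketched as follows: the slope of a linear regression of the pixel light curve against one background pixel is removed while the mean intensity is preserved, and the procedure is repeated consecutively for further background pixels. splitting the run into few-day subsets and the reconstruction of the intrinsic amplitudes are omitted here, and all names are illustrative.

```python
import numpy as np

def decorrelate_once(pixel_series, background_series):
    """remove the linear trend of one pixel light curve with respect to one
    background pixel while preserving its mean intensity."""
    slope = np.polyfit(background_series, pixel_series, 1)[0]
    return pixel_series - slope * (background_series - background_series.mean())

def decorrelate_cascade(pixel_series, background_matrix, n_steps=None):
    """consecutively decorrelate a pixel light curve against background pixels.

    background_matrix holds one column per background pixel. n_steps allows the
    cascade to be stopped early, since every step also removes a small, roughly
    constant fraction of genuine signal amplitude."""
    corrected = np.asarray(pixel_series, dtype=float).copy()
    n_background = background_matrix.shape[1]
    steps = n_background if n_steps is None else min(n_steps, n_background)
    for j in range(steps):
        corrected = decorrelate_once(corrected, background_matrix[:, j])
    return corrected
```

terminating the cascade early and using the later steps only to estimate the linear amplitude decay, as discussed above, is what the n_steps argument is meant to allow.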
as a consequence, the decorrelation process with a non - zero slope always tends to reduce but never to increase the target pixel intensities .the expected value of the fraction of noise that is removed will be proportional to the rms deviation of the underlying probability distribution of regression slopes , where the terminology `` noise '' applies to signal at periods different from the orbital as well , if intensity is plotted vs. intensity .simultaneously , the point - to - point scatter dramatically decreases with the first decorrelation steps , but shows asymptotically a shallower decay than the amplitudes .hence , an optimum signal - to - noise ratio is obtained by terminating the decorrelation loop long before all iterations have been performed , and to use the later steps only to determine the slope of a linear trend for reconstructing the amplitude ( solid line in fig.[fig : lossosignal ] ) .the extent of the fabry image is determined by the entrance pupil of the instrument , i.e. well - defined compared to direct - field images .however , we can not be absolutely sure to have pure background pixels in what we consider as background .the question addresses a problem of aperture photometry in general , and possibly the decay of signal amplitudes with increasing number of iterations is to some extent due to target intensity leaking into our reference pixels ( due to , e.g. , scattered light in impurities of the microlens ) .the decorrelation technique is by far more sensitive to this leakage than `` classical '' subtraction of background intensity .a background pixel containing target information would cause the entire target signal to be removed in a single step of the decorrelation cascade .we would immediately see this effect , e.g. in fig.[fig : lossosignal ] .the fact that the procedure works is the best indicator that the influence by the target on the background pixels is negligible .( _ dotted lines _ ) artificially introduced into the procyon data ( jan . to feb .2004 ) , with increasing number of decorrelations . _dashed line : _ point - to - point scatter as a noise level estimator ._ solid line : _ the linear decay of stray - light amplitudes at later decorrelation steps is exactly according to the trend in the amplitudes of synthetic signal . this property is used for reconstruction of the stellar signal . 
] a more subtle approach to the stray - light situation is to consider the projection of each point source moving across the detector , corresponding to the motion of the satellite above ground .this motion would cause two different pixels on the detector to be affected at different times , and the resulting time lag is expected to be larger for distant pixels .the formal consequence is to introduce a time lag between the correlation of two pixel intensities as a new free parameter and to perform an optimisation in the time direction through minimising the rms deviation of pixel intensities from the trend line in the intensity - intensity diagram ( fig.[fig : corr ] ) .we modelled theoretical intensity - intensity diagrams by computing the superposition of stray - light profiles of type ^2}\ , , \ ] ] denoting orbital phase , with adjustable amplitude , orbital phase at maximum , and width .a time lag is introduced by a different choice of for each of the two pixels compared .the solid line in fig.[fig : corrloop ] represents a theoretical intensity - intensity diagram for a superposition of two stray - light patterns : 1 .the loop is modelled by a stray - light bump sampled at both pixels with a lag of in terms of orbit phase .the width of the bump is set according to a fwhm in phase of .2 . a structure of fwhm in terms of orbit phase sampled at both pixels synchronously ( i.e. without a time lag ) produces the sharp peak below the loop .this example illustrates the plausibility of moving stray - light patterns to be responsible for the loop - shaped intensity - intensity diagrams. however , when the idea was applied to most measurements , the increased number of free parameters did not lead to reasonable results .although the postulate of static stray - light sources ( permitting synchronous variations of pixel intensities only ) is not claimed to be more reliable than the existence of moving stray - light artefacts since animated `` movies '' ( using mdm , see section [ sec : visual ] ) of the raw fabry images clearly show moving structures the quantitative description of this motion by a constant time lag between pixel intensities leads to a less efficient stray - light correction than the technique introduced in [ sssec : decorrelation ] for mainly two reasons : 1 .the number of decorrelation steps required to achieve optimum stray - light correction is not reduced substantially , 2 .the portion of signal removed with each decorrelation step ( fig.[fig : lossosignal ] ) increases dramatically , and the final extrapolation to initial amplitudes leads to a higher noise level . equ data ._ left : _ relative rms error ( rms error divided by the mean intensity of the pixel light curve ) .the higher relative scatter at the borders of the target image is due to photon noise , since the mean intensity in these regions is lower .in addition , some pixels inside the doughnut show a higher rms error than the rest . _right : _ amplitude noise level in the frequency range from 816 to 824 .the absence of pixels with abnormally high spectral noise inside the doughnut indicates a stray - light reminiscence . 
]as is illustrated in fig.[f : rms_fourier_noise ] , the scatter of the target pixel light curves is distributed inhomogeneously in the fabry image .the left frame shows the distribution of the relative rms error ( standard deviation divided by mean intensity ) inside the doughnut .the annular areas with higher relative rms error are due to the lower mean intensity at the borders of the doughnut .the higher rms error in various areas inside the doughnut is due to remaining stray light , but also due to a higher point - to - point scatter in the pixel light curve .this is shown in the right frame of fig.[f : rms_fourier_noise ] for the amplitude noise level of the target pixel light curves .the frequency range from 816 to 824d ( the nyquist frequency is for the data ) chosen for noise computation seems to be void of orbital , 1-day sidelobes , instrumental , and intrinsic peaks .as illustrated by fig.[f : compare_noise ] , some target pixels are `` noisy '' enough to decrease the quality of the total light curve . hence a proper selection of the target pixels used for the final light curve decreases the stray - light signal and the noise level in the frequency domain .equ light curve ._ solid line : _ sorted by relative rms error ._ crosses : _ sorted by relative rms error and used only when improving the resulting amplitude noise . _ bullets : _ sorted by individual amplitude noise and used only when improving the resulting amplitude noise ._ dotted line : _ amplitude noise of the best target pixel divided by the square root of the number of used pixels a measure for the photon noise . _ dashed line : _ amplitude in the dft spectrum at the orbit frequency divided by a factor of 20 for better visibility . ] in the case of , the solid line in fig.[f : compare_noise ] indicates the evolution of the amplitude noise level ( in the frequency range from 816 to 824 ) with increasing number of target pixels used for the resulting light curve .the target pixels are sorted by increasing relative rms value .obviously , the amplitude noise level in the amplitude spectrum decreases rapidly when adding the signal of the `` best '' pixels . adding more pixels of lower quality up to about 180 pixelsdoes not improve the noise level in the resulting amplitude spectrum .further increasing the number of pixels starts to increase the noise level in the resulting light curve .this is mainly due to pixels at the border of the doughnut with low signal and adverse photon statistics .a closer inspection shows that occasionally the amplitude noise level increases although a rather good pixel has been added ( e.g. 
, close to the 20 pixel ) .the reason for such a paradoxial result is random constructive interference leading to higher mean amplitude in the chosen frequency range , even if two low - noise datasets are combined .using only those pixels which _ improve _ the resulting amplitude noise level leads to a significantly better quality of the final light curve ( see crosses in fig.[f : compare_noise ] ) .the relative rms error as a superposition of signal and point - to - point scatter is obviously not the best criterion for the `` noise '' of a target pixel light curve .another examined sorting criterion is the mean amplitude level ( here again in the frequency range of 816 to 824 ) .the dots in fig.[f : compare_noise ] display the evolution of this noise level when the target pixels are sorted by their individual amplitude noise and used for constructing the final light curve only if adding such a pixel improves the total light curve , i.e. reduces the amplitude level in the mentioned frequency interval .a total of 106 target pixel light curves were accepted in the given case , leading to an amplitude noise of in the relevant frequency range. however , there is no significant difference in the resulting light curve between amplitude noise and relative rms error in the time domain ( crosses in fig.[f : compare_noise ] ) as sorting criteria .the resulting light curve is also insensitive to the choice of this frequency range , if void of stellar or instrumental signal .the dashed line in fig.[f : compare_noise ] is associated the fourier amplitude at the orbit frequency of illustrates that the influence of the choice of incorporated target pixels on the remaining stray - light peaks is negligible . in fig.[f : compare_noise ] , this amplitude is rescaled by a factor for better visibility . ''symbols indicate pixels used for the final light curve .the line refers to the predicted photon noise for these 421 pixels . ]the dotted line in fig.[f : compare_noise ] illustrates the theoretical improvement of the amplitude noise level with an increasing number of used target pixels assuming the same quality for all these pixels . only the first ( i.e. best ) 30 target pixels appear to be photon noise limited .but what we see is caused by the different mean intensity level of the various target pixels .fig.[f : phot_noise ] illustrates the dependence of the amplitude noise level of target pixel light curves on their mean intensity ( dots ) .those target pixels finally used are marked by `` '' symbols , and it is obvious that most ( about ) of the target pixels are close to the photon noise limit .equ light curve based on all ( _ grey _ ) and on the 106 selected target pixels ( _ black _ ) .the arrows indicate the 7th and 8th harmonics of the orbit period . 
]the improvement of the light curve due to a proper pixel selection is illustrated in fig.[f : ampspec ] by showing the amplitude spectrum of in a frequency range with already reported pulsation modes .the grey line in the background indicates the dft of the light curve based on all target pixels .this spectrum is crowded with instrisic , orbital overtone , and 1-day sidelobe peaks .the black line corresponds to the dft of the light curve based on the selected target pixels and shows almost exclusively peaks not associated to any orbit harmonic ..comparison of stray - light amplitudes and frequency - domain noise between raw and reduced data for most targets observed from oct 2003 to march 2005 , _ date : _ time of observation ( yymmdd ) ; _ target : _ target name ; _ orbit ( raw ) , orbit ( red ) : _ mean amplitude ( mmag ) of orbital frequency and harmonics up to order , for raw and reduced data , respectively ; _ interval : _ frequency interval ( d ) used for calculation of amplitude noise ; _ noise ( raw ) , noise ( red ) : _ amplitude noise ( mmag ) , for raw and reduced data , respectively .[ cols="^,<,>,>,^ , > , > " , ] after correcting for stray light , which is the most important aspect of the data reduction for most , a weak correlation of the photometry with various spacecraft parameters may still be present . among the variety of parameters available and examined , residual correlations with the photometryare only found for the sub - satellite geographic latitude , the ccd temperature ( only before summer 2004 ) , and the acs deviation ( only for commissioning and very early science targets ) . in the case of ,stray - light correction and proper target pixel selection reduce the stray - light contribution to the photometric signal by a factor 1700 , from to .a comparison of the raw and corrected mean target pixel intensities is given in fig.[f : geolat ] .the top panel shows the raw data ( mean of all pixels per frame , scaled to fit the figure size by dividing by 100 , and offset by 2 adu per second for better visibility ) as a function of geographic latitude .the bottom panel displays the stray - light corrected photometry after stray - light correction and target pixel selection .long - term variations were removed from the data by subtracting a moving average .only a marginal dependence of intensity on the sub - satellite geographic latitude remains .this residual correlation is corrected by subtracting a dwa ( grey line ) in the intensity vs. latitude plot and has no influence on the amplitude noise level , but reduces the amplitudes of the higher orbit harmonics .equ photometry versus sub - satellite geographic latitude ._ _ top : mean target pixel intensities of the raw data ( divided by 100 and offset by for better visibility ) show a stong stray - light component at northern latitudes ._ bottom : stray - light corrected photometry after optimum target pixel selection . only a marginal dependency of intensity on sub - satellite geographic latitudes remains ( grey line ) ._ _ _ ] before summer 2004 , an on - board clocking problem caused a beat between the acs and science clocks . 
as a consequence of this beat ,a spurious signal with a frequency of and harmonics of it are found in the most photometry obtained before mid 2004 ( see fig.[f : ccdtemp ] , bottom ) .this variation can also be found in a time series of the corresponding ccd temperature readings ( fig.[f : ccdtemp ] , top ) .the apparent temperature signal is an artefact of the timing beat and does not represent a real ccd temperature variation , so there is no correlation with ccd signal .hence , this instrumental effect is eliminated by prewhitening frequency by frequency , being aware of the instrumental origin of the corresponding peaks in the amplitude spectrum .equ light curve ( _ _ bottom ) and the corresponding ccd temperature time series ( _ top ) .the photometric data show a variation with a frequency of about ( plus the first and second overtone ) which is also visible in the dft of the ccd temperature . _ _ _ ] the acs pointed the telescope at with an average accuracy of about , which is fairly below the physical pixel size of and much better than what was achieved , e.g. , for the first most science target , . as illustrated in fig.[f : acsmean ] , there is no correlation between the photometric data and the mean acs error radius .the latter is the rms deviation of the line - of - sight from the acs centre during one integration and an estimator for the pointing stability .fig.[f : acsmean ] also proves the homogeneity of the ccd sensitivity on a subpixel scale .equ photometry versus mean acs error radius .the horizontal bar represents the physical pixel size of most . ]the idl program most data movies ( mdm ) is a visualisation tool for sds2 formatted data . in particular during the early attempts of understanding the instrumental properties it proved very helpful to `` see '' the effect of various reduction steps for individual frames as well as for pixel light curves .in addition , we wanted to correlate the most position with peculiarities in frames and light curves . for a convenient usage of mdm ,several options are provided , like various scaling , color coding , speed adjustment for the animated frame sequence , and toggling stepwise back- or forward in time . the most helpful option for optimising the reductionroutine is the simultaneous display of two data sets .for example , one can immediately compare the original with the corrected light curve .fig.[fig : mdm ] shows the original and reduced light curves of , the original and reduced fabry images ( doughnuts ) of a selected instance ( marked by a pointer below the light curve ) in the left bottom corner , next the acs errors reported for this given exposure and the respective sub - satellite position on ground , colour coded ( from red to black ) with the magnetic field measured on - board .boo 2005 ) .scaling of time and magnitude , as well as animation range and speed are adjustable .furthermore , the software provides monitoring of a variety of features in the lower panels ( in this case , from left to right : raw sds2 image , reduced sds2 image , acs error track , spacecraft track ) . ]the mdm software is available on request .the sds1 data format represents row and column sums of pixel intensities rather than data resolved in two dimensions , which has several implications on the reduction procedure .* there are only 8 binned pixels ( i.e. the 2 pixels at each margin of row and column sums ) that may safely be defined as background . 
*the image geometry may be treated as similar to sds2 data ( see [ ssec : shape ] ) , but in general , local bumps in the doughnut can not be resolved in the binned data .* reliable automated identification of cosmic ray hits ( according to [ ssec : cosmicscor ] ) is nearly impossble , because in the sds1 pixel binning , one or a few pixels affected by a cosmic ray will only produce a small excess in the total integrated signal .of course , this also means that most cosmic ray hits produce relatively small outliers in the most sds1 photometry . *the restriction to 4 background pixels permits only 4 decorrelation steps in the stray - light correction procedure ( as described in [ ssec : strayl ] ) .these aspects of sds1 data limit the amount of processing possible on the ground , which was always recognised as the trade - off for being able to back up still useful photometry on board for many days to avoid possible gaps in the high - duty - cycle time series . instead of applying the standard procedure described above to the sds1 data , a more promising idea was to compute the differences between the reduced target intensity and the integrated intensity of the raw image for all sds2 frames and to interpolate these differences in time to obtain an estimated background correction for the sds1 measurements .the noise level of these is still expected to be higher than that of the sds2 light curve , but the larger number of readings may nevertheless reduce the noise level of the fourier amplitudes . as an example , in the procyon 2005 photometry , the number of data points in sds2 format is only , compared when sds1 data are included .the corresponding amplitude noise level of sds1 data is reduced by about relative to sds2 alone .fig.[fig : sds1 ] compares the sds1 and sds2 light curves for procyon 2005 .the fact that we gain only 20% in terms of amplitude noise is due to the poorer quality of the sds1 exposures .while approaching what we think is now a well - developed reduction tool for space ccd photometry , we followed various ideas which we finally discarded , but which may be interesting in a different context , or may be traps for researchers in a similar situation as we were .for example , we investigated various methods for stray - light correction .the simplest approach , similar to classical aperture photometry , was to subtract a mean background determined independently for each frame .not surprisingly , this approach was too simple because of the complex stray - light pattern produced by the fabry lens . the stray - light complication led us to develop a stray - light model taking the neighbouring background pixels into account .we used the spatial dwa ( according to eq.[eqwhittakerspace ] ) to describe the correlation between one pixel and its environment in the image and called this stray - light model a _ local dispersion model _ ( ldm ) .fig.[f : ldm ] shows some ldm background models for the data with different smoothing parameters .choosing as the only free parameter did not lead to a satisfactory stray - light correction .( 0 , 2 , 5 , and 10 ) . 
]another stray - light correction scheme took advantage of an obvious direct correlation between the mean background and the target intensity , which suggested that one should compute the mean background as the normalised bivariate moment of order zero .a correction of the mean background by calculating a regression of the target intensity relative to a mean background and correcting for a slope led to clearly better results than the ldm described in [ ssec : ldm ] .this encouraged us to investigate higher order correlations using where indices , refer to background pixels only .the result is a bivariate polynomial fit of order to the background pixel intensities , which is obtained by setting applying various moment orders led to quite impressive results , but the stray - light peaks still dominated the light curve ( fig.[f : moments ] ) .the `` lens profile '' is intended to indicate optical distortions and disturbances by flawed areas on the fabry lens .it is obtained by an analysis of individual most - images and allows estimating the influence of lens properties on the light curve as well as eventually finding an algorithm to correct for lens inhomogeneities . impurities in the lens , like small inclusions or bubbles ,will result in a reduced signal every time light passes through such blemishes . in a first step we simulated a constant light source , which was performed for by prewhitening the light curve with 20 frequencies .then we correlated the residual intensity to the acs error values stored in the image headers , indicating the deviation of the line - of - sight from the ideal position referring to the centre of the frame .usually there exists more than one acs error value for an image , because the acs system acquires a reading every second , while the typical integration time for a single frame is seconds and longer . in the case of ( seconds integration time ) , up to acs error values describe the movement of the target across the aperture during a given exposure .it is therefore impossible to immediately assign an erroneous data point to a specific acs value . to overcome this problem , we used a grid of acs error values and classified the boxes as `` good '' or `` bad '' according to a chosen threshold for the residual to the `` constant '' light curve .`` good '' boxes are those where at least one data point has been obtained with a value below the chosen threshold and which contains an acs value corresponding to the given box .`` bad '' boxes obviously are those where no such frames are found . changing the criterion from `` one '' good data point to `` '' good data points for defining a good box does not change the result substantially .the lens profile based on data obtained for ( fig.[fig : flens ] ) shows that the given fabry lens is perfect except for the outermost area , where the acs is at its limit .the pointing precision has significantly improved since the beginning of most observations .now it is practically impossible to produce a reliable lens profile using this technique .however , in case the acs gets worse and lenses get damaged we might have to return to this tool for properly identifying bad data points .for the time of this report and as a conclusion , we do not have indications of fabry lenses influencing the data quality .for most targets ( table [ tabquality ] ) observed from october , 2003 , to march , 2005 , the typical fraction of exposures transmitted in the sds2 format was 41% . 
for most targets (table [tabquality]) observed from october 2003 to march 2005, the typical fraction of exposures transmitted in the sds2 format was 41%. about 52% of the pixels in a fabry frame were initially used for defining the target aperture mask. out of all sds2 exposures, % had to be rejected due to acs problems ([ssec:rejection]), and % due to a distorted fabry image geometry ([ssec:shape]). close to 0.47% of all sds2 exposures contained pixels apparently affected by cosmic rays ([sssec:coscand]), but only 47% of these conspicuous pixels finally turned out to be corrupted by cosmic rays ([sssec:cosaffect]). 5% of sds2 exposures had to be rejected due to an excessive accumulation of cosmic rays ([sssec:cosrates]). between 18 and 224 decorrelation steps were used for the final reduction ([sssec:decorrelation]), with typically 56% of the pixels defined in the initial target aperture mask used for the resulting light curves ([ssec:pixsel]). the reduction method described in this paper was applied to most fabry image photometry and relies on the following steps: * construction of a ``data cube'' of all ccd frames, including all fits header information, to which all software components consistently refer * definition of target and background pixels * rejection of images deviating significantly from the average * cosmic ray correction (correction of individual pixels, or elimination of an entire image if the number of pixels to be corrected is above a chosen threshold) * stray-light correction, the core of our reduction tool. this correction is based on the assumption that the stray-light sources superpose for a given pixel as individual point sources. this property allows one to compensate for the orbit-modulated stray light by decorrelation of background and target pixel intensities, thus avoiding any period folding techniques. the amplitudes of orbit harmonics are reduced by up to four orders of magnitude. * compensation for reduced intrinsic amplitudes due to repeated decorrelations * selection of the best suited pixel light curves for merging into the final target star light curve * check for remaining correlation of the photometry with spacecraft parameters the software package is very efficient and provides the reduction of a large amount of data in a pipeline fashion. the improvement in data quality achievable with our method is displayed in table [tabquality], containing a comparison of both stray-light amplitude and frequency-domain noise for raw vs. reduced data. a comparison with other reduction techniques clearly indicates that no artefacts are introduced in the photometry. this is a considerable advantage when interpreting complex frequency spectra with amplitudes close to the noise level. simple binning of consecutive orbits and a corresponding trendline correction will perform at least as well as the decorrelation technique in terms of noise. but the noise level is not the only (and also not the best) quality estimator. time-domain binning leads to the introduction of a periodic frequency-domain filter function. in other words, the spectral noise level is not uniformly distributed in frequency, which has serious implications for the detailed analysis of individual frequencies. the noise level can be pushed down considerably while paying the price of distortion, which we consider less convenient than the slightly higher, but undistorted, noise. although there are frequency regimes and timescales of intrinsic stellar variation where the better photon statistics of binned data may be more desirable for the task at hand than e.g.
eliminating pixels from the fabry image and increasing the overall poisson noise , we are generally convinced that a low noise level is not the only thing to strive for .this is why we completely omit manipulations of the total light curve ( like binning of orbits ) without carefully examining the origin of the contamination we correct for .d.f . , m.g . , d.h ., t.k . , d.p .s.s . , and w.w.w .received financial support from the austrian ffg areonautics and space agency and by the austrian science fonds ( fwf - p17580 ) . the natural sciences and engineering research council of canadasupports the research of d.b.g ., a.f.j.m ., s.m.r . , and g.a.h.w .; a.f.j.m .is also supported by fcar ( quebec ) .is supported by the canadian space agency .the most ground station in vienna was developed and is operated in cooperation with the vienna university of technology ( w. keim , a. scholtz , v. kudielka ) .99 appourchaux , t. , catala , c. , cornelisse , j. , frandsen , s. , fridlund , m. , frhlich , c. , gough , d.o . , hoyng , p . ,jones , a. , lemaire , p. , roxbourgh , i.w . , tondello , g. , volonte , s. , weiss , w.w .1993 , _ report on the phase a study _ ,esa sci(93)3 baglin a. , auvergne m. , barge p. , buey j .-, catala c. , michel e. , weiss w.w . , and the corot team 2004 , _ proceedings of the first eddington workshop _ , f. favata , i.w .roxburgh , d. galadi eds ., esa - sp , 485 , p. 17borucki , w.j . , koch , d. , basri , g. , brown , t. , caldwell , d. , devore , e. , dunham , e. , gautier , t. , geary , j. , gilliland , r. , gould , a. , howell , st ., jenkins , j. 2003 , _ proceedings of the conference on towards other earths : darwin / tpf and the search for extrasolar terrestrial planets _ , m. fridlund , t. henning eds . , esa sp-539 , p. 69brown , t.m . ,cox , a.n .1986 , in _ stellar pulsation ; proceedings on the conference held as a memorial to john cox at the los alamos national lababoratory _ , a.n .cox , w.m .sparks , s.g .starrfield eds . ,lecture notes in physics , 274 , 415 brown , t.m . , torres , g. , latham , d.w .1995 , baas , 27 , 1381 buzasi , d. , catanzarite , j. , laher , r. , conrow , t. , shupe , d. , gautier iii , t.n . ,kreidl , t. , everett , d. 2000 , apj , 532 , l133 carroll , k.a . , rucinski , s. , zee , r.e .18th annual aiaa / usu conference on small satellites_. favata , f. , roxburgh , i.w ., christensen - dalsgaard , j. 2000 , _ assessment study report _ , esa sci(2000)8 fridlund , m. , gough , d.o . , jones , a. , appourchaux , t. , badiali , m. , catala , c. , frandsen , s. , grec , g. , roca cortes , t. , schrijver , k. 1995 , in _gong94 : helio- and asteroseismology from the earth and space _ ,ulrich , e.j .rhodes jr . ,w. dppen eds ., asp conf .ser . , 76 , 416 groccott , s.c.o . , zee , r.e . ,matthews , j.m .2003 , in _17th aiaa / usu conference on small satellites _ guenther , d.b . ,kallinger , t. , reegen , p. , weiss , w.w . ,matthews , j.m . ,kuschnig , r. , marchenko , s. , moffat , a.f.j . ,rucinski , s.m . , sasselov , d. , walker , g.a.h .2005 , apj , in press hodrick r.j ., prescott e.c .1997 , journal of money , credit , and banking , 29 , 1 kjeldsen , h. , bedding , t.r . ,frandsen , s. , dall , t.h . ,thomsen , b. , christensen - dalsgaard , j. , clausen , j.v . ,petersen , j.o ., andersen , m.i .1999 , in _ stellar structure : theory and test of connective energy transport _ , a. gimenez , e.f .guinan , b. montesinos eds . , asp conf . ser ., 173 , 353 kuschnig r. , et al .2005 , in preparation mangeney a. , praderie f. 
1984 , in _ space research prospects in stellar activity and variability _ , proceedings of a workshop from feb . 29 , to march 2 , 1984 , observatoire de paris - meudon , a. mangeney & f. praderie eds .matthews , j.m .2004 , aas 205 , 13401 matthews , j.m ., kusching , r. , guenther , d.b . ,walker , g.a.h . ,moffat , a.f.j . ,rucinski , s.m . , sasselov , d. , weiss , w.w .2004 , nature , 430 , 51 rowe , j. , et al .2005 , in preparation rucinski , s.m . ,walker , g.a.h . ,matthews , j.m . ,kuschnig , r. , shkolnik , e. , marchenko , s. , bohlender , d.a . ,guenther , d.b . ,moffat , a.f.j . , sasselov , d. , weiss , w.w .2004 , pasp .116 , 1093 schou , j. , scherrer , p.h . ,brown , t.m . ,frandsen , s. , horner , s.d ., korzennik , s.g ., noyes , r.w . ,tarbell , t.d . ,title , a.m. , walker , a.b.c.ii , weiss , w.w . ,bogart , r.s . , bush , r.i ., christensen - dalsgaard , j. , hoeksema , j.t . , jones , a. , kjeldsen , h. 1998 , in _ structure and dynamics of the interior of the sun and sun - like stars _ , soho , 6 , 401 vuillemin , a. , tynok , a. , baglin , a. , weiss , w.w ., auvergne , m. , repin , s. , bisnovatyi - kogan , g. 1998 , experimental astronomy , 8/4 , 257 walker g. , matthews j. , kuschnig r. , johnson r. , rucinski s. , pazder j. , burley g. , walker a. , skaret k. , zee r. , groccott s. , carroll k. , sinclair p. , sturgeon d. , harron j. 2003 , pasp , 115 , 1023 weiss , w.w .1993 , asp conf .708 whittaker e.t .1923 , _ on a new method of graduation _edinburgh math .soc . , 41 , 63
the most ( microvariability & oscillations of stars ) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles . astronomical photometry or imaging missions in low earth orbits , like most , are especially sensitive to scattered light from earthshine , and all these missions have a common need to extract target information from voluminous data cubes . they consist of upwards of hundreds of thousands of two - dimensional ccd frames ( or sub - rasters ) containing from hundreds to millions of pixels each , where the target information , superposed on background and instrumental effects , is contained only in a subset of pixels ( fabry images , defocussed images , mini - spectra ) . we describe a novel reduction technique for such data cubes : resolving linear correlations of target and background pixel intensities . this stepwise multiple linear regression removes only those target variations which are also detected in the background . the advantage of regression analysis versus background subtraction is the appropriate scaling , taking into account that the amount of contamination may differ from pixel to pixel . the multivariate solution for all pairs of target / background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to , e.g. , stray light . the technique is tested and demonstrated with both simulated oscillation signals and real most photometry . methods : data analysis space vehicles : instruments techniques : photometric .
e-commerce and social-media platforms such as amazon, ebay, facebook and tencent https://en.wikipedia.org/wiki/tencent_qq[qq] offer their users facilities to buy, review, sell and share online items. the adoption of social media is increasing day by day, and so is e-commerce. because of this wide adoption, organizations are paying increasing attention to these platforms in order to learn about people's demands and trends. researchers have also found correlations between social media and the e-commerce industry. social media offers a great opportunity by carrying a huge amount of user-generated data that can be exploited to find the outcome of interest (asur, huberman, and others 2010). on e-commerce sites, organizations are directly interested in the responses of users to their products. they also want to know which kinds of items will be in demand so that they can make a profit. since the number of products is expanding constantly, online merchants have changed their strategies from traditional marketing advertisement (tv, newspapers, etc.) to viral marketing, i.e. customers are encouraged to share product information with their friends on social media such as facebook and twitter (leskovec, adamic, and huberman 2007). from infotainment to trade, everything is done on the internet, and online content has become a valuable internet asset (tatar et al. 2014) that is useful to both producers and consumers; hence knowing the future popularity of online content has become an important area of attention and interest. popularity prediction is a complex task that depends on different factors such as quality and individual interest. content popularity may fluctuate with time (eisler, bartos, and kertesz 2008), increase over time, or remain limited within communities. it is difficult to capture the relationship between real-world events and web content in a prediction model; for example, during the indian elections many jokes went viral even though they did not explicitly include any political symbol. some content becomes extremely popular because of its prior popularity, also known as the _cascading effect_ (cheng et al. 2014), and it is hard to predict which content will stop this cascading effect. in the presence of the cascading effect, other ``potential items'' (items that exhibit a sudden increase of popularity during a certain period of time, i.e. that can become popular, but are not popular now) are suppressed. most recently, researchers have found that the popularity of online content like news, blog posts, videos and mobile app downloads (gleeson et al. 2014), as well as posts in online discussion forums and product reviews, exhibits temporal dynamics. it turns out that user interest in web items varies with time; extensive studies of how the temporal popularity patterns of online media grow and fade over time are presented in (yang and leskovec 2011), (leskovec, backstrom, and kleinberg 2009). besides the works on the temporal scale, other researchers found that content popularity is influenced significantly by consumers' social relationships (zeng et al. 2013), (szabo and huberman 2015): the social network of consumers can enhance prediction performance, and the social dynamics of consumers influence the popularity of social media content even more significantly. because models of both temporal dynamics and social dynamics are always complex and heavily parameterized, it is hard to apply those models to real online systems. the rest of the paper is organized as follows. in section 2 we formally define the problem.
in section 3 we introduce the baseline method. in section 4 we propose a new method to solve the problem. in section 5 we discuss the methods and materials for the experiments and also discuss insights from the results. in section 6 we conclude the paper with possible future work. in our model we consider a bipartite network which consists of a set of users ( ) and a set of objects ( ). the popularity of a node (item or object) is the total number of links received by the object ( ). a bipartite network can be represented by an adjacency matrix , with if user ( ) has consumed object ( ). to consider the temporal effect on an object's final popularity we take snapshots of the network at different time points. let denote the adjacency matrix of the snapshotted network at time ; then the matrix ( ) contains only edges created between user ( ) and object ( ) before time ( ). the user and object degrees can be computed by and , respectively. is the future time window, so the popularity, or increment in degree, of an object is given by- the popularity prediction problem can be defined as follows. given a data set s which includes information about ratings by users that have rated or consumed items/objects with a time-stamp (e.g. user id, item id, time), we arrange the data set in ascending order by time and divide it into a training time t and a future time window . it is obvious that interest towards an object varies with its type or category: an online video object is different from items on amazon. considering these facts we have chosen a training time that helps us to predict with better accuracy in the future time window. a predictor exploits the information before time ( ) and makes a prediction for the future time window ( ). is the real score of object ( ). a predictor's performance is measured by calculating its accuracy, comparing the predicted ranking with the real ranking. as a baseline method we have considered the state of the art for predicting new entries, as far as our knowledge goes. (zeng et al. 2013) proposed popularity-based predictors (pbp). it is based on the well-known _preferential attachment_ theory, which states that popularity increases cumulatively; the rate of new link formation for any node (an item receiving a rating in the case of movielens, or a friend liking or commenting in the case of facebook wall-post activity) is proportional to the observed number of links which the node has received in the past. if an item is popular at time , then it will probably remain popular, since the current degree of an item is a good predictor of its future popularity. further, ((gleeson et al. 2014), (zeng et al. 2013)) have found that the current degree is a good predictor of an item's future popularity. (zeng et al. 2013) propose to calculate the prediction score of an item at time as follows- where is the rating/links received in the past time window from . note that gives the total popularity and for it gives recent popularity. throughout the script, by popularity we mean the number of ratings or links received by an item or node.
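a minimal sketch of how such a popularity-based score can be computed from a time-stamped edge list is given below. the exact way in which total and recent popularity are combined in pbp is not recoverable from the text above, so the linear mixing with a parameter lam, as well as all variable names, are our assumptions.

```python
import numpy as np

def pbp_scores(link_times, t, tp, lam):
    """Popularity-based prediction scores for all items.

    link_times : dict mapping item id -> array of times at which the item received links
    t          : end of the training window
    tp         : length of the past time window
    lam        : assumed mixing parameter; lam = 0 ranks items by total popularity,
                 lam = 1 by the popularity gained in the window [t - tp, t]
    """
    scores = {}
    for item, times in link_times.items():
        times = np.asarray(times)
        k_total = np.sum(times <= t)                        # total popularity
        k_recent = np.sum((times > t - tp) & (times <= t))  # recent popularity
        scores[item] = (1.0 - lam) * k_total + lam * k_recent
    return scores
```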
it is obvious that the popularity of an item on social media does not last forever. in addition, the decay rate may vary from item to item; e.g. the life cycle of the popularity of a movie will be different from that of news or of items on digg or facebook. every type of item has its own decay rate; for instance, (parolo et al. 2015) found that the citation rate of research articles decays after some time. the matthew effect, or preferential attachment, is a well-known phenomenon seen in almost all large-scale networks that show a power-law distribution (et.al 1999). these theories explain that the rich get richer and the poor get poorer, and researchers have found that the degree distribution of every such network ends up long-tailed. this is also true in the case of e-commerce user-item bipartite networks. people chase the popularity of items to optimize their time and energy. there are good or fit items (matus medo manuel s. mariani and zhang 2015), (bianconi and barabsi 2001) to be consumed, but under the influence of preferential attachment they are ignored. under this influence, finding the fit or potential items is one of the important tasks among researchers. researchers have also found that networks change structure due to an aging factor over time [(h. zhu, wang, and zhu 2003)]. considering these factors we propose that the recent gain in popularity together with the decay in popularity is a good predictor of an item's future popularity. considering the competing behaviour in networks (bianconi and barabsi 2001), if some items are losing their popularity then other items should be gaining the attention of consumers. therefore, decay factors combined with recent popularity will help us in detecting ``potential items''. recent popularity as one of the important factors in discovering the final popularity of objects is discussed in (j.-p. onnela and reed-tsochas 2010). we also know that considering all the features that affect the popularity of content is a really difficult task. if is the prediction score at time given the past time window , we can say- the above equation states that the score of an object is proportional to its recent gain in popularity. is a tunable parameter between recentness and total popularity; it can take values in the [0,1] interval. as researchers have also found an aging phenomenon in items or nodes, we can formulate it as follows- where denotes the time at which user consumed the object and is a free parameter. since recent popularity will be a good predictor only if the decay rate is constant, we can now write as follows- again we can write- where is a normalization constant and can be estimated using the following equation . for testing our proposed predictor's efficiency we have considered the popularity-based predictor (pbp) of (zeng et al. 2013) as the base predictor. we took the average of 10 results. three evaluation metrics are adopted to measure the accuracy of the proposed model: _precision_, _novelty_ and the _area under the receiver operating characteristic_ ( ); a short code sketch illustrating the proposed score and the precision metric is given below, after the metric definitions and the data description. * _precision_ is defined as the fraction of the predicted top objects that also lie in the top objects of the true ranking (herlocker et al. 2004). where is the number of common objects between the predicted and real rankings and is the size of the list to be ranked. its value ranges in [0,1]; a higher value of is better.
* _novelty ( )_ is a metric that measures the ability of a predictor to rank in the top positions items that were not in the top positions in the previous time window. we call these new entries ``potential items'' throughout the script. if we denote the predicted objects as ( ) and the potential true objects as , then the novelty of a model is given by- * _auc_ measures the relative position of the predicted items and the true ranked items. suppose the predicted item list is ( ) and the real item list is ( ). if and is the score of an object in the predicted list, then _auc_ is given by- where , to test the predictors' accuracy we have used different data sets, namely the movielens, netflix and facebook wall-post datasets. the movielens and netflix data sets contain movie ratings and the facebook data set contains users' wall-post relationships. movielens is provided by the link:www.grouplens.org[grouplens] project at the university of minnesota. the data description can be found on the website. while preparing the data for our model we selected a small subset from each dataset by randomly choosing users who have rated at least movies. the original ratings were numerical; we considered a link between a user and an object only if the object received a rating higher than two. for all three datasets, facebook, movielens and netflix, time is measured in days. the data description is as follows- * *netflix* data contains users, movies and links; the data was collected during (1st jan - dec 2005). * *movielens* dataset contains movies, links and users; the data was collected during (jan - jan 2005). * *facebook* data contains a set of users, their wall-post activity and links, during the period (14 sep - jan). if a user has posted on a wall there will be a link between the user and the wall; self-influence is removed by removing the link between a user and their own wall posts. to evaluate the performance of our predictors we have selected 10 random values of t for each data set. the selection of t is made in such a way that the predictor has enough history information. since the predictors are based on an object's history, we have selected only those objects that have received at least one link before time .
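as referenced above, the following sketch illustrates the proposed score and the precision metric. because the model equations did not survive the extraction, the exponential ageing kernel with decay rate gamma should be read as one plausible implementation of ``recent popularity damped by decay'', not as the authors' exact formula; the precision helper follows the definition given above, and all names are ours.

```python
import numpy as np

def decayed_recent_scores(link_times, t, tp, gamma):
    """Score items by recent popularity damped by an ageing factor.

    Every link received by an item at time tl inside the past window [t - tp, t]
    contributes exp(-gamma * (t - tl)) to the score (assumed functional form).
    """
    scores = {}
    for item, times in link_times.items():
        times = np.asarray(times)
        recent = times[(times > t - tp) & (times <= t)]
        scores[item] = float(np.sum(np.exp(-gamma * (t - recent))))
    return scores

def precision_at_n(predicted_ranking, true_ranking, n):
    """Fraction of the top-n predicted items that also appear in the top-n true ranking."""
    return len(set(predicted_ranking[:n]) & set(true_ranking[:n])) / float(n)
```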
figure 1: the performance of the proposed method for different values of (top n items in a list), for the three data sets movielens, netflix and facebook wall post. in [figure 1] above we have shown the proposed predictor's performance for different values of , i.e. considering the recent popularity with varying history length, because corresponds to an object's total popularity and to an object's recent popularity. we consider the decay rate as fixed, because we found that this gives better results. from [figure 1] we can see that is much more affected by than (novelty) and . all three metrics improve with ; in other words, recent popularity is a better predictor than an object's total popularity. higher values show that the proposed predictor has a better ability to predict the popularity of objects than the base method, while our predictor is also better able to predict the popularity of novel items ( ), i.e. the items that were not popular in the past. these items are ``potential items''; they will help in abating the centrality of the item degree distribution. generally these items are suppressed by items that have already gained popularity. our proposed predictor has also shown improvement over the base method. in the analysis of the decay rate ( ) and recent popularity we found that the decay rate is very low for all three datasets. considering decay together with recent popularity improves accuracy. we also found that considering decay helps more in digging out new entries. although the aging factor improves the accuracy, recent behaviour still dominates. we have also found that in the presence of quality items people lose interest in old items, which is why improves when considering the decay factor with recent popularity. even if an item is not globally popular, people like the items that were liked by their peers recently. empirical results are as follows- figure 2: the performance of our proposed predictor for different values of and (decay rate); we have considered and both as days. in search of our predictor's behaviour we considered different values of and plotted [figure 2] the accuracy against . although is a free parameter, to understand its effect we considered 10 values in [0,1]. we combined the recency of the links collected by a node with the decay rate to see their joint effect. we found that popular items last longer on movielens, since precision is better for . we found that people adopt new items from the recent behaviour of their peers.
in other words, for trying new things they rely on their peers' behaviour, since is not much affected by the decay rate compared to recent behaviour. in the case of netflix we found the same behaviour as for movielens: popular items last for a long time. in the case of facebook we found that people rely more on the age of a post than on the recent behaviour of their friends, while more recent posts shared by their friends are better entertained. on facebook users not only share or comment on their friends' posts but can also create their own posts, which others can share or comment on. that is why, for potential items on facebook, a user depends not only on the age and recent activity of peers but also on node centrality; who has shared the post also matters. that is why predicting newly popular items on facebook needs more features to be considered, such as the centrality of the node, time, etc. [table 1] gives a detailed comparison of the two predictors. we have considered both the future time window and the past time window as days. the second column is the accuracy of our proposed predictor while the pbp column is the base predictor. the three numbers in both columns are for the top 50, 100 and 200 items respectively, from left to right. the ``type'' column describes the accuracy type; every accuracy is compared for three cases: n = 50, 100 and 200 items. it is easy to see that the proposed predictor shows an improvement. for comparing our proposed predictor with the base predictor we have considered the past time window ( ) and the future time window ( ) as days. for the comparison we selected the top n ranked items from the predicted list and compared them against the real items for both predictors. empirical results are as follows- table 1: performance table for both predictors, considering tp and tf as 30 days; the values are for the top 50, 100 and 200 items respectively. we have compared our results with the base method considering the top 50, 100 and 200 lists. for the comparison we have considered a training window ( ) of 30 days and we have tested the predictor for the same future time length of days. figure 3: the performance comparison between the proposed method and the base method for the movielens dataset; for the above comparison we have considered and days. [figure 3, figure 4, figure 5] show the comparative performance of the proposed method over the base method. it is easy to see that our proposed method outperforms the base method in all situations. for all the datasets is better for all values of .
also outperforms compared to the base predictor. if an item has a short time span, the prediction made by total popularity is not good, while the prediction made by the recent popularity method for the same item is good. the results also show that using recent popularity works well for all the datasets, movielens, netflix and facebook, and for all situations when the future window length is short. prediction by total popularity works well in case an item has already gained long-term popularity. figure 4: the performance comparison between the proposed method and the base method for the netflix dataset; for the above comparison we have considered and days. the characteristics of the three datasets are not the same: facebook content may not stay alive after a few weeks, while on movielens and netflix content may never die. furthermore, the rate at which nodes attract new links also differs, since the most popular nodes receive more attention than less popular ones. content on facebook may not always be interesting enough for friends to share with their own friends; for example, friends may not like content on politics, while content on movielens and netflix is appealing to viewers. that is the reason why our predictor's accuracy for the facebook dataset is not as good as for movielens and netflix. in [figure 6] we find that precision gets better with for all the datasets, suggesting that people like the items that their peers have been watching or liking recently. figure 5: the performance comparison between the proposed method and the base method for facebook; for the above comparison we have considered and days. the fact that our predictor outperforms the base method (based on recent popularity) also shows that content popularity is affected by its age, since we have considered aging in our model.
since the aging phenomenon is present in every network, new entries get a chance to become popular. we can think of this in two ways: new entries get attention either due to the aging of old popular items or due to the quality or fitness of the item. if a node achieves popularity due to its fitness, this suggests it is showing competitive behaviour. scientists have found both phenomena in real networks. in our case we can argue that new entries become popular not only because of the aging effect of old entries but also because of the items' fitness, because there are so many entries to watch or consume. if people are watching or liking an item it is because of its innate quality, not because they do not have enough entries to watch or consume. we know there are plenty of new entries available in all the cases, namely movielens, netflix and facebook. therefore we can say that, in the presence of quality items, these items attract links away from the popular items and become popular by showing competitive behaviour. in [figure 7] we have shown the performance of our predictor against the base predictor for different values of the future time window, with for the proposed predictor, for pbp, and a past time window length of days, as the author used in his paper. it is easy to see that a smaller length helps in predicting over short times while a long helps in predicting the long-term trend. figure 7: the above figure shows the performance of the predictors for different values of the future time window; the red line shows the performance of our proposed predictor while the blue line shows the base pbp predictor. we have chosen the same as the pbp author considered.
as we can see from [figure 7], our proposed predictor has better performance in predicting the long-term trend. we have considered a future time window of days to evaluate the performance of our predictor. we found that in the case of movielens people tend to copy the behaviour of their peers and also like to explore new entries. in the case of netflix we found that precision shows an improvement for around days, but after that pbp performs better, although in the search for novel items our predictor is better. this phenomenon also suggests that people explore new items on netflix and also rely on their peers' recent activity. the facebook activity data shows a similar nature. our predictor shows significant precision for up to 200 days. it also captures consumers' tendency to explore or try ``new things''. our work describes the presence of ``potential items'', which are generally subdued in the presence of other already popular items. in this manuscript we presented a model to predict the popularity of objects on online social media, specifically considering their temporal behaviour. we created the model by considering an object's recent popularity as well as its aging or decay of popularity. empirical results show that our proposed method outperforms the base method, i.e. the popularity-based predictor given by (zeng et al. 2013). we have found that people tend to copy the recent behaviour of their peer consumers rather than following the total popularity of items. we have also found that in the presence of quality items, recently popular items lose their popularity; in other words, on these kinds of networks *competitive behaviour* (described by (bianconi and barabsi 2001) for social networks) is also found. we have considered only the temporal effects of a node attracting new links, and we have found this to be one of the important features for making predictions. in future work one can also consider other effects like human dynamics, item category, node centrality, etc. asur, sitaram, bernardo huberman, and others. 2010. ``predicting the future with social media.'' in _web intelligence and intelligent agent technology (wi-iat), 2010 ieee/wic/acm international conference on_, edited by ieee, 1:492-99. doi: http://dx.doi.org/10.1109/wi-iat.2010.63[10.1109/wi-iat.2010.63]. cheng, justin, lada a. adamic, p. alex dow, jon m. kleinberg, and jure leskovec. 2014. ``can cascades be predicted?'' in _23rd international world wide web conference, www 14, seoul, republic of korea, april 7-11, 2014_, edited by chin-wan chung, andrei z. broder, kyuseok shim, and torsten suel, 925-36. doi: http://dx.doi.org/10.1145/2566486.2567997[10.1145/2566486.2567997]. eisler, zoltan, imre bartos, and janos kertesz. 2008. ``fluctuation scaling in complex systems: taylor's law and beyond.'' _advances in physics_ 57 (1). informa uk limited: 89-142. doi: http://dx.doi.org/10.1080/00018730801893043[10.1080/00018730801893043]. et.al, a. barabasi. 1999. ``emergence of scaling in random networks.'' _science_ 286 (5439). american association for the advancement of science (aaas): 509-12. doi: http://dx.doi.org/10.1126/science.286.5439.509[10.1126/science.286.5439.509]. gleeson, james p, davide cellai, jukka-pekka onnela, mason a porter, and felix reed-tsochas. ``a simple generative model of collective online behavior.
'' _ proceedings of the national academy of sciences _ 111 ( 29 ) .national acad sciences : 1041115 .doi : http://dx.doi.org/10.1073/pnas.1313895111[10.1073/pnas.1313895111 ] .herlocker , jonathan l. , joseph a. konstan , loren g. terveen , and john t. riedl .2004 . `` evaluating collaborative filtering recommender systems . '' _ acm transactions on information systems _ 22 ( 1 ) .association for computing machinery ( acm ) : 553 .doi : http://dx.doi.org/10.1145/963770.963772[10.1145/963770.963772 ] .leskovec , jure , lada a adamic , and bernardo a huberman .the dynamics of viral marketing . ''_ acm transactions on the web ( tweb ) _ 1 ( 1 ) .doi : http://dx.doi.org/10.1145/1232722.1232727[10.1145/1232722.1232727 ] .leskovec , jure , lars backstrom , and jon m. kleinberg .`` meme - tracking and the dynamics of the news cycle . '' in _ proceedings of the 15th acm sigkdd international conference on knowledge discovery and data mining , paris , france , june 28 - july 1 , 2009 _ , edited by john f. elder iv , franoise fogelman - soulie , peter a. flach , and mohammed javeed zaki , 497506 .doi : http://dx.doi.org/10.1145/1557019.1557077[10.1145/1557019.1557077 ] .matus medo manuel s. mariani , an zeng , and yi - cheng zhang .`` identification and modeling of discoverers in online social systems . ''http://arxiv.org/pdf/1509.01477.pdf .onnela , j .-p . , and f. reed - tsochas .`` spontaneous emergence of social influence in online systems . '' _ proceedings of the national academy of sciences _ 107 ( 43 ) .proceedings of the national academy of sciences : 1837580 .doi : http://dx.doi.org/10.1073/pnas.0914572107[10.1073/pnas.0914572107 ] .parolo , pietro della briotta , raj kumar pan , rumi ghosh , bernardo a. huberman , kimmo kaski , and santo fortunato .`` attention decay in science . ''_ j. informetrics _ 9 ( 4 ) : 73445 .doi : http://dx.doi.org/10.1016/j.joi.2015.07.006[10.1016/j.joi.2015.07.006 ] .szabo , gabor , and bernardo a. huberman .`` predicting the popularity of online content . ''_ communications of the acm_. http://doi.acm.org/10.1145/1787234.1787254 .doi : http://dx.doi.org/10.1145/1787234.1787254[10.1145/1787234.1787254 ] .tatar , alexandru , marcelo dias de amorim , serge fdida , and panayotis antoniadis .2014 . `` a survey on predicting the popularity of web content . '' _j internet serv appl _ 5 ( 1 ) .springer science business media .doi : http://dx.doi.org/10.1186/s13174 - 014 - 0008-y[10.1186/s13174 - 014 - 0008-y ] .yang , jaewon , and jure leskovec .`` patterns of temporal variation in online media . '' in _ proceedings of the fourth acm international conference on web search and data mining _ , edited by acm , 17786 .new york , ny , usa : acm .doi : http://dx.doi.org/10.1145/1935826.1935863[10.1145/1935826.1935863 ] .zeng , an , stanislao gualdi , matu medo , and yi - cheng zhang . 2013 .`` trend prediction in temporal bipartite networks : the case of movielens , netglix , and digg . '' _ advances in complex systems _ 16 ( 04n05 ) .world scientific pub co pte lt : 1350024 .doi : http://dx.doi.org/10.1142/s0219525913500240[10.1142/s0219525913500240 ] .zhu , han , xinran wang , and jian - yang zhu .2003 . `` effect of aging on network structure . ''_ physical review e _ 68 ( 5 ) .american physical society ( aps ) .doi : http://dx.doi.org/10.1103/physreve.68.056121[10.1103/physreve.68.056121 ] .
predicting the future popularity of online content is highly important in many applications. the preferential attachment phenomenon is encountered in scale-free networks; under its influence popular items get more popular, resulting in a long-tailed distribution problem. consequently, new items which could become popular (potential ones) are suppressed by the already popular items. this paper proposes a novel model which is able to identify such potential items. it identifies potentially popular items by considering the number of links or ratings an item has received in the recent past together with its popularity decay. to obtain an efficient model we consider only temporal features of the content, avoiding the cost of extracting other features. we have found that people follow the recent behaviour of their peers, and that in the presence of fit or quality items the already popular items lose their popularity. prediction accuracy is measured on three industrial datasets, namely movielens, netflix and facebook wall posts. experimental results show that, compared to the state-of-the-art model, our model has better prediction accuracy.
smaller is stronger . this is the most general conclusion that can be drawn from numerous experimental and theoretical studies investigating the plastic flow behaviour of metallic materials .examples are the empirical hall - petch relationship , strain - gradient strengthening , indentation size - effects , and the most recent observation of a sample size - effect due to the reduction of the external dimensions . whilst size - affected plastic flow as a result of a finite sample size has been reported sporadically ever since g.f .taylor s work in 1924 , intensely focused research emerged first in the past two decades , primarily motivated by production routes and test systems that allow systematic and well controllable experiments at the micron- and nano - scale .the central finding in recent developments on finite sample - size effects is an empirical power - law scaling of the type , , with the characteristic length - scale of the sample and a power - law exponent .summarising size - dependent strengths for fcc metals in a plot containing the strength normalised by the shear - modulus and the sample dimension normalised by the burgers vector yields a surprisingly general trend for all data with a power - law exponent of typically around . for bcc crystal systems this power - law scaling holds as well , but here the normalized data exhibits a less universal trend , with ranging between 0.3 and 0.8 , depending on the metal . fig .[ figexpdata ] reproduces some selected data for various fcc metals , clearly demonstrating a quite general strength - size scaling with respect to finite sample size that covers more than two orders of magnitude in both strength and size .a more complete set of data for both fcc and bcc metals can be found in refs . . without doubt , the scaling depicted in fig .[ figexpdata ] represents a truly remarkable result .how can one explain this general trend for such a variety of experimental studies ?in fact , why does the scaling survive the large variations in microstructure , crystal orientation , strain hardening response , testing condition and other strength influencing factors ? it is noted that fig .[ figexpdata ] contains data from focused ion beam ( fib ) prepared single crystals , multi - grained electroplated crystals , multi - grained crystals prepared by embossing , nominally dislocation free crystals , bi - crystalline fib prepared crystals , nano - porous structures , and also nano - wires , all of which are expected to contain very different local environments for the operating dislocations . in addition , some of the data contained in fig .[ figexpdata ] is highly affected by geometrical strain hardening , because of low aspect ratios and side - wall taper .this hardening is not only reflected by different slopes of the flow curves , but can also be correlated with the formation of dislocation substructures in tapered samples . on the other hand , a rather constant dislocation density at constant sample sizeis observed during straining in non - tapered geometries in compression and in tension . despite these differences in micro structural evolution ,the strength values , which typically are derived at arbitrary strains between 1% and 20% , as selectively indicated , tend to fall similarly onto fig .[ figexpdata ] . when studying more carefully individual data sets within fig .[ figexpdata ] , it can be shown for some studies that the scaling exponent is dependent on the strain at which the strength is derived . 
modelling flow responses of micron sized samples has also evidenced that the scaling exponent is sensitive to the initial underlying dislocation density and structure , which subsequently was supported by experimental findings . yet , all these influences are blurred by the plotted data in fig .[ figexpdata ] , which means that fine details in the microstructure are yielding variations in the value of , but the empirical power - law remains the describing functional form irrespective of the micro structural richness covered within fig .[ figexpdata ] .first explanations for the trend depicted in fig .[ figexpdata ] revolved around the scarcity of available dislocation sources and mobile dislocations , as well as the balance between the dislocation escape rate and the dislocation nucleation rate .more recently , further understanding has been gained via detailed and specific mechanisms ( or change in mechanisms ) , suggesting that a range of `` non - universal '' explanations underlie the experimental trend seen in fig .[ figexpdata ] , as discussed by kraft and co - workers . here , the governing dislocation mechanism changes with decreasing sample size from dislocation multiplication in the micron regime , to nucleation controlled plasticity of full ( 100 - 1000 nm ) and partial dislocations ( 10 - 100 nm ) .this contemporary viewpoint is well motivated by experimental data obtained at all these scales , but still raises the question of how very different underlying effects and mechanisms lead to the very impressive double - logarithmic scaling ?obviously , the data itself suggests one regime without mechanistic transitions .indeed , it has been argued that the size effect originates from a simple restriction of the available space for dislocation source operation which , although quite general , results in an exponent restricted to unity .the exception to the lack of a change of mechanism is in the regime of very small sample sizes , where the power - law scaling seems to level off .a reduction in the scaling at very small sizes , corresponding to extremely low defect densities , or even dislocation free systems , can be explained by the relative ease of partial dislocation nucleation as compared to the nucleation of full dislocations or dislocation multiplication processes , but has been shown to arise in micron - sized systems as well . besides the scarce experimental data in the sub 100 nm regime , several atomistic studies have predicted either the break - down of the ubiquitously observed power - law or a reduction in scaling exponent due to mobile dislocation exhaustion at the far left end in fig.1 . in the extreme case of fully dislocation free systems ,flow stresses are said to depend on the atomic roughness on the surface , yielding a weak intrinsic size - dependence or no size - scaling at all a topic which remains to be fully explored . with the above at hand, it becomes clear that the trend in fig .[ figexpdata ] comprises a wealth of underlying details , that involves a complex convolution of micro structural properties at the detailed level of individual dislocations or even point defects , without mentioning the numerous external experimental factors that have been discussed extensively in the literature . 
in terms of detailed structural mechanisms ,the discussed size - scaling opens a practically un - explorable multi - dimensional space of parameters that , however , seem not to substantially affect the uniform trend .such a situation suggests an entirely probabilistic description of plasticity which considers only the statistics of both stress and plastic strain , and how this might change as a function of sample volume .indeed , zaiser in his review of intermittent plasticity proposed that any size dependence will most likely emerge from a change in sampling statistics .the above viewpoint has been followed in a number of recent works .for example demir _et al _ have assumed a distribution of source lengths which , when combined with the stress to bow out such a dislocation source , results in a distribution of critical stresses .for the bulk regime all source lengths admitted by the distribution are possible and the critical stress scale is set by the mean value of the distribution .however as the sample volume reduces the source length distribution must be truncated and at a sufficiently small sample length scale the bulk mean field picture breaks down , with the critical stress scale being set by the statistics of small sources and corresponding high critical stresses . on the other hand pharr andco - workers assumed that the yield strength depends only on the spatial distribution of dislocations and on the distribution of their activation strengths . despite the lack of specific dislocation mechanisms ,an averaging of the resulting yield strength over a certain system size range demonstrates a cross over between a bulk strength at larger system sizes and close to the theoretical strength at very small system sizes .the transition range is the regime of the apparent power - law scaling , which in the work of phani et al . is uniquely determined by the theoretical strength and the bulk strength , and not by the dislocation density or specimen size . in the current papera quite different probabilistic approach will be taken .in particular , using extreme value statistics and an assumed distribution of critical stresses characterized by an algebraic exponent , sec .[ secstress ] derives a very general size effect scaling for stress .[ secstrain ] then combines this stress scaling with the known scaling exponent , , of plastic strain magnitudes for the two regimes of a dominant internal length scale and external length scale . 
for the case of an internal length scaleno size effect emerges .however for the case of an external length scale , a power law in strength emerges where the exponent is given by .the applicability of the size effect to a range of materials whose underlying microstructures are expected to be different in their details , motivates the need for a quite general approach that can not depend too strongly on the specifics of a particular material .one starting point is to acknowledge that bulk plasticity arises from irreversible structural transformations whose core regions generally have a finite spatial extent , and that each leads to a global plastic strain increment which , however small , is discrete .this latter aspect is motivated by the early torsion experiments of tinder _et al _ who , with a strain resolution of , where able to observe discrete plasticity in bulk metallic samples .such structural transformations may be characterised , in the first instance , by a critical stress needed for the plastic event to occur with a particular degree of certainty .this may be done by assuming that the model material is defined by a probability distribution of such critical stresses , and that the corresponding number density of irreversible structural transformations is given by the product of this distribution with the total number of distinct structural transformations admitted by the system , .the actual distribution of critical stresses will embody the details of the particular material through its underlying low energy potential energy landscape .this latter contribution arises directly from the assumption that plasticity occurs via thermal activation and as a result the distribution will have an implicit strain rate and temperature dependence . in the present work , the aforementioned distribution is assumed to have the form =\frac{\delta}{\gamma\left(\frac{\alpha+1}{\delta}\right)\sigma_{0}}\left(\frac{\sigma}{\sigma_{0}}\right)^{\alpha}\exp\left[-\left(\frac{\sigma}{\sigma_{0}}\right)^{\delta}\right ] \label{eqprobdist}\ ] ] where , and are positive non - zero numbers , and is the gamma function .such a distribution is called the generalized gamma distribution and spans a range of well known positive valued distributions such as the weibull distribution ( and therefore the rayleigh distribution ) with and the gamma distribution ( and therefore the chi and chi - squared distributions ) with .the sample volume , , enters the model via since is an extensive quantity where . here is the density of available irreversible structural excitations . for a particular realisation of the material system , critical stress values are sampled from the stress distribution .the applied stress at which the first plastic event occurs will equal the lowest critical stress of these critical stresses . to proceed further, an assumption has to be made as to how the distribution changes with increasing plastic activity .presently , it assumed that the analytical form of the distribution and do not change .these assumptions will be discussed in sec .[ secdiscussion ] .thus , the next step necessarily involves re - sampling the same distribution to return the system to its values of critical stress .this re - sampling is performed until a value is found that is greater than the current applied stress implying that , although the form of the intrinsic distribution does not change , the distribution naturally becomes truncated and correspondingly renormalised as the stress increases . 
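for concreteness, the critical stress distribution of eqn. (eqprobdist) can be written down and sampled numerically as follows. the normalisation is taken directly from the quoted form, while the parameter values, the inverse-cdf sampling on a finite grid and the grid extent are illustrative assumptions only.

```python
import numpy as np
from math import gamma as gamma_fn

def critical_stress_pdf(sigma, sigma0=1.0, alpha=1.0, delta=2.0):
    """Generalised gamma probability density of critical stresses, eqn. (eqprobdist)."""
    norm = delta / (gamma_fn((alpha + 1.0) / delta) * sigma0)
    x = sigma / sigma0
    return norm * x**alpha * np.exp(-(x**delta))

def sample_critical_stresses(size, sigma0=1.0, alpha=1.0, delta=2.0, rng=None):
    """Draw critical stresses by inverse-CDF sampling on a fine numerical grid."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(0.0, 10.0 * sigma0, 20001)
    cdf = np.cumsum(critical_stress_pdf(grid, sigma0, alpha, delta))
    cdf /= cdf[-1]
    return np.interp(rng.random(size), cdf, grid)
```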
(fig. [figwd]: the generalized gamma distribution, eqn. (eqprobdist); a) the probability distribution function and b) the corresponding cumulative distribution function.) to generate a particular ordered list of critical stresses that would be contained in a stress-strain curve, the above procedure is iterated, resulting in the following algorithm: 1. the applied stress, , is set to zero. 2. values are sampled from the critical stress distribution and sorted to produce an ordered list. 3. is set equal to the lowest critical stress of the ordered list. 4. this lowest critical stress is removed from the ordered list. 5. a new critical stress is sampled from the distribution until one is found which is larger than the current applied stress; this new critical stress is then added to the ordered list. 6. steps 3 to 5 are repeated until the desired size of the critical stress sequence is reached. without loss of generality, a weibull distribution is chosen for eqn. [eqprobdist] (i.e. ), giving the probability distribution shown in fig. [figwd]a for three values of . fig. [figdata]a displays three stochastic realizations of stress sequences derived using the above algorithm, spanning four orders of magnitude in . inspection of these stress sequences reveals that for low there is strong scatter in the curves, indicating a high degree of stochasticity. this scatter decreases with increasing , and with equal to 100000 the stress sequence almost converges to a smooth curve for all sample realisations. in addition to the degree of stochasticity, inspection of fig. [figdata]a reveals that as decreases (reducing system size) the scale of the critical stress sequence increases. such a size effect in stress can be rationalised via the fact that for decreasing , the minimum critical stress will approach the most probable value of the distribution: in sampling the distribution once (the extreme limit of ), the most likely value that is obtained will clearly be the most probable value ( ).
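the six-step procedure above translates almost line by line into the sketch below, which re-uses the sampler from the previous snippet; the function and variable names are ours, and the pool is kept as a sorted python list purely for clarity.

```python
import numpy as np

def stress_sequence(n_sites, seq_length, sampler):
    """Generate an ordered sequence of critical stresses following steps 1-6.

    n_sites    : number of critical stresses held by the system
    seq_length : desired length of the critical stress sequence
    sampler    : callable returning `size` critical stresses, e.g.
                 lambda size: sample_critical_stresses(size, alpha=1.0, delta=2.0)
    """
    sigma_applied = 0.0                      # step 1
    pool = sorted(sampler(n_sites))          # step 2
    sequence = []
    while len(sequence) < seq_length:        # step 6
        sigma_applied = pool.pop(0)          # step 3 (and step 4: remove it from the pool)
        sequence.append(sigma_applied)
        new = float(sampler(1)[0])           # step 5: re-sample until larger than the applied stress
        while new <= sigma_applied:
            new = float(sampler(1)[0])
        pool.append(new)
        pool.sort()
    return np.array(sequence)
```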
on the other hand , for large , the minimal critical stress will be determined largely by the extreme value properties of the distribution .this offers a quite general proposition to the statistical origin of a size effect _ in stress _ small sample sizes probe ( on average ) the mean strength of a critical stress distribution and with increasing size the ( smaller ) extreme value stresses of the distribution are increasingly probed .this picture forms the basis of a statistical analysis of fracture in ceramics and has been used as a basis for the derivation of a size effect in stress at which the first athermal plastic event occurs .equal to 100 , 1000 , 10000 and 100000 , and b ) corresponding average values obtained using eqn .[ eqappb13 ] .stress versus plastic strain when c ) the plastic strain increment scales inversely with sample volume and d ) when the plastic strain increment scales according to finite size scaling predictions of dislocation avalanche phenomena.,scaledwidth=90.0% ] an analytical expression for the _ average _ of the ordered stress list produced numerically in fig .[ figdata]a will now be obtained as a function of .the first step is to determine the first average critical stress , as a function of .this may be obtained via the relation =1 , \label{eqevs}\ ] ] where is the cumulative distribution probability ( cdf ) or repartition probability : =\int_{0}^{\sigma}d\sigma'\,p[\sigma']=1-\int_{\sigma}^{\infty}d\sigma'\,p[\sigma ' ] .\label{eqcdf}\ ] ] fig .[ figwd]b shows the cumulative distribution probability ( cdf ) for the weibull distribution .[ eqevs ] expresses the fact that there exists an average minimum stress at which the integrated number density equals unity , that is , one ( minimum ) critical stress exists with certainty . clearly as increases, decreases approaching zero as . for the generalized gamma distribution , =q\left[\frac{\alpha+1}{\delta},\left(\frac{\sigma}{\sigma_{0}}\right)^{\delta}\right]\ ] ] where ] .thus eqn .[ eqevs ] has the solution \label{eqevsfirstsoln}\ ] ] where ] is the gamma function . for logarithmic accurracy ( in fig .[ figexpdata ] ) all prefactors depending on the moments of the distribution need not be considered .the above result demonstrates that the size effect in stress is only influenced by the exponent of the generalized gamma distribution an intuitive result given that the extreme value statistics regime will depend primarily on the low stress tail of the critical shear stress distribution .more generally , this implies that the leading order form , eqn . [ eqcentralresult ], will be valid for a much broader class of distributions , all of which are algebraic in the limit of zero stress .in a crystal , plastic strain is mediated by the sequential motion of dislocations or collections of dislocations , each one being referred to as a plastic event .historically , in bulk crystals the individual events have been considered local when compared to the size of the material .such a viewpoint has its theoretical origins in the early ideas of nabarro and eshelby where the corresponding far field plastic strain due to each plastic event scales inversely with sample volume . at the scale of an individual dislocation segment, this viewpoint also forms the basis of modern small strain plasticity dislocation dynamics simulations and a variety of coarse grained models of plastic deformation ( see for example refs . 
and references therein ) .acoustic emission experiments revealed the distribution of these plastic strain magnitudes to have an algebraic component indicating scale - free physics underlies the collective motion of dislocations .such scale free behaviour , or avalanche phenomena , indicates an underlying non - trivial complexity of dislocation based microstructure and suggests that parts of the dislocation network are in a state of self - organised criticality .thus plasticity belongs to a class of universal phenomenon often described as crackling noise , which encompasses such diverse phenomena as the statistics of earthquakes and that of magnetic switching .like all critical phenomenon , pure algebraic behaviour occurs only for systems without a length scale , and when a length - scale does exist the signature of approximate scale - free behaviour is how it is modified with respect to this length scale .this change in behaviour is manifested by a non - universal scaling or cut - off function which constitutes the non - algebraic part ( pre - factor ) of the distribution . within this framework ,the plasticity of micron sized single crystal sample volumes ( investigated by both experiment and dislocation dynamics simulations ) has revealed such avalanche phenomenon , where now the relevant length - scale is an external dimension . on the other hand , dislocation dynamics simulations of very long dipolar mats in which only the mobile dislocation content is explicitly modelled with the internal microstructure being fixed by a static mean - field description ,also show avalanche behaviour and provides the alternative example of an internal ( rather than external ) length scale controlling the scaling function . in this regime of more bulk - like behaviour , where external length - scales are much larger than any internal length scale, plastic events may be again viewed as a local phenomenon with respect to sample size with their corresponding far field plastic strain scaling inversely with system size .thus for the bulk limit ( with an internal length scale ) , the characteristic plastic strain magnitude , , of a system with volume , will be giving the mean plastic strain at the critical stress as eqn .[ eqesh ] is a natural result of the eshelby inclusion picture and eqn .[ eqbulkstrain ] exploits this fact by stating that , to logarithmic accuracy , the total plastic strain is a product of the plastic strain events and this inverse volume scaling .again , since fig . [ figexpdata ] is a log - log plot , only logarithmic accuracy is needed to describe the general size effect allowing for the omission of all irrelevant prefactors in the above and in what follows .substitution of eqn .[ eqbulkstrain ] into eqn .[ eqcentralresult ] gives and the result that there exists no size effect in the bulk limit .this is demonstrated in fig .[ figdata]c which displays the average stress sequence now with the corresponding strain given by eqn .[ eqbulkstrain ] on the horizontal axis .the data corresponding to the different values of all collapse on a universal stress - plastic strain curve demonstrating that the size effect in stress is offset by a comparable size effect in plastic strain . 
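The exact compensation between the two size effects can be checked in a few lines, to leading order. Assuming the standard order-statistics estimate that the n-th critical stress of the sequence is approximately the (n/N)-quantile of the distribution (this neglects the small correction from re-sampling, which is an approximation made here), and letting the plastic strain increment scale as 1/N, i.e. inversely with sample volume, the stress read off at a fixed plastic strain is indeed independent of N:

```python
# Leading-order check of the bulk-limit collapse: sigma_n ~ F^{-1}(n/N) while
# eps_n = n * (c/N), so sigma as a function of plastic strain is N-independent.
# A Weibull CDF F(s) = 1 - exp(-(s/s0)**(alpha+1)) is assumed; values illustrative.
import numpy as np

alpha, s0, c = 3.0, 1.0, 1.0e-3      # c plays the role of the event strain at N = 1

def quantile(q):
    return s0 * (-np.log1p(-q))**(1.0 / (alpha + 1.0))

eps_probe = 1.0e-6                   # fixed plastic strain at which stresses are compared
for N in (10**3, 10**4, 10**5):
    n = np.arange(1, 201)
    sigma = quantile(n / (N + 1.0))  # average n-th critical stress (leading order)
    eps = n * (c / N)                # bulk limit: strain increment scales as 1/N ~ 1/V
    print(f"N = {N:6d}: sigma(eps = {eps_probe:g}) ~ {np.interp(eps_probe, eps, sigma):.4f}")
```

The printed stresses agree across three decades in N, which is the collapse shown in fig. [figdata]c.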
on the other hand , for a sample size regime , in which the scaling dimension corresponds to an external dimension , quite a different result is obtained .both experiment and simulation reveal that the the distribution of plastic strain increments , , has the asymptotic form \frac{1}{\delta\varepsilon^{\tau } } , \label{eqnstraindist}\ ] ] where =\exp\left[-x^{2}\right] ] is the incomplete gamma function . in the above defines the small strain applicability limit of the distribution in eqn .[ eqnstraindist ] .the last similarity in eqn .[ eqavalanche ] is obtained by using the leading order approximation to the incomplete gamma function , \simeq\gamma[a]-x^{a}(1/a+\dots)$ ] , and the knowledge that the exponent is approximately equal to the mean field value of ( see sec .[ secdiscussion ] ) . due to the geometry of a typical dislocation event , , is itself inversely proportional to a cut off length scale , .a cube of volume gives this length scale as thus giving the characteristic plastic strain magnitude of the system , , as substitution of eqn .[ eqmicronstrain ] into eqn .[ eqcentralresult ] gives the result and a true size effect in strength emerges exhibiting power - law behaviour as a function of an inverse length - scale .[ figdata]d displays the corresponding stress versus plastic strain curves for the stress sequences of figs .[ figdata]a - b and shows quite a distinct size effect .note that since , , eqn .[ eqcentralcentralresult ] is applicable to any sample volume shape in which , where is the external scaling dimension which is ( only ) varied .the developed probabilistic approach results in a surprisingly simple derivation of a power law in strength with respect to sample volume . given a distribution of critical stresses and the number of structural transformations available to a material , the two main results of the present work leading , in part , to are 1 .intermittency in stress has its origins in the discreteness of the sequence .the intermittency therefore vanishes in the bulk limit of .the stress of this intermittency scales inversely with and therefore sample volume .these two results arise directly from the extreme value statistics of the critical stress distribution , where for large enough the above universal properties emerge _ independent _ of the actual distribution used . using these developments in conjunction with the known and established results of dislocation avalanche behaviour for the plastic strain distribution , results finally in eqn .[ eqse ] .what is the applicability regime of this procedure ? for the extreme value statistics approach to be valid must be large enough , but need not be too large .a value of is already enough .thus it is only in the limit of very small sample volumes where the present derivation is expected to break down . indeed , as discussed in the introduction , experimentally it is known that the strength tends to saturate with system size , a regime where dislocations are largely absent and surface geometry strongly influences , via dislocation nucleation , plasticity . thus , sample volume should be small , but not too small for the size effect to be operative .this is entirely compatible with seminal work of uchic _et al _ who comment on the surprisingly large sample volumes ( microns ) in which the size effect is still observed to occur .there will also exist an upper limit in for the applicability of eqn .[ eqse ] . 
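Because several exponents in this passage were lost in the text extraction, it is worth spelling out the scaling chain that the paragraph describes. The reconstruction below combines the leading-order n/N dependence of the ordered critical stresses, the leading-order moment of the avalanche distribution, the cut-off scaling \delta\varepsilon_{\max}\propto 1/L and N\propto V=L^{3}; the final exponent should therefore be read as what these stated assumptions imply, not as a quotation of eqn. [eqcentralcentralresult]:
\[
\sigma_{n}\sim\sigma_{0}\left(\frac{n}{N}\right)^{\frac{1}{\alpha+1}},\qquad
\varepsilon_{p}=n\,\langle\delta\varepsilon\rangle,\qquad
\langle\delta\varepsilon\rangle\propto\delta\varepsilon_{\max}^{\,2-\tau}\propto L^{-(2-\tau)},\qquad
N\propto V=L^{3},
\]
so that
\[
\sigma(\varepsilon_{p})\propto\left(\frac{\varepsilon_{p}}{N\,\langle\delta\varepsilon\rangle}\right)^{\frac{1}{\alpha+1}}
\propto\varepsilon_{p}^{\frac{1}{\alpha+1}}\;L^{-\frac{1+\tau}{\alpha+1}}.
\]
With the mean-field value \tau=3/2 this chain gives a size-effect exponent of (1+\tau)/(\alpha+1)=5/[2(\alpha+1)] in the external length scale L.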
when the sample size becomes sufficiently large , internal length scales within the dislocation networkwill naturally emerge .this will reduce the importance of the external length scale and for large enough sample volumes it will dominate . in this regime , avalanche behaviour still occurs , however it is now the internal length scale which controls the finite length scale effect . from this perspective, plasticity now becomes a localized phenomenon and plastic strain will depend inversely on sample volume .[ secstrain ] demonstrates that when this occurs , the size effect is absent .this limit emphasizes that the present work is entirely compatible with the change - in - mechanism approach proposed by many authors when the external dimensions of the system enter the micron regime , since the transition must ultimately manifest itself as some dislocation based mechanism .the current work does however indicate that the size effect in strength is a more general phenomenon , quite independent of any one ( or few ) microscopic mechanisms a result entirely consistent with broad applicability of fig .[ figexpdata ] . and size effect exponent ( ) predictions according to eqn . [ eqcentralcentralresult ] ( ) where the mean field value is used , for three different values of which characterizes the low stress regime of the critical shear distribution.,scaledwidth=80.0% ]the applicability of the derived model is also well reflected in various experimental reports that address the extrinsic size - effect as well as the intermittency of plastic flow with respect to internal length scales . as - prepared single crystals in the size regime of some hundred nanometresare known to fall onto the trend depicted in fig .[ figexpdata ] , but irradiation or the introduction of dispersed obstacles in the same size range have been shown to erase the size effect in strength . in these casesdislocation - defect interaction determined by the internal length scale governs strength with the size effect now being decoupled from the external length scale of the sample . moreover , within this regime of a dominating internal length scale , small scale mechanical testing has qualitatively shown that the strain increment magnitudes reduce when investigating similar sample sizes that contain larger pre - existing dislocation- or defect - densities .as such , the developed model is able to fully encompass some of the numerous experimentally observed stress - strain characteristics particular to some of the material systems covered in fig .[ figexpdata ] . with the power - law exponent equalling , ,the two free parameters of the model are the exponent of the low - critical stress end of the critical stress distribution and the scaling exponent .mean field calculations have demonstrated that for plastic events dominated by single slip , .simulations have however shown that this value may also be applied to multi - slip plastic events . assuming the mean field value for , is the one free parameter of the model . taking the values used in fig .[ figwd]a ( , and ) gives respectively the size effect exponents , , and .the corresponding power laws are plotted in fig .[ figfinalplot ] , along with the experimental data of fig .[ figexpdata ] showing good overall agreement .it should be emphasized that fitting to _ all _ of the data in fig .[ figexpdata ] is not a useful task , since is known to vary from material to material and also on the initial dislocation density and structure . 
in the present theorythis would be reflected by variations in and also possibly in .[ figfinalplot ] does however demonstrate that should be larger than 2 , giving a critical stress distribution that rises slowly from its zero value at zero stress .in addition to the assumed existence of a distribution of critical shear stresses the work assumes that as plasticity evolves , the distribution becomes truncated and renormalized , but overall it retains its intrinsic form .[ figdata]c puts this assumption into its appropriate context .the figure demonstrates that from the perspective of bulk plasticity , the current theory places the deformation curve of a micro - pilar experiment into the domain of micro - plasticity several tens of plastic events in a micro - deformation stress - strain curve will correspond to only the very early stages of bulk deformation ( see for example the very early torsion experiments of tinder and co - workers ) .this situation is equally valid for the size affected plasticity in fig .[ figdata]d , but it is somewhat hidden due to the lack of appropriate pre - factors on the strain axis .micro - plasticity is a deformation regime where significant structural evolution and hardening are largely absent , indeed , there now exists a growing body of evidence that this is in fact the case for micro - deformation experiments .the aforementioned reference have in common that they experimentally show or suggest that there is little to no change in dislocation structure and density beyond the transition to extensive plastic flow .it is this perspective that gives justification to the assumption of an unchanging ( but continuously truncated ) critical stress distribution . put in other words, it is recognized that upon a discrete plastic event occurring the internal structure of the sample volume changes non - negligibly , however the next critical shear stress which characterises this new configuration , is still drawn from the same critical stress distribution since the characteristic internal length scale and dislocation density can not change in any significant way due to this event occurring this is the essence of the micro - plastic regime . finally , the assumption that does not change with plastic evolution turns out not to be necessary for the derivation of eqn .[ eqse ] . indeed either decrease or fluctuate around some mean value with respect to the plastic evolution and the same scaling result would be obtained .in summary , the size effect paradigm `` smaller is stronger '' , as embodied by the power law , , is shown to originate from a combination of a size effect in stress derived from the extreme value statistics of an assumed distribution of critical stresses , and a size effect in strain derived from the finite scaling associated with scale - free dislocation activity .both contributions may be considered universal , depending little on the fine details of a particular material . in particular , , where is the leading order algebraic exponent of the low stress regime of the critical stress distribution and is the scaling exponent associated with the distribution of plastic strain magnitudes . 99 e. o. hall , proceedings of the physical society of london section b 64 ( 1951 ) p.747. n. j. petch , journal of the iron and steel institute 174 ( 1953 ) p.25 .n. a. fleck , j. w. hutchinson , in advances in applied mechanics , 33 ( 1997 ) p.295 .m. zaiser , e. c. aifantis , scripta materialia 48 ( 2003 ) p.133 .x. zhang , k. e. 
aifantis , materials science and engineering a 528 ( 2011 ) p.5036 .w. d. nix , h. j. gao , journal of the mechanics and physics of solids 46 ( 1998 ) p.411 .m. r. begley , j. w. hutchinson , journal of the mechanics and physics of solids 46 ( 1998 ) p.2049 .m. d. uchic , d. m. dimiduk , j. n. florando , w. d. nix , science 305 ( 2004 ) p.986 .d. m. dimiduk , m. d. uchic , t. a. parthasarathy , acta materialia 53 ( 2005 ) p.4065 . g. f. taylor , physical review 23 ( 1924 ) p.655 .a. s. schneider , d. kaufmann , b. g. clark , c. p. frick , p. a. gruber , r. mnig , o. kraft , and e. arzt , physical review letter 103 ( 2009 ) p.105501 .m. d. uchic , p. a. shade , d. m. dimiduk , annu .39 ( 2009 ) p.361 .j. r. greer , j. t. m. de hosson , progress in materials science 56 ( 2011 ) p.654 .o. kraft , p. a. gruber , r. mnig , d. weygand , ann .40 ( 2010 ) p.293 .r. dou , b. derby , scripta materialia 61 ( 2009 ) p.524 .k. s. ng , a. h. w. ngan , acta materialia 56 ( 2008 ) p.1712 .d. kiener , c. motz , t. schoberl , m. jenko , g. dehm , adv .8 ( 2006 ) p.1119 . c. a. volkert , e. t. lilleodden , philosophical magazine 86 ( 2006 ) p.5567 . j. r. greer , w. d. nix , physical review b 73 ( 2006 ) p.245410. j. r. greer , w. c. oliver , w. d. nix , acta materialia 53 ( 2005 ) p.1821 . c. p. frick , b. g. clark , s. orso , a. s. schneider , e. arzt , materials science and engineering a 489 ( 2008 ) p.319 .r. maass , m. d. uchic , acta materialia 60 ( 2012 ) p.1027 .z. w. shan , r. k. mishra , s. a. s. asif , o. l. warren , a. m. minor , nature materials 7 ( 2008 ) p.115 .r. maass , s. van petegem , d. ma , j. zimmermann , d. grolimund , f. roters , h. van swygenhoven , d. raabe , acta materialia 57 ( 2009 ) p.5996 .r. maass , s. van petegem , d. grolimund , h. van swygenhoven , d. kiener , g. dehm , applied physics letters 92 ( 2008 ) p.071905 .m. dietiker , s. buzzi , g. pigozzi , j. f. lffler , r. spolenak , acta materialia 59 ( 2011 ) p.2180 .s. buzzi , m. dietiker , k. kunze , r. spolenak , j. f. lffler , philosophical magazine 89 ( 2009 ) p.869 g. richter , k. hillerich , d. s. gianola , r. monig , o. kraft , c. a. volkert , nano lett . 9( 2009 ) p.3048 .r. maass , l. meza , b. gan , s. tin , j. r. greer , small 8 ( 2012 ) p.1869 .a. kunz , s. pathak , j. r. greer , acta materialia 59 ( 2011 ) p.4416 .a. m. hodge , j. biener , j. r. hayes , p. m. bythrow , c. a. volkert , a. v. hamza , acta materialia 55 ( 2007 ) p.1343 . j. biener , a. m. hodge , j. r. hayes , c. a. volkert , l. a. zepeda - ruiz , a. v. hamza , f. f. abraham , nano lett .6 ( 2006 ) p.2379 .r. dou , b. derby , scripta materialia 59 ( 2008 ) p.151 .h. zhang , b. e. schuster , q. wei , k. t. ramesh , scripta materialia 54 ( 2006 ) p.181 .d. kiener , c. motz , g. dehm , materials science and engineering a 505 ( 2009 ) p.79 .r. maass , s. van petegem , c. n. borca , h. van swygenhoven , materials science and engineering : a 524 ( 2009 ) p.40 .d. m. norfleet , d. m. dimiduk , s. j. polasik , m. d. uchic , m. j. mills , acta materialia 56 ( 2008 ) p.2988 .f. mompiou , m. legros , a. sedlmayr , d. s. gianola , d. caillard , o. kraft , acta materialia 60 ( 2012 ) p.977 .r. maass , phd thesis nr .4468 2009 , ecole polytechnique fdrale de lausanne .s. i. rao , d. m. dimiduk , t. a. parthasarathy , m. d. uchic , m. tang , c. woodward , acta materialia 56 ( 2008 ) p.3245 . c. motz , d. weygand , j. senger , p. gumbsch , acta materialia 57 ( 2009 ) p.1744 .j. a. el - awady , m. d. uchic , p. a. shade , s .- l .kim , s. i. rao , d. m. 
dimiduk , c. woodward , scripta materialia 68 ( 2013 ) p.207 .a. s. schneider , d. kiener , c. m. yakacki , h. j. maier , p. a. gruber , n. tamura , m. kunz , a. m. minor , c. p. frick , materials science and engineering a 559 ( 2013 ) p.147 .t. a. parthasarathy , s. i. rao , d. m. dimiduk , m. d. uchic , d. r. trinkle , scripta materialia 56 ( 2007 ) p.313 .d.j . dunstan and a.j .bushby , international journal of plasticity 40 ( 2013 ) p.152 . t. zhu , j. li , a. samanta , a. leach , k. gall , physical review letters 100 ( 2008 ) p.100 .f. sansoz , acta materialia 59 ( 2011 ) p.3364. t. zhu , j. li , progress in materials science 55 ( 2010 ) p.710 .h. bei , s. shim , e. p. george , m. k. miller , e. g. herbert , g. m. pharr , scripta materialia 57 ( 2007 ) p.397 .p. a. shade , r. wheeler , y. s. choi , m. d. uchic , d. m. dimiduk , h. l. fraser , acta materialia 57 ( 2009 ) p.4580 . m. zaiser , adv . phys .55 ( 2006 ) p.185 .r. demir , d. raabe , f. roters , acta materialia 58 ( 2010 ) p.1876 .p. s. phani , k. e. johanns , e. p. george , g. m. pharr , acta materialia 61 ( 2013 ) p.2489 .r. f. tinder and j. washburn , acta .12 ( 1964 ) p.129 .r. f. tinder and j. p. trzil , acta metall .21 ( 1973 ) p.975 .b. lawn , _ fracture of brittle solids_. 2nd ed .( cambridge university , 1993 ) .k. sieradzki , a. rinaldi , c. friesen and p. peralta , acta materialia 54 ( 2006 ) p.4533 .a. rinaldi , p. peralta , c. friesen and k. sieradzki , acta materialia 56 ( 2007 ) p.511 .j. senger , d. weygand , c. motz , p. gumbsch , o. kraft , acta materialia 59 ( 2011 ) p.2937 .w. wang , y. zhong , k. lu , l. lu , d. l. mcdowel , t. zhu , acta materialia 60 ( 2012 ) p.3302 .bouchaud , and m. mzard , j. phys . a : math .( 1997 ) p.7997 .e. j. gumbel , _ statistics of extremes _ ( columbia university press , 1958 ) f. r. n. nabarro , proc .soc . a 175 ( 1940 )j. d. eshelby , proc .soc . a 241 ( 1957 )l. bulatov and w. cai , _computer simulations of dislocations _( oxford university press , 2006 ) a. s. argon , _ strengthening mechanisms in crystal plasticity _ ( oxford university press , 2008 )k. a. dahmen , y. ben - zion , and j. t. uhl , phys .lett 103 ( 2009 ) p.175501 .c . miguel , a. vespignani , s. zapperi , j. weiss and j.r .grasse , nature 410 ( 2001 ) p.667 .j. weiss and d. marsan , science 89 ( 2003 ) p.299 .d. s. fisher , k. dahmen , s. ramanathan , and y. ben - zion , phys .78 ( 1997 ) p.4885 .k. dahmen , d. ertas , and y. ben - zion , phys .e 58 ( 1998 ) p.1494 .a. p. mehta , k. a. dahmen , and y. ben - zion , phys .e 73 ( 2006 ) p.056104 .j. p. sethna , k. a. dahmen , and c. r. myers , nature ( london ) 410 ( 2001 ) p.242 .d. k. dimiduk , c. woodward , r. lesar and m. d. uchic , science 312 ( 2006 ) p.1188 .d. k. dimiduk , e. m. nadgorny , c. woodward , m. d. uchic and p. a. shade , phil . mag . 90( 2010 ) p.3621 .m. zaiser , j. schwerdtfeger , a. s. schneider , c. p. frick , b. g. clark , p. a. gruber and e. arzt , phil . mag .88 ( 2008 ) p.3861 .p. d. ispnovity , i. groma , g. gyrgyi , f. f. csikor and d. weygand , phys .( 2010 ) p.085503 .f. f. csikor , c. motz , d. weygand , m. zaiser and s. zapperi , science 318 ( 2007 ) p.318 .p.m. derlet and r. maa , mod .sim . mat .( 2013 ) p.035007 .m. zaiser , n. nikitas , j. stat .( 2007 ) p.p04013 d. kiener , p. hosemann , s. a. maloy , a. m. minor , nature materials 10 ( 2011 ) p.608 .b. girault , a. s. schneider , c. p. frick , e. arzt , adv .12 ( 2010 ) p.385 .k. e. johanns , a. sedlmayr , p. s. phani , r. moenig , o. kraft , e. p. george , g. m. 
pharr , j. mater .27 ( 2012 ) p.508 .s. shim , h. bei , m. k. miller , g. m. pharr , e. p. george , acta materialia 57 ( 2009 ) p.503 .s. w. lee , s. m. han , w. d. nix , acta materialia 57 ( 2009 ) p.4404 .young , j. appl .32 ( 1961 ) p.1815 .g. vellaikal , acta metall .17 ( 1969 ) p.1145 .t. j. koppenaal , acta metall .11 ( 1963 ) p.86 .s. h. oh , m. legros , d. kiener , g. dehm , nature materials 8 ( 2009 ) p.95 . r. maass , p. m. derlet and j. r. greer , scripta materialia 69 ( 2013 ) p.586 .
In this work, the well known power-law relation between strength and sample size is derived from the knowledge that a dislocation network exhibits scale-free behaviour, combined with the extreme value statistical properties of an arbitrary distribution of critical stresses. The approach yields a size-effect exponent controlled by \alpha, the leading-order algebraic exponent of the low-stress regime of the critical stress distribution, and by \tau, the scaling exponent for intermittent plastic strain activity. This quite general derivation supports the experimental observation that the size-effect paradigm is applicable to a wide range of materials differing in crystal structure, internal microstructure and external sample geometry.
the goal of this chapter is to apply the techniques of loop quantum gravity ( lqg ) to cosmological spacetimes .the resulting framework is known as loop quantum cosmology ( lqc ) .this chapter has a two - fold motivation : to highlight various developments on the theoretical and conceptual issues in the last decade in the framework of loop quantum cosmology , and to demonstrate the way these developments open novel avenues for explorations of planck scale physics and the resulting phenomenological implications . from the theoretical viewpoint ,cosmological spacetimes provide a very useful stage to make significant progress on many conceptual and technical problems in quantum gravity .these geometries have the advantage of being highly symmetric , since spatial homogeneity reduces the infinite number of degrees of freedom to a finite number , significantly simplifying the quantization of these spacetimes .difficult challenges and mathematical complexities still remain , but they are easier to overcome than in more general situations .the program of canonical quantization of the gravitational degrees of freedom of cosmological spacetimes dates back to wheeler and de witt . in recent years, lqc has led to significant insights and progress in quantization of these mini - superspace cosmological models and fundamental questions have been addressed .these include : whether and how the classical singularities are avoided by quantum gravitational effects ; how a smooth continuum spacetime emerges from the underlying quantum theory ; how do quantum gravitational effects modify the classical dynamical equations ; the problem of time and inner product ; quantum probabilities ; etc .( see for reviews in the subject ) .spacetimes where detailed quantization has been performed include friedmann - lemaitre - robertson - walker ( flrw ) , bianchi and gowdy models , the latter with an infinite number of degrees of freedom .a coherent picture of singularity resolution and planck scale physics has emerged based on a rigorous mathematical framework , complemented with powerful numerical techniques .this new paradigm has provided remarkable insights on quantum gravity , and allowed a systematic exploration of the physics of the very early universe . on the other hand ,simplifications also entail limitations . since the formulation and the resulting physics is most rigorously studied in the mini - superspace setting , it is natural to question its robustness when infinite number of degrees of freedom are present , and whether the framework captures the implications from the full quantum theory .the problem of relating a model with more degrees of freedom to its symmetry reduced version is present even at the mini - superspace level . in this setting important insightshave been gained on the relation between the loop quantization of bianchi - i spacetime and spatially flat ( ) isotropic model , which provide useful lessons to relate quantization of spacetimes with different number of degrees of freedom .moreover , the belinskii - khalatnikov - lifshitz ( bkl ) conjecture that the structure of the spacetime near the singularities is determined by the time derivatives and spatial derivatives become negligible , which is substantiated by rigorous mathematical and numerical results , alleviates some of these concerns and provides a support to the quantum cosmology program . finally , recently there has been some concrete progress on the relation between lqc and full lqg , discussed briefly in section 6 . 
from the phenomenological perspective, we are experiencing a fascinating time in cosmology .the observational results of wmap and planck satellites have provided strong evidence for a primordial origin of the cmb temperatures anisotropies .there is no doubt that the excitement in early universe cosmology is going to continue for several more years , providing a promising opportunity to test implications of quantum gravity in cosmological observations .this chapter provides a review , including the most recent advances , of loop quantization of cosmological spacetimes and phenomenological consequences .it is organized as follows .section [ sec2 ] provides a summary of loop quantization of the spatially flat , isotopic and homogeneous model sourced with a massless scalar field .this model was the first example of the rigorous quantization of a cosmological spacetime in lqc .because the quantization strategy underlying this model has been implemented for spacetimes with spatial curvature , anisotropies and also in presence of inhomogeneities , we discuss it in more detail . after laying down the classical framework in ashtekar variables ,we discuss the kinematical and dynamical features of loop quantization and the way classical singularity is resolved and replaced by a bounce .this section also briefly discusses the effective continuum spacetime description which provides an excellent approximation to the underlying quantum dynamics for states which are sharply peaked . for a specific choice of lapse , equal to the volume , and for the case of a massless scalar field one obtains an exactly solvable model of lqc ( slqc ) which yields important robustness results on the quantum bounce . in sec . 3, we briefly discuss the generalization of loop quantization and the resulting planck scale physics to spacetimes with spatial curvature , bianchi , and gowdy models .section [ sec4 ] is devoted to cosmological perturbations .we review the formulation of a quantum gravity extension of the standard theory of gauge invariant cosmological perturbations in lqc .these techniques provide the theoretical arena to study the origin of matter and gravitational perturbations in the early universe .this is the goal of section [ sec5 ] where we summarize the lqc extension of the inflationary scenario and discuss the quantum gravity corrections to physical observables .due to space limitations , it is difficult to cover various topics and details in this chapter .these include the earlier developments in lqc , the path integral formulation of lqc , entropy bounds , consistent quantum probabilities , application to black hole interiors , and various mathematical and numerical results in lqc .issues with inverse triad modifications , limitations of the earlier quantizations in lqc and the role of fiducial scalings , and issues related to quantization ambiguities and the resulting physical effects are also not discussed . for a review of some of these developments and issues in lqc ,we refer the reader to ref . and the above cited references .we are also unable to cover all the existing ideas to study lqc effects on cosmic perturbations .see for different approaches to that problem .further information can be found in the chapter `` loop quantum gravity and observations '' by barrau and grain in this volume , and in the review articles .related to lqc , there have been developments in spin foams and group field theory , for which we refer the reader to refs. 
.our convention for the metric signature is , we set but keep and explicit in our expressions , to emphasize gravitational and quantum effects .when numerical values are shown , we use planck units .in this section , we illustrate the key steps in loop quantization of homogeneous cosmological models using the example of spatially flat flrw spacetime sourced with a massless scalar field . though simple , this model is rich in physics and provides a blueprint for the quantization of models with spatial curvature , anisotropies and other matter fields .loop quantization of this spacetime was first performed in refs . where a rigorous understanding of the quantum hamiltonian constraint , the physical hilbert space and the dirac observables was obtained , and detailed physical predictions were extracted using numerical simulations .it was soon realized that this model can also be solved exactly .this feature serves as an important tool to test the robustness of the physical predictions obtained using numerical simulations . in the following , in sec .2.1 , we begin with the quantization of this cosmological model in the volume representation .we discuss the classical and the quantum framework , and the main features of the quantum dynamics .we also briefly discuss the effective spacetime description which captures the quantum dynamics in lqc for sharply peaked states to an excellent approximation and provides a very useful arena to understand various phenomenological implications .the exactly solvable model is discussed in sec .2.2 . in the following ,we outline the classical and the quantum framework of lqc in the spatially flat isotropic and homogeneous spacetime following the analysis of refs . . in literaturethis quantization is also known as ` quantization ' or ` improved dynamics ' . in the first partwe introduce the connection variables , establish their relationship with the metric variables , find the classical hamiltonian constraint in the metric and the connection variables and obtain the singular classical trajectories in the relational dynamics expressing volume as a function of the internal time .this is followed by the quantum kinematics , properties of the quantum hamiltonian constraint in the geometric ( volume ) representation , the physical hilbert space and a summary of the physical predictions . a comparison with the wheeler - dewitt theory is also provided both at the kinematical and the dynamical level . an effective description of the quantization performed here , following the analysis of refs. is discussed in sec . 2.1.3 .the spatially flat homogeneous and isotropic spacetime is typically considered with a spatial topology or of a 3-torus . for the non - compact spatial manifold extra careis needed to introduce the symplectic structure in the canonical framework because of the divergence of the spatial integrals . 
for the non - compact case one introduces a fiducial cell , which acts as an infra - red regulator .physical implications must be independent of the choice of this regulator , which is the case for the present analysis .such a cell is not required for the compact topology .the spacetime metric is given by s^2 = - t^2 + a^2 _ ab x^a x^b where is the proper time , denotes the scale factor of the universe and denotes the fiducial metric on the spatial manifold.with the matter source as the massless scalar field which serves as a physical clock in our analysis , instead of proper time it is natural to introduce a harmonic time satisfying since satisfies the wave equation .this corresponds to the choice of the lapse . the spacetime metric then becomes s^2 = - a^6 ^2 + a^2 ( x_1 ^ 2 + x_2 ^ 2 + x_3 ^ 2 ) . in terms of the physical spatial metric ,the physical volume of the spatial manifold is , where is the comoving volume of the fiducial cell in case the topology is , or the comoving volume of in case the topology is compact . due to the underlying symmetries of this spacetime , the spatial diffeomorphism constraint is satisfied and the only non - trivial constraint is the the hamiltonian constraint .let us first obtain this constraint in the metric variables .in such a formulation , the canonical pair of gravitational phase space variables consists of the scale factor and its conjugate , with ` dot ' denoting derivative with respect to the proper time .these variables satisfy .the matter phase space variables are and , which satisfy . in terms of the metric variables ,the hamiltonian constraint is given by [ hc1 ] c_h = - + 0 , which yields the classical friedman equation in terms of the energy density , , for the spatially flat frw model : [ classicalfried ] ( ) ^2 = . in order to obtain the classical hamiltonian constraint in terms of the variables used in lqg : the ashtekar - barbero su(2 ) connection and the conjugate triad , we first notice that due to the symmetries of the isotropic and homogeneous spacetime , the connection and triad can be written as a^i_a = c v_o^-1/3 ^i_a , e^a_i = p v_o^-2/3 ^a_i , where and denote the isotropic connection and triad , and and are the fiducial triads and co - triads compatible with the fiducial metric .the canonically conjugate pair satisfies , and is related to the metric variables as and , where is the barbero - immirzi parameter in lqg , whose value is set to using black hole thermodynamics . the modulus sign over the triad arises because of the two possible orientations , the choice of which does not affect physics in the absence of fermions .it is important to note that the above relation between the triad and the scale factor is true kinematically , whereas the relation between the isotropic connection and the time derivative of the scale factor is true only for the physical solutions of gr .it turns out that in the quantum theory , it is more convenient to work with variables and which are defined in terms of and as : [ bv ] : = , : = sgn(p ) , where sgn(p ) is depending on whether the physical and fiducial triads have the same orientation ( + ) , or the opposite ( - ) .the conjugate variables and satisfy , and in terms of which the classical hamiltonian constraint becomes [ hc3 ] c_h = - ^2 || + 0 .for a given value of and for a given triad orientation , hamilton s equations yield an expanding and a contracting trajectory , given by [ traj ] = + _ c where and are integration constants .both trajectories encounter a singularity . 
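For orientation, the key classical relations of this subsection are restated below in clean notation; they are the standard ones of the LQC literature, and convention-dependent prefactors (in particular those entering the definition of the (b, v) pair) are deliberately omitted:
\[
{\rm d}s^{2}=-a^{6}\,{\rm d}\tau^{2}+a^{2}\left({\rm d}x_{1}^{2}+{\rm d}x_{2}^{2}+{\rm d}x_{3}^{2}\right),\qquad
\left(\frac{\dot a}{a}\right)^{2}=\frac{8\pi G}{3}\,\rho,\qquad
\rho=\frac{p_{\phi}^{2}}{2V^{2}},
\]
where V=a^{3}V_{o} is the physical volume of the cell (or of the compact spatial manifold). The classical relational trajectories of eqn. [traj] then read
\[
\phi=\pm\frac{1}{\sqrt{12\pi G}}\,\ln|v|+\phi_{c},
\]
with the two signs corresponding to the expanding and contracting branches and the additive constant absorbing the normalisation of v; both branches reach v\to 0, i.e. the classical big bang or big crunch singularity.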
in the classical theory ,the existence of a singularity either in past of the expanding branch or in the future of the contracting branch is thus inevitable . to pass to the quantum theory ,the strategy is to promote the classical phase variables and the classical hamiltonian constraint to their quantum operator analogs .for the metric variables , this startegy leads to the wheeler - dewitt quantum cosmology . since, we wish to obtain a loop quantization of the cosmological spacetimes based on lqg we can not use the same strategy for the connection - triad variables . in lqg ,variables used for quantization are the holonomies of the connection along edges , and the fluxes of the triads along 2-surfaces .( see _ chapter 1_. ) for the homogeneous spacetimes , the latter turn out to be proportional to the triad .the holonomy of the symmetry reduced connection along a straight edge with fiducial length is , h_k^ ( ) = ( ) + 2 ( ) _ k where is a unit matrix and , where are the pauli spin matrices . due to the symmetries of the homogeneous spacetime ,the holonomy and flux are thus captured by functions of , and the triads respectively .since can take arbitrary values , are _ almost periodic functions _ of the connection . the next task is to find the appropriate representation of the abstract -algebra generated by almost periodic functions of an unrestricted real variable is almost periodic if holds to an arbitrary accuracy for infinitely many values of , such that translations are spread over the whole real line without arbitrarily large intervals .] of the connection : , and the triads .it turns out that there exists a unique kinematical representation of algebra generated by these functions in lqc .this result has parallels with existence of a unique irreducible representation of the holonomy - flux algebra in full lqg .the gravitational sector of the kinematical hilbert space underlying this representation in lqc is a space of square integrable functions on the bohr compactification of the real line : .use of holonomies in place of connections does not directly affect the matter sector .for this reason , the matter sector of the kinematical hilbert space is obtained by following the methods in the fock quantization .it is important to note the difference between the gravitational part of , and the one obtained by following the wheeler - dewitt procedure where the gravitational part of the kinematical hilbert space is . in lqc ,the normalizable states are the countable sum of , which satisfy : , where is a kronecker delta .this is in contrast to the wheeler - dewitt theory where one obtains a dirac delta .thus , the kinematical hilbert space in lqc is fundamentally different from one in the wheeler - dewitt theory .the intersection between the kinematical hilbert space in lqc and the wheeler - dewitt theory consists only of the zero function . since the system has only a finite degrees of freedom, one may wonder why the the von - neumann uniqueness theorem , which leads to a unique schrdinger representation in quantum mechanics , does not hold .it turns out that for the theorem to be applicable in lqc , should be weakly continuous in .this condition is not met in lqc , and the von - neumann theorem is bypassed .( for further details on this issue , we refer the reader to ref . ) . the action of the operators and on states is by multiplication and differentiation respectively . 
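For reference, the holonomy and the kinematical inner product that the garbled expressions above refer to read, in standard LQC notation with \tau_{k}=-\tfrac{i}{2}\sigma_{k},
\[
h_{k}^{(\mu)}=\cos\!\left(\frac{\mu c}{2}\right)\mathbb{I}+2\,\sin\!\left(\frac{\mu c}{2}\right)\tau_{k},\qquad
N_{\mu}(c)=e^{i\mu c/2},\qquad
\langle N_{\mu_{1}}|N_{\mu_{2}}\rangle=\delta_{\mu_{1},\mu_{2}},
\]
where \delta_{\mu_{1},\mu_{2}} is a Kronecker (not Dirac) delta. This is precisely the statement made above that distinguishes the kinematical Hilbert space of LQC from that of the Wheeler-DeWitt theory.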
on the states in the triad representationlabelled by eigenvalues of , the action of is translational : n _ ( ) = ( + ) , where is a constant instead of to avoid confusion with the argument of the wavefunction .] , and acts as : p ( ) = ( ) .before we proceed to the quantum hamiltonian constraint , we note that the change in the orientation of the triads which does not lead to any physical consequences in the absence of fermions corresponds to a large gauge transformation by a parity operator which acts on as : .the physical states in the absence of fermions are therefore required to be symmetric , satisfying . to obtain the dynamics in the quantum theory , we start with the hamitonian constraint in full lqg in terms of triads and the field strength of the connection : and }^je^a_ie^b_j ] takes place entirely in ; it does not involve inhomogeneous perturbations .dynamics in is generated by a true hamiltonian , which is obtained from the second order piece of the scalar constraint of general relativity by keeping only terms which are quadratic in first order perturbations .this hamiltonian has the form , where in fourier space [ pert - ham ] _2^()[n]= d^3 k ( |^()_|^2 + |_|^2 ) , [ pert - hams ] _ 2^()[n]= d^3 k ( |^()_|^2 + a ( k^2 + ) |_|^2 ) . here ^2 ] in contrast acts on both states , since it contains background as well as perturbation operators . the first term in each side of the previous equalitycancel out by virtue of the evolution of the background state ( [ qhc5 ] ) , and we are left with _ hom(i _ _pert)= ^()_2[n _ ] ( _ hom_pert ) this equation tells us that in the test field approximation the right hand side is proportional to , and we therefore can take the inner product of this equation with without losing information .the last equation then reduces to [ qftqst ] i _ _ pert=^()_2[n _ ] _ pert , where the expectation value is taken in the physical hilbert space of the homogeneous sector . in other words ,as long as the test field approximation holds , the evolution of perturbation is obtained from ] .this is equivalent of saying that the amount of expansion from the bounce to the present time is such that a wavelength of size twice the planck length at the bounce is red - shifted to approximately at the present time .although there are no mechanisms based of precise arguments to explain why such coincidence should happen , ref . 
has provided concrete physical principles that lead to this situation .furthermore , observations have detected deviations from the standard featureless scale invariant spectrum for the low region of the power spectrum , indicating that new physics may be needed to account for the observed anomalies .although the associated statistical significant of these anomalies is inconclusive , it is quite tempting to think that the may be visible traces of new physics , as indeed emphasized in .a natural question is then whether the lqc bounce preceding inflation provides a suitable mechanism to quantitatively account for the observed anomalies .the planck team has paid particular attention to two anomalous features in the cmb , namely : i ) a dipolar asymmetry arising from fact that the averaged power spectrum is larger in a given hemisphere of the cmb than in the other , and ; ii ) a power suppression at large scale , corresponding to a deficit of correlations at angular multipoles as compared to the predictions of a scale invariant spectrum .we now briefly summarize existing ideas related to these anomalies in lqc .a primordial dipolar asymmetry requires _ correlations _ between different wave - numbers in the power spectrum .such correlations do not arise at leading order in models for which the background is homogenous , as the scenario discussed in the last two sections : the two - point function in fourier space is diagonal .this motivated the authors of ref . to go beyond leading order and discuss the corrections the primordial spectrum acquires from the three - point function ( i.e. corrections from non - gaussiantiy ) . as first point out in , non - gaussian effects in the two - point functioncould indeed be responsible of the observed dipolar modulation in the cmb . in ref . 
this idea was implemented in lqc .the non - gaussianity that inflation generates as a consequence of the pre - inflationary lqc bounce were computed and its effect on the primordial power spectrum were obtained .the result is that there exist values of the free parameters the value of the inflaton field at the bounce ( or equivalently , ) and its mass make the non - gaussian modulation of the power spectrum to induce a scale dependent dipolar modulation in the cmb that agrees with the observed anomaly .furthermore , this mechanism also offers the possibility to account for the power suppression , since a _ monopolar _modulation appears , in addition to the dipole , at large angular scales , which could reverse the enhancement of power shown in figure [ spectrum ] .the analysis in included the non - gaussianity generated during inflation , but a contribution to the three - point function from the bounce is also expected .however , this contribution to non - gaussianity is significantly more challenging to compute , even numerically , because of the absence of the slow - roll approximation normally used to simplify the computations in inflation .work is in progress to complete this computation with the goal of establishing that the non - gaussian modulation in lqc is a viable mechanism to simultaneously account for the two observed anomalies .other ideas have also recently appear to account for the power suppression at large scales .they are related to the choice of initial state for scalar perturbation at the time of the bounce mentioned above in this section .the statement in these works is that one can find physical criteria to select a preferred notion of ground state at the bounce which , when evolved until the end of inflation , produce a power spectrum which is suppressed compared to the standard scale invariant result for low values of .lqc provides a remarkable example of successful quantization of the sector of classical gr spacetimes with symmetries observed at cosmological scales .it is based on a precise mathematical framework , supplemented with sophisticated state of the art numerical techniques .one starts by showing that the requirement of background independence is strong enough to uniquely fix the quantum representation , just as the poincar symmetry symmetry fixes the representation of the observable algebra in the standard quantum theory of free fields .one then uses this preferred representation .this procedure was first applied to a spatially flat flrw background and the resulting quantum geometry was analyzed in detail . as described in this chapter , the final picture realizes many of the intuition that physicists , starting from wheeler , have had about non - perturbative quantum gravity .furthermore , interesting questions can now be answered in a precise fashion in lqc .of particular interest is the way in which quantum effects are able to overwhelm the gravitational attraction and resolve the big bang singularity . while the lqc non - perturbative corrections dominate the evolution in the planck regime and remove the big bang singularity , they disappear at low energies restoring agreement with the classical descriptionthis is a non trivial result .the analysis has been extended to more complicated models containing spacial curvature , anisotropies , and even models with infinitely many degrees of freedom such as the gowdy spacetime , adding significant robustness to the emergent physical picture . 
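As a concrete illustration of the effective spacetime description referred to throughout this chapter, the sketch below uses the closed-form solution of the effective equations of the improved (\bar\mu) dynamics for the flat model with a massless scalar field. The effective Friedmann equation H^{2}=(8\pi G/3)\,\rho\,(1-\rho/\rho_{\rm sup}), with \rho_{\rm sup}\approx 0.41\,\rho_{\rm Pl}, is the form usually quoted in the LQC literature; the numerical constants chosen below are illustrative, and the check is only meant to display the bounce and the recovery of classical behaviour at low densities.

```python
# Sketch: closed-form solution of the effective LQC dynamics for a flat FLRW
# universe sourced by a massless scalar field, in Planck units (G = hbar = c = 1)
# and in cosmic (proper) time t.  The effective Friedmann equation,
#   H^2 = (8*pi/3) * rho * (1 - rho/RHO_SUP),   rho = P_PHI**2 / (2 v**2),
# is solved by v(t) = sqrt(V_B**2 + 12*pi*P_PHI**2*t**2): a bounce at t = 0 with
# rho(t) = RHO_SUP / (1 + 24*pi*RHO_SUP*t**2).  Constants are illustrative.
import numpy as np

RHO_SUP = 0.41                            # maximal (bounce) density, ~0.41 rho_Pl
P_PHI = 5000.0                            # conserved scalar-field momentum
V_B = P_PHI / np.sqrt(2.0 * RHO_SUP)      # bounce volume, where rho = RHO_SUP

def volume(t):
    return np.sqrt(V_B**2 + 12.0 * np.pi * P_PHI**2 * t**2)

def rho(t):
    return P_PHI**2 / (2.0 * volume(t)**2)

# finite-difference check that the solution satisfies the effective equation
# (the residual is limited by the finite-difference step, not by the solution)
t = np.linspace(-10.0, 10.0, 20001)
v = volume(t)
H = np.gradient(v, t) / (3.0 * v)         # H = (dv/dt) / (3 v)
residual = H**2 - (8.0 * np.pi / 3.0) * rho(t) * (1.0 - rho(t) / RHO_SUP)
print(f"max |residual| of the effective Friedmann equation ~ {np.abs(residual).max():.1e}")
print(f"rho at the bounce = {rho(0.0):.2f} (= RHO_SUP); rho(t = 10) = {rho(10.0):.1e}")
```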
using effective spacetime description of lqc, the problem of singularities in general has been addressed , which provides important insights on the generic resolution of strong curvature singularities .one can further extend the regime of applicability of lqc by including cosmological perturbations .in standard cosmology one describes scalar and tensor curvature perturbations by quantum fields propagating in a classical flrw spacetime .this is the theoretical framework qft in classical spacetimes on which the phenomenological explorations of the early universe rely , e.g. in the inflationary scenario . in this chapterwe have reviewed how such a framework can be generalized by replacing the classical spacetime by the quantum geometry provided by lqc .this framework provides a rich environment to analyze many interesting questions both conceptually and at the phenomenological level .it offers the theoretical arena to explore the evolution of scalar and tensor perturbations in the early universe , and to provide a self - consistent quantum gravity completion of the standard cosmological scenarios .it is our view that the level of detail and mathematical rigor attained in lqc is uncommon in quantum cosmology .the new framework has become a fertile arena to obtain new mechanisms that could explain some of the anomalous features observed in the cmb , which indicate that physics beyond inflation is required to understand the large scale correlations in the cmb . since lqc is a quantization of classical spacetimes with symmetries that are appropriate to cosmology , the theoretical framework shares the limitations of the symmetry reduced quantization strategy .symmetry reduction often entails a drastic simplification , and therefore one may loose important features of the theory by restricting the symmetry prior to quantization .this is an important issue which has attracted efforts from different fronts .first let us recall that the bkl conjecture further supports the idea that quantum cosmological models are very useful in capturing the dynamics of spacetime near the singularities .within lqc itself , the concern was initially alleviated by checking that models with larger complexity , such as anisotropic bianchi i model , correctly reproduced the flrw quantization previously obtained , when the anisotropies are ` frozen ' at the quantum level .this test is even more remarkable when applied to models that have infinitely many degrees of freedom to begin with , as it is the case of the gowdy model .more generally , there are interesting recent results on establishing a connection between lqc and lqg .these include _ quantum - reduced _ loop quantum gravity , where the main idea is to capture symmetry reduction at the quantum level in lqg and then pass to the cosmological sector , and group field theory cosmology .promising results have been obtained in these approaches . as examples , improved dynamics as the one used in isotropic lqc has been found in quantum - reduced loop quantum gravity , and evidence of lqc like evolution and bounce have been reported in group field theory cosmology .it is rather encouraging that results from different directions seem to yield a consistent picture of the planck scale physics as has been extensively found in lqc another important ingredient in lqc is the process of de - parameterization . 
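To make the de-parameterisation construction referred to here explicit, recall the structure used in the flat model of Sec. [sec2]: the quantum constraint takes a Klein-Gordon-like form in which the massless scalar \phi plays the role of time,
\[
\partial_{\phi}^{2}\Psi(v,\phi)=-\,\Theta\,\Psi(v,\phi),
\]
with \Theta a positive, self-adjoint difference operator acting only on the volume dependence, and the physical sector is obtained by restricting to positive-frequency solutions,
\[
-\,i\,\partial_{\phi}\Psi(v,\phi)=\sqrt{\Theta}\,\Psi(v,\phi),
\]
on which the Dirac observables \hat p_{\phi} and the volume at a fixed instant of relational time, \widehat{V}|_{\phi}, act. These equations are restated from the standard LQC literature purely for orientation, ahead of the discussion that follows.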
in the absence of a fundamental time variable in quantum gravity , in lqcone follows a relational - time approach in which one of the dynamical variables plays the role of time , and one studies the evolution of other degrees of freedom with respect to it . as explained in section [ sec2 ] , in most of the lqc literature oneuses a massless scalar field as time variable .an important question is how the physical results depend on the variable chosen as a time , i.e. if quantum theories constructed from different relational times are unitarily related .this is an age old question in quantum cosmology , but so far has not been systematically addressed .it is worth commenting on some of the directions where significant progress has been made in lqc , in contrast to the earlier works in quantum cosmology . the first one deals with a rigorous treatment of fundamental questions in quantum cosmology about the probability of events such as the probability for encountering a singularity or a bounce .these are hard questions whose answers had been elusive due to the lack of sufficient control over the physical hilbert space structure , including properties of observables and a notion of time to define histories .thanks to the quantization of isotropic and homogeneous spacetimes using a scalar field as a clock , a consistent histories formulation can be completed both in the wheeler - dewitt theory and lqc .a covariant generalization of these results has also been pursued . using exactly soluble model of slqc computation of class operators , decoherence functional and probability amplitudes can be performed .it turns out that in the wheeler - dewitt theory the probability for bounce turns out to be zero even if one considers an arbitrary superposition of expanding and contracting states .the probability of bounce turns out to be unity in lqc .these developments show that not only lqc has been successful in overcoming problem of singularities which plague wheeler - dewitt theory , it has also established an analytical structure which has been used to answer foundational questions both in lqc and the wheeler - dewitt theory . the second direction where developments in lqc are expected to have an impact beyond lqg are in the development of sophisticated numerical algorithms to understand the evolution in deep planck regime for a wide variety of initial states , including with very large spreads .some of these techniques have been exported from traditional numerical relativity ideas which are modified and applied in the quantum geometric setting . using high performance computing, these methods promise to yield a detailed picture of the physics of the planck scale .these techniques can be replicated in a straightforward way for other quantum gravity approaches .more importantly they provide a platform to understand the structure of quantum spacetime analogous to the numerical works in classical gravity . a deeper understanding of how quantum gravitational effects modify the bkl conjecture and change our understanding of approach to singularity in the classical theory is a promising arena .interesting results in this direction have started appearing , including on singularity resolution in bianchi models and quantum kasner transitions across bounces and selection rules on possible structures near the to be classical singularities .finally , we note that sometimes the limitations of lqc have been used to shed doubts on its results . 
these arguments , mainly articulated by the authors of , claim that a fully covariant approach with validity beyond symmetry reduced scenarios produces physical results inequivalent to those obtained from lqc .in particular , it is argued that , in presence of inhomogeneities , there is an unavoidable change of signature , from lorentzian to euclidean , in an effective theory .the authors of this chapter disagree with the conclusions reached in and subsequent papers along these lines .it our view , although the conceptual points raised by those authors are indeed interesting , their analysis relies in a series of assumptions and approximations that make their results far from being conclusive .furthermore , recent results on the validity of the effective theories show that care must be taken in generalizing certain conclusions from the effective description to the full quantum theory .we are grateful to abhay ashtekar , aurelien barau , chris beetle , boris bolliet , b. bonga , alejandro corichi , david craig , peter diener , jonathan engel , rodolfo gambini , julien grain , brajesh gupt , anton joe , wojciech kaminski , alok laddha , jerzy lewandowski , esteban mato , miguel megevand , jose navarro - salas , william nelson , javier olmedo , leonard parker , tomasz pawlowski , jorge pullin , sahil saini , david sloan , victor taveras , kevin vandersloot , madhavan varadarajan , sreenath vijayakumar , and edward wilson - ewing for many stimulating discussions and insights .this work is supported in part by nsf grants phy1068743 , phy1403943 , phy1404240 , phy1454832 and phy-1552603 .this work is also supported by a grant from john templeton foundation .the opinions expressed in this publication are those of authors and do not necessarily reflect the views of john templeton foundation .a. ashtekar , t. pawlowski and p. singh , quantum nature of the big bang , phys .lett . * 96 * 141301 ( 2006 ) , a. ashtekar , t. pawlowski and p. singh , quantum nature of the big bang : an analytical and numerical investigation , phys . rev . *d73 * 124038 ( 2006 ) .m. martin - benito , g. a. mena marugan , t. pawlowski , loop quantization of vacuum bianchi i cosmology , phys .rev . d**78 * * 064008 ( 2008 ) ; + physical evolution in loop quantum cosmology : the example of vacuum bianchi i , phys .rev . d**80 * * 084038 ( 2009 ) .m. martin - benito , l. j. garay and g. a. mena marugan , hybrid quantum gowdy cosmology : combining loop and fock quantizations , phys .rev . d**78 * * 083516 ( 2008 ) ; + l. j. garay , m. martin - benito , g. a. mena marugan , inhomogeneous loop quantum cosmology : hybrid quantization of the gowdy model , phys .rev . d**82 * * 044048 ( 2010 ) .a. ashtekar , m. campiglia and a. henderson , casting loop quantum cosmology in the spin foam paradigm , class .* 27 * , 135020 ( 2010 ) ; path integrals and the wkb approximation in loop quantum cosmology , phys .d * 82 * , 124043 ( 2010 ) .m. fernandez - mendez , g.a .mena marugan , and j. olmedo , effective dynamics of scalar perturbations in a flat friedmann - robertson - walker spacetime in loop quantum cosmology , phys .d * 89 * ( 2014 ) 044041 .e. wilson - ewing , lattice loop quantum cosmology : scalar perturbations , class.quant.grav .29 215013 ( 2012 ) ; the matter bounce scenario in loop quantum cosmology , ` arxiv:1211.6269 ` . m. bojowald and g. m. hossain , loop quantum gravity corrections to gravitational wave dispersion , phys .d * 77 * 023508 ( 2008 ) .j. grain , a. barrau , t. cailleteau and j. 
mielczarek , observing the big bounce with tensor modes in the cosmic microwave background : phenomenology and fundamental lqc parameters , phys .rev . d**82 * * 123520 ( 2010 ) . c. rovelli and f. vidotto , on the spinfoam expansion in cosmology , class .* 27 * , 145005 ( 2010 ) ; e. bianchi , c. rovelli and f. vidotto , towards spinfoam cosmology , phys . rev .d * 82 * , 084035 ( 2010 ) .a. ashtekar , j. lewandowski , d. marolf , j. mouro and t. thiemann , quantization of diffeomorphism invariant theories of connections with local degrees of freedom .* 36 * 64566493 ( 1995 ) d. cartin , g. khanna , matrix methods in loop quantum cosmology , proceedings of quantum gravity in americas iii , penn state ( 2006 ) .http://igpg.gravity.psu.edu/events/conferences/quantumgravityiii/proceedings.shtm a. ashtekar and t. a. schilling , geometrical formulation of quantum mechanics . in : _ on einstein s path : essays in honor of engelbert schcking _ , harvey , a. ( ed . ) ( springer , new york ( 1999 ) ) , 2365 , ` arxiv : gr - qc/9706069 ` j. b. hartle , spacetime quantum mechanics and the quantum mechanics of spacetime , _ gravitation and quantizations : proceedings of the 1992 les houches summer school _ , ed .by b. julia and j. zinc - justin , north holland , amsterdam ( 1995 ) .b. gupt and p. singh , quantum gravitational kasner transitions in bianchi - i spacetime , phys .d * 86 * , 024034 ( 2012 ) ` arxiv:1205.6763 ` .a. ashtekar , a. henderson and d. sloan , hamiltonian formulation of general relativity and the belinksii , khalatnikov , lifshitz conjecture , class .* 26 * 052001 ( 2009 ) ; + a hamiltonian formulation of the bkl conjecture , phys .rev . d**83 * * 084024 ( 2011 ) .r. gambini , j. pullin , hawking radiation from a spherical loop quantum gravity black hole , class .grav . 31 115003 ( 2014 ) ; a scenario for black hole evaporation on a quantum geometry , arxiv:1408.3050 .i. agullo and l. parker , non - gaussianities and the stimulated creation of quanta in the inflationary universe , phys .rev . d**83 * * 063526 ( 2011 ) ; stimulated creation of quanta during inflation and the observable universe gen .43 , 2541 - 2545 ( 2011 ) .i. agullo , j. navarro - salas and l. parker , enhanced local - type inflationary trispectrum from a non - vacuum initial state , jcap * 1205 * , 019 ( 2012 ) .j. d. barrow , the premature recollapse problem in closed inflationary universes , nucl . phys .b296 ( 1988 ) 697?709. j. d. barrow and s. cotsakis , inflation and the conformal structure of higher order gravity theories , phys .b 214 , 515 ( 1988 ) . j. engle , relating loop quantum cosmology to loop quantum gravity : symmetric sectors and embeddings , class .* 24 * , 5777 ( 2007 ) ; piecewise linear loop quantum gravity , class .* 27 * , 035003 ( 2010 )
In the last decade, progress on quantization of homogeneous cosmological spacetimes using techniques of loop quantum gravity has led to insights on various fundamental questions and has opened new avenues to explore Planck-scale physics. These include the problem of singularities and their possible generic resolution, constructing viable non-singular models of the very early universe, and bridging quantum gravity with cosmological observations. This progress, which has resulted from an interplay of sophisticated analytical and numerical techniques, has also led to valuable hints on loop quantization of black hole and inhomogeneous spacetimes. In this review, we provide a summary of this progress while focusing on concrete examples of the quantization procedure and phenomenology of cosmological perturbations.
in the large scale region of income , profits , assets , sales and etc ( ) , the cumulative probability distribution function ( pdf ) obeys a power - law for which is larger than a certain threshold : this power - law and the exponent are called pareto s law and pareto index , respectively .the power - law distribution is well investigated by using various models in econophysics . recently , fujiwara et al . find that pareto s law can be derived kinematically from the law of detailed balance and gibrat s law which are also observed in the large scale region . in the proof ,they assume no model and only use these two laws in empirical data .the detailed balance is time - reversal symmetry ( ) : here and are two successive incomes , profits , assets , sales , etc . and is the joint pdf .gibrat s law states that the conditional pdf of growth rate is independent of the initial value : here growth rate is defined as the ratio and is defined by using the pdf and the joint pdf as . in ref . , the kinematics is extended to dynamics by analyzing data on the assessed value of land in japan . in the non - equilibrium systemwe propose an extension of the detailed balance ( detailed quasi - balance ) as follows from gibrat s law ( [ gibrat ] ) and the detailed quasi - balance ( [ detailed quasi - balance ] ) , we derive pareto s law with annually varying pareto index . the parameters , are related to the change of pareto index and the relation is confirmed in the empirical data nicely .these findings are important for the progress of econophysics . above derivations are , however , valid only in the large scale region where gibrat s law ( [ gibrat ] ) holds .it is well known that pareto s law is not observed below the threshold .the reason is thought to be the breakdown of gibrat s law .the breakdown of gibrat s law in empirical data is reported by stanley s group .takayasu et al . and aoyama et al . also report that gibrat s law does not hold in the middle scale region by using data of japanese companies . in ref . , gibrat s law is extended in the middle scale region by employing profits data of japanese companies in 2002 and 2003 .we approximate the conditional pdf of profits growth rate as so - called tent - shaped exponential functions by measuring we have assumed the dependence to be and have estimated the parameters as from the detailed balance ( [ detailed balance ] ) and extended gibrat s law ( [ t ] ) ( [ mu ] ) , we have derived the pdf in the large and middle scale region uniformly as follows where .this is confirmed in the empirical data . in this study , we prove that the dependence of ( [ t ] ) with is unique if the pdf of growth rate is approximated by tent - shaped exponential functions ( [ tent - shaped1 ] ) , ( [ tent - shaped2 ] ) .this means , consequently , that the pdf in the large and middle scale region ( [ handm ] ) is also unique if the dependence of is negligible .we confirm these approximations in profits data of japanese companies 2003 and 2004 and show that the pdf ( [ handm ] ) fits with empirical data nicely by the refined data analysis .in the database , pareto s law ( [ pareto ] ) is observed in the large scale region whereas it fails in the middle one ( fig .[ profitdistribution ] ) . 
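As an illustration of the tail fit implied by Pareto's law, the sketch below generates a synthetic sample with a power-law tail and recovers the Pareto index with the standard maximum-likelihood (Hill-type) estimator. The threshold, index, and sample size are illustrative choices, not values taken from the database analyzed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "profits" with a Pareto tail: P(X > x) ~ (x/x_th)^(-mu) for x > x_th.
mu_true, x_th = 1.0, 1.0e4                      # Pareto index and threshold (arbitrary units)
profits = x_th * (rng.pareto(mu_true, size=50_000) + 1.0)

# Maximum-likelihood (Hill-type) estimate of the Pareto index from the tail.
tail = profits[profits > x_th]
mu_hat = tail.size / np.sum(np.log(tail / x_th))
se_hat = mu_hat / np.sqrt(tail.size)            # asymptotic standard error

print(f"estimated Pareto index: {mu_hat:.3f} +/- {se_hat:.3f} (true value {mu_true})")
```

On real data one would scan the threshold and check the stability of the estimate, which is essentially how the large-scale (Pareto) region is identified in the text.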
at the same time, it is confirmed that the detailed balance ( [ detailed balance ] ) holds not only in the large scale region and but also in all regions and ( fig .[ profit2003vsprofit2004 ] ) .is different from one in ref .the reason is that the identification of profits in 2002 and 2003 in ref . was partly failed . as a result ,the pdfs of profits growth rate are slightly different from those in this paper .the conclusion in ref . is , however , not changed . ]the breakdown of pareto s law is thought to be caused by the breakdown of gibrat s law in the middle scale region .we examine , therefore , the pdf of profits growth rate in the database . in the analysis , we divide the range of into logarithmically equal bins as $ ] thousand yen with . in fig .[ profitgrowthratell ] [ profitgrowthrateh ] , the probability densities for are expressed in the case of , , and , respectively . the number of the companies in fig .[ profitgrowthratell ] [ profitgrowthrateh ] is " , " , " and " , respectively .here we use the log profits growth rate . the probability density for defined by related to that for by from fig .[ profitgrowthratell ] [ profitgrowthrateh ] , is approximated by linear functions of as follows these are expressed as tent - shaped exponential functions ( [ tent - shaped1 ] ) , ( [ tent - shaped2 ] ) by .in addition , the dependence of ( ) is negligible for .the validity of these approximations should be checked against the results .in this section , we show that the dependence of ( [ t ] ) is unique under approximations ( [ tent - shaped1 ] ) , ( [ tent - shaped2 ] ) ( ( [ approximation1 ] ) , ( [ approximation2 ] ) ) . due to the relation of under the change of variables from to ,these two joint pdfs are related to each other . by the use of this relation , the detailed balance ( [ detailed balance ] )is rewritten in terms of as follows : substituting the joint pdf for the conditional probability , the detailed balance is expressed as under approximations ( [ tent - shaped1 ] ) and ( [ tent - shaped2 ] ) , the detailed balance is reduced to for . by using the notation , the detailed balance becomes by expanding eq .( [ de0 ] ) around , the following differential equation is obtained \tilde{p}(x ) + x~ { \tilde{p}}^{'}(x ) = 0,\end{aligned}\ ] ] where denotes .the same differential equation is obtained for .the solution is given by where and . in order to make the solution ( [ handm3 ] ) around satisfies( [ de0 ] ) , the following equation must be valid for all : \ln r~. \label{kouho}\end{aligned}\ ] ] the derivative of eq .( [ kouho ] ) with respect to is \ln r~. \label{kouho2}\end{aligned}\ ] ] by expanding eq .( [ kouho2 ] ) around , following differential equations are obtained + { t_{+}}^{'}(x)+{t_{-}}^{'}(x)=0~,\\ & & 2~{t_{+}}^{'}(x)+{t_{-}}^{'}(x)-3x~{t_{-}}^{''}(x ) -x^2~\bigl[{t_{+}}^{(3)}(x)+2~{t_{-}}^{(3)}(x ) \bigr]=0~. \end{aligned}\ ] ] the solutions are given by to make these solutions satisfy eq .( [ kouho ] ) , the coefficients must be and .finally we conclude that is uniquely expressed as eq .( [ t ] ) with .under approximations ( [ tent - shaped1 ] ) and ( [ tent - shaped2 ] ) ( ( [ approximation1 ] ) and ( [ approximation2 ] ) ) , we obtain the profits pdf where we use the relation ( [ mu ] ) confirmed in ref . . 
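The measurement of the tent-shaped growth-rate densities described above can be mimicked on synthetic data. In the sketch below the log growth rate is drawn from a Laplace distribution (whose log-density is exactly tent-shaped) independently of the initial profit, so Gibrat's law holds by construction and the fitted slopes t_+ and t_- come out roughly constant across the x1 bins; in the empirical data they acquire the x1 dependence discussed in the text. All scales and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic profit pairs (x1, x2): r = log10(x2/x1) is Laplace distributed,
# so log10 q(r|x1) is exactly tent-shaped.  Values are illustrative only.
n = 200_000
x1 = 10.0 ** rng.uniform(4, 7, size=n)           # initial profits, 10^4 .. 10^7
r = rng.laplace(loc=0.0, scale=0.2, size=n)      # log10 growth rate
x2 = x1 * 10.0 ** r

# Logarithmically equal bins in x1, as in the empirical analysis.
edges = 10.0 ** np.arange(4.0, 7.5, 0.5)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (x1 >= lo) & (x1 < hi)
    hist, bins = np.histogram(r[sel], bins=60, range=(-1.5, 1.5), density=True)
    mids = 0.5 * (bins[:-1] + bins[1:])
    ok = hist > 0
    # slopes of log10 q(r|x1): q ~ 10^(-t_plus * r) for r > 0, 10^(+t_minus * r) for r < 0
    t_plus = -np.polyfit(mids[ok & (mids > 0)], np.log10(hist[ok & (mids > 0)]), 1)[0]
    t_minus = np.polyfit(mids[ok & (mids < 0)], np.log10(hist[ok & (mids < 0)]), 1)[0]
    print(f"x1 in [{lo:.0e}, {hi:.0e}):  t_+ = {t_plus:.2f}   t_- = {t_minus:.2f}")
```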
in fig .[ x1vst ] , hardly responds to for .this means that gibrat s law holds in the large profits region .on the other hand , linearly increases and linearly decreases symmetrically with for .the parameters are estimated as eq .( [ alphah ] ) and ( [ alpham ] ) with ( and thousand yen .because the dependence of ( ) is negligible in this region , the profits pdf is reduced to eq .( [ handm ] ) .we observe that this pdf fits with the empirical data nicely in fig .[ profitdistributionfit ] .notice that the estimation of in fig .[ x1vst ] is significant .if we take a slightly different , the pdf ( [ handm ] ) can not fit with the empirical data ( or in fig .[ profitdistributionfit ] for instance ) .in this paper , we have shown the proof that the expression of extended gibrat s law is unique and the pdf in the large and middle scale region is also uniquely derived from the law of detailed balance and the extended gibrat s law . in the proof , we have employed two approximations that the pdf of growth rate is described as tent - shaped exponential functions and that the value of the origin of growth rate is constant .these approximations have been confirmed in profits data of japanese companies 2003 and 2004 .the resultant pdf of profits has fitted with the empirical data with high accuracy .this guarantees the validity of the approximations . for profits data we have used, the distribution is power in the large scale region and log - normal type in the middle one .this does not claim that all the distributions in the middle scale region are log - normal types .for instance , the pdf of personal income growth rate or sales of company is different from tent - shaped exponential functions . in this case, the extended gibrat s law takes a different form .in addition , we describe no pdf in the small scale region . because the dependence of in this region is not negligible ( fig .[ profitgrowthratell ] ) . against these restrictions , the proof andthe method in this paper is significant for the investigation of distributions in the middle and small scale region . we will report the study about these issues in the near futurethe author is grateful to the yukawa institute for theoretical physics at kyoto university , where this work was initiated during the yitp - w-05 - 07 on `` econophysics ii physics - based approach to economic and social phenomena '' , and especially to professor h. aoyama for the critical question about the author s presentation .thanks are also due to dr .y. fujiwara for a lot of useful discussions and comments .99 v. pareto , cours deconomique politique , macmillan , london , 1897 . r.n .mategna , h.e .stanley , an introduction to econophysics , cambridge university press , uk , 2000 . y. fujiwara , w. souma , h. aoyama , t. kaizoji , m. aoki , cond - mat/0208398 , physica a321 ( 2003 ) 598 ; + h. aoyama , w. souma , y. fujiwara , physica a324 ( 2003 ) 352 ; + y. fujiwara , c.d .guilmi , h. aoyama , m. gallegati , w. souma , cond - mat/0310061 , physica a335 ( 2004 ) 197 ; + y. fujiwara , h. aoyama , c.d .guilmi , w. souma , m. gallegati , physica a344 ( 2004 ) 112 ; + h. aoyama , y. fujiwara , w. souma , physica a344 ( 2004 ) 117 .r. gibrat , les inegalites economiques , paris , sirey , 1932 .a. ishikawa , annual change of pareto index dynamically deduced from the law of detailed quasi - balance , physics/0511220 , to appear in physica a ; + a. ishikawa , dynamical change of pareto index in japanese land prices , physics/0607131 .badger , in : b.j .west ( ed . 
) , mathematical models as a tool for the social science , gordon and breach , new york , 1980 , p. 87 ; + e.w .montrll , m.f .shlesinger , j. stat .32 ( 1983 ) 209 .stanley , l.a.n .amaral , s.v .buldyrev , s. havlin , h. leschhorn , p. maass , m.a .salinger , h.e .stanley , nature 379 ( 1996 ) 804 ; + l.a.n .amaral , s.v .buldyrev , s. havlin , h. leschhorn , p. maass , m.a .salinger , h.e .stanley , m.h.r .stanley , j. phys .( france ) i7 ( 1997 ) 621 ; + s.v .buldyrev , l.a.n .amaral , s. havlin , h. leschhorn , p. maass , m.a .salinger , h.e .stanley , m.h.r .stanley , j. phys .( france ) i7 ( 1997 ) 635 ; + l.a.n .amaral , s.v .buldyrev , s. havlin , m.a .salinger , h.e .stanley , phys .80 ( 1998 ) 1385 ; + y. lee , l.a.n .amaral , d. canning , m. meyer , h.e .stanley , phys .81 ( 1998 ) 3275 ; + d. canning , l.a.n .amaral , y. lee , m. meyer , h.e .stanley , economics lett .60 ( 1998 ) 335 .h. takayasu , m. takayasu , m.p .okazaki , k. marumo , t. shimizu , cond - mat/0008057 , in : m.m .novak ( ed . ) , paradigms of complexity , world scientific , 2000 , p. 243 .h. aoyama , ninth annual workshop on economic heterogeneous interacting agents ( wehia 2004 ) ; + h. aoyama , y. fujiwara , w. souma , the physical society of japan 2004 autumn meeting .a. ishikawa , physics/0508178 , physica a367 ( 2006 ) 425 .a. ishikawa , physics/0506066 , physica a363 ( 2006 ) 367 .tokyo shoko research , ltd ., http://www.tsr - net.co.jp/. a. dr , v.m .yakovenko , cond - mat/0103544 , physica a299 ( 2001 ) 213 ; + a.c .silva , v.m .yakovenko , europhys .69 ( 2005 ) 304 .
We report a proof that the expression of the extended Gibrat's law is unique, and that the probability distribution function (PDF) is also uniquely derived from the law of detailed balance and the extended Gibrat's law. The proof employs two approximations: that the PDF of the growth rate is described by tent-shaped exponential functions, and that the value of this PDF at the origin of the growth rate is constant. These approximations are confirmed in profits data of Japanese companies in 2003 and 2004. The resulting profits PDF fits the empirical data with high accuracy, which supports the validity of the approximations. PACS code: 04.60.Nc. Keywords: econophysics; Pareto law; Gibrat law; detailed balance.
in this article we review a method for the evaluation of a certain class of integrals which occurr in many physical problems .the method that we propose has been used to obtain arbitrarily precise approximations to the period of a classical oscillator , to the deflection angle of light by the sun and to the precession of the perihelion of a planet in general relativity , to the spectrum of a quantum potential and to certain mathematical functions , such as the riemann zeta function .this paper is organized in three sections : in section [ method ] we outline the method and explain its general features ; in section [ appli ] we discuss different applications of the method and present numerical results ; finally , in section [ conclu ] we draw our conclusions .we consider the problem of calculating integrals of the form : _ = _x_-^x_+ ^ g(x ) dx [ eq_1_1 ] where and for .we also ask that so that the singularities are integrable .integrals of this kind occurr for example in the evaluation of the period of a classical oscillator or in the application of the wkb method in quantum mechanics .we wish to obtain an analytical approximation to with arbitrary precision .the idea behind the method that we propose is quite simple : we introduce a function , which depends on one or more arbitrary parameters ( which we will call ) and define . although the form of can be chosen almost arbitrarily , we ask that the integral of eq .( [ eq_1_1 ] ) with and can be done analytically . in the spirit of the linear delta expansion ( lde ) we interpolate the original integral as follows : _ ^ ( ) = _x_-^x_+ ^ g(x ) dx .[ eq_1_2 ] this equation reduces to eq .( [ eq_1_1 ] ) in the limit , however it yields a much simpler integral when .we therefore write eq .( [ eq_1_2 ] ) as : _ ^ ( ) = _x_-^x_+ ^ ^ g(x ) dx .[ eq_1_3 ] where we have defined ( x ) .[ eq_1_4 ] we can use the expansion ( 1 + x)^= _ n=0^ [ eq_1_5 ] which converges uniformly for . as a result we can substitute in eq .( [ eq_1_3 ] ) the series expansion of eq .( [ eq_1_5 ] ) provided that the constraint is met for any .in general , as we will see in the next section , this inequality provides restrictions on the values that the arbitrary parameter can take . under these conditions the integral can be substituted with a family of series ( each corresponding to a different ) : _ ^ ( ) = _n ^n [ eq_1_6 ] where _ n _ x_-^x_+^ ^n g(x ) dx .we assume that _ each of the integrals defining can be evaluated analytically_. although we have not yet specified the form of , which indeed will have to be chosen case by case , we already know that , if all the conditions that we have imposed above are met we have a family of series all converging to the exact value of the integral , after setting . since the rate of convergence of the series will clearly depend on the parameter , we can pick the series among all the infinite series representing the same integral which converges faster .in fact , although is a completely arbitrary parameter , which was inserted `` ad hoc '' in the integral , and therefore the final result _ can not _ depend upon it . 
when the series is truncated to a given finite order , we will observe a residual dependence upon .we invoke the principle of minimal sensitivity ( pms ) to minimize , at least locally , such spurious dependence and thus obtain the optimal series representation of the integral : = 0 , [ eq_1_7 ] where we have defined as the series of eq .( [ eq_1_6 ] ) truncated at and taking .we will see in the next section that this simple procedure allows to obtain series representation which converge fastly .interestingly , in general the optimal series obtained in this way display an exponential rate of convergence .in this section we consider different applications of the method described above . as a first application, we now consider the problem of calculating the period of a unit mass moving in a potential .the total energy is conserved during the motion .the exact period of the oscillations is easily obtained in terms of the integral : where are the inversion points , obtained by solving the equation . clearly , the integral of eq .( [ eq_2_1 ] ) is a special case of the integral considered in the previous section , corresponding to choosing , , and . in order to test our methodwe consider the duffing oscillator , which corresponds to the potential .we choose the interpolating potential to be and obtain \ .\label{eq_2_2}\ ] ] the series in eq .( [ eq_1_6 ] ) converges to the exact period for , since uniformly for such values of and .the period of the duffing oscillator calculated to first order using ( [ eq_1_6 ] ) is then \right\ } \label{eq_2_3}\ ] ] by setting and applying the pms we obtain the optimal value of , , which remarkably coincides with the one obtained in by using the lplde method to third order . the period corresponding to the optimal is and it provides an error less than to the exact period for any value of and .this remarkable result is sufficient to illustrate the nonperturbative nature of the method that we are proposing : in fact , a perturbative approach , which would rely on the expansion of some small _ natural _ parameter , such as , would only provide a polynomial in the parameter itself : therefore it would never be possible to reproduce the correct asymptotic behavior of the period in this way .given that it is possible to calculate analytically all the integrals , we are able to obtain the exact series representation : where is the hypergeometric function . since eq .( [ eq_2_6 ] ) is essentially a power series , it converges exponentially to the exact result , which is precisely what we observe in figure [ fig_1 ] , where we plot the error \ \times 100 ] , for and as a function of the order .the three sets are obtained by using the optimal value ( plus ) , a value ( triangle ) and ( square).,width=340 ] we consider now the nonlinear pendulum , whose potential is given by . by choosing the interpolating potential to be we obtain where is the amplitude of the oscillations . 
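For the Duffing example above, the exact period can be checked by direct quadrature after the substitution x = A sin(theta), which removes the inverse-square-root singularity at the turning point. The first-order delta-expansion result is compared against it below; since the displayed formula did not survive extraction, the code assumes the standard first-order PMS form T_1 = 2*pi/sqrt(1 + 3*mu*A^2/4), which reproduces the few-percent accuracy quoted in the text but may differ in detail from the original expression.

```python
import numpy as np
from scipy.integrate import quad

def duffing_period_exact(A, mu):
    """Exact period for V(x) = x^2/2 + mu*x^4/4 and amplitude A.

    With x = A*sin(theta):
        T = 4 * int_0^{pi/2} dtheta / sqrt(1 + mu*A^2*(1 + sin(theta)^2)/2)
    """
    f = lambda th: 1.0 / np.sqrt(1.0 + 0.5 * mu * A**2 * (1.0 + np.sin(th)**2))
    val, _ = quad(f, 0.0, 0.5 * np.pi)
    return 4.0 * val

def duffing_period_lde1(A, mu):
    """Assumed first-order PMS estimate, T_1 = 2*pi / sqrt(1 + 3*mu*A^2/4)."""
    return 2.0 * np.pi / np.sqrt(1.0 + 0.75 * mu * A**2)

for A in (1.0, 10.0, 100.0):
    T_ex, T_1 = duffing_period_exact(A, 1.0), duffing_period_lde1(A, 1.0)
    print(f"A = {A:6.1f}   T_exact = {T_ex:.6f}   T_1 = {T_1:.6f}   "
          f"error = {100.0 * abs(T_1 - T_ex) / T_ex:.2f}%")
```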
to first order our formula yields where is the bessel function of the first kind of order 1 .the optimal value of in this case is given by and the period to first order is then despite its simplicity eq .( [ eq_2_10 ] ) provides an excellent approximation to the exact period over a wide range of amplitudes .we now apply our expansion to two problems in general relativity : the calculation of the deflection of the light by the sun and the calculation of the precession of a planet orbiting around the sun .we use the notation of weinberg : the angle of deflection of the light by the sun is given by the expression ^{-1/2 } \frac{dr}{r } - \pi \label{eq_2_12}\ ] ] where is the closest approach . with the change of variable we obtain which is exactly in the form of eq .( [ eq_1_1 ] ) .we introduce the potential and obtain by performing the standard steps which are required by our method we obtain the optimal deflection angle to first order to be : corresponding to the optimal : the surface corresponding to the closest approach for which diverges is known as _ photon sphere _ and for the schwartzchild metric takes the value .it is remarkable that eq .( [ eq_2_16 ] ) , despite its simplicity , is able to predict a slightly smaller _ photon sphere _ , corresponding to .this feature is missed completely in a perturbative approach . and as function of the closest approach .the solid line is the exact ( numerical ) result , the dashed line is obtained with eq .( [ eq_2_16 ] ) , the dotted line is the post - post - newtonian result of , the dot - dashed line is the asymptotic result ( ) .the vertical line marks the location of the photon sphere , where the deflection angle diverges.,width=340 ] , and ( eccentricity ) .the scale of reference is taken to be the semimajor axis of mercury s orbit ( ) .the solid line is the exact result , the dashed line is the result of eq .( [ eq_2_19 ] ) and the dotted line is the leading term in the perturbative expansion.,width=340 ] in figure [ fig_2 ] we compare eq .( [ eq_2_16 ] ) with the exact numerical result , the post - post - newtonian ( ppn ) result of and with the asymptotic result for very small values of ( close to the photon sphere ) .we assume to correspond to the physical mass of the sun and to the physical value of the gravitational constant .this corresponds to a strongly nonperturbative regime , where the gravitational force is extremely intense .the reader can judge the quality of our approximation .we now consider the problem of calculating the precession of the perihelion of a planet orbiting around the sun .the angular precession is given by where and . are the shortest ( perielia ) and largest ( afelia ) distances from the sun . by the change of variable we can write eq .( [ eq_2_17 ] ) as where .once again the integral has the form required by our method .one obtains , \label{eq_2_19}\ ] ] where is the semimajor axis of the ellipse , given by , and is the _ semilatus rectum _ of the ellipse , given by . the optimal is in figure [ fig_3 ] we plot the precession of the orbit calculated through the exact formula ( solid line ) , through eq .( [ eq_2_19 ] ) ( dashed line ) and through the leading order result ( dotted line ) . once again we find excellent agreement with the exact result . 
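The strong-field behavior quoted above can be reproduced with a direct numerical evaluation of the standard Schwarzschild bending integral (in geometrized units G = c = 1; this is the textbook form of the integral, not the interpolated series of the method). The substitution u = u0*(1 - t^2) removes the endpoint singularity at the closest approach, and the angle is seen to diverge as r0 approaches the photon sphere at r0 = 3M.

```python
import numpy as np
from scipy.integrate import quad

def deflection_angle(r0, M=1.0):
    """Light-bending angle in the Schwarzschild metric (G = c = 1).

    With u = 1/r and u0 = 1/r0:
        alpha = 2 * int_0^{u0} du / sqrt(f(u)) - pi,
        f(u)  = u0^2 - 2*M*u0^3 - u^2 + 2*M*u^3
              = (u0 - u) * [u + u0 - 2*M*(u^2 + u*u0 + u0^2)],
    and u = u0*(1 - t^2) maps the integral onto a regular one over t in [0, 1].
    """
    u0 = 1.0 / r0
    g = lambda u: u + u0 - 2.0 * M * (u * u + u * u0 + u0 * u0)
    val, _ = quad(lambda t: 1.0 / np.sqrt(g(u0 * (1.0 - t * t))), 0.0, 1.0)
    return 4.0 * np.sqrt(u0) * val - np.pi

for r0 in (1.0e6, 100.0, 10.0, 4.0, 3.1):        # closest approach in units of M
    print(f"r0 = {r0:9.1f} M   alpha = {deflection_angle(r0):.6f}   "
          f"weak-field 4M/r0 = {4.0 / r0:.6f}")
```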
in the method described in this paper was applied to the calculation of the spectrum of an anharmonic potential within the wkb method to order .the wkb condition is where to order is given by , , and ( set 1 ) and with and ( set 2 ) .the boxes and the pluses have been obtained with our method , the triangles correspond to the error calculated using the analytical formula of .,width=340 ] we have defined the integrals : where are the classical turning points .the spectrum of the potential can be obtained by solving eq .( [ eq_2_22 ] ) .the integrals appearing in the equations ( [ eq_2_23 ] ) , ( [ eq_2_24 ] ) and ( [ eq_2_25 ] ) are of the form required by eq .( [ eq_1_1 ] ) .we can test the method with the quantum anharmonic potential : in it was proved that the integrals above can be analytically approximated with very high precision with our method . by solving eq .( [ eq_2_21 ] ) once that the integrals have been approximated with our method one obtaines an _ analytical _formula for the spectrum of the anharmonic oscillator : where the first few coefficients are given by in fig .[ fig_5 ] we display the error over the energy defined as as a function of the quantum number .the boxes have been obtained using our formula eq .( [ eqn6 ] ) and assuming , , and . in this case are the energies of the anharmonic oscillator calculated with high precision in last column of table iii of .the jump corresponding to is due to the low precision of the last value of table iii of .the pluses and the triangles have been obtained using our formula eq .( [ eqn6 ] ) ( pluses ) and eq .( 1.34 ) of ( triangles ) and assuming and . in this case are the energies of the anharmonic oscillator numerically calculated through a fortran code .we can easily appreciate that our formula provides an approximation which is several orders of magnitude better than the one of eq .( 1.34 ) of .we also notice that the formula of yields a quite different asymptotic expansion in the limit of .we are not aware of expressions for the spectrum of the anharmonic oscillator similar to the one given by eq .( [ eqn6 ] ) .the method outlined above can be applied also to the calculation of the riemann zeta function .we consider the integral representation although eq .( [ s4_1 ] ) is not of the standard form of eq .( [ eq_1_2 ] ) , we can write it as : where is as usual an arbitrary parameter introduced by hand . in this case and the condition is fullfilled provided that ; one can expand the denominator in powers of and obtain : despite its appearance this series _ does not depend _ upon , as long as .this means that when the sum over is truncated to a given finite order a residual dependence upon will survive : such dependence will be minimized by applying the pms , i.e. by asking that the derivative of the partial sum with respect to vanish .to lowest order one has that and the corresponding formula is found : we want to stress that eq .( [ s4_4 ] ) is still an _ exact _ series representation of the riemann zeta function .this simple formula yields an excellent approximation to the zeta function even in proximity of where the function diverges . the rate of convergence of the series is greatly improved by applying the pms to higher orders . 
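Returning to the WKB application above, the leading-order quantization condition can be checked numerically before any delta-expansion is introduced. The sketch below solves oint p dx = 2*pi*(n + 1/2) (hbar = m = 1) for V(x) = x^2/2 + mu*x^4/4, again using x = x_t*sin(theta) to remove the turning-point singularity. The coupling value is illustrative, the potential normalization may differ from the one used in the tables cited in the text, and only the lowest WKB order is implemented here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

MU = 1.0   # anharmonic coupling in V(x) = x^2/2 + MU*x^4/4 (illustrative)

def action(xt, mu=MU):
    """Classical action J(E) = oint p dx with E = V(xt).

    With x = xt*sin(theta):
        J = 4*xt^2 * int_0^{pi/2} cos^2(theta)
                     * sqrt(1 + mu*xt^2*(1 + sin^2(theta))/2) dtheta
    """
    f = lambda th: np.cos(th)**2 * np.sqrt(1.0 + 0.5 * mu * xt**2 * (1.0 + np.sin(th)**2))
    val, _ = quad(f, 0.0, 0.5 * np.pi)
    return 4.0 * xt**2 * val

def wkb_energy(n, mu=MU):
    """Leading-order WKB level: J(E_n) = 2*pi*(n + 1/2)."""
    xt = brentq(lambda x: action(x, mu) - 2.0 * np.pi * (n + 0.5), 1e-6, 50.0)
    return 0.5 * xt**2 + 0.25 * mu * xt**4

for n in range(6):
    print(f"n = {n}:  E_WKB = {wkb_energy(n):.6f}")
```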
in fig .[ fig6 ] we plot the difference using eq .( [ s4_3 ] ) with ( solid line ) , ( dashed line ) and the series representation which corresponds to the dotted line in the plot .this last series converges quite slowly and a huge number of terms ( of the order of ) is needed to obtain the same accuracy that our series with reaches with just terms .we notice that a special case of eq .( [ s4_3 ] ) , corresponding to , was already known in the literature . as a function of the number of terms in the sum.[fig6],width=340 ] we can extend eq .( [ s4_3 ] ) to the critical line , , and write in fig .[ fig7 ] we have plotted the error ( in percent ) over the real part of the zeta function , i.e. \times 100 ] , as a function of the number of terms considered in the sum of eq .( [ eq_3 ] ) .the dashed curve is obtained using .[ fig7],width=340 ] in fig .[ fig8 ] we plot the difference as a function of for different values of . in this casethe series is limited to the first terms .the optimal value of is found close to . ) for and taking the first terms in the sum.[fig8],width=340 ] the two figures prove that our expansion is greatly superior to the one of .in this paper we have reviewed a method which allows to estimate a certain class of integrals with arbitrary precision .the method is based on the linear delta expansion , i.e. on the powerful idea that a certain ( unsoluble ) problem can be interpolated with a soluble one , depending upon an arbitrary parameter and then performing a perturbative expansion .the principle of minimal sensitivity allows one to obtain results which converge quite rapidly to the correct results .it is a common occurrence in calculations based on variational perturbation theory , like the present one , that the solution to the pms equation to high orders can not be performed analytically . here ,however , we do not face this problem , since we have proved that the method converges in a whole region in the parameter space : the convergence of the expansion is granted as long as the parameter falls in that region .although in this paper we have examined a good number of applications of this method to problems both in physics and mathematics , we feel that it can be used , with minor modifications , in dealing with many other problems .an extension of the method in this direction is currently in progress .the author acknowledges support of conacyt grant no .c01 - 40633/a-1 .he also thanks the organizing comitee of the _ dynamical systems , control and applications _ ( dysca )meeting for the kind invitation to participate to the workshop .k. knopp , `` 4th example : the riemann -function . ''theory of functions parts i and ii , two volumes bound as one , part ii .new york : dover , pp .51 - 57 , 1996 ; h. hasse , `` ein summierungsverfahren fr die riemannsche zeta - reihe . '' math . z. 32 , 458 - 464 , 1930; j. sondow , `` analytic continuation of riemann s zeta function and values at negative integers via euler s transformation of series . '' proc .120 , 421 - 424 , 1994 .
In many physical problems it is not possible to find an exact solution. However, when some parameter in the problem is small, one can obtain an approximate solution by expanding in this parameter. This is the basis of perturbative methods, which have been developed and applied in practically all areas of physics. Unfortunately, many interesting problems in physics are non-perturbative in nature, and it is not possible to gain insight into them on the basis of perturbation theory alone; indeed, the perturbative series often do not even converge. In this paper we describe a method that yields arbitrarily precise analytical approximations for the period of a classical oscillator. The same method is then applied, in combination with the WKB method, to obtain an analytical approximation to the spectrum of a quantum anharmonic potential. In all these cases we observe exponential rates of convergence to the exact solutions. An application of the method to obtain a rapidly convergent series for the Riemann zeta function is also discussed.
the shape of the spectrum of an astronomical source is highly informative as to the physical processes at the source .but often detailed spectral fitting is not feasible due to various constraints , such as the need to analyze a large and homogeneous sample for population studies , or there being insufficient counts to carry out a useful fit , etc . in such cases , a hardness ratio , which requires the measurement of accumulated counts in two or more broad passbands , becomes a useful measure to quantify and characterize the source spectrum .a hardness ratio is defined as either the ratio of the counts in two bands called the _ soft _ and _ hard _ bands , or a monotonic function of this ratio .we consider three types of hardness ratios , where and are the source counts in the two bands , called the _ soft _ and _ hard _ passbands .the simple formulae above are modified for the existence of background counts and instrumental effective areas .spectral colors in optical astronomy , defined by the standard optical filters ( e.g. , ubvrijk , u - b , r - i , etc ) , are well known and constitute the main observables in astronomy .they have been widely applied to characterize populations of sources and their evolution .the hardness ratio concept was adopted in x - ray astronomy for early x - ray detectors ( mainly proportional counter detectors ) which had only a limited energy resolution .the first application of the x - ray hardness ratio was presented in the x - ray variability studies with sas-3 observations by bradt et al .zamorani et al .( 1981 ) investigated the x - ray hardness ratio for a sample of quasars observed with the _ einstein _ x - ray observatory .they calculated each source intensity in two energy bands ( 1.2 - 3 kev and 0.5 - 1.2 kev ) and discussed a possible evolution of the hardness ratio ( the ratio of both intensities ) with redshift for their x - ray sample of 27 quasars .similar ratios have been used in surveys of stars ( vaiana et al . 1981 ) , galaxies ( e.g. , kim , fabbiano , & trinchieri 1992 ) and x - ray binaries ( tuohy et al . 1978 ) studied with the _ einstein _ and earlier observatories . in the case of x - ray binaries in particular ,they were used to define two different classes of sources ( z - track and atoll sources ; hasinger & van der klis , 1989 ) depending on their time evolution on v / s diagrams . sincethen the concept of x - ray hardness ratio has been developed and broadly applied in a number of cases , most recently in the analysis of large data samples from the _ chandra_x - ray observatory ( weisskopf et al .2000 ) and xmm-_newton _ ( jansen et al . 2001 ) .advanced x - ray missions such as _ chandra _ and xmm-_newton _ allow for the detection of many very faint sources in deep x - ray surveys ( see review by brandt & hasinger 2005 ) .for example , in a typical observation of a galaxy , several tens and in some cases even hundreds of sources are detected , most of which have fewer than 50 counts , and many have only a few counts .similar types of sources are detected in the champ serendipitous survey ( kim et al .they provide the best quality data to date for studying source populations and their evolution .the most interesting sources are the sources with the smallest number of detected counts , because they have never been observed before .further , the number of faint sources increases with improved sensitivity limits , i.e. , there are more faint sources in deeper observations . 
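For reference, the three variants referred to above are conventionally defined as follows (the displayed equations did not survive extraction, so the band ordering and sign conventions below are the commonly used ones and may differ in detail from the original):

\[
\mathcal{R} = \frac{S}{H}, \qquad
C = \log_{10}\!\left(\frac{S}{H}\right), \qquad
\mathrm{HR} = \frac{H - S}{H + S},
\]

where S and H are the counts accumulated in the soft and hard passbands.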
because these faint sources have only a few counts , hardness ratios are commonly used to study properties of their x - ray emission and absorption ( e.g. , alexander et al .2001 , brandt et al .2001a , brandt et al. 2001b , giacconi et al .2001 , silverman et al .2005 ) . with long observations the background counts increase ,so the background contribution becomes significant and background subtraction fails to work .still the background contribution must be taken into account to evaluate the source counts .the sensitivity of x - ray telescopes is usually energy dependent , and thus the classical hardness ratio method can only be applied to observations made with the same instrument .usually the measurements carry both detection errors and background contamination . in a typical hardness ratio calculationthe background is subtracted from the data and only net counts are used .this background subtraction is not a good solution , especially in the low counts regime ( see van dyk et al.2001 ) . in generalthe gaussian assumption present in the classical method ( see [ park : sec : classic ] and [ park : sec : verify ] ) is not appropriate for faint sources in the presence of a significant background .therefore , the classical approach to calculating the hardness ratio and the errors is inappropriate for low counts . instead , adopting a poisson distribution as we do here ( see below ) , hardness ratios can be reliably computed for both low and high counts cases .the bayesian approach allows us to include the information about the background , difference in collecting area , effective areas and exposure times between the source and background regions .our bayesian model - based approach follows a pattern of ever more sophisticated statistical methods that are being developed and applied to solve outstanding quantitative challenges in empirical astronomy and astrophysics .bayesian model - based methods for high - energy high - resolution spectral analysis can be found for example in kashyap & drake ( 1998 ) , van dyk et al .( 2001 ) , van dyk and hans ( 2002 ) , protassov et al .( 2002 ) , van dyk and kang ( 2004 ) , gillessen & harney ( 2005 ) , and park et al .( 2006 , in preparation ) . more generally , the monographs on statistical challenges in modern astronomy ( feigelson and babu , 1992 , 2003 , babu and feigelson , 1997 ) and the special issue of _ statistical science _devoted to astronomy ( may 2004 ) illustrate the wide breadth of statistical methods applied to an equal diversity of problems in astronomy and astrophysics . here ,we discuss the classical method and present the fully bayesian method for calculating the hardness ratio for counts data . ]a conventional hardness ratio is calculated as set out in equations [ e : rchr ] , where and are the `` soft '' and `` hard '' counts accumulated in two non - overlapping passbands . in general , an observation will be contaminated by background counts , and this must be accounted for in the estimate of the hardness ratios . the background is usually estimated from an annular region surrounding the source of interest , or from a suitable representative region on the detector that is reliably devoid of sources . the difference in the exposure time and aperture area of source andbackground observations are summarized by a known constant for which the expected background counts in the source exposure area are adjusted . 
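The classical prescription that is spelled out in the equations that follow can be summarized in a short sketch: subtract the area-scaled background from each band, form the fractional-difference hardness ratio, and propagate Gaussian errors, with Poisson errors approximated by the Gehrels (1986) formula sigma ~ 1 + sqrt(N + 0.75). The sign convention and the exact error formula below are the commonly used ones and may differ in detail from the original equations.

```python
import numpy as np

def gehrels_sigma(n):
    """Approximate 1-sigma error on a Poisson count (Gehrels 1986 upper limit)."""
    return 1.0 + np.sqrt(np.asarray(n, dtype=float) + 0.75)

def classical_hr(S, H, B_S, B_H, r):
    """Background-subtracted hardness ratio HR = (h - s)/(h + s) with Gaussian
    error propagation.  S, H: source-region counts; B_S, B_H: background-region
    counts; r: ratio of background to source collecting area (and exposure)."""
    s = S - B_S / r                                    # net soft counts
    h = H - B_H / r                                    # net hard counts
    var_s = gehrels_sigma(S)**2 + gehrels_sigma(B_S)**2 / r**2
    var_h = gehrels_sigma(H)**2 + gehrels_sigma(B_H)**2 / r**2
    hr = (h - s) / (h + s)                             # undefined if h + s <= 0
    # dHR/dh = 2s/(h+s)^2 and dHR/ds = -2h/(h+s)^2
    sigma_hr = 2.0 * np.sqrt(s**2 * var_h + h**2 * var_s) / (h + s)**2
    return hr, sigma_hr

# A faint source: 3 soft / 7 hard counts, 10 background counts per band,
# background area 100 times the source aperture (illustrative numbers).
print(classical_hr(S=3, H=7, B_S=10, B_H=10, r=100.0))
```

Nothing in this recipe prevents negative net counts or error bars extending beyond the physical range of HR, which is precisely the failure mode in the low-count regime discussed below.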
with the background counts in the soft band ( ) and the hard band ( ) collected in an area of times the source region ,the hardness ratio is generalized to the adjusted counts in the background exposure area are directly subtracted from those in the source exposure area .the above equations can be further modified to account for variations of the detector effective areas by including them in the constant , in which case the constants for the two bands will be different . ) .] errors are propagated assuming a gaussian regime , i.e. , ^ 2 } \label{e : sigbgrchr}\end{aligned}\ ] ] where , , , and are typically approximated with the gehrels prescription ( gehrels 1986 ) where are the observed counts , and the deviation from is due to identifying the 68% gaussian ( 1 ) deviation with the percentile range of the poisson distribution . in addition to the approximation of a poisson with a faux gaussian distribution, classical error - propagation also fails on a fundamental level to properly describe the variance in the hardness ratios . because is strictly positive, its probability distribution is skewed and its width can not be described simply by .the fractional difference is better behaved , but the range limits of ] draw after sorting the draws in increasing order . to compute the posterior mode , we repeatedly bisect the monte carlo draws and choose the half with more draws until the range of the chosen half is sufficiently narrow .the midpoint of the resulting narrow bin approximates the posterior mode . with quadrature ( see [ park : sec : quad ] ) , we can obtain equally spaced abscissas and the corresponding posterior probabilities .thus , the posterior mean is computed as the sum of the product of an abscissa with their probabilities ( i.e. , the dot product of the vector of abscissa with the corresponding probability vector ) .the posterior median is computed by summing the probabilities corresponding to the ordered abscissa one - by - one until a cumulative probability of 50% is reached .the posterior mode is simply the point among the abscissa with the largest probability .unlike with point estimates above , there is no unique or preferred way to summarize the variation of a parameter .any interval that encompasses a suitable fraction of the area under the probability distribution qualifies as an estimate of the variation . of these , two provide useful measures of the uncertainty : the equal - tail posterior interval , which is the central interval that corresponds to the range of values above and below which lies a fraction of exactly of the posterior probability , is a good measure of the width of the posterior distribution ; and the highest posterior density ( hpd ) interval , which is the range of values that contain a fraction of the posterior probability , and within which the probability density is never lower than that outside the interval .the hpd - interval always contains the mode , and thus serves as an error bar on it . for a symmetric , unimodal posterior distribution , these two posterior intervals are identical .the equal - tail interval is invariant to one - to - one transformations and is usually easier to compute .however , the hpd - interval always guarantees the interval with the smallest length among the intervals with the same posterior probability .neither of these intervals is necessarily symmetric around the point estimate , i.e. , the upper and lower bounds may be of different lengths . 
for consistency, we refer to such intervals as _ posterior intervals _ ; others also refer to them as confidence intervals or credible intervals . for a monte carlo simulation of size , we compute either the equal - tail interval or an interval that approximates the hpd interval .( unless otherwise stated , here we always quote the equal - tail interval for the monte carlo method and the hpd - interval for the quadrature . )the equal - tail posterior interval in this case is computed by choosing the ^{\rm th} ] draws as the boundaries .an approximate hpd - interval is derived by comparing all intervals that consist of the ^{\rm th} ] draws and choosing that which gives the shortest length among them .when the posterior density is computed by the quadrature , we split parameter space into a number of bins and evaluate the posterior probability at the midpoint of each bin . in this case , a hpd - interval can be computed by beginning with the bin with the largest posterior probability and adding additional bins down to a given value of the probability density until the resulting region contains at least a fraction of the posterior probability .in order to compare the classical method with our bayesian method , we carried out a simulation study to calculate coverage rates of the classical and posterior intervals . given pre - specified values of the parameters , source and background countswere generated and then used to construct 95% classical and posterior intervals of each hardness ratio using the methods discussed in this article . from the simulated data we calculated the proportion of the computed intervals that contain the true value of the corresponding hardness ratio .( this is the coverage rate of the classical and probability intervals . ) in the ideal case , 95% of the simulated intervals would contain the true value of each hardness ratio . besides the coverage rate ,the average length of the intervals and the mean square error of point estimates were also computed and compared .the mean square error of the point estimate of is defined as the sum of the variance and squared bias for an estimator , i.e. , ={{\rm var}}(\hat{{\theta}})+[{{\rm e}}(\hat{{\theta}})-{\theta}]^2 $ ] a method that constructs shorter intervals with the same coverage rate and produces a point estimate with a lower mean square error is generally preferred .the entire simulation was repeated with different magnitudes of the source intensities , and .intrinsically , we are interested in the following two prototypical cases : in both cases we adopt a background - area to source - area ratio of ( see equation [ park : eq : b ] ) , i.e. , we take the observed counts in the background region to be 10 .note that these are written with reference to the counts observed in the `` source region '' , i.e. , the units of are all [ ct ( source area) , and that we have set here .the actual extent of the source area is irrelevant to this calculation .this simulation study illustrates two typical cases , i.e. , high counts and low counts sources : case i represents high counts sources for which poisson assumptions tend to agree with gaussian assumptions ; case ii represents low counts sources where the gaussian assumptions are inappropriate . 
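A minimal sketch of the Monte Carlo machinery described above is given below. It is not the authors' exact algorithm: effective-area factors are omitted, each band is treated with a simple data-augmentation Gibbs sampler under gamma priors of index psi (psi = 1/2 being the Jeffreys-type choice), and the hardness ratio is taken to be the fractional difference. The equal-tail and shortest (approximate HPD) intervals are computed from the sorted draws as described in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def posterior_draws(N_src, N_bkg, r, psi=0.5, ndraw=20000, burn=2000):
    """Draws of a band intensity lam for N_src ~ Pois(lam + xi) in the source
    region and N_bkg ~ Pois(r * xi) in a background region r times larger,
    with (improper) gamma priors of index psi on lam and xi."""
    lam, xi = max(N_src - N_bkg / r, 0.1), (N_bkg + 0.1) / r
    out = np.empty(ndraw)
    for i in range(burn + ndraw):
        # split the source-region counts into source and background photons
        k = rng.binomial(N_src, lam / (lam + xi))
        lam = rng.gamma(k + psi, 1.0)
        xi = rng.gamma(N_src - k + N_bkg + psi, 1.0 / (1.0 + r))
        if i >= burn:
            out[i - burn] = lam
    return out

def summarize(draws, level=0.95):
    """Posterior mean, equal-tail interval, and shortest (approximate HPD) interval."""
    d = np.sort(draws)
    n = d.size
    eqtail = (d[int(0.5 * (1.0 - level) * n)], d[int((1.0 - 0.5 * (1.0 - level)) * n) - 1])
    m = int(np.floor(level * n))
    j = np.argmin(d[m:] - d[:n - m])      # shortest interval containing ~level of the draws
    return d.mean(), eqtail, (d[j], d[j + m])

# Faint-source example: 3 soft / 7 hard counts, 10 background counts per band, r = 100.
lam_S = posterior_draws(3, 10, 100.0)
lam_H = posterior_draws(7, 10, 100.0)
hr = (lam_H - lam_S) / (lam_H + lam_S)
mean, eqtail, hpd = summarize(hr)
print(f"HR mean = {mean:.3f}   95% equal-tail = {eqtail}   95% HPD (approx) = {hpd}")
```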
in case i , the posterior distributions of the hardness ratios agree with the corresponding gaussian approximation of the classical method , but in case ii , the gaussian assumptions made in the classical method fail .this is illustrated in figure [ park : fig : comp ] , where we compare the two methods for a specific and particular simulation : we assume in case i and in case ii , compute the resulting posterior distributions of hardness ratios using the bayesian method , and compare it to a gaussian distribution with mean and standard deviation equal to the classical estimates .the right panels in figure [ park : fig : comp ] show that there is a clear discrepancy between the two methods . , , and with the classical gaussian approximation for high counts ( case i ; left column of the figure ) and for low counts ( case ii ; right column of the figure ) .the solid lines represent the posterior distributions of the hardness ratios and the dashed lines a gaussian distribution with mean and standard deviation equal to the classical estimates .a prior distribution that is flat on the real line ( , see [ park : sec : priors ] ) is adopted for the bayesian calculations .note that as the number of counts increase , the two distributions approach each other . at low countsthe two distributions differ radically , with the classical distributions exhibiting generally undesirable properties ( broad , and extending into unphysical regimes ) .[ park : fig : comp],width=432 ] in these simulations , we theoretically expect that the computed 95% posterior intervals contain the actual value with a probability of 95% .due to the monte carlo simulation errors , we expect most coverage rates to be between 0.93 and 0.97 which are three standard deviations away from 0.95 ; a standard deviation of the coverage probability for 95% posterior intervals is given by under a binomial model for the monte carlo simulation . table [ park : tbl : idxes1 ] presents the coverage rate and average length of posterior intervals for small and large magnitudes of source intensities .the key to this table is given in table [ park : tbl : legend ] : for each pair , the posterior intervals are simulated using different prior distribution indices ( , 1/2 , 1 ) and the summary statistics the coverage rate and the mean lengths of the intervals are displayed from top ( ) to bottom ( ) within each cell . the same information is shown in graphical form in figure [ fig : idxesfig ]. the use of tends to yield very wide posterior intervals and under - cover the true x - ray color when the source intensities are low . on the other hand, the other two non - informative prior distribution indices produce much shorter posterior intervals and maintain high coverage rates . 
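The quoted 3-sigma band of [0.93, 0.97] follows from the binomial standard deviation of an empirical coverage rate, sigma = sqrt(p(1-p)/N); the short check below shows that it is consistent with roughly a thousand simulated data sets per cell, although the exact number is not stated in this excerpt.

```python
import numpy as np

p = 0.95                                   # nominal coverage of the intervals
for N in (250, 1000, 4000):                # number of simulated data sets (illustrative)
    sigma = np.sqrt(p * (1.0 - p) / N)
    print(f"N = {N:5d}:  sigma = {sigma:.4f}   3-sigma band = "
          f"[{p - 3.0 * sigma:.3f}, {p + 3.0 * sigma:.3f}]")
```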
with high counts ,however , the choice of a non - informative prior distribution does not have a noticeable effect on the statistical properties of the posterior intervals .the same results are summarized in graphical form in figure [ park : fig : cover ] , where the 95% coverage rates are shown for various cases .the empirical distributions of the coverage rate computed under either low count ( or ) or high count ( all of the remaining ) scenarios are shown , along with a comparison with the results from the classical method ( last column ) .the distinction between the bayesian and classical methods is very clear in this figure .note that the posterior intervals over - cover relative to our theoretical expectation , but to a lesser degree than the classical intervals .over - coverage is conservative and preferred to under - coverage , but should be minimal when possible . considering the low count and high count scenarios ,figure [ park : fig : cover ] suggests using the jeffrey s non - informative prior distribution ( i.e. , ) in general . at low counts , the shape ( and consequently the coverage ) of the posterior distributionis dominated by our prior assumptions , as encoded in the parameter .the coverage rate will also be improved when informative priors are used .( first ) , ( second ) , ( third ) .the fourth column depicts results from the classical method , and is presented for reference .( in calculating the coverage and the length of the intervals , we exclude simulations where or since is undefined in such cases . )the top panels correspond to the low count scenarios or and the bottom panels correspond to the remaining cases .coverage rate is the fraction of simulated intervals that contain the parameter under which the data were simulated . in this case , we compute the coverage rate based on the 95% posterior intervals for with different combinations of values for and . the histograms represent the empirical distribution of the coverage rate .the dotted vertical lines represent the expected coverage rate of 95% .[ park : fig : cover ] , width=672 ] .the symbols are located on a grid corresponding to the appropriate ( absissa ) and ( ordinate ) , for ( top ) , ( middle ) , and ( bottom ) .the shading of each symbol represents the coverage , with 100% being lightest and progressively getting darker as the coverage percentage decreases .the sizes of the symbols are scaled as the .note that for small values of , which correspond to a prior expectation of a large dynamic range in the source colors , the intervals are large when the counts are small ( i.e. , the choice of prior distribution has a large effect ) , and decrease to values similar to those at larger when there are more counts . [ fig : idxesfig],title="fig:",width=336 ] + .the symbols are located on a grid corresponding to the appropriate ( absissa ) and ( ordinate ) , for ( top ) , ( middle ) , and ( bottom ) .the shading of each symbol represents the coverage , with 100% being lightest and progressively getting darker as the coverage percentage decreases .the sizes of the symbols are scaled as the .note that for small values of , which correspond to a prior expectation of a large dynamic range in the source colors , the intervals are large when the counts are small ( i.e. 
, the choice of prior distribution has a large effect ) , and decrease to values similar to those at larger when there are more counts .[ fig : idxesfig],title="fig:",width=336 ] + .the symbols are located on a grid corresponding to the appropriate ( absissa ) and ( ordinate ) , for ( top ) , ( middle ) , and ( bottom ) .the shading of each symbol represents the coverage , with 100% being lightest and progressively getting darker as the coverage percentage decreases .the sizes of the symbols are scaled as the .note that for small values of , which correspond to a prior expectation of a large dynamic range in the source colors , the intervals are large when the counts are small ( i.e. , the choice of prior distribution has a large effect ) , and decrease to values similar to those at larger when there are more counts .[ fig : idxesfig],title="fig:",width=336 ] alexander , d.m . ,brandt , w.n ., hornschemeier , a.e . , garmire , g.p . ,schneider , d.p . ,bauer , f.e ., & griffiths , r.e . , 2001 ,aj , 122 , 2156 babu , g.j . , &feigelson , e.d . , 1997 , _ statistical challenges in modern astronomy _ , springer - verlag : new york bradt , h. , mayer , w. , buff , j. , clark , g.w . ,doxsey , r. , hearn , d. , jernigan , g. , joss , p.c . ,laufer , b. , lewin , w. , li , f. , matilsky , t. , mcclintock , j. , primini , f. , rappaport , s. , & schnopper , h. , 1976 , apj , 204 , l67 brandt , w.n . ,hornschemeier , a.e . ,alexander , d.m . ,garmire , g.p . ,schneider , d.p . ,broos , p.s . ,townsley , l.k . ,bautz , m.w . ,feigelson , e.d . , & griffiths , r.e . , 2001a , aj , 122 , 1 brandt , w.n . ,alexander , d.m . ,hornschemeier , a.e . ,garmire , g.p . ,schneider , d.p . ,barger , a.j . ,bauer , f.e . ,broos , p.s . ,cowie , l.l . ,townsley , l.k . ,burrows , d.n . ,chartas , g. , feigelson , e.d ., griffiths , r.e ., nousek , j.a . , & sargent , w.l.w . , 2001b , aj , 122 , 2810 brandt , w.n ., & hasinger , g. , 2005 , araa , 43 , 827 brown , e.f . ,bildsten , l. , & rutledge , r.e . , 1998 , apj , 504 , 95 campana , s. , colpi , m. , mereghetti , s. , stella , l. , & tavani , m. , 1998 , a&arv , 8 , 279 casella , g. & berger , r.l . , 2002 , statistical inference , 2nd edition .cowles , m.k . , &carlin , b.p ., 1996 , j. am ., 91 , 883 esch , d.n ., 2003 , ph.d .thesis , department of statistics , harvard university feigelson , e.d . , & babu , g.j ., 1992 , _ statistical challenges in modern astronomy _ , springer - verlag : berlin heidelberg new york feigelson , e.d ., & babu , g.j . , 2003 , _ statistical challenges in astronomy .third statistical challenges in modern astronomy ( scma iii ) conference _ , university park , pa , usa , july 18 - 21 2001 , springer : new york gehrels , n. , 1986 , apj , 303 , 336 geman , s. , & geman , d. , 1984 , ieee , 6 , 721 giacconi , r. , rosati , p. , tozzi , p. , nonino , m. , hasinger , g. , norman , c. , bergeron , j. , borgani , s. , gilli , r. , gilmozzi , r. , & zheng , w. , 2001 , apj , 551 , 624 gregory , p.c . , & loredo , t.j ., 1992 , apj , 398 , 146 hasinger , g. , & van der klis , m. , 1989 , a&a , 225 , 79 heinke , c.o . ,grindlay , j.e . , edmonds , p.d . ,lloyd , d.a . ,murray , s.s . ,cohn , h.n . , & lugger , p.m. , 2003 , apj , 598 , 501 hong , j.s . ,schlegel , e.m . , & grindlay , j.e . , 2004 ,apj , 614 , 508 gillessen , s. , & harney , h.l . , 2005 , a&a , 430 , 355 jansen , f. , lumb , d. , altieri , b. , clavel , j. , ehle , m. , erd , c. , gabriel , c. , guainazzi , m. , gondoin , p. , much , r. , munoz , r. , santos , m. 
a commonly used measure to summarize the nature of a photon spectrum is the so - called hardness ratio , which compares the number of counts observed in different passbands . the hardness ratio is especially useful for distinguishing between and categorizing weak sources , serving as a proxy for detailed spectral fitting . however , in this regime classical methods of error propagation fail , and the estimates of spectral hardness become unreliable . here we develop a rigorous statistical treatment of hardness ratios that treats the detected photons as independent poisson random variables and properly accounts for the non - gaussian nature of the error propagation . the method is bayesian in nature , and thus can be generalized to carry out a multitude of source - population based analyses . we verify our method with simulation studies and compare it with the classical method . we apply this method to real - world examples , such as the identification of candidate quiescent low - mass x - ray binaries in globular clusters and tracking the time evolution of a flare on a low - mass star .
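to make the flavour of the bayesian treatment concrete, here is a minimal monte carlo sketch of a poisson - gamma hardness - ratio posterior. it is only an illustration of the general idea summarised above, not the authors' implementation: the prior index `psi`, the (h - s)/(h + s) definition of the hardness ratio and all variable names are assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def hardness_ratio_posterior(soft, hard, psi=0.5, n_draws=100_000):
    # with independent poisson counts and gamma(psi, 1) priors on the expected
    # counts, the posteriors are gamma(counts + psi, 1); the hardness ratio
    # (h - s) / (h + s) is summarised by monte carlo draws.  psi plays the role
    # of the prior index discussed in the caption above; the exact model of the
    # paper is not reproduced here.
    lam_s = rng.gamma(soft + psi, 1.0, size=n_draws)
    lam_h = rng.gamma(hard + psi, 1.0, size=n_draws)
    hr = (lam_h - lam_s) / (lam_h + lam_s)
    return np.mean(hr), np.percentile(hr, [16, 84])

mean_hr, (lo, hi) = hardness_ratio_posterior(soft=3, hard=8)
print(f"HR = {mean_hr:.2f}  (68% interval {lo:.2f} .. {hi:.2f})")
```

with few counts the interval is wide and visibly prior - dependent, which is exactly the low - count regime the figure above illustrates.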
in this section we explain in detail proposed hashing mechanism for initial dimensionality reduction that is used to preprocess data before it is given as an input to the autoencoder . as mentioned earlier , the mechanism is of its own interest .we introduce first the aforementioned family of -regular matrices that is a key ingredient of the method .assume that is the size of the hash and is the dimensionality of the data .let be the size of the pool of independent random gaussian variables , where each .assume that .we say that a random matrix is -regular if is of the form : where for , , for , for , , and furthermore the following holds : * for every column of every appears in at most entries from that column .notice that all structured matrices that we mentioned in the abstract are special cases of the -regular matrix .indeed , each toeplitz matrix is clearly -regular , where subsets are singletons .let be a function satisfying and .we will consider two hashing methods .the first one , called by us _ extended -regular hashing _ , applies first random diagonal matrix to the datapoint , then the -normalized hadamard matrix , next another random diagonal matrix , then the -regular projection matrix and finally function ( the latter one applied pointwise ) .the overal scheme is presented below : the diagonal entries of matrices and are chosen independently from the binary set , each value being chosen with probability .we also propose a shorter pipeline , called by us _short -regular hashing _ , where we avoid applying first random matrix and hadamard matrix and the hadamard matrix , i.e. the overall pipeline is of the form : the goal is to compute good approximation of the angular distance between given -normalized vectors , given their compact hashed versions : . to achieve this goal we consider the -distance in the -dimensional space of hashes .let denote the angle between vectors and .we define the _ normalized approximate angle between and _ as : in the next section we will show that the normalized approximate angle between vectors and is a very precise estimation of the actual angle if the chosen parameter is not large enough . furthermore , we show an intriguing connection between theoretical guarantess regarding the quality of the produced hash and the chromatic number of some specific undirected graph encoding the structure of . for many of the structured matrices under considerationthis graph is induced by an algebraic group operation defining the structure of ( for istance , for the circular matrix the group is a single shift and the underlying graph is a collection of pairwise disjoint cycles and trees thus its chromatic number is at most ) .we are ready to provide theoretical guarantees regarding the quality of the produced hash .our guarantees will be given for a _ sign _ function , i.e for defined as : for , for .however we should emphasize that empirical results showed that other functions ( that are often used as nonlinear maps in deep neural networks ) such as sigmoid function , also work well .it is not hard to show that is an unbiased estimator of , i.e. .what we will focus on is the concentration of the random variable around its mean . we will prove strong exponential concentration results regarding the extended -regular hashing method .interestingly , the application of the hadamard mechanism is not necessary and it is possible to get concentration results , yet weaker than in the former case , also for short -regular hashing . 
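to fix ideas, the sketch below (python, with illustrative names) implements the short pipeline with the simplest structured choice, a toeplitz projection built from a small pool of independent gaussian variables, using the sign function as the nonlinearity. the random diagonal is kept in the short pipeline and the normalized approximate angle is taken to be the fraction of coordinates on which the two hashes disagree; both details are assumptions, since the extracted text leaves the exact formulas implicit.

```python
import numpy as np

rng = np.random.default_rng(0)

def toeplitz_from_pool(m, n, pool):
    # simplest regular structure: entry (i, j) reuses pool[(i - j) % k],
    # so only the pool of k independent gaussians has to be stored
    idx = (np.arange(m)[:, None] - np.arange(n)[None, :]) % len(pool)
    return pool[idx]

def short_hash(x, P, d):
    # "short" pipeline: random +-1 diagonal, structured projection, pointwise sign
    return np.sign(P @ (d * x))

def normalized_approx_angle(h1, h2):
    # fraction of disagreeing hash coordinates; for gaussian rows its
    # expectation is theta / pi, so theta is estimated by rescaling with pi
    return np.mean(h1 != h2)

n, m, k = 128, 4096, 128                 # data dim, hash size, pool size
pool = rng.standard_normal(k)
P = toeplitz_from_pool(m, n, pool)
d = rng.choice([-1.0, 1.0], size=n)

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
theta_hat = np.pi * normalized_approx_angle(short_hash(x, P, d), short_hash(y, P, d))
# rows of P are not independent, so the concentration around theta is weaker
# than for a fully random matrix -- which is what the analysis below quantifies
print(f"true angle {theta:.3f}, estimate from hashes {theta_hat:.3f}")
```

the point of the structure is visible in the storage cost: a dense gaussian projection would require m times n independent variables, whereas the toeplitz construction above stores only the pool of k of them.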
as a warm up , let us prove the following .[ mean_lemma ] let be a -regular hashing model ( either extended or short ) .then is an unbiased estimator of , i.e. notice first that the row , call it , of the matrix is a -dimensional gaussian vector with mean and where each element has standard deviation for ( ) .thus , after applying matrix the new vector is still gaussian and of the same distribution .let us consider first the short -regular hashing model .fix some -normalized vectors ( without loss of generality we may assume that they are not collinear ) and denote by the -dimensional hyperplane spanned by .denote by the projection of into and by the line in perpendicular to .let be a _ sign _ function .notice that the contribution to the -sum comes from those for which divides an angel between and , i.e. from those for which is inside the union of two -dimensional cones bounded by two lines in perpendicular to and respectively .observe that , from what we have just said , we can conclude that , where : now it suffices to notice that vector is a gaussian random variable and thus its direction is uniformly distributed over all directions .thus each is nonzero with probability exactly and the theorem follows .for the extended -regular hashing model the analysis is very similar .the only difference is that data is preprocessed by applying linear mapping first .both and are matrices of rotations though , thus their product is also a rotation matrix . since rotations do not change angular distance , the former analysis can be applied again and yields the proof . as we have already mentioned , the highly well organized structure of the projection matrix gives rise to the underlying undirected graph that encodes dependencies between different entries of .more formally , let us fix two rows of of indices .we define a graph as follows : * , * there exists an edge between vertices and iff .the chromatic number of the graph is the minimal number of colors that can be used to color the vertices of the graph in such a way that no two adjacent vertices have the same color .let be a -regular matrix .we define the -chromatic number as : we present now our main theoretical results .let us consider first the extended -regular hashing model .the following is true .[ ext_technical_theorem ] take the extended -regular hashing model with independent gaussian random variables : , each of distribution .let be the size of the dataset .denote by the size of the hash and by the dimensionality of the data .let be arbitrary positive function .let be two fixed vectors with angular distance between them .then for every the following is true : where and .notice how the upper bound on the probability of failure depends on the -chromatic number .the theorem above guarantees strong concentration of around its mean and therefore justifies theoretically the effectiveness of the structured hashing method .it becomes more clearly below . 
as a corollary, we obtain the following result : [ ext_theorem ] take the extended -regular hashing model with .assume that the projection matrix is toeplitz .let be the size of the dataset .denote by the size of the hash and by the dimensionality of the data .let be an arbitrary positive function .let be two vectors with angular distance between them .then for every the following is true : theorem [ ext_theorem ] follows from theorem [ ext_technical_theorem ] by taking : , , and noticing that every toeplitz matrix is -regular and the corresponding -chromatic number is at most .let us switch now to the short -regular hashing model .the theorem presented below is the application of the chebyshev s inequality preceded by the careful analysis of the variance .[ short_theorem ] take the short -regular hashing model , where is a toeplitz matrix .let be the size of the dataset .denote by the size of the hash and by the dimensionality of the data .let be two vectors with angular distance between them .then the following is true for any : the proofs of theorem [ ext_technical_theorem ] and theorem [ short_theorem ] will be given in the appendix .in this section we prove theorem [ ext_technical_theorem ] and theorem [ short_theorem ] .we will use notation from lemma [ mean_lemma ] .[ first_lemma ] let be the set of independent random variables defined on such that each has the same distribution and .let be the set of events , where each is in the -field defined by ( in particular does not depend on the ) .assume that there exists such that : for .let be the set of random variables such that and for , where stands for the random variable truncated to the event .assume furthermore that for .denote .then the following is true . [ hadamard_lemma ]let denote data dimensionality and let be an arbitrary positive function .let be the set of all -normalized datapoints , where no two datapoints are identical .assume that .consider the hyperplanes spanned by pairs of different vectors from .then after applying linear transformation each hyperplane is transformed into another hyperplane .furthermore , the probability for every there exist two orthonormal vectors in such that : satisfies : we have already noticed in the proof of lemma [ mean_lemma ] that is a matrix of the rotation transformation . thus , as an isometry , it clearly transforms each -dimensional hyperplane into another -dimensional hyperplane . for every pair us consider an arbitrary fixed orthonormal pair spanning .denote .let us denote by vector obtained from after applying transformation .notice that the coordinate of is of the form : where are independent random variables satisfying : similar analysis is correct for .notice that is orthogonal to since and are orthogonal .furthermore , both and are -normalized .thus is an orthonormal pair . from the lemmaabove we see that applying hadamard matrix enables us to assume with high probability that for every hyperplane there exists an orthonormal basis consisting of vectors with elements of absolute values at most .we call this event . notice that whether holds or not is determined only by , and the initial dataset .let us proceed with the proof of theorem [ ext_technical_theorem ] .let us assume that event holds . 
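the following sketch is a small empirical companion to the two theorems. it assumes the extended pipeline has the form sign(P D2 H D1 x), with H the normalized hadamard matrix and D1, D2 random diagonal matrices with entries +-1, and it simply compares the spread of the angle estimates obtained with a toeplitz projection against a fully gaussian (unstructured) baseline over independent draws; parameter values and names are illustrative, not taken from the text.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
n, m, k, trials = 128, 1024, 128, 200
H = hadamard(n) / np.sqrt(n)                     # l2-normalized hadamard matrix

def toeplitz_from_pool(m, n, pool):
    idx = (np.arange(m)[:, None] - np.arange(n)[None, :]) % len(pool)
    return pool[idx]

def extended_hash(x, P, d1, d2):
    # extended pipeline: diagonal, hadamard, diagonal, structured projection, sign
    return np.sign(P @ (d2 * (H @ (d1 * x))))

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)
theta = np.arccos(np.clip(x @ y, -1.0, 1.0))

est_struct, est_dense = [], []
for _ in range(trials):
    pool = rng.standard_normal(k)
    P = toeplitz_from_pool(m, n, pool)
    d1 = rng.choice([-1.0, 1.0], size=n)
    d2 = rng.choice([-1.0, 1.0], size=n)
    est_struct.append(np.pi * np.mean(extended_hash(x, P, d1, d2)
                                      != extended_hash(y, P, d1, d2)))
    G = rng.standard_normal((m, n))              # unstructured baseline
    est_dense.append(np.pi * np.mean(np.sign(G @ x) != np.sign(G @ y)))

print(f"true angle         : {theta:.3f}")
print(f"toeplitz  mean/std : {np.mean(est_struct):.3f} / {np.std(est_struct):.3f}")
print(f"gaussian  mean/std : {np.mean(est_dense):.3f} / {np.std(est_dense):.3f}")
```

both estimators are unbiased; the difference between the two empirical standard deviations gives a feel for the price paid for the structure, which the exponential and chebyshev bounds above control.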
without loss of generality we may assume that we have the short -regular hashing mechanism with an extra property that every has an orthonormal basis consisting of vectors with elements of absolute value at most .fix two vectors from the dataset .denote by the orthonormal basis of with the above property .let us fix the row of and denote it as .after being multiplied by the diagonal matrix we obtain another vector : we have already noticed that in the proof of lemma [ mean_lemma ] that it is the projection of into that determines whether the value of the associated random variable is or . to be more specific , we showed that iff the projection is in the region .let us write down the coordinates of the projection of into in the -coordinate system .the coordinates are the dot - products of with and respectively thus in the -coordinate system we can write as : notice that both coordinates are gaussian random variables and they are independent since they were constructed by projecting a gaussian vector into two orthogonal vectors . now notice that from our assumption about the structure of we can conclude that both coordinates may be represented as sums of weighted gaussian random variables for , i.e. : where each is of the form or for some that depends only on .notice also that the latter inequality comes from the fact that , by [ coord_eq ] , both coordinates of have the same distribution . [ small_dot_product_lemma ]let us assume that holds .let be an arbitrary positive function .then for every with probability at least , taken under coin tosses used to construct , the following is true for every : [ pseudo_ortho_lemma ] notice that the we get the first inequality for free from the fact that is orthogonal to ( in other words , can be represented as and the latter expression is clearly ) .let us consider now one of the three remaining expressions .notice that they can be rewritten as : or or for some .notice also that from the -regularity condition we immediately obtain that for at most elements of each sum .get rid of these elements from each sum and consider the remaining ones . from the definition of the -chromatic number , those remaining ones can be partitioned into at most parts , each consisting of elements that are independent random variables ( since in the corresponding graph there are no edges between them ) .thus , for the sum corresponding to each part one can apply lemma [ azuma_general ] .thus one can conclude that the sum differs from its expectation ( which clearly is zero since for ) by a with probability at most or or now it is time to use the fact that event holds .then we know that : for . 
substituting this upper bound for in the derived expressions on the probabilities coming from lemma [ azuma_general ] , and then taking the union bound , we complete the proof .we can finish the proof of theorem [ ext_technical_theorem ] .from lemma [ small_dot_product_lemma ] we see that + are close to pairwise orthogonal with high probability .let us fix some positive function and some .denote let us consider again .replacing by and by in the formula on , we obtain another gaussian vector : for each row of the matrix .notice however that vectors have one crucial advantage over vectors , namely they are independent .that comes from the fact that , are pairwise orthogonal .notice also that from [ ineq1 ] and [ ineq2 ] we obtain that the angular distance between and is at most .let for be an indicator random variable that is zero if is inside the region and zero otherwise .let for be an indicator random variable that is zero if is inside the region and zero otherwise .notice that .furthermore , random variables satisfy the assumptions of lemma [ first_lemma ] with , where .indeed , random variables are independent since vectors are independent . from what we have said so far we know that each of them takes value one with probability exactly .furthermore only if is inside and is outside or vice versa .the latter event implies ( thus it is included in the event ) that is near the border of the region , namely within an angular distance from one of the four semilines defining .thus in particular an event is contained in the event of probability at most that depends only on one .but then we can apply lemma [ first_lemma ] .all we need is to assume that the premises of lemma [ small_dot_product_lemma ] are satisfied .but this is the case with probability specified in lemma [ hadamard_lemma ] and this probability is taken under random coin tosses used to product and , thus independently from the random coin tosses used to produce . putting it all together we obtain the statement of theorem [ ext_technical_theorem ] .[ variance_lemma ] define as in the proof of theorem [ ext_technical_theorem ] .assume that the following is true : for some .the the following is true for every fixed : it suffices to estimate parameter .we proceed as in the previous proof .we only need to be a little bit more cautious since the condition : can not be assumed right now .we select two rows : of .notice that , again we see that applying gram - schmidt process we can obtain a system of pairwise orthogonal vectors such that and the fact that right now the above upper bounda are not multiplied by , as it was the case in the previous proof , plays key role in obtaining nontrivial concentration results even when no hadamard mechanism is applied .we consider the related sums : + as before .we can again partition each sum into at most subchunks , where this time ( since is toeplitz ) .the problem is that applying lemma [ azuma_general ] , we get bounds that depend on the expressions of the form and where indices are added modulo and this time we can not assume that all are small . 
fortunately we have : and let us fix some positive function .we can conclude that the number of variables such that is at most .notice that each such and each such corresponds to a pair of rows of the matrix and consequently to the unique element of the entire covariance sum ( scaled by ) .since trivially we have , we conclude that the contribution of these elements to the entire covariance sum is of order .let us now consider these and that are at most .these sums are small ( if we take ) and thus it makes sense to apply lemma [ azuma_general ] to them . that gives us upper bound with probability : taking and , we conclude that : thus , from the chebyshev s inequality , we get the following for every and fixed points : that completes the proof .
we present here new mechanisms for hashing data via binary embeddings . contrary to most of the techniques presented before , the embedding matrix of our mechanism is highly structured . that enables us to perform hashing more efficiently and to use less memory . what is crucial and nonintuitive is the fact that imposing a structured mechanism does not affect the quality of the produced hash . to the best of our knowledge , we are the first to give strong theoretical guarantees for the proposed binary hashing method by proving the efficiency of the mechanism for several classes of structured projection matrices . as a corollary , we obtain binary hashing mechanisms with strong concentration results for circulant and toeplitz matrices . our approach is , however , much more general .
a cryptographic hash function is a deterministic procedure that compresses an arbitrary block of data and returns fixed - size bit string , the hash value ( message digest or digest ) . an accidental or intentional change of the data will almost certainly change the hash value .hash functions are used to verify the integrity of data or data signature .+ let us suppose that is a hash function without key .the function is secured if the following three problems are difficult to solve .+ * problem 1 : * first preimage attack + _ instance : _ a function and an image + _ query : _ such that + we suppose that a possible hash is given , we want to know if there exists such that . if we can solve _ first preimage attack _ ,then is a valid pair . a hash function for which _ first preimage attack _ ca nt be solved efficiently is sometimes called _ preimage resistant_. + * problem 2 : * second preimage attack + _ instance : _ a function and an element + _ query : _ such that and + a message is given , we want to find a message such that and .if this is possible , then is a valid pair .a function for which _ second preimage attack _ca nt be solve efficiently is sometimes called _ second preimage resistant_. + * problem 3 : * collision attack + _ instance : _ a function + _ query : _ such that and + we want to known if it is possible to find two distinct messages and such that .a function for which collision attack ca nt be solve efficiently is sometimes called _ collision resistant_.+ there exists many hash functions : md4 , md5 , sha-0 , sha-1 , ripemd , haval .it was reported that such widely hash functions are no longer secured .thus , new hash functions should be studied .the existing hash functions such as md4 , md5 , sha-0 , sha-1 , ripemd , haval ... want to achieve two goals at the same time : ) : : for any input , they return a hash of ( of fixed length , this length depends on the hash function choosed ) ) : : preimage resistant , second preimage resistant and collision resistant .our contribution is to separate the two goals defined in points ) and ) .our hash function is defined as follows : * , * is a classical hash function such as md5 , sha-0 , sha-1 , ripemd , haval , .... * given a , find such that is np - complete , * find , such that and is np - complete , * for any input , the length of is not fixed .this is the main difference with the classical hash functions .the paper is organized as follows : in section 2 , some preliminaries are presented .section 3 is devoted to the design of our hash function .concluding remarks are stated in section 4 .let s define some preliminaries useful for the next section .data security in two dimension have been studied by many authors .let and be two positive integers , and let and be non - negative integral vectors .denoted by the set of all matrices satisfying thus a matrix of 0 s and 1 s belongs to provided its row sum vector is and its column sum vector is .the set was studied by many authors .ryser has defined an _ interchange _ to be a transformation which replaces the submatrix : of a matrix a of 0 s and 1 s with the submatrix if the submatrix ( or ) lies in rows and columns , then we call the interchange a _ -interchange_. an interchange ( or any finite sequence of interchanges ) does not alter the row and column sum vectors of a matrix . ryser has shown the following result .[ th : n1 ] let and be two and matrices composed of 0 s and 1 s , possessing equal row sum vectors and equal column sum vectors. 
then is transformable into by a finite number of interchanges .let us consider a matrix , i.e. its row sum vector is such that and its column sum vector is such that .we define the function from to as follows : where denotes the concatenation .irving and jerrum have studied the extension of the problem in three dimension and shown that problems that are solvable in polynomial time in the two - dimensional case become np - complete .suppose that for a given table of non - negative integers , and for each , the row , column and file sums are denoted by and respectively .in other words : the following problem is studied by irving and jerrum : + * problem 4 . * three - dimensional contingency tables ( 3dct ) + _ instance : _ a positive integer , and for each non - negative integers + values , and + _ question : _ does there exist an contingency table of non - negative integers such that : for all ? irving and jerrum show the following result : [ corrol:1 ] 3dct is np - complete , even in the special case where all the row , column and file sums are 0 or 1 .let us consider a matrix such that its row sum matrix is a matrix such that ( i.e. ) , the column sum matrix is a matrix such that ( i.e. ) and the file sum matrix is a matrix such that ( i.e. ) .we define the function as follows : let us consider the following matrices and .we define the element product of matrices and as follows : _ element product of matrices of dimension 2 _ + let , we define the _ element product of matrices _ and as follows : _ element product of matrices of dimension 3 _ + let , we define the _ element product of matrices _ and as follows : the construction of our hash function , let us explain the main idea . in page 175 of paper , brualdi gives the example of the following five matrices : which belong to where .let us note the following matrix : based on the _ element product of matrix _ defined in the previous subsection , it is easy to verify that : by computation , we evaluate that : it is easy to verify that , , and .all these differences imply that * the second term of is not equal to the second term of , * the third term of is not equal to the third term of , * the fifth term of is not equal to the fifth term of , * the sixth term of is not equal to the sixth term of . more formally , from the construction of , we can deduce easily that if , then : ) : : the i - th term of would probably be different from the i - th term of , ) : : the ( n+j)-th term of would probably be different from the ( n+j)-th term of . from the fact that which is related to ( this is an extension of ) is np - complete , we deduce that : ) : : given and a matrix , find a matrix such that is np - complete .our idea is to build a new hash function such that where * is a classical hash function such as md5 , sha-0 , sha-1 , ripemd , haval , ... * is a function which exploits the ideas presented in ) , ) and ) .let us denote the vector such that and each of its elements is equal to 1 .also , let us denote the matrix such that and each of its elements is equal to 1 .in other words : we denote the set of strictly positive natural number defined as follows : in the next sub - section , we formalize the observation made in points ) and ) and we take into account the np - completeness of 3dct to build a new hash function . for any integers and such that , let us denote the decomposition of the integer in base 2 on positions . 
in other words : let us also define the following function : represents the number of bits necessary to represent any integer between and in base 2 .+ we also define the following functions : represents the maximun of sum of any consecutive elements of the matrix belonging to the same row , or to the same column or to the same file . represents the number of bits necessary to represent in base 2 the sum of any consecutive elements of the matrix belonging to the same row , or to the same column , or to the same file .+ subsequently , in the aim to be more precise , we redefine as follows : let us define the following problem : + * problem 5 : * + _ instance : _ a positive integer , two binary strings and + two matrices + _ query : _ find two matrices + such that : let us characterize the complexity of .[ ref : prop1 ] problem 5 is np - complete .* proof idea of proposition [ ref : prop1 ] : * + we want to show how to transform a solution of 3dct to a solution of problem 5 . without loss of generality , we work in dimension 2 . let us suppose that we want to find a matrix such that : [ e : gpgp ] where and .+ it is easy to see that the determination of the matrix which verifies equations ( [ e : gpgp ] ) is also equivalent to determining the matrix such that : [ e : gh ] where and . + * remark 1 : * ( respectively ) is a duplication of ( respectively ) . + it is easy to see that from the matrix : which verifies equations ( [ e : gpgp ] ) , we can associate the two following matrices and which verify equations ( [ e : gh ] ) .this is the idea of the transformation which associates to one solution of the problem defined in equations ( [ e : gpgp ] ) two distinct solutions of the problem defined in equations ( [ e : gh ] ) .+ before the proof , let us introduce the function duplic ( which is pseudo - duplication ) of x. we note : where and .we define the function as follows : the function duplic is defined as follows : where is defined as follows : and for illustration , is defined as follows : * remark 2 : * in the definition of , the term means the concatenation of all the elements between and . in other words : where .* proof of proposition [ ref : prop1 ] * : it suffices to show that .+ let us suppose that the procedure _ generalize _ solves problem 5 and we want to show how to build a procedure _ sol3dct _ which solves 3dct . + the procedure _sol3dct _ takes as input a binary string x , an integer n and returns as output the matrix of size such that .the procedure _ generalize _ takes as input : * p the dimension of the matrices * two binary strings and * two matrices and and returns as output : * two matrices and such that : * and we show in the procedure below how to use as a subroutine to solve . 
procedure = * sol3dct ( n : integer , x : string , var a : matrix ) ; * + v , w , c , d : matrix + p , i , j , k : integer + z : string + * begin * = + 1 : + 2 : + 3 : + 4 : + 5 : + 6 : for i = 1 = to n do + 7 : for j = 1 = to n do + 8 : for k = 1 = to n do + 9 : + 10 : endfor + 11 : endfor + 12 : endfor + * end * + * remark 3 : * in the procedure * sol3dct * , the matrix belongs to the set , whereas the matrices belong to the set .+ the string of the procedure * sol3dct * ( see instruction 3 ) is constructed such that if and only if the matrices and defined in equations ( [ ee : ep1 ] ) and ( [ ee : ep2 ] ) are the solutions of problem 5 with the following entries : * 2n the dimension of the matrices , * two binary strings and , * two matrices and such that , .the terms of the matrix are : the terms of the matrix are : + the main idea of the design of the collision - resistant hash function is that : * the hash function is the composition of two functions and , * the function is a function for which _ problem 1 _ , _ problem 2 _ and _ problem 3 _ ca nt be solved efficiently and is not a compression function . * is a hash function such as sha-256 , ripemd , or haval , .... let us consider two vectors and .we say that is not a linear combination of and we note is nlc of if and only if such that .two matrices verify the hypotheses ( [ e : gp ] ) if and only if : [ e : gp ] the matrices and used as entries in the procedures and below verify the hypotheses defined by equations ( [ e : gp ] ) .we note the empty chain .let us define the function which takes as input a vector of size and returns as output an equivalent matrix of size .* procedure * ( vect : table[1 .. ] of bit ; var a : table[1 .. n , 1 .. n , 1 .. n ] of bit ) + var i , j , k , t : integer + begin + + i = 1 to n do + j = 1 to n do + k = 1 to n do + + + + + + + the function is defined as follows : + * function * : + _ entry . _ the initial message + : table[1 .. n , 1 .. n,1 .. n ] of integer + : table[1 .. n , 1 .. n,1 .. n ] of integer + : an integer + _ output . _ : an intermediate message + var i , p : integer + begin + 1 .pad with one bit equal to 1 , followed by a variable number of + zero bits and a block of bits encoding the length of in bits , + so that the total length of the padded message is the smallest + possible multiple of .let denote the padded message + 2 .cut into a sequence of -bits vectors + + 3 . + . for i= 1 to p do + 4.1 + 4.2 + endfor + 5 . return + + our hash function is defined as the composition of the function and , where is a hash function such as sha-256 , ripemd , haval ... the matrices and used as entry in the hash function must verify the hypotheses defined in equations ( [ e : gp ] ) . to obtain the hash of the message by , we proceed as follows : * we obtain the intermediate message by application of the function to the message , * by application of the hash function to , we build the hash of the initial message .formally , the hash function is defined as follows : + + * procedure * : + _ entry . _ the initial message + : table[1 .. n , 1 .. n,1 .. n ] of integer + : table[1 .. n , 1 .. n,1 .. 
n ] of integer + : an integer + _ output ._ : the hash of the message + begin + + + + * comment :* + we can represent roughly the function as follows : = 8.6 cm in the figure [ figgraph9 ] : * the aim of the branches ( 1 ) and ( 2 ) is to make that the problem 2 and problem 3 are difficult to solve efficiently for the function * the aim of the branch ( 6 ) is to make sure that problem 1 is difficult to solve efficiently for the function during some attacks , an adversary is needed to solve the following problem : * problem 6 : * + _ instance : _ matrices a , v , w + binary strings : and + _ query : _ find a matrix such that and : based on problem 5 , we deduce that is np - complete . + second preimage attack and collision of the function difficult because : * problem 5 and problem 6 are np - complete , * from the fact that and verify the hypotheses ( [ e : gp ] ) , we deduce that if we take two matrices and such that , then we would probably have first preimage attack of the function is difficult because the 3dct is np - complete .truncated differential attack of is possible , but the differential attack of is difficult because 3dct is np - complete and also problem 5 is np - complete .let s consider the two messages x1 and x2 : + we have md5(x1)=md5(x2)= efe502f744768114b58c8523184841f3 + after applying our hash function on these messages using , [j][k ] = i + 8j + 64k ] for we obtain : + (x1)= 5fe0e56f9a4ab66a47d73ce660a2c4eb and + (x2 ) = 620e2f3cfe0afc403c0a8343173526fc .+ it follows that whereas .from a classical hash function , we have built a new hash function from which first preimage attack , second preimage attack and collision attack are difficult to solve . our new hash function is a composition of functions .the construction used the np - completeness of three - dimensional contingency tables and the relaxation of the constraint that a hash function should also be a compression function .the complexity of our new hash function increases with regard to the complexity of classical hash functions .2 r. a. brualdi , _ matrices of zeros and ones with fixed row and column sum vectors _ , linear algebra and its applications , * 33 * , 1980 , pp .159 - 231 .l. cox , _ suppression methodology and statistical disclosure control _ , j. amer .assoc . , 75(1980 ) , pp .377 - 385 .i. p. fellegi , _ on the question of statistical confidentiallity _, j. amer .assoc . , 67 , ( 1972 ) , pp .d. r. fulkerson , _ an upper bound for the permanent of a fully indecomposable matrix _ , pacific j. math . ,* 10 * , 1960 , pp .831 - 836 .d. gale , _ a theorem on flows in networks _, pacific j. math ., * 7 * , 1957 , pp. 1073 - 1082 .r. m. haber , _ minimal term rank of a class of ( 0,1)-matrices _ , canad ., * 15 * , 1963 , pp . 188 - 192 .hongbo yu , xiaoyun wang , _ multi - collision attack on the compression functions of md4 and 3-pass haval _ , lecture notes in computer science , * 4817 * , springer 2007 , pp .206 - 226 .hongbo yu , gaoli wang , guoyan zhang , xiaoyun wang , _ the second - preimage attack on md4 _ , lecture notes in computer science , * 3810 * , springer 2005 , pp . 1 - 12kao , d. gusfield hongbo yu , _ efficient detection and protection of information in cross tabulated tables : linear invariant set _ , siam j. disc ., 6 ( 1993 ) , pp . 460 - 473 .xiaoyun wang , hongbo yu , yiqun lisa yin , _ efficient collision search attack on sha-0 _ , lecture notes in computer science , * 3621 * , springer 2005 , pp . 
1 - 16 .xiaoyun wang , yiqun lisa yin , hongbo yu , _ finding collisions in the full sha-1 _ , lecture notes in computer science , * 3621 * , springer 2005 , pp .xiaoyun wang , xuejia lai , dengguo feng , hui cheng , xiuyuan yu , _ cryptanalysis of the hash functions md4 and ripemd _ , lecture notes in computer science , * 3494 * , springer 2005 , pp . 1 - 18 .xiaoyun wang , hongbo yu , _ how to break md5 and other hash functions _ , lecture notes in computer science , * 3494 * , springer 2005 , pp .bert den boer , antoon bosselaers , _ collisions for the compression functions of md5 _ , lecture notes in computer science , * 765 * , springer 1994 , pp .293 - 304 .r. w. irving , m. r. jerrum , _ three - dimensional statistical data security problems _ , siam j. comput .* 23 * , no 1 , pp .170 - 184 , 1994 .h. j. ryser , _ combinatorial properties of matrices of zeros and ones _ , canad .* 9 * , pp .371 - 377 , 1957 .g. sande , _ automated cell suppression to preserve confidentiality of business statistics _, statist .j. united nations ece 2 ( 1984 ) pp .
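as a small illustration of the combinatorial ingredients used throughout this construction — ryser's interchange on matrices of zeros and ones (which preserves row and column sums), the row/column/file sums of a three - dimensional binary table from the 3dct problem, and the element product — here is a minimal python sketch. the element product is taken here to be the entrywise product, which is an assumption about the stripped formula, and the sketch is not the authors' implementation.

```python
import numpy as np

def interchange(a, i1, i2, j1, j2):
    # ryser interchange: replace the 2x2 submatrix [[1,0],[0,1]] by [[0,1],[1,0]]
    # (or vice versa); row and column sums are unchanged
    sub = a[np.ix_([i1, i2], [j1, j2])]
    if not (np.array_equal(sub, np.eye(2)) or np.array_equal(sub, np.eye(2)[::-1])):
        raise ValueError("submatrix is not an interchangeable pattern")
    b = a.copy()
    b[np.ix_([i1, i2], [j1, j2])] = 1 - sub
    return b

def rcf_sums(t):
    # row, column and file sums of a 3d binary table, as in the 3dct problem
    return t.sum(axis=(1, 2)), t.sum(axis=(0, 2)), t.sum(axis=(0, 1))

def element_product(a, b):
    # "element product" interpreted here as the entrywise product (an assumption)
    return a * b

a = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])
b = interchange(a, 0, 1, 0, 1)
print(a.sum(axis=1), b.sum(axis=1))   # identical row sums
print(a.sum(axis=0), b.sum(axis=0))   # identical column sums

t = np.array([[[1, 0], [0, 1]],
              [[0, 1], [1, 0]]])
print(rcf_sums(t))                    # row, column and file sums of the 3d table
print(element_product(a, a))          # entrywise product
```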
a cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed - size bit string . there exist many hash functions : md5 , haval , sha , ... it has been reported that these hash functions are no longer secure . our work is focused on the construction of a new hash function based on the composition of functions . the construction uses the np - completeness of three - dimensional contingency tables and the relaxation of the constraint that a hash function should also be a compression function . + * keywords :* np - complete , one - way function , matrix of zeros and ones , three - dimensional contingency table , collision - resistant hash function .
present knowledge about susceptibility of modern technological systems to space weather related disturbances has lead in a significant rise of the scientific and operational interest of issues associated to the understanding and forecast of space weather events . in turn , it has resulted in substantial increase of simultaneously operating satellites devoted to space weather effects , and to an exponential growth in the volume of space weather related data .thus , it is impossible to provide comprehensive analysis of these huge quantity of data without the development of proper algorithms and software tools that are able to identify automatically segments of data relevant to a particular physical process of space weather event of interest .the sudden energy release of the solar corona is the source of powerful electromagnetic disturbances of the near - terrestrial environment .solar flares and coronal mass ejections ( cmes ) can trigger geomagnetic storms which may affect terrestrial communications and the reliability of power systems .the most hazardous space weather events are associated with cmes .thus , cmes and flares are principal objects of detection and cataloguing . however , automatic detection of flares and cmes detection is not evident , due to their complex characteristics .a number of flare and cme detection methods in different wavelengths with different instrumental limitation have been reviewed by . todaya number of the instruments accessible for space community is able to monitor flares and cme events on the regular basis .moreover , there are several data sources that can be consulted in real or near - real time for the case of solar flares .the solar flare automatic detector of solar soft , which is based on recognition of total x - ray intensity flux bursts from goes-2 , provides a solar flare catalog which is automatically constructed and is available on lmsal / nasa website ( http://www.lmsal.com/solarsoft/latest_events ) . a full catalog of solar flares observed by different instruments is manually constructed by noaa space weather center in boulder / us with information provided to the community with one day delay ( http://www.swpc.noaa.gov/ftpdir/indices/events ) .the detection of cmes has been traditionally based on the manual recognition of features moving radially outwards from the sun using coronagraph data . the full cme catalog based on lasco / soho coronograph observationsis manually constructed by n. gopalswamy group ( http://cdaw.gsfc.nasa.gov/cme_list ) .a significant progress in the automatic detection and cataloguing of cmes was achieved recently by the cactus software using lasco / soho and secchi / stereo data ( http://sidc.be/cactus ) .this has led to issuing of near - real - time messages ( with a delay of a quarter of a day ) alerting the space weather community .the ever growing importance of space weather has led to new requirements on the accuracy of cme detection and forecast . in situ measurementsare not able to address the problem of prediction of solar eruptive events that can be extremely hazardous to space based technological systems .however , solar surface observations may provide such information in advance and substantially increase the forecast time of cme prediction .furthermore , the increasing processing power of computers and the evolution in image processing and pattern recognition techniques has made possible to develop automatic detection tools for the solar drivers of space weather event . 
during the last decadethe extreme ultraviolet ( euv ) solar imaging from space has revealed a rich diversity of solar disk events , such as global waves and eruptive dimmings ( so - called `` eit wave '' phenomenon .euv imager telescope on - board soho was able to transmit high cadence solar corona euv images on earth from the beginning of 1997 till 1 august , 2010 24 hrs a day .eit waves appear as global bright features propagating in the solar corona followed by intensity dimness ( dimming ) as seen in extreme ultraviolet ( euv ) .they are usually triggered by solar flares or brusque filament disappearance .it has been suggested that these waves are among the best indicators of the large - scale reorganization of coronal magnetic fields .various eit waves were found to precede cmes in time and space ( i.e. cme signatures ) and later got a scientific name of solar eruptions .euv observation of the associated with cme phenomena ( e.g. , eruptive dimmings and eit waves ) can be processed faster than coronagraphic images ( e.g. , less images are required for detection ) .thus , observations of eruptive dimmings end eit wave phenomena might improve drastically the forecast times of cme warnings .the first catalog of the large scale eit waves was manually constructed by and accounted for the years 19971998 .in contrast to the detection of solar flares and cmes which is based on the recognition of well distinctive signatures ( intensity bursts or global structures propagating in the plane of sky ) the detection of eit waves is not trivial since they are characterized by rather low intensity ( on the level of general background ) , they present large variety in morphology and can last down to tens of minutes . the first proof - of - principle demonstration for the automated detection of eit waves using the statistical properties of the eruptive euv on - disk events and the underlying physical mechanismswas proposed by .the physical properties of eruptive cme on - disk precursors modify drastically the high order statistical properties of the image sequences . based on this proof - of - principle an algorithm was developed in order to detect eit waves and eruptive dimmings .the algorithm was successfully tested with the calibrated eit / soho data of 19971998 period in order to extract the events listed in the eit wave catalogue of . 
in 2006 , the novel eit wave machine observing ( nemo ) ( http://sidc.be/nemo ) operational tool was developed in order to detect automatically solar eruptions using real - time quick look eit images and extract eruptive dimmings with their parameters .one of the particularly important nemo feature is the capability to detect among the large family of solar dimmings - which are always present on the euv solar disk - those connected to cmes .nemo has been built using a series of high level image processing techniques suitable to extract eruptive features from the euv solar disk under complex solar conditions .the operation of nemo allowed the automatic construction of a catalogue with the eruptive dimmings and eit waves as detected in the image sequences of soho for the period 19972010 .the majority of the events listed in the catalogue were identified as cme precursors ( using cme catalogs ) .in contrast to the detection methods based on coronograph data , nemo could detect even the precursors of faint halo cmes as well .euv solar imagers are currently operating on - board proba-2 , stereo ( solar terrestrial relations observatory ) and sdo ( solar dynamic observatory ) missions while two more imagers are under construction for future space experiments . a project for automatic recognition of eruptive dimmings for post - processed sdo data was recently initiated by nasa using detection principles similar to that of nemo .similar principles have been also used for automated flare detection based on euv images by while demonstrated a principle to detect eruptive dimmings that are widely distributed in the visible euv solar corona .it is evident that nemo may further contribute to the development of detection schemes of eruptive events . in this work ,the recent updates of nemo code are presented resulting to increase of the recognition efficiency of the solar eruptions linked to cmes . in particular , these updates provide : ( 1 ) the direct calculation of the the surface of the dimming region in terms of physical variables ( square kilometers ) since eit images are in fact projection of the solar sphere ; ( 2 )the optimization of a clustering technique for the dimmings ; ( 3 ) new criteria to flag the eruptive dimmings , based on their complex characteristics ( area and intensity of dimmings ) . the basic scheme of nemo algorithm is described in the next section and the most recent developments in section 3 .some examples are presented in section 4 and a conclusive summary in the last section .nemo algorithm consists of two main parts that are briefly presented below .* event detection : * the initial part refers to the detection of an event in the euv disc and it is based on the modifications of basic statistical properties of the image sequences during such events . all the discovered type of cme precursors in euv , such as various eit waves and the sudden loop opening events are triggered by solar flares or brusque filament disappearance and they always contain eruptive dimmings . 
as the consequence ,all these events have one important common characteristic : they strongly affect the probability distribution functions ( pdf ) of pixels distributions .the techniques that allowed the extraction of the eruptive features from the euv solar disk under complex solar conditions are based on the analysis of skewness and kurtosis .skewness is the measure of the asymmetry of the probability distribution function ( pdf ) .if the left tail of the pdf is more pronounced than the right one , then the pdf has negative skewness and when the reverse is true , it has positive skewness .kurtosis measures the excess probability ( flatness ) in the tails , where excess is defined in relation to a gaussian distribution .large values of these high order moments indicate the existence of intermittent / bursty events which are characterized by strongly non - gaussian pdfs .the sudden appearance of coherent and intensive structures on the sun is reflected by bursts of skewness and kurtosis of the pixel distribution .the appearance of these bursts is a reliable indicator of a flare or an eruption occurrence .solar eruptions are barely seen in the image due to their low intensity contrast with respect to simultaneously appeared solar flare .thus , fixed differences ( fd ) images are constructed by the subtraction of a reference image of the day and taking into account the solar differential rotation compensation .the first step in nemo consist of the detection of higher order moments bursts of differences images distribution .this detection principle is very robust and can be applied successfully in data of any euv telescope especially for the case of large scale events .if these moments grows in few consecutive images an event is detected .as an example , let us consider _ `` a worst case '' _ of the four faint cmes observed during 1997 april , 1 by lasco coronograph .the recognition of their precursors is not distinctive by visual inspection on the euv / soho 195 images of the full sun ( fig .[ fig1]_a _ ) . however , the asymmetry and the flatness of the pixel distributions on the differences images are excellent indicators of their occurrence ( fig .[ fig1]_b _ ) .remarkably , one can see that these cme signatures are observed a up to a couple of hours prior to the associated coronograph observations .once the occurrence of an event has been detected , the next step is to recognize if an _ eruptive dimming _ is present on the fd image of solar disk .eruptive dimmings appear as regions of dark intensity in euv wavelengths . however , there are many regions of dark intensity that are present continuously on the euv sun .the eruptive dimmings differs from them as they appear rapidly and they have the tendency to expand much faster .the algorithm of eruptive dimming recognition includes the following consecutive steps : 1 . _the construction of fixed differences , _ with solar differential derotation compensation included , where the first image before the solar event is subtracted from the sequence of the subsequent images .2 . _ the extraction of regions with decreased intensity _ from the difference images : this region is extracted by selecting the 5% of the darkest pixels in the fd image .the resulted images contain large - scale distinctive regions of reduced intensity. 
however , many small- scale scattered regions of reduced intensity - attributed to the presence of noise effects - still remain .the median filtering application _ for the reduction of small scale noise : using this standard procedure the small scale scattered noisy points are further reduced ._ clustering _ of decreased intensity regions : dimmings are the large scale connected regions of reduced intensity with respect to the scattered points attributed to noise . for the extraction of the dimming regionthe agglomerative filtering is applied ; for each decreased intensity reference pixel with coordinates the agglomerative weight is computed .this weight is equal to the number of pixels with reduced intensity in the limited square vicinity around the reference one and permits the separation between the small scattered and the large scale regions of negative intensity .dimming extraction . _+ to extract the larger dimming area from the background of decreased intensity regions , we set the maximum weight and we select those pixels with weight that exceeds , where $ ] is the coefficient of agglomeration . + figure [ fig2 ] shows all the five steps of dimming extraction as described above using eit / soho 195 images taken at 1997 may , 12 at 05:07 ut ( top panel ) and 1997 april , 7 at 14:22 ut ( bottom panel ) . if the resulted filtered image contains more then one dimming area , the largest one is selected for the subsequent analysis .the decision whether a dimming is eruptive or not is received depending on the growth of its dimming area in few images . the algorithm of eruptive dimming extraction can be summarized as follows : * positive answer on the first part indicated the detection of burst in moments of pixel distribution behavior and provides an alert about solar flare occurrence . * positive answer on the second part indicates a growth in the surface area of the dimming and provides alert about solar eruption occurrence and prompt appearance of cme in the heliosphere . it is evident that the identification of dimming regions may be the earliest and most efficient method for the prediction of cmes .the dimming extraction by nemo as presented above was based so far on the estimation of dimming surface in terms of the numbers of pixels .however , areas of equal surface at the image center and at the solar limb may differ up to five times in terms of pixel numbers due to projection effects . as a consequence ,this may lead to unjustified comparison of regions with decreased intensity and finally to the loss of events occurred near the limb in favor of smaller dimming located in the center of the disc .the clustering technique ( agglomerative filtering ) of decreased intensity regions uses the square vicinity around a center pixel and the information is taken from an area which is actually the projection of a square to a hemisphere .thus , it can significantly distort the filtering result in the resulting area forms due to the asymmetry of the vicinity relatively to its center .the extraction of eruptive dimming is carried out by comparing the size of the decreased intensity regions but not the intensity itself . however , the intensity of the dimming is also a significant characteristic as the area .the released energy during the dimming formation is almost equal to the energy of cme .the joint consideration of dimming areas and intensities can provides an estimation on the power of cme and therefore to increase the reliability of the eruptive dimming extraction . 
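a compact sketch of the two nemo stages described above is given below (python, illustrative parameter values): a burst detector for the skewness and kurtosis of the fixed - difference images, and the five - step dimming extraction (darkest 5% of pixels, median filtering, agglomerative weights over a square vicinity, thresholding by the coefficient of agglomeration, selection of the largest connected region). the thresholds, the vicinity size and the exact form of the burst test are assumptions; this is not the operational nemo code.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import skew, kurtosis

def event_alert(diff_images, n_consecutive=3):
    # flag an event when skewness and kurtosis of the fixed-difference
    # pixel distributions grow over a few consecutive frames
    s = np.array([skew(d.ravel()) for d in diff_images])
    k = np.array([kurtosis(d.ravel()) for d in diff_images])
    growing = (np.diff(s) > 0) & (np.diff(k) > 0)
    return any(np.all(growing[i:i + n_consecutive])
               for i in range(len(growing) - n_consecutive + 1))

def extract_dimming(diff_image, dark_fraction=0.05, vicinity=5, c_aggl=0.5):
    # 1) keep the darkest fraction of pixels of the fixed-difference image
    dark = diff_image <= np.quantile(diff_image, dark_fraction)
    # 2) median filtering to suppress small-scale scattered noise
    dark = ndimage.median_filter(dark.astype(np.uint8), size=3).astype(bool)
    # 3) agglomerative weight: count of dark pixels in the square vicinity
    #    of each dark pixel (up to edge effects)
    weights = ndimage.uniform_filter(dark.astype(float), size=vicinity) * vicinity**2
    weights *= dark
    # 4) keep pixels whose weight exceeds a fraction of the maximum weight
    core = weights >= c_aggl * weights.max()
    # 5) return the largest connected region as the candidate dimming
    labels, n = ndimage.label(core)
    if n == 0:
        return np.zeros_like(dark)
    sizes = ndimage.sum(core, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

in this sketch an eruption would be flagged when `event_alert` is true and the area of the mask returned by `extract_dimming` grows over a few consecutive frames, mirroring the two positive answers summarised above.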
in order to improve the detection efficiency of nemo ,the following updates have been applied : * _ estimation of dimming areas in . _the dimming area is computed now directly in terms of square kilometers instead of the eit ( or euvi ) pixels by taking into account that an euv image is a projection of the solar sphere . using the surface integral and the mean value theorem, we calculate the surface of pixel in using the following relation : where is solar radius in and is the solar radius in pixels .the total dimming area in is simply determined by the sum of the pixels surface that form the dimming area . * _ clustering in circle vicinity . _the clustering procedure is significantly improved since the agglomerative filtering is carried out on the spherical surface of the circular vicinity of the square vicinity .we have found that the circle vicinity clustering improved drastically the estimation of the dimming structures with the maximum possible accuracy . * _ complex characteristic for dimming extraction _ connected to cmes is modified .the nemo algorithm uses a criterion based on the surface of the dimming area for the extraction of the dimming . in order to increase the confidence on dimming extraction , we have introduced a new complex characteristic which allows the simultaneous consideration of both area and intensity of dimmings .this characteristic can be considered as a `` volume metric '' of the dimming and it is defined by the following relation : + \ ] ] + here is the surface of the dimming and is the intensity of the dimming pixels .this characteristic variable can be used for the construction of more qualitative picture about the dimming evolution in time and can provide an rough estimation on the cme power which is connected with the formation of considered dimmings .a comparative analysis between the updated and the basic nemo algorithms has been carried out based on a number of soho and stereo euv observations of solar corona .it was clearly shown that the new algorithm provides a more accurate estimation of the shape and the size of eruptive dimmings for all the considered cases .the modified algorithm is more sensitive to small - scale events and it provides much earlier detection while the basic algorithm could miss completely small scale events or detect larger ones with additional delays .remarkably , the results are drastically improved especially for the events observed near the solar limb .three characteristic examples are presented below .an eruptive event was observed near the solar limb by eit / soho imager in 195 wavelength 7 april , 1997 at 14:22 ut .this event was automatically detected in both basic and modified algorithms .[ fig3 ] depicts the extracted structures from the differences images using the basic ( fig .[ fig3]_a _ ) and the modified algorithm ( fig .[ fig3]_b _ ) .as it is shown , the modified algorithm demonstrates considerably more precise definition of the dimming shape .the surface of the dimming is given in instead of pixels while a number of additional parameters is also provided .the euvi / stereo - b observations of solar disk eruptive event that occurred 2008 april , 26 from 13:15 ut to 13:55 ut are shown in fig .the top panel depicts the fixed differences images of the event created by subtracting a fixed reference image before the event onset from the subsequent images .the middle and bottom panels show the dimmings as extracted by using the basic and the modified algorithms respectively . 
as it is shown in fig .[ fig4 ] , the modified version of the algorithm detects the dimming from the early stages of the event development ( at 13:15 ut ) in contrast to the basic algorithm which provides detection alert with a relative delay of 10 minutes .the parameters of the event are shown in table [ table1 ] as extracted by the application of the new algorithms .the basic algorithm can not provide in such accuracy the shape and the rest characteristic parameters of the eruptive dimming ..characteristics of 26 april , 2008 euvi / stereo event _ eruptive dimming _ extracted by modified nemo algorithm . [ cols="^,^,^,^,^,^",options="header " , ] [ table2 ] from all the examples presented above it becomes evident that the consideration of both surface of the dimming region and dimming intensity in combination with the more accurate extraction method of circle vicinity clustering in spherical coordinates increase significantly the detection efficiency and the information provided by the extracted dimmings .an updated version of nemo that will include all the algorithms presented is expected to introduce the following additional and more accurate information in the nemo catalogs of eruptive events : * dimming coordinates ( latitudes and longitudes of the dimming borders ) * dimming intensity * `` volume metric '' of dimming * surface of the dimming area in km the dimming coordinates listed in nemo catalogs will provide systematic information about the cme precursors while the other parameters of dimmings : area , intensity , `` volume metric '' may provide essential information for the estimation of cme power .we have developed novel algorithms for the nemo detection tool of flares and cmes for post - processes of eit / soho and secchi / stereo euv solar disk images .nemo version that is currently in operation consists of two main steps , namely the event detection and the eruptive dimming recognition .the event detection is based on the detection of bursts in the high order moments of the image while the eruptive dimming extraction is based on a sequence of filtering techniques applied in the pixel distributions of image sequences . in this work ,we have presented a series of updates in the eruptive dimming extraction algorithms that allow us to increase significantly the detection efficiency of eruptive dimmings linked to cmes .the surface of the dimming area is computed now directly in terms of physical variables ( square kilometers ) by taking rigorously into account that eit images correspond to solar sphere projections .the clustering of dark regions is achieved through circle vicinity clustering on latitude and longitude .furthermore , the novel methods for the eruptive dimming extraction - based on the volume metric of the dimming increase the detection efficiency and the accuracy of the associated extracted parameters . using a series of examples , we have shown that the modified version of nemo tool presents indeed significantly higher temporal and spatial efficiency on the automatic detection of cme precursors .in particular , small eruptive events located near the solar limb can be detected now while major events can be detected at earlier times than before .the nemo tool will incorporate the optimized new algorithms and it is expected to provide early warnings for cmes precursors and automatically construct new catalogs with enhanced and more accurate information about the detected solar eruptive events .
the recent developments in space instrumentation for solar observations and telemetry have created the need for advanced pattern-recognition tools for the different classes of solar events. the extreme ultraviolet imaging telescope (eit) of the solar corona on board the soho spacecraft has uncovered a new class of eruptive events which are often identified as signatures of coronal mass ejection (cme) initiation on the solar disk. the development of an automatic detection tool for cme precursors is therefore a crucial task. the novel eit wave machine observing (nemo) code (http://sidc.be/nemo) is an operational tool that automatically detects solar eruptions in eit image sequences. nemo applies techniques based on the general statistical properties of the underlying physical mechanisms of eruptive events on the solar disc. in this work, the most recent updates of the nemo code - which have increased its efficiency in recognizing solar eruptions linked to cmes - are presented. these updates provide calculations of the surface of the dimming region, implement a novel clustering technique for the dimmings, and set new criteria to flag eruptive dimmings based on their complex characteristics. the efficiency of nemo has been increased significantly, resulting in the extraction of dimmings observed near the solar limb and in the detection of small-scale events as well. as a consequence, the detection efficiency of cme precursors and the forecasting of cmes have been drastically improved. furthermore, the catalogues of solar eruptive events constructed by nemo may include a larger number of physical parameters associated with the dimming regions.
bacteria are unicellular organisms generally studied as isolated units , however they are interactive organisms able to perform collective behaviour , and a clear marker of the presence of a multicellular organization level is the formation of growth patterns . particularly it has been pointed out that unfavorable conditions may lead bacteria to a cooperative behavior , as a means to react to the environmental constraints .many studies about the multicellular level of organization of bacteria have been proposed and pattern formation during colonies growth has been observed in cyanobacteria , in bacillus subtilis , in escherichia coli , proteus mirabilis and others . some of these patterns have been studied by mathematical models , that explain the macroscopic patterns through the microscopic observations .there is a group of bacteria that differs from those cited above because their normal morphological organization is clearly multicellular : actinomycetes , and _ streptomyces _ is a genus of this group ._ streptomycetes _ are gram - positive bacteria that grow as mycelial filaments in the soil , whose mature colonies may contain two types of mycelia , the substrate , or vegetative , mycelium and the aerial mycelium , that have different biological roles .vegetative mycelium absorbs the nutrients , and is composed of a dense and complex network of hyphae usually embedded in the soil .once the cell culture becomes nutrient - limited , the aerial mycelium develops from the surface of the vegetative mycelium .the role of this type of mycelium is mainly reproductive , indeed the aerial mycelium develops the spores and put them in a favorable position to be dispersed . in our laboratorywe have isolated a bacterial strain , identified with morphological criteria as belonging to _streptomyces_. this strain is interesting because its growth pattern differs on maximal and minimal culture media . on maximal culture medium ( lb , luria broth ) , after days of growth at , the strain shows a typical bacterial growth with formation of the rounded colony characteristic of most of the bacterial strains ( fig .[ exp : max ] ) . on minimal culture medium ( fahreus ) growth proceeds more slowly than in maximal media and a concentric rings pattern of aerial mycelium sets up ( fig .[ exp : min ] ) .the rings are centered on the first cell that sets up the colony - we call it the founder - where usually the aerial mycelium develops as well .the number of rings increases with time till after days of growth at . in both cases agarconcentration was .the presence of concentric rings patterns is a quite common feature in bacterial and fungi colonies ; many models can originate such patterns , a possible explanation was proposed in , where is suggested that the interplay of front propagation and turing instability can lead to concentric ring and spot patterns . 
a different approach based on competition for resourceshas been recently proposed to study species formation as pattern formation in the genotypic space .we consider a similar mechanism to investigate the spatial pattern formations observed in our laboratory in a _streptomyces _ colony .before introducing the mathematical model we have to go through some of the biological features of the system .aerial mycelia are connected through the vegetative hypae network .this network has a peculiar structure in the _ streptomyces _ isolated in our laboratory , indeed we observe that the growing boundary of the substrate mycelium is made by many hyphae extending radially from the founder so that , in this area , the substrate mycelium has a radial polarity , also if the hyphae have many branching segments .substrate mycelium has the biological objective to find nutrients to give rise to spores , therefore we expect that on minimal media a strong competition arises for the energetic resources between neighbor substrate mycelia , whereas in maximal media , where there are sufficient nutrients , the competition is weaker . if the cells are connected mainly along the radial direction , then competition will be stronger along this direction than along the tangential one . in other words , in the growing edge of the colony , the competition is not isotropic but , following the vegetative mycelium morphology , it will be stronger among cells belonging to neighboring circumferences ( radial direction ) than among cells of the same ( tangential direction ) , and we will keep track of these aspects in the model .although the radial polarity is lost inside the colony , the asymptotic distribution of aerial mycelium is strongly affected by the initial spots derived by the growing boundary of the vegetative mycelium .finally another important feature of the biological system is the presence of a founder .the founder behaves as every other aerial mycelium - it competes with the other cell - , moreover it is the center of every circle .that means that every hypha originates from the founder : it is the source of the vegetative hyphae , and as the colony grows the ring near the founder become increasingly densely packed .moreover during the enlargement of the colony no new center sets up and therefore substrate mycelium density is highest near the founder and decreases radially away from it .to summarize , in our model we make the following assumptions based on the previous considerations .* there is competition among every aerial mycelium for some substances that we assume for sake of simplicity uniformly distributed over the culture .* we consider only the aerial mycelium : we do not introduce explicitly the substrate mycelium but we take in account it assuming that * * the competition is stronger along the radial direction than along the tangential one . * * the probability for the aerial mycelium to appear is higher near the founder assuming this framework we show that a concentric rings pattern may be explained as a consequence of strong competition , and a rounded pattern of weak competition . from the biological point of viewthis result implies that the formation of concentric rings patterns is a mean that _ streptomyces _ adopts to control growth . 
in the following we propose a mathematical model to reproduce the aerial mycelium growth patterns described in the introduction .this model is derived from a similar model introduced , in a different framework , ( species formation in genotypic space ) in .let us consider a two - dimensional spatial lattice , that represents the petri dish .each point is identified by two coordinates , we study the temporal evolution of the normalized probability to have an aerial mycelium in position at time .the evolution equation for , is in the form : where is the probability of formation of a new aerial mycelium in position and we suppose it can depend also on the distribution . according to the hypothesis described above , it is the product of two independent terms : where is the so - called static fitness , and represents the probability of growth of an aerial mycelium in presence of an infinite amount of resources ( no competition ) .the founder is the source of every hypha , so we expect it will be a decreasing function of the distance from the founder , with , assuming the founder occupies position .the second term is the competition term , and in general it depends on the whole spatial distribution , moreover we suppose that two aerial micelia compete as stronger as close they are . is the average fitness and it is necessary to have normalized .it is defined as following : both terms are positive , therefore can be written in the exponential form where is the intensity of competition ( it will be large in presence of strong competition , i.e. low resource level ) and is a decreasing function of the distance between two mycelia .we also allow to diffuse to the nearest neighbors with diffusing coefficient .finally we get : according to the assumptions stated in section 2.1 , we now introduce the particular forms for and . depends on the distance from the founder , and the competition kernel , depending on the distance between mycelia .as mentioned above , we expected the probability of growth for the aerial mycelium to be higher near the founder , therefore has to be a decreasing function of . for the sake of simplicity we have chosen a single maximum , `` almost linear '' function , that has a quadratic maximum in ( founder ) , in fact close to we have and for , is linear . and control the intensity of the static fitness .the competition kernel has to be a steep decreasing function of ; we expect to have a finite range of competition , i.e. two mycelia at distance do not compete ( or compete very weakly ) .a possible choice is : we have also chosen the form for the kernel ( [ kernel ] ) and static fitness ( [ h_1 ] ) because it is possible to derive some analytical results that assure us the existence of a non - trivial spatial distribution for exponential kernel with exponent greater than ; is the range of competition .all the numerical and analytical results described in this paper are obtained using ( [ h_1 ] , [ kernel ] ) , but we have also tested similar potential obtaining the same qualitative results .computing numerically from eq .( [ evolution ] ) the asymptotic probability distribution , we get , for different values of the parameters , two types of spatial patterns . in particular numerical and analytical studies ( see ref . ) show that the crucial parameter is , i.e. the ratio between the intensity of competition and the intensity of the static fitness . 
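the functional forms of the static fitness and of the competition kernel are given by eqs. ([h_1]) and ([kernel]) in the original work but are not reproduced in the text above; the sketch below therefore uses stand-ins with the stated qualitative properties (a quadratic maximum of the fitness at the founder followed by an almost linear decay, and a steep kernel of finite range R). the parameter values, the lattice size and the periodic boundaries of the diffusion step are illustrative assumptions, not the values of the paper.

```python
import numpy as np

def simulate_colony(L=33, steps=300, J=2.0, R=6.0, b=0.03, D=0.05):
    """minimal sketch of the isotropic competition model on an L x L lattice;
    p(x, t) is the normalised probability of aerial-mycelium formation."""
    c = L // 2                                   # founder at the lattice centre
    y, x = np.indices((L, L)).astype(float)
    pos = np.stack([(x - c).ravel(), (y - c).ravel()], axis=1)

    r = np.linalg.norm(pos, axis=1)
    # static fitness: quadratic maximum at the founder, almost linear decay
    H = np.clip(1.0 - b * (np.sqrt(1.0 + r**2) - 1.0), 1e-3, None)

    # steep competition kernel of range ~ R on the euclidean pairwise distance
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    K = np.exp(-(dist / R) ** 4)

    p = np.zeros(L * L)
    p[c * L + c] = 1.0                           # colony seeded on the founder
    for _ in range(steps):
        A = H * np.exp(-J * K.dot(p))            # growth rate damped by competition
        p = A * p / (A * p).sum()                # normalisation by the average fitness
        q = p.reshape(L, L)                      # nearest-neighbour diffusion
        lap = (np.roll(q, 1, 0) + np.roll(q, -1, 0) +
               np.roll(q, 1, 1) + np.roll(q, -1, 1) - 4.0 * q)
        p = np.clip(q + D * lap, 0.0, None).ravel()   # periodic boundaries for simplicity
        p /= p.sum()
    return p.reshape(L, L)

# weak competition (small J): single peak on the founder;
# strong competition (large J): multi-peaked pattern with spacing ~ R
```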
for small values of , that is the competition is rather weak or in other wordswe have a maximal medium , we get a single peak gaussian - like distribution centered on the founder ( similar to the one showed on the left in fig .[ sim ] ( left ) with ) . for larger values of get a multi - peaked distribution ( see fig. [ alpha1 ] , ) , where the central peak ( founder ) is still present , but we get also some others peaks at an approximate distance , range of competition , between each other .this is the expected pattern for an isotropic competition , in fact the presence of equally distanced spots is due to the competition term , that inhibits the growth of any aerial mycelium around another one . to obtain spatial patterns similar to the concentric rings observed in our experiments , some feature of the peculiar spatial structure of _ streptomyces _ has to be added . as stated before , we hypothesize that due to the presence of the substrate mycelium morphology the competition is much stronger in the radial direction ( along the hyphae ) than in the tangential direction. therefore we decompose the distance between any points and in a radial and tangential part ( see fig .[ fig : distance ] ) where is a parameter that allows to change the metric of our space .for the relative weight of tangential distance is larger than one due to the lack of cell communications along this direction , the competition is mainly radial along the hyphae because the mycelia do not compete if they are not directly connected by an hypha . for get the usual euclidean distance . using the distance ( [ newdist ] ) in eq.([evolution ] ) with and strong competitionwe are able to obtain a set of rings composed by equally spaced spots at fixed distances from the founder ( see fig .[ sim ] ( right ) for ) , while in presence of large resource we still have a single peaked distribution ( fig .[ sim ] ( left ) ) . for larger values of rings become continuous , while for low values , , the multi - peaked structure of appears .these results are in agreement with those presented in ref . , where an one - dimensional system is considered . in this case the genotypic space plays the role of the real space , and using and a gaussian kernel is possible to derive analytically the value of transition between the two regimes ( single peaked and multi - peaked distribution ) .it is , for ( slow diffusion ) and ( static fitness almost flat ) with .thus for we have a multi - peaked distribution , while for only the fittest one survives ( single - peaked distribution ) .we isolated a strain of _ streptomyces _ that has a dual pattern of growth concerning the aerial mycelium : it gives rise to concentric rings centered on the founder cell , or to the classic circular bacterial colony .the medium is discriminant : in minimal media the first type of pattern arises , in maximal media the second one .the substrate mycelium follows a different pattern : optical microscopy observations revealed that every hypha originates from the primordial central colony ( the founder ) .moreover the growth of the substrate mycelium growing edge proceeds in radial direction from the founder . 
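one plausible way to implement the radial/tangential decomposition described above is to take the radial part as the difference of the distances from the founder and the tangential part as the euclidean remainder, with a weight (called beta here, a naming choice of this sketch) on the tangential term; beta = 1 recovers the euclidean metric. the exact formula of the paper is not reproduced in the text, so the function below is a hedged stand-in; replacing the euclidean pairwise distance of the previous sketch with this metric and choosing strong competition should favour ring-like arrangements of peaks around the founder.

```python
import numpy as np

def anisotropic_distance(p1, p2, founder=(0.0, 0.0), beta=4.0):
    """distance with separately weighted radial and tangential parts;
    beta = 4 is only an illustration of beta > 1 (mainly radial competition)."""
    p1, p2, f = (np.asarray(v, float) for v in (p1, p2, founder))
    d_eu = np.linalg.norm(p1 - p2)
    d_rad = abs(np.linalg.norm(p1 - f) - np.linalg.norm(p2 - f))   # radial part
    d_tan = np.sqrt(max(d_eu**2 - d_rad**2, 0.0))                  # tangential remainder
    return np.sqrt(d_rad**2 + beta * d_tan**2)

# two sites at the same radius from the founder (purely tangential separation)
print(anisotropic_distance((10, 0), (0, 10), beta=1.0))   # ~14.1 (euclidean)
print(anisotropic_distance((10, 0), (0, 10), beta=4.0))   # ~28.3, hence weaker competition
```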
using a simple mathematical model for the formation of aerial mycelium we are able to simulate both aerial mycelium spatial patterns .the parameter we modulate to obtain these two different patterns is the competition intensity .indeed the main assumption of the model is that there is competition among the hyphae of vegetative mycelia for the energetic sources necessary for the formation of the aerial mycelium . in a medium with low nutrient concentrationthere is a strong competition for the aerial mycelium formation - and the model produces concentric rings patterns - instead in a maximal medium the competition is weaker - and the model produces the classic circular bacterial colony .the aerial mycelium is derived by the substrate mycelium , so we derived the constraints of the model from the morphological observations concerning the substrate mycelium described in the introduction .the system has a radial geometry centered on the founder ( the probability of formation of aerial mycelium is higher near the founder ) , and we assumed that the competition is affected by this feature .indeed the competition is stronger along an hypha due to the cell - cell communication typical of the `` multicellular '' organization of _streptomyces_. this implies that the competition is stronger along the radial direction than along the tangential , at least in the outer boundary of the colony .the growth pattern description above is referred to the presence of one single primordial colony . in presence of two or more coloniesclose one another we have observed different patterns with additive and negative interactions among the colonies .our minimal model is not able to reproduce these behaviors , due to the fact that in presence of many founders the simple assumptions of radial growth centered on a single founder is no more fulfilled .in conclusion we have found some peculiar spatial patterns for the aerial mycelium of _streptomyces_. we have proposed a simple mathematical model to explain these patterns assuming competition along the hyphae as the main ingredient that leads to pattern formation .our numerical results are able to reproduce spatial patterns obtained experimentally under different conditions ( minimal and maximal medium ) , while to get more complex behavior ( interference patterns , see fig .[ exp : int ] ) we expect more `` chemical '' species have to be added to our minimal model .we wish to thank f. bagnoli , m. buiatti , r. livi and a. torcini for fruitful discussions . m.b . and a.c .thank the dipartimento di matematica applicata `` g. sansone '' for friendly hospitality .
we present a simple model based on a reaction-diffusion equation to explain pattern formation in a multicellular bacterium ( _ streptomyces _ ). we assume competition for resources as the basic mechanism leading to pattern formation; in particular, we are able to reproduce the spatial patterns formed by the bacterial aerial mycelium in the case of growth on minimal (low-resource) and maximal (high-resource) culture media.
in the early days of dynamo theory the degree of intermittency of the generated magnetic field was not much of an issue .however , with the development of mean - field theory it became clear that the magnetic field can be thought of as consisting of a mean component together with a fluctuating one .the fluctuating component was initially thought to be weak , but that too changed when it was realized that at large magnetic reynolds numbers the fluctuations can strongly exceed the level of the mean field. given the intermittent nature of solar magnetograms , the surface magnetic field can well be described as fibril .this description was introduced by parker ( 1982 ) to emphasize that such a field may have rather different properties than a more diffuse field .the fibril nature of the magnetic field is particularly well illustrated by the fact that sunspots are relatively isolated features covering only a small fraction of the solar surface .it is often assumed that the fibril magnetic field structure extends also into deeper layers . on the other hand ,observations of sunspots suggest that spots are rather shallow phenomena ( kosovichev 2002 ) .furthermore , simulations of turbulent dynamos tend to show that the dynamo - generated magnetic field becomes less fibril as the fraction of the mean to the total magnetic field increases .such dynamos are generally referred to as large - scale dynamos ( as opposed to small - scale dynamos ) and they require either kinetic helicity or otherwise some kind of anisotropy .these ingredients are generally assumed to be present in the sun , and they are also vital for many types of mean - field dynamos , in particular the -type dynamos .it is therefore of interest to study in more detail the dependence of the degree of intermittency of the field on model parameters .the significance of looking at the degree of intermittency of the sun s magnetic field is connected with the question of how important is magnetic buoyancy in transporting mean magnetic field upward to the surface and out of the sun ( moreno - insertis 1983 ) .magnetic buoyancy may therefore act as a possible saturation mechanism of the dynamo ( see , e.g. , noyes et al .1984 ) , with the consequence of nearly completely wiping out magnetic fields of equipartition strength within the convection zone .on the other hand , if magnetic buoyancy is not a dominant effect , the dynamo may operate in a much more distributed fashion ( brandenburg 2005 ) .let us begin by looking at an idealized case of a dynamo in the presence of fully helical forcing .we shall distinguish between the kinematic regime where the field is weak and still growing exponentially , and the dynamic regime where the field is strong and beginning to reach saturation field strength . 
in fig .[ pbmean_fluct ] we plot the dependence of the mean - squared values of the small - scale and large - scale fields defined here by horizontal averages , so , where and have been defined as the mean and fluctuating fields .note that in the kinematic regime the energy of the magnetic fluctuations exceeds that of the mean field by a factor of about 3 , while in the dynamic regime this ratio is only about 1/3 .here we have used data from a recent paper of brandenburg ( 2009 ) were the magnetic reynolds number is only about 6 , while the fluid reynolds number is 150 , so the magnetic prandtl number is 0.04 .the turbulence is forced with a maximally helical forcing function at a wavenumber of about 4 times the minimal wavenumber of the domain .this ratio is also called the scale separation ratio and it also determines the ratio of magnetic fluctuations to the mean field in the kinematic regime , and its inverse in the dynamic regime ( blackman & brandenburg 2002 ) .depending on the value of the magnetic prandtl number , i.e. the ratio of kinematic viscosity to magnetic diffusivity , the field can be rather intermittent and lack large - scale order , especially when the magnetic prandtl number is not small ; see fig .note however the emergence of a large - scale pattern in the kinematic stage for , while for there are only a few extended patches and for the field is completely random and of small scale only .however , when the dynamo saturates , a large - scale structure emerges regardless of the value of the magnetic prandtl number and the field is considerably less intermittent than in the early kinematic stages .these simulations ( brandenburg 2009 ) were used to argue that in the sun , where is very small , the onset of large - scale dynamo action should not depend on the actual value of , even though the onset of small - scale dynamo action does depend on it ( schekochihin et al.2005 ; iskakov et al .2007 ) .( solid line in the upper panel ) and the fluctuating field ( dashed line in the upper panel ) and their ratio ( lower panel ) for and .the equipartition field strength has been introduced for normalization purposes . ] to 1 at .the orientation of the axes is indicated for the first panel , and is the same for all other panels . adapted from brandenburg ( 2009 ) . ]another example is forced turbulence in the presence of a systematic shear flow that resembles that in low latitudes of the solar convection zone and open boundary conditions at the surface and the equator .such a model was studied by brandenburg & sandin ( 2004 ) to determine how the effect is modified in the presence of magnetic helicity fluxes , and by brandenburg ( 2005 ) in order to determine the structure of dynamo - generated magnetic fields . 
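the numbers quoted above can be tied together in a short check; the identification of the fluctuation-to-mean energy ratio with the scale-separation ratio (and its inverse after saturation) is the approximate scaling of blackman & brandenburg (2002), so the factors 4 and 1/4 below are only estimates of the measured values of about 3 and 1/3.

```python
# quick check of the quoted numbers (the scaling with the scale-separation
# ratio is approximate, not an exact law)
Rm, Re = 6.0, 150.0
Pm = Rm / Re                          # magnetic prandtl number -> 0.04
k_f_over_k1 = 4.0                     # scale-separation ratio of the run

ratio_kinematic = k_f_over_k1         # expected E_fluct / E_mean while the field grows
ratio_saturated = 1.0 / k_f_over_k1   # expected E_fluct / E_mean after saturation
print(Pm, ratio_kinematic, ratio_saturated)   # 0.04, 4.0, 0.25 (measured: ~3 and ~1/3)
```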
in fig .[ pslice_128b3 ] we compare meridional cross - sections of the toroidal component of the magnetic field at a kinematic time ( ) with that at a later time when the dynamo has saturated and a large - scale field has developed ( ) , where is the turbulent rms velocity and is the wavenumber of the energy - carrying eddies or the forcing wavenumber in this case .turnover times , left panel ) and the saturated stage ( turnover times , right panel ) .vectors in the meridional plane are superimposed on a color / gray scale representation of the azimuthal field .the color / gray scale is symmetric about red / mid - gray shades , so the absence of blue / dark shades ( right panel ) indicates the absence of negative values .note the development of larger scale structures during the saturated stage with basically unidirectional toroidal field . adapted from brandenburg ( 2005 ) .] the case shown in fig . [ pslice_128b3 ] looks like the magnetic reynolds number is small , but this is not really the case .in fact , the magnetic reynolds number based on the inverse wavenumber of the energy - carrying eddies , , is about 80 . here , is the forcing wavenumber in units of the smallest wavenumber in the domain , , where is the toroidal extent of the computational domain .so , the magnetic reynolds number based on , which is sometimes also quoted , would then be about times larger , i.e. about 2400 .note also that , unlike the early kinematic stage when there can still be many sign reversals , at later times the field points mostly in the same direction .indeed , the toroidally averaged magnetic field captures about 50% to 70% of the total magnetic energy in the saturated state .these simulations confirm that there is a clear tendency for the magnetic field to become less intermittent and more space - filling and diffuse as the dynamo saturates .it must be noted , however , that these simulations are idealized in that the turbulence is driven by a forcing function that is maximally helical , and that the shear is relatively strong , i.e. the shear - flow amplitude is about five times stronger than the rms velocity of the turbulence . in the sunthis ratio is about unity .therefore one must expect that the degree to what extent the field tends to become more diffuse is in reality less strong than what is indicated by the simulations presented here .in the early 1980s , dynamo theory was confronted with the issue of magnetic buoyancy ( spiegel & weiss 1980 ) .it was thought that buoyant flux losses would reduce the dynamo efficiency .this effect was then also built into dynamo models of various types as a possible saturation mechanism ( noyes et al .1984 , jones et al . 1985 , moss et al . 1990 ) .however , with the first compressible simulations of turbulent dynamo action ( nordlund et al .1992 ) it became clear that magnetic buoyancy is subdominant compared with the much stronger effect of turbulent downward pumping .figure [ bt91 ] shows a snapshot from a video animation of magnetic field vectors together with those of vorticity ( brandenburg & tuominen 1991 ) . the magnetic field forms flux tubes that get wound up around a tornado - like vortex in the middle . 
in fig .[ bjnrrt96 ] this magnetic buoyancy of the flux tubes is analyzed in more detail .this figure confirms that there is indeed magnetic buoyancy , but it is balanced in part by the effects of downward pumping and the explicit downward motion in the proximity of the downdraft where the field is most strongly amplified during its descent .we have discussed the nature of the magnetic field of a large - scale dynamo in the saturated regime and have argued that the field becomes diffuse and more nearly space - filling as the dynamo saturates and that the effects of magnetic buoyancy are weak compared with the downward motions associated with the strong downdrafts in convection .here we have mostly focused on earlier simulations , but it is important to realize that at the moment there is no general agreement about the detailed nature of the solar dynamo .is it essentially of type , or are there other more dominant effects responsible for generating a large - scale magnetic field ? what causes the equatorward migration of the toroidal magnetic flux belts ?is it the dynamo wave associated with the dynamo , or is it the meridional circulation that overturns the intrinsic migration direction ( choudhuri et al .1995 ; dikpati & charbonneau 1995 ) .what is the dominant shear - layer in the sun for the dynamo to work ?there is first of all latitudinal shear , which is the strongest in absolute terms , and important for amplifying toroidal magnetic field as well as promoting cyclic dynamo action ( guerrero & de gouveia dal pino 2007 ) .in addition , there is radial shear which might be important for determining the migration direction of the toroidal flux belts .however , it is not clear whether the relevant component here is the positive in the bulk or the bottom of the convection zone , or the negative at the bottom of the convection zone at higher latitudes . or is it the negative in the near - surface shear layer ?an attractive property of the latter proposal is that it would allow for a dynamo scenario that is in many respects similar to that envisaged in the early years of mean - field dynamo theory ( steenbeck & krause 1969 , khler 1973 , yoshimura 1975 ) . 
in fig .[ pdiffrot ] we show the structure of contours as they were estimated by yoshimura ( 1975 ) based on the constraint that the internal angular velocity matches the latitudinal differential rotation at the surface and that is negative in the interior so that the dynamo wave propagates equatorward .the relative strength of the negative gradient near the surface is truly amazing and is best seen in a plot of benevolenskaya et al .( 1998 ) , which shows the radial dependence of at different latitudes ( fig .[ bene_et98 ] ) .the fact that the radial gradient is so strong is in principle not new .indeed , a mismatch between the higher helioseismic results for some below the surface and the lower values from doppler measurements of the photospheric plasma was recognized since the 1980s , but it is only now that helioseismology can actually provide detailed data points nearly all the way to the surface .we emphasize that these proposals ignore the possibility that the meridional circulation could in principle turn the direction of propagation around and might produce equatorward migration even with a positive ( choudhuri et al .1995 , dikpati & charbonneau 1999 ) .however , this requires that the induction effects given by and the radial differential rotation are separated in space , just as it is the case for the babcock - leighton dynamo effect .although such a hypothesis was already made by steenbeck & krause ( 1969 ) for other reasons , it is not clear that this is or will be compatible with results of turbulence simulations .there are two important issues that need to be clarified in the context of distributed dynamos .one is connected with the question why the dynamo might work efficiently in the near - surface shear layer in spite of the opposing effects of downward pumping , for example .the other is related to the formation of active regions and sunspots in models lacking strong fields of strength at the bottom of the convection zone , as is expected based on joy s law and results from the thin flux tube approximation ( chou & fisher 1989 ; choudhuri & dsilva 1990 ) .regarding the first issue one might expect that it could be connected with magnetic helicity conservation , which is now recognized as a major culprit in causing so - called catastrophic quenching of large - scale dynamo effects ( see brandenburg & subramanian 2005 for a review ) . alleviating such catastrophic quenching is facilitated by magnetic helicity fluxes connected with scales that are shorter than those of the large - scale field of the 11-year cycle .disposing of such excess magnetic helicity should be easier near the surface than deeper down , making the near - surface shear layer more preferred for dynamo action .regarding the formation of active regions and sunspots , some important clues have been obtained by investigating mean - field turbulence effects both in the momentum and in the energy equations .we refer here to the work of kitchatinov & mazur ( 2000 ) who find that a self - concentration of magnetic flux is possible as a result of the magnetic suppression of the turbulent heat flux .another mechanism might be connected with negative turbulent magnetic pressure effects ; see rogachevskii & kleeorin ( 2007 ) for a recent reference on this subject. clarifying these questions would be critical before further pursuing the idea of distributed dynamo action in the sun .
the degree of intermittency of the magnetic field of a large-scale dynamo is considered. based on simulations, it is argued that the field tends to become more diffuse and less intermittent as the dynamo saturates. the simulations are idealized in that the turbulence is strongly helical and the shear is strong, so this tendency is somewhat exaggerated. earlier results concerning the effects of magnetic buoyancy are discussed, and it is emphasized that magnetic buoyancy is weak compared with the stronger effect of simultaneous downward pumping. these findings support the notion that the solar dynamo might operate in a distributed fashion, with the near-surface shear layer playing an important role.
the classical description of many body quantum systems , and the classical simulation of their dynamics , is generically a hard problem , due to the exponential size of the associated hilbert space .nevertheless , under certain conditions an efficient description of states and/or their evolution is possible .this is , for instance , demonstrated by the density matrix renormalization group method , which allows one to successfully calculate ground states of strongly correlated spin systems in one spatial dimension using matrix product states . in this context , the questions _ for which ( families of ) states does an efficient classical description exist? _ , and _when is an efficient classical simulation of the evolution of such states under a given dynamics possible? _ are naturally of central importance .apart from their practical importance , the above questions are directly related to more fundamental issues , in particular to the power of quantum computation and the identification of the essential properties that give quantum computers their additional power over classical devices ; this relation to quantum computation will be central in this article .in particular , we will study these questions from the point of view of the _ measurement based _ approach to quantum computing , more specifically the model of the one way quantum computer . in this model , a highly entangled multi qubit state , the 2d _ cluster state _ , is processed by performing sequences of adaptive single qubit measurements , thereby realizing arbitrary quantum computations .the 2d cluster state serves as a _universal resource _ for measurement based quantum computation ( mqc ) , in the sense that any multi qubit state can be prepared by performing sequences of local operations on a sufficiently large 2d cluster state .when studying the fundamentals of the one way model , two ( related ) questions naturally arise , which we will consider in the following ; first , it is asked which resource states , other than the 2d cluster states , form universal resources for mqc ; second , one may also consider the question whether mqc on a given state can be _ efficiently simulated _ on a classical computer . naturally , these two issues are closely related , as one expects that an efficient classical simulation of mqc performed on ( efficient ) universal resource states is impossible . however , it is important to stress that classical simulation and non universality are principally different issues .the question of which other resource states are also universal has been investigated recently in ref . , where the required entanglement resources enabling universality were investigated . in particular, it was proven that certain entanglement measures , in particular certain _ entanglement width _ measures , must diverge on any universal resource , thus providing necessary conditions for universality . on the other hand , the issue of classical simulation of mqc evidently brings us back to the central introductory questions posed above .results regarding the efficient simulation of mqc do exist , and it is e.g. known that any mqc implemented on a 1d cluster state can be simulated efficiently .more generally , the efficient description of quantum states in terms of ( tree ) tensor networks turns out to play an important role in this context . in this articlewe strengthen the connection between classical simulation of mqc and non universality .our starting point will be the no go results for universality obtained in ref . 
, stating that the entanglement monotones _ entropic entanglement width _ and _ schmidt rank width _ must diverge on any universal resource ; both measures are closely related , and we refer to section [ sect_ewd ] for definitions .we then focus on the schmidt rank width measure , and prove , as our first main result , that mqc can be efficiently simulated on every resource state which is ruled out by the above no go result .more generally , we prove that mqc can be simulated efficiently on all states where the schmidt rank width grows at most logarithmically with the system size .second , along the way of proving the above results , we provide a natural interpretation of the schmidt rank width measure , as we show that this monotone quantifies what the optimal description of quantum states is in terms of tree tensor networks ; this shows that there is in fact a large overlap between the present research and the work performed in ref . regarding the simulation of quantum systems using tree tensor networks . as our third main result, we show that the schmidt rank width ( and entanglement width ) these are measures which are defined in terms of nontrivial optimization problems can be computed efficiently for all graph states . moreover , for all graph states where the schmidt rank width grows at most logarithmically with the number of qubits , we give efficient constructions of the optimal tree tensor networks describing these states .we further remark that the origin of the schmidt - rank width lies in fact in graph theory , and its definition is inspired by a graph invariant called _rank width_. it turns out that the study of rank width in graph theory shows strong similarities with the study of efficient descriptions and simulations of quantum systems , viz .the two introductory questions of this article .the similarity is due to the fact that , in certain aspects of both quantum information theory and graph theory , one is concerned with the efficient description of complex structures in terms of tree like structures .we will comment on the existing parallels between these fields . finally , we emphasize that the present work is situated in two different dynamic areas of research within the field of quantum information theory ; the first is the study of universality and classical simulation of measurement based quantum computation , and the second is the problem of efficiently describing quantum systems and their dynamics .an important aim of this article consists of bringing together existing results in both fields and showing that there is a strong connection between them ; in particular , we find that the notion of schmidt rank width has been considered independently in refs . and and plays an important role in both areas of research . in order to establish the connections between these two areas in a transparent manner , a substantial part of this articleis devoted to giving a clear overview of which relevant results are known in both fields .the paper is organized as follows . in section [ universality ]we discuss entanglement width and schmidt - rank width , and their role in universality and classical simulation of mqc . 
in section [ sect_tensor ] the description of states in terms of tree tensor networksis reviewed , and a connection to schmidt - rank width is established .this section also includes our main result , stating that any state with a logarithmically bounded schmidt - rank width has , in principle , an efficient description in terms of a tree tensor network , and hence any mqc performed on such states can be efficiently simulated classically . in section [ sect_grapstates ] these resultsare applied to graph states , and we provide in addition an explicit way of obtaining the optimal tree tensor network .we discuss the relation between the treatment of complex systems in quantum information theory and graph theory in section [ sect_complex ] , and summarize and conclude in section [ sect_conclusion ] .in this section we introduce two related multipartite entanglement measures called _ entropic entanglement width _ and _ schmidt rank width _ and discuss their role in the studies of universality of resources for measurement based quantum computation ( mqc ) and in classical simulation of mqc .these entanglement measures are defined in section [ sect_ewd ] . in section [ sect_uni ]we review the definition of universal resources for mqc , and the use of the above measures in this study . in section [ sect_sim ]we consider the basic notions regarding efficient classical simulation of mqc .finally , in section [ sect_problem ] we pose the two central questions of this article in a precise way ; the first question asks about the interpretation of the measures entanglement width and schmidt rank width , and the second deals with the role of these measures in the context of classical simulation of mqc . the entropic entanglement width of an multi party state is an entanglement measure introduced in ref .qualitatively , this measure computes the minimal bipartite entanglement entropy in the state , where the minimum is taken over specific classes of bipartitions of the system .the precise definition is the following .let be an -party pure state .a _ tree _ is a graph with no cycles .let be a _tree , which is a tree such that every vertex has exactly 1 or 3 incident edges .the vertices which are incident with exactly one edge are called the _ leaves _ of the tree .we consider trees with exactly leaves , which are identified with the local hilbert spaces of the system .letting be an arbitrary edge of , we denote by the graph obtained by deleting the edge from .the graph then consists of exactly two connected components ( see fig .[ subcubic ] ) , which naturally induce a bipartition of the set .we denote the bipartite entanglement entropy of with respect to the bipartition by .the entropic entanglement width of the state is now defined by e_(|):= _ t_et e_a_t^e , b_t^e(| ) , where the minimization is taken over all subcubic trees with leaves , which are identified with the parties in the system .thus , for a given tree we consider the maximum , over all edges in , of the quantity ; then the minimum , over all subcubic trees , of such maxima is computed .( 230,160 ) ( 0,0 ) ( a ) example of a subcubic tree with six leaves ( indicated in blue ) .( b ) tree obtained by removing edge and induced bipartition .,title="fig:",scaledwidth=45.0% ] similarly , one may use the schmidt rank , i.e. the number of non zero schmidt coefficients , instead of the bipartite entropy of entanglement as basic measure .one then obtains the _ schmidt rank width _ , or _ _ , denoted by . the precise definition is the following . 
letting the number of non zero schmidt coefficients of with respect to a bipartition of as defined above , the of the state is defined by [ chiwidth ] _( |):= _ t_et _ 2 _ a^e_t , b^e_t(| ) .it is straightforward to show ( cf . ) that is an entanglement monotone , i.e. , this measure vanishes on product states , is a local invariant , and decreases on average under local operations and classical communication ( locc ) .the proof can readily be extended to , demonstrating that also is a valid entanglement measure .in fact , using that the schmidt rank is non increasing under _locc , or slocc , it can be proven that the is also non increasing under slocc . since the inequality _ 2 _ a , b(|)e_a , b(|)holds for any bipartition of the system and for any state , we have _ ( | ) e _ ( |).note , however , that these quantities can show a completely different ( scaling ) behavior .it is clear that the definitions of entropic entanglement width and schmidt rank width are based upon similar constructions , where optimizations are performed over subcubic trees .such constructions can of course be repeated for any bipartite entanglement measure ; hence a whole class of multipartite entanglement measures is obtained , which we will call the class of _ entanglement width measures_. the entropic entanglement width and are two examples of entanglement width measures .it would be interesting to consider other examples of entanglement width measures , and investigate their possible role in quantum information theory .the definitions of the above entanglement measures are inspired by a graph invariant called _ rank width _, which was introduced in ref .the connection with rank width is obtained by evaluating the entropic entanglement width or in _graph states_. this is explained next .first we recall the definition of graph states .let denote the pauli spin matrices .let be a graph with vertex set and edge set .for every vertex , the set denotes the set of neighbors of , i.e. , the collection of all vertices which are connected to by an edge .the graph state is then defined to be the unique -qubit state which is the joint eigenstate , with eigenvalues equal to 1 , of the commuting correlation operators [ k_a ] k_a:= _ x^(a)_bn(a ) _z^(b).standard examples of graph states include the ghz states , and the 1d and 2d cluster states , which are obtained if the underlying graph is a 1d chain or a rectangular 2d grid , respectively .we refer to ref . for further details .let be the adjacency matrix of , i.e , one has if and otherwise .for every bipartition of the vertex set , define to be the submatrix of defined by ( a , b ) : = ( _ ab)_aa , bb.using standard graph state techniques it can then be shown ( see e.g. ) that [ ctrk ] _ _ 2 ( a , b)&= & _ 2 _ a , b(|g ) + & = & e_a , b(|g).where denotes the rank of a matrix when arithmetic is performed over the finite field gf(2 ) . thus , the schmidt rank and the bipartite entanglement entropy w.r.t . any bipartition coincide for graph states , and are given by the rank of the matrix .using the identity ( [ ctrk ] ) , one immediately finds that the ( and entropic entanglement width ) of the graph state coincides with the _ rank width _rwd of the graph .the explicit definition of rwd reads ( g):= _ t_et _ _ 2 ( a_t^e , b_t^e)(where the minimization is again over subcubic trees as in the definition of ) , which , using ( [ ctrk ] ) , indeed coincides with the of . 
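the identity ([ctrk]) makes these quantities easy to evaluate numerically: the sketch below computes the GF(2) rank of the off-diagonal adjacency block for the six-qubit linear cluster state discussed next. the particular bipartitions tested are illustrative choices (vertices are labelled 0..n-1), and no optimization over subcubic trees is attempted.

```python
import numpy as np

def gf2_rank(M):
    """rank of a binary matrix over GF(2) by gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # eliminate over GF(2)
        rank += 1
    return rank

def cut_rank(adj, A):
    """GF(2) rank of the block Gamma(A, B); by eq. (ctrk) this equals
    log2 of the schmidt rank of the graph state across (A, B)."""
    n = adj.shape[0]
    B = [v for v in range(n) if v not in A]
    return gf2_rank(adj[np.ix_(list(A), B)])

# six-qubit linear cluster (path graph 0-1-2-3-4-5)
n = 6
adj = np.zeros((n, n), dtype=np.uint8)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1

print(cut_rank(adj, [0, 1, 2]))   # contiguous cut: 1  (schmidt rank 2)
print(cut_rank(adj, [0, 2, 4]))   # alternating cut: 3 (schmidt rank 8)

# every contiguous cut {0, ..., k-1} has cut-rank 1, consistent with the
# rank-width (and chi-width) of 1 quoted below for the linear cluster
print(max(cut_rank(adj, list(range(k))) for k in range(1, n)))   # -> 1
```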
note that the subcubic trees which are considered in the definition of rank width are not to be confused with the defining graph of the graph state ( the latter can be an arbitrary graph ) ; the subcubic trees merely serve as a means of selecting certain bipartitions of the system , independent of the state which is considered .for instance , if we consider a linear cluster state of six qubits , corresponding to a graph that is a linear chain , then the tree depicted in fig .[ subcubic ] corresponds to the optimal tree in the definition of the rank - width ( and -width ) , leading to rwd . in section [ sect_complex ]we will further comment on the motivations for the definition of rank width , and we will draw parallels with the study of complex systems in quantum information theory . in ref . a definition for universality of families of states for mqc was put forward , and the use of to assess non universality of states was demonstrated . in this sectionwe briefly review the definition and the corresponding results . consider an ( infinitely large ) family of qubit states = \{|_1,|_2 , } , where is a state on qubits and for every .this family is called a _ universal resource for mqc _if for each state on qubits there exists a state on qubits , with , such that the transformation latexmath:[$|\psi_i\rangle\to of locc .that is , any state can be prepared using only states within the family as resource .equivalently , the action of an arbitrary unitary operation on a product input state can be implemented , where now in the above definition .this definition is in the spirit of the model of the one way quantum computer , where sequences of adaptive single qubit measurements performed on a sufficiently large 2d cluster state allow one to prepare any multi qubit state .the definition of universal resource aims to identify the required resources , in terms of entanglement , that allow one to perform universal quantum computation in the sense specified above . in the above definition of universality of a family , we have not yet considered the efficiency with which states can be prepared using members of .an _ efficient _ universal resource is a universal resource having the property that all states that can be efficiently generated with a quantum gate network should also be efficiently generated from universal resource .we refer to ref . for a detailed account on efficient universality . in ref . it was found that any universal resource must satisfy the following property .let be a functional which is defined on the set of _ all _ -qubit states , for _ all _ , and suppose that is non increasing under locc .more precisely , if and are states on and qubits , respectively , then whenever the transformation is possible by means of locc .moreover , let denote the supremal value of , when the supremum is taken over all -qubit states , for all ( the case is allowed ) .then any universal resource must satisfy the property \{e(| ) | | } = e^*.that is , the supremal value of every entanglement measure must be reached on every universal resource .using the fact that there exist families of quantum states where the entropic entanglement width and grow unboundedly with the system size ( the 2d cluster states are such examples ) , it is then straightforward to show that any universal family of states must have unbounded entropic entanglement width and as well .more precisely , one has : [ thm_ewd ] let be a universal resource for mqc .then the following statements hold : * ; * . 
in other words , families where the measures or are _bounded _ , can not be universal .this insight , together with the relation between entropic entanglement width and and the graph theoretical measure rank width , allows one to identify classes of graph states as being non universal since the rank width is bounded on such classes .examples include linear cluster graphs , trees , cycle graphs , cographs , graphs locally equivalent to trees , graphs of bounded tree width , graphs of bounded clique width or distance hereditary graphs .we refer to the literature for definitions . in the remainder of this paper , we will focus on the measure . rather than considering the question whether a family is a universal resource for mqc, one may also consider the question whether mqc on can be _ efficiently simulated _ on a classical computer .we will say that efficient classical simulation of mqc on a family of states is possible , if for every state it is possible to simulate every locc protocol on a classical computer with overhead poly , where denotes the number of qubits on which the state is defined , as before .we remark that an efficient classical description of the initial states is a necessary , but not necessarily a sufficient condition for efficient simulation on a classical computer .the issue of classical simulation of mqc has recently been considered by several authors . at this pointwe remind the reader of what is already known in this context . regarding simulation of mqc on _ graph states _, we recall the following results : * in ref . it was showed that mqc on 1d cluster states can be simulated efficiently classically ; * in ref . it was showed that mqc on tree graphs can be simulated efficiently classically ; * in ref . it was showed that mqc on graphs with logarithmically bounded tree width can be simulated efficiently classically .note that the above result on tree width implies the two other results , as tree graphs ( and thus also 1d cluster graphs ) have tree width equal to 1 .more general results , i.e. , regarding _arbitrary states _ , were obtained in ref . , where it was shown that mqc can be simulated efficiently on all states allowing an efficient _ tree tensor network _ description .the description of quantum systems in terms of tree tensor networks will play an important role in the present analysis , and will be reviewed in detail in section [ sect_tensor ] .although related , the issues of universality and classical simulation in mqc are fundamentally two different questions .most of us expect that any family for which classical simulation of mqc is possible , will not be an efficient universal resource ; this reflects the common belief that quantum computers are in some sense exponentially more powerful than classical machines note , however , that so far there is no rigorous proof of this statement .while one expects the possibility of classical simulation of mqc to imply non universality of a resource , the converse implication is certainly not believed to hold in general .indeed , it is highly likely that many non universal families could still be used to implement specific quantum algorithms .it is clear that regarding the notion of -width , and the above issues of universality and classical simulation of mqc , a number of open questions remain . in this sectionwe formulate two central questions , ( q1 ) and ( q2 ) , which will constitute the main research topics in this article .we will first state these questions and then discuss them . 
* does there exist a natural _ interpretation _ of the measure ? * do there exist resources having bounded , which nevertheless _ do not allow _ an efficient classical simulation of mqc ?question ( q1 ) is concerned with the fact that the definition of seems to be rather arbitrary and not intuitive , and solely motivated by the connection to the graph theoretical measure rank width .we will , however , provide a satisfactory interpretation of this measure in the context of quantum information in the next section .question ( q2 ) is concerned with the question whether non universal resources can still be useful for quantum computation , in the sense that mqc performed on such states is more powerful than classical computation . as remarked above, it may well be that there exist non universal families of states where mqc is nevertheless hard to simulate classically .previous results leave open this possibility , as the criteria for non universality and classical simulatability do not coincide . for non universal states detected by the criterion ( i.e. , theorem [ thm_ewd ] ( ii ) ) , we will show that this is not the case . in section [ sect_connection ]we will show that mqc can be simulated efficiently for any family which is ruled out by the criterion as being a non universal resource .in this section we tackle questions ( q1 ) and ( q2 ) as stated in the previous section .first we will attach a natural interpretation to the measure , as we will show that quantifies the complexity of the optimal _ tree tensor network _ ( ttn ) describing the state , thus providing a satisfactory answer to question ( q1 ) .moreover , we shall see that this connection with tree tensor networks immediately allows us to give a negative answer to ( q2 ) : we find that mqc can be simulated efficiently on all resources having a bounded .these results will be obtained in three main steps . in section [ sectionttn ]we review the notions of tensor networks and , more particularly , tree tensor networks .we also review results obtained in ref . , where it was proved that locc on states specified in terms of efficient ttn descriptions can be simulated efficiently ; the results in ref . will be central ingredients to our analysis . in section [ section_simttn ]we show how to obtain ttn descriptions for arbitrary quantum states .finally , in section [ sect_connection ] we establish the connection between ttns and . in this sectionwe review the basic notions regarding ( tree ) tensor networks ( see also ref . ) , and the simulation of quantum systems described by ttns as obtained in ref . .consider a complex tensor a:=a_i_1i_2 i_n , where each index ranges from to , for every .the number of indices is sometimes called the _rank _ of the tensor .we will call the number the _ dimension _ of . for example , every pure -qubit state expressed in a local basis , |= _ i_1 , , i_n=0 ^ 1 a_i_1 i_n rank and dimension 2 . if and are two tensors of ranks and , respectively , and and are integers with and , and both the index of and the index of range from to the same integer , then a sum of the form _ j=1^d a^(1)_i_1 i_s-1j i_s+1 i_na^(2)_i_1 i_t-1 ji_t+1 i_nyields a tensor of rank .this sum is called a _ contraction _ of the tensors and .more specifically , one says that the index of is contracted with the index of .a situation where several tensors are contracted at various indices is called a _tensor network_. 
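to make the contraction operation concrete , here is a minimal numpy sketch ( our own , with the index dimension d = 2 chosen arbitrarily ) : one index of a rank - 3 tensor is contracted with one index of another rank - 3 tensor , and the result is a rank - 4 tensor , as in the definition above .

```python
import numpy as np

d = 2  # every index ranges over d values

A1 = np.random.rand(d, d, d)   # a rank-3 tensor with indices (i1, i2, i3)
A2 = np.random.rand(d, d, d)   # a rank-3 tensor with indices (j1, j2, j3)

# contract the 2nd index of A1 with the 1st index of A2:
# B[i1, i3, j2, j3] = sum_k A1[i1, k, i3] * A2[k, j2, j3]
B = np.einsum('ikl,kmn->ilmn', A1, A2)

print(B.shape)   # (2, 2, 2, 2): a rank-4 tensor, since 3 + 3 - 2 = 4
```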
the maximal dimension of any tensor in the network , is called the _ dimension _ of the network , and will usually be denoted by in the following . notethat every tensor network with _ open _ indices ( i.e. , indices which are not contracted ) , can be associated in a natural way to an -party pure quantum state .we will only consider tensor networks where every index appears at most twice in the network . in this case , every tensor network can be represented by a _ graph _ in the following way . * for every tensor a vertex is drawn .* whenever two tensors and contracted , an edge is drawn between the corresponding vertices and in the graph . *finally , for every _ open _ index of a tensor , i.e. , an index which is not contracted , one draws a new vertex and an edge connecting this vertex to the vertex .as an example , consider three tensors contracted as follows : [ tn ] _jkla^(1)_ajka^(2)_bjla^(3)_ckl.this tensor network has 3 open indices , and the indices are contracted .the graph underlying this tensor network is depicted in fig .[ tensor1]a .the tensor network ( [ tn ] ) is naturally associated with a pure state |:= _ abc \ { _ jkla^(1)_ajka^(2)_bjla^(3)_ckl } |a_1|b_2|c_3,where we introduced local bases , , and ( the subscripts denote the associated hilbert spaces of the basis vectors ) . in fact , is an example of a matrix product state . writing |_jk^(1)&:=&_aa^(1)_ajk |a_1 , + _ jkl|_jk^(1)|_jl^(2)|_kl^(3).it is clear that similar shorthand expressions can be obtained for arbitrary tensor networks . a _ tree tensor network _ ( ttn ) is a tensor network where the underlying graph is a tree , i.e. , a graph with no cycles .an example of a ttn is _ ijklm a^(1)_abia^(2)_ijka^(3)_jlma^(4)_cdla^(5)_efma^(6)_ghk , and the corresponding tree graph is depicted in fig .[ tensor1]b .note that ( [ tn ] ) is an example of a tensor network which is _ not _ a ttn .( 230,120 ) ( -5,0 ) the following definitions regarding ttns will be important below ( see theorem [ thm_ttn ] ) .let be a tree .open edge _ is an edge which is incident with a leaf of .an _ inner edge _ is an edge which is not an open edge .consider a ttn with tree having open edges , corresponding to an -party state .let be an inner edge , and let be the corresponding bipartition of the system . by partitioning all tensors in the network in two classes as induced by the bipartition and grouping all contractions which occur between tensor in the same class of the bipartition , one can write the network in the form _ i bipartition _ if the vectors and are ( up to a normalization ) the schmidt vectors of the state w.r.t the bipartition .we say that the ttn is in _ normal form _ if it is in normal form for all bipartitions , where ranges over all inner edges in .the interest in ttns in quantum information theory lies in the property that the representation of systems in terms of ttns leads to efficient _ descriptions _ of states as well as to the possibility of efficiently simulating the _ dynamics _ of the system .the main results in this context were obtained in refs . and .the latter result will be particularly interesting for our purposes , and will be reviewed next .we will be concerned with ttns corresponding to subcubic trees .it can easily be verified that if a ttn corresponds to a subcubic tree , has open indices , and has dimension , then the ttn depends on at most complex parameters . 
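the following sketch ( ours ; the random tensors are purely illustrative ) contracts the three - tensor example above into the amplitudes of a 3 - party state with numpy , and then does the same for a small tree tensor network on the same three open indices . the final parameter count shows why a subcubic ttn of bounded dimension d yields an efficient description : roughly n d^3 numbers instead of exponentially many amplitudes .

```python
import numpy as np

d = 2  # index dimension

# the example network: psi[a,b,c] = sum_{j,k,l} A1[a,j,k] * A2[b,j,l] * A3[c,k,l]
A1 = np.random.rand(d, d, d)
A2 = np.random.rand(d, d, d)
A3 = np.random.rand(d, d, d)
psi = np.einsum('ajk,bjl,ckl->abc', A1, A2, A3)
psi /= np.sqrt((psi ** 2).sum())          # normalize the resulting pure state
print(psi.shape)                          # (2, 2, 2)

# a tree tensor network on the same three parties (underlying graph has no cycle):
# psi_ttn[a,b,c] = sum_{i,j} B1[a,i] * B2[i,b,j] * B3[j,c]
B1 = np.random.rand(d, d)
B2 = np.random.rand(d, d, d)
B3 = np.random.rand(d, d)
psi_ttn = np.einsum('ai,ibj,jc->abc', B1, B2, B3)

# each tensor of a subcubic ttn carries at most 3 indices of dimension <= d,
# so the whole description uses at most about n * d**3 parameters
print(B1.size + B2.size + B3.size)        # 4 + 8 + 4 = 16
```

note that the first network is not a ttn , since its underlying graph contains a cycle ( as remarked above ) , while the second one is .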
therefore ,if an -party state can be described by a ttn where scales at most polynomially in , then can be described by poly complex parameters by using this ttn .hence a family of systems allowing an efficient description is obtained . what is more, it has been shown that also the _ processing _ of such systems can efficiently be simulated classically .the following result , obtained in ref . , will play an important role in the subsequent analysis .[ thm_shi ] if an -party pure quantum state is specified in terms of a ttn of dimension , where the underlying tree graph is subcubic , then any mqc performed on can be classically simulated in time .therefore , if grows at most polynomially with , then the above simulation scheme is efficient .it is noted by the authors in ref . that there is no restriction in considering subcubic trees only , in the sense that any -party state which can be represented by a ttn ( with arbitrary underlying tree ) with poly parameters , can also be represented by a subcubic ttn with poly parameters .theorem [ thm_shi ] shows that , if an efficient ttn description is _ known _ for a quantum state , then locc on this state can be simulated efficiently .however , this result does not give any information about _ obtaining _ an ( efficient ) ttn description of a given state .note that , if a state is specified , there might exist several ttn descriptions , some of which might be efficient and some of which might not be .in fact , we will see below that , if a subcubic tree with open edges is specified , then _ any _ -party state can be represented by a ttn with this specific tree structure although generally tensors of exponential dimension in are required .therefore , the following two questions are naturally raised : * if a state and a subcubic tree are given , what is the behavior of the dimension of the associated ttn(s ) ? * if only a state is given , what is the optimal subcubic ttn describing this state , i.e. , the one with the smallest dimension ?next it is shown that the entanglement in the state as measured by the schmidt rank , plays a crucial role in answering the above questions .we prove the following result .[ thm_ttn ] let be an -party state and let be a subcubic tree with leaves which are identified with the parties in the system .then there exists a ttn description of with underlying tree , where the dimension of this ttn is equal to [ d ] _ 2 d = _ et _ a^e_t , b^e_t(|).moreover , this ttn is in normal form ._ proof : _ the proof is constructive .the idea is to stepwise compute all tensors associated to the vertices of , by traversing the tree from the leaves to the root , as depicted in fig .[ tree1 ] .first we need some definitions .a vertex of which is not a leaf is called an _ inner _ vertex ; note that every inner vertex has degree 3 .we fix one inner vertex and call it the _ root _ of the tree .the _ depth _ of a vertex is the length of the shortest path from this vertex to the root .we denote by the maximal depth of any inner vertex in .we refer to fig .[ tree1 ] for a schematic representation .( 230,80 ) ( -5,0 ) the construction is initialized by considering all inner vertices of depth .every such vertex has two open edges , corresponding to two qubits in the system .we let be the vertices associated in this way to , for every .we then compute all schmidt decompositions w.r.t .the bipartitions ( rest of the system ) , i.e. 
, [ schmidt_ini ] |= _ i |_i^()|_i^ ( ) , for every .the vectors have support on the qubits , the vectors have support on the rest of the system .the schmidt coefficients are absorbed in the latter vectors .one then proceeds by computing the tensors associated to the inner vertices of depth , and then to the vertices of depth , up to depth equal to 1 , by in every step applying the procedure which will be outlined now .let .for every vertex , let be the unique subtree of such that and is one of the two subtrees obtained by deleting the upper edge of .let be the tree obtained by , first , adding one vertex to and connecting to with an edge and , second , drawing open edges at the vertex , where is equal to the number of qubits which do not correspond to leaves of .now , suppose that the following is true : _ for all inner vertices of depth , a ttn description for is known with tree , and all these ttns are in normal form ._ we then outline a procedure to obtain , for every inner vertex of depth , a ttn description for with tree , and all these ttns are in normal form .* procedure. * consider an inner vertex of depth .let denote the edges incident with , such that and are the lower edges , and is the upper edge as in fig .[ tree2 ] .let be the unique tripartition of the system defined by ( x_1 , x_2x_3 ) & : = & ( a^e_1_t , b^e_1_t ) + ( x_2 , x_1x_3 ) & : = & ( a^e_2_t , b^e_2_t ) + ( x_3 , x_1x_2 ) & : = & ( a^e_3_t , b^e_3_t).see also fig .[ tree2 ] for a simple pictorial definition .( 230,310 ) ( -5,0 ) we then make the distinction between the following cases : * neither or are open edges , i.e. , both edges connect to other inner vertices ; * one of these two edges , say , is an open edge .first we consider case ( a ) .let ( ) be the vertex connected to by the edge ( ) . by assumption, we have ttn descriptions for with trees and which are in normal form .consider these ttn descriptions , and group all contractions in such a way that one obtains schmidt decompositions of with respect to the above bipartitions : [ schmidt ] |= _ i=1^d_|^i_x_i|^i_|x_i , for every , where denote the schmidt ranks , and where denotes the complement of ( e.g. , ) .the schmidt coefficients have been absorbed in the vectors .consider also the schmidt decomposition of w.r.t .the split , using an analogous notation [ schmidt2]|= _ i=1^d_3|^i_x_3|^i_|x_3 . the latter decomposition is not given by ttn so far , and has to be calculated separately .using the above 3 schmidt decompositions , we can write |&= & _ i=1^d_1|^i_x_1|^i_x_2x_3[1 ] + & = & _ i=1^d_1 |^i_x_1^i_x_1|[2 ] + & = & _ i=1^d_1_j=1^d_2 |^i_x_1|^j_x_2^i_x_1|^j_x_1x_3[3 ] + & = & _ i=1^d_1_j=1^d_2_k=1^d_3 |^i_x_1|^j_x_2|^k_x_3b^ijk,[4 ] where we have used the following arguments and definitions . in order to go from ( [ 2 ] ) to ( [ 3 ] ) , we have inserted equation ( [ schmidt ] ) for in ( [ 2 ] ) ; to obtain the last equality ( [ 4 ] ) , we have defined the tensor by [ b ] ^i_x_1|^j_x_1x_3= _k=1^d_3 b^ijk |^k_x_3 .this yields a ttn description of with underlying tree .note that ( [ 4 ] ) implies that the schmidt vectors are recuperated as normal form w.r.t.the bipartition .it then immediately follows that this ttn is in normal form .this concludes case ( a ) .next we consider case ( b ) .let and be defined as above .note that in this case .consider again the ttn description and related schmidt decomposition ( [ schmidt ] ) for , i.e. 
, for the bipartition .note that the schmidt decomposition for the split is not available from the ttn since is not an inner vertex , but we will not need it . as in ( a ) ,consider also the schmidt decomposition ( [ schmidt2 ] ) , i.e. , for the bipartition .we then write |&= & _ i=1^d_1 &= & _ i=1^d_1_k=1^d_3 |^i_x_1|^k_x_3^i_x_1|^k_x_1\{v_2 } + & = & _ i=1^d_1_k=1^d_3 |^i_x_1|^ik_\{v_2}|^k_x_3where we have used the definition [ psi_ik_v2 ] |^ik_\{v_2}:= ^i_x_1|^k_x_1\{v_2}.this yields a ttn description of with underlying tree which is again in normal form .this concludes ( b ) .this also ends the procedure .note that the assumption of the procedure is trivially fulfilled for after the schmidt decompositions ( [ schmidt_ini ] ) have been computed .the procedure is then applied to .after this , all tensors in the desired ttn description are known , except the one associated to the root of . to obtain this final tensor ,the following steps are taken .let , , be the edges incident with , let , , be the corresponding vertices of depth 1 , and let the tripartition be defined as before . from the previous steps in the algorithm, we have ttn descriptions for with trees , and which are in normal form .consider these ttn descriptions , and group all contractions as above , in such a way that one obtains schmidt decompositions of with respect to the above bipartitions : [ schmidt_final ] |= _ i=1^d_|^i_x_i|^i_|x_i , for every .a similar derivation as ( [ 1])([4 ] ) shows that can be written as [ b][5]|= _ i=1^d_1_j=1^d_2_k=1^d_3 similarly as above .this expression describes as a ttn with tree , as desired .moreover , it follows from ( [ 5 ] ) that this ttn is in normal form w.r.t to the bipartitions for .since the ttns ( [ schmidt_final ] ) were in normal form by construction , this implies that the ttn description ( [ 5 ] ) is in normal form altogether .finally , it immediately follows that the dimension of this ttn is equal to ( [ d ] ) .this concludes the proof of theorem [ thm_ttn ] . note that theorem [ thm_ttn ] proves that , if a subcubic tree with open edges is specified , then any -party state can be represented by a ttn with this specific tree structure .the construction presented in the proof of theorem [ thm_ttn ] is similar to a procedure presented in ref . of how to obtain a matrix product description ( which is a particular instance of a tensor network ) for an arbitrary state ; there , too , the dimension of the tensor network depends on the maximal schmidt rank of as measured w.r.t a specific class of bipartite splits , similar to ( but different from ) eq .( [ d ] ) .theorem [ thm_ttn ] now allows us to give a natural interpretation of the -width measure ( [ chiwidth ] ) .namely , for any state one has : * is the smallest possible dimension of a ttn associated to through the schmidt decomposition construction described in theorem [ thm_ttn ] ; * the tree which yields the minimum in ( [ chiwidth ] ) corresponds to the optimal ttn , i.e. , the one with smallest dimension .these observations fully answer ( q1 ) , the first of the two central questions put forward in section [ sect_problem ] of this article .what is more , we now immediately arrive at a satisfactory answer to question ( q2 ) , since theorems [ thm_shi ] and [ thm_ttn ] ( see also ref . 
) imply the following .[ mainthm ] let be an -party state .denote , let be a tree yielding the optimum in the definition of , and suppose that the ttn description of with underlying tree is known .then any mqc on can be simulated classically in time .in particular , this result shows that , whenever is _ bounded _ on a family of states , then any mqc on can be simulated efficiently classically even in linear time in the system size .this result fully answers question ( q2 ) in the negative ; i.e. , the measure , which was originally introduced as a means to assess whether a resource is universal for mqc , can equally well be used to asses whether mqc on can be efficiently simulated classically . in particular, we have found that mqc can be simulated efficiently for _ any family which is ruled out by the criterion ( i.e. , theorem [ thm_ewd ] ( ii ) ) as being a non universal resource_. note that theorem [ mainthm ] even allows one to conclude that efficient simulation is possible when grows at most logarithmically with the system size i.e. , it may be unbounded .one observes that if exhibits this scaling behavior on a family of states , then it is not detected by the universality criterion .this apparent paradox is resolved by considering the notion of _ efficient universality _ , which was briefly introduced in section [ sect_uni ] .when this requirement is introduced in the definition of universality , the above paradox is resolved as follows .one can prove that ( and ) need to grow _ faster than logarithmically with the system size _ on any efficient universal resource .this clearly resolves the above apparent contradiction . while the above results indeed settle questions ( q1 ) and ( q2 ) , in practical situations one is of course faced with the problem whether , when a state is specified , the optimal ttn can be computed efficiently . in particular , if theorem [ mainthm ] is to be applied , the following quantities need to be computed : * the quantity itself ; * an optimal subcubic tree in the calculation of ; * the ttn description of corresponding to the tree .it is clear that , for any of the above quantities to be efficiently computable , in the least one needs to have an efficient description of the state in some form say , a polynomial size quantum circuit leading to the preparation of the state , or , in the case where is a graph state , the underlying graph or stabilizer description. if an efficient description is not available , quantities such as e.g. the schmidt rank w.r.t . some bipartition can generally not be computed efficiently , and there is no hope of computing e.g. ( a ) in polynomial time .however , it is important to stress that the possibility of an efficient description is by no means sufficient to compute the quantities ( a)(b)(c ) efficiently .regarding ( a ) and ( b ) , the optimization in the definition of the measure suggests that an explicit evaluation of in a specified state , as well as the determination of the optimal subcubic tree , might be a highly nontrivial task .however , we note that general results in this context are known . in particular , we refer to ref . , where optimization problems of the form _ t_et f(a^e_t)are considered , where is a function defined on subsets of , .it has been shown that such optimizations can be performed in polynomial time in , i.e. 
, the optimum as well as the tree yielding the optimum can be determined efficiently , for a subclass of functions which meet several technical requirements .in the next section we will see that the graph states form a class of states where these requirements are met , such that the calculation of the -width can be performed efficiently . however , the techniques presented in ref . might be used or generalized to calculate the efficiently for classes of states larger than the graph states .regarding ( c ) , it is clear that the optimal ttn description of can only be computed efficiently if this ttn description is itself efficient , i.e. , if it depends on at most poly parameters this is exactly the case when scales as log .if scales in the latter way , then it follows from the procedure outlined in theorem [ thm_ttn ] , that the optimal ttn description of can be obtained efficiently given one is able to determine the following quantities in poly time : * the schmidt coefficients and schmidt vectors for all bipartitions , where is the optimal tree in the definition of the . * certain overlaps between schmidt vectors : in particular , the tensor coefficients b^ijk = ^i_x_1|^k_x_3|^j_x_1x_3 in eq .( [ b ] ) and similar tensors in eq .( [ b ] ) , as well as the vectors |^ik_\{v_2}:= ^i_x_1|^k_x_1\{v_2}in eq .( [ psi_ik_v2 ] ) .thus , a number of conditions need to be fulfilled to obtain an efficient ttn description , if it exists , for a given state .remarkably , in the next section we show that the quantities ( a ) , ( b ) and ( c ) can be computed efficiently for all _ graph states_. as a final remark in this section , note that an efficient ttn description ( if it exists ) of a state w.r.t . a given tree , can always we obtained efficiently if is already specified in terms of an efficient ttn description w.r.t . a different tree .in this section we specialize the results obtained in the previous section to graph states .theorem [ mainthm ] and the connection between of graph states and rank width of graphs , allows us to obtain the following result .[ mainthm2 ] let be a graph state on qubits .if the rank width of grows at most logarithmically with , then any mqc on can efficiently be simulated classically .in particular , the above result shows that if rwd is bounded on a family , then any mqc on the set ) can also be given here as examples of resources on which mqc can be simulated efficiently classically .note that theorem [ mainthm2 ] supersedes all known results ( see section [ sect_sim ] ) on classical simulation of mqc on graph states . to see this ,let us consider the result in ref . stating that mqc can be simulated efficiently on all graph states with logarithmically bounded tree width twd . using the inequality ( g)4(g )+ 2,one finds that , whenever twd scales as log ( where is the number of qubits in the system ) , then also rwd scales at most as log .thus , theorem [ mainthm2 ] implies that mqc can be simulated efficiently on all graph states with logarithmically bounded tree width , and the result in ref . is retrieved .this shows that theorem [ mainthm2 ] fully recovers and generalizes the known results on simulation of mqc on graph states .finally , we emphasize that the rank width can be bounded on families of graphs which do _ not at all _ have any tree like structure , i.e. 
, graphs possibly having many cycles ; therefore , the presence of cycles in a graph is no indication that efficient simulation of mqc on the associated state might be hard .one reason of this property is that a possible tree structure of a graph does not remain invariant under local operations ; e.g. , the fully connected graph and the star graph ( one central vertex connected to all other vertices ) are locally equivalent ; the latter is a tree graph , the former is not in fact , the tree width of the star graph is equal to 1 , whereas the tree width of the fully connected graph on vertices is .contrary to e.g. the tree width measure , the rank width is a local invariant , thus taking into account such cases . due to these properties ,our results prove a significant extension to the use of the tree width ; indeed , the above example unambiguously illustrates the superiority of the rank width as a criterion to address the classical simulation of mqc on graph states . in this sectionwe are concerned with the issue whether , if a graph state is given , the optimal ttn can be computed efficiently , i.e. , we consider the quantities ( a)(b)(c ) as denoted in section [ sect_connection ] .let be a graph on vertices .it was shown in ref . that , for a fixed integer , the problem _`` is the rank width of smaller than ? '' _ is in the complexity class .moreover , in ref . several polynomial time so called _ approximation algorithms _ for the rank width are constructed .when is given as an input , the ( most efficient ) algorithm either confirms that rwd is larger than , or it outputs a subcubic tree such that _ et^ * _ _ 2 ( a_t^*^e , b_t^*^e ) = 3k-1,which implies that rwd .the running time of the algorithm is .these results immediately yield an efficient procedure to determine the qualitative behavior of the of graph states , and to determine the optimal subcubic tree in the calculation of the . more precisely ,a possible ( binary search ) approach is the following : first run the above algorithm for ; if the algorithm confirms that rwd , then run the algorithm for ; if not , then run the algorithm for , etc .this algorithm is guaranteed to terminate in poly time .after the last run of the algorithm , the rank width , and the corresponding optimal subcubic tree , is obtained up to a factor 3 .thus , both quantities ( a ) and ( b ) as defined in the discussion following theorem [ mainthm ] , can be computed efficiently for any graph state . as for an efficient calculation of quantity ( c ) , we note that , for any bipartition of the system , both the schmidt coefficients and the schmidt vectors can be computed efficiently for graph states using the stabilizer formalism ; moreover , the schmidt vectors can always be chosen to be stabilizer states themselves .this can be proved as follows ( we only give a sketch of the argument , as it involves standard stabilizer techniques ) .let be a graph state on qubits , and let be a bipartition of .let denote the stabilizer of , defined by : = \{_av ( k_a)^x_a | x_a\{0 , 1 } , av},where the operators have been defined in eq .( [ k_a ] ) .thus , is the commutative group generated by the operators .one then has |gg| & = & _av(i+ k_a)=l _ g g.let be the subgroup of operators in acting trivially on the qubits in . 
then _b & = & _ g_a _ h_ah= _ a.the second equality holds since is a group .denoting , it follows that showing that a projection operator .thus , all nonzero eigenvalues of this operator are equal to 1 .this shows that all nonzero eigenvalues of ( which are the squares of the schmidt coefficients of w.r.t .the bipartition ) are equal to .moreover , as has unit trace , it follows that r^-1 ( _ a ) = 1 , such that the number of nonzero eigenvalues of is equal to . the eigenvectors of can be computed as follows .let denote a minimal generating set of , where .let be additional pauli operators , chosen in such a way that [ stab ] \{k^a_1 , , k^a_s , k^a_s+1 , , k^a_|a|}is a set of commuting and independent operators ; such a set always exists ( though it is non unique ) and can be computed efficiently , by using the stabilizer formalism ( see e.g. ) .note that ( [ stab ] ) is the generating set of a stabilizer state on the qubits in , namely the state || : = _ i=1^|a|( i + k^a_i ) .moreover , this state is an eigenstate of . to see this , note that is a generating set of the group , this last identity implies that for every , and therefore _ a|&= & _g_a g|= |.in order to obtain a basis of eigenvectors , one considers the stabilizer states with stabilizers generated by \{k^a_1 , , k^a_s , _ s+1k^a_s+1 , , _ |a|k^a_|a| } , where , for every .one can , with arguments analogous to above , show that all these states are eigenvectors of . moreover ,all these states are mutually orthogonal ; one has & & _ _ s+1 , , _|a||__s+1 , , _|a| + & = & ( -1)^_k__s+1 , , _|a||k^a_k|__s+1 , , _ |a| + & = & ( -1)^_k + _ k__s+1 , , _|a||__s+1 , , _ |a|,[orth],for every , where we have respectively used that _ _s+1 , , _|a|| = ( -1)^_k__s+1 , , _ |a||k^a_k and k^a_k|__s+1 , , _ |a|= ( -1)^_k|__s+1 , , _|a|.it immediately follows from the identity ( [ orth ] ) that the states are mutually orthogonal . since there are exactly such vectors , as many as there are nonzero schmidt coefficients , we have computed all schmidt vectors of w.r.t .the bipartition .remark that at this point we only have a stabilizer description of the schmidt vectors ; if necessary , the expansion of these vectors in the computational basis can be computed using the results in ref . .this shows that both schmidt coefficients and schmidt vectors of w.r.t .any bipartition can be computed efficiently , and that the schmidt vectors can always be chosen to be stabilizer states .moreover , note that overlaps between stabilizer states can be computed efficiently using stabilizer techniques , and we refer to , where this problem was considered .thus , all necessary ingredients ( cf .( i)(ii ) in section [ sect_connection ] ) needed for the efficient construction of the optimal ttn of a graph state , can be computed efficiently when rwd scales as log .we then arrive at the following result .let be a graph state on qubits and denote .then an optimal subcubic tree in the definition of can be computed in poly time . moreover ,if scales as log then the ttn description of corresponding to can be computed in poly time .note that , in particular , the conditions of the above theorem are fulfilled for all classes of graphs having bounded rank width , and thus efficient ttns can be computed in poly time for all such classes . 
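as a brute - force cross - check of the stabilizer argument above ( our own sketch ; it builds the full state vector and therefore only works for a handful of qubits , unlike the stabilizer method , which scales ) , one can construct the c_6 graph state amplitudes from the computational - basis expansion with phases ( -1)^{q_g(x ) } used in the appendix , reshape the amplitude vector according to a bipartition , and read off the schmidt coefficients with a singular value decomposition . all nonzero coefficients indeed come out equal , and the cut { 1,2,3 } | { 4,5,6 } has schmidt rank 4 , i.e. , a log2 schmidt rank of 2 , in line with the c_6 example discussed below .

```python
import numpy as np
from itertools import product

def graph_state(n, edges):
    """dense amplitudes of |g> = 2^{-n/2} * sum_x (-1)^{q_g(x)} |x>."""
    psi = np.empty(2 ** n)
    for idx, x in enumerate(product((0, 1), repeat=n)):
        q = sum(x[u] * x[v] for u, v in edges)      # q_g(x) = sum over edges of x_u * x_v
        psi[idx] = (-1) ** q
    return psi / np.sqrt(2 ** n)

# 6-qubit cycle (ring) graph c_6
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
psi = graph_state(n, edges)

# schmidt decomposition w.r.t. the bipartition {qubits 0,1,2} | {qubits 3,4,5}:
# reshape the amplitudes into an 8 x 8 matrix and take its singular values
M = psi.reshape(2 ** 3, 2 ** 3)
coeffs = np.linalg.svd(M, compute_uv=False)
print(np.round(coeffs, 6))
# -> four equal nonzero coefficients 0.5, all others 0: schmidt rank 4 = 2^2
```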
in this sectionwe give an explicit example of the computation of the rank width , the optimal subcubic tree , and the corresponding ttn description of a particular graph state , namely the 6-qubit state associated to the _ cycle graph _ ( or _ ring graph _ ) on 6 vertices .the adjacency matrix of is the matrix , where denotes an entry equal to zero .first we compute the rank width of the graph .in fact , we will prove that rwd . to show this , consider the subcubic tree depicted in fig .[ subcubic ] .the leaves of are associated to the vertices of in the following natural way : first , fix an arbitrary vertex of and denote this to be vertex 1 ; then , starting from vertex 1 , traverse the vertices of in a counterclockwise way , and denote the vertices by 2 , 3 , 4 , 5 and 6 , respectively ; these vertices are now associated to the leaves of by identifying vertex 1 with the leftmost leaf of , vertex 2 with the second leaf from the left , etc .it is now straightforward to show that _ et _ _ 2 ( a^e_t , b^e_t ) = 2 .this can be showed by simply computing the ranks of all matrices and picking the largest of these ranks .furthermore , one has [ alpha ] _ t(c_6):=_et __ 2 ( a^e_t , b^e_t ) 2for every subcubic tree .this can be seen as follows : first , note that for every , since _ _ 2 ( a , b)1for every bipartition .second , suppose that is a subcubic tree such that ; we will show that this leads to a contradiction .note that rank is equal to 1 if and only if is a bipartition of the form ( one vertex rest ) .moreover , if , then one must have _ _ 2 ( a^e_t , b^e_t ) = 1for every .thus , every bipartition must be of the form ( one vertex rest ) ; this leads to a contradiction .this shows that the inequality ( [ alpha ] ) is correct .we can therefore conclude that ( c_6):= _ t _ t(c_6 ) = 2and that the tree as depicted in fig .[ subcubic ] yields the optimum . at this pointwe note that here ad hoc methods have been used to obtain the above result ; however , we remind the reader that general algorithms exist to calculate the rank width and the optimal tree , as cited in section [ sect_sim_graph ] .the computation of the ttn description of with underlying tree is performed in appendix [ app ] .the result is the following : [ c_6_ttn]x_1 x_6|c_6= _ abcdef ^(1)_abx_1x_2 ^(2)_abcdx_3^(3)_cdefx_4^(4)_efx_5x_6 , + where and where all indices in the sum run from 0 to 1 .the pair should be regarded as one index taking 4 different values , as well as the pairs and . moreover , one has the following definitions:^(1)_abx_1x_2&:= & _ a , x_1_b , x_2 + ^(2)_abcdx_3&:= & ( -1)^ac+ ab + bx_3 + dx_3 + ^(3)_cdefx_4&:= & _ f , c_d , x_4(-1)^de + ec + ^(4)_efx_5x_6&:= & _ e , x_5_f , x_6 .we have seen that the of a graph state is equal to the rank width of the underlying graph .there is in fact a striking parallel between the _ motivations _ for the definitions of rank width of graphs and of of general quantum states , on which we comment here .as explained above , the gives information about the optimal ttn which describes a given quantum state .the interest in such ttns naturally arises due to the fact that the dynamics of quantum systems which allow ttn descriptions with sufficiently small dimension , can be simulated efficiently on a classical computer .these and similar techniques ( cf .e.g. 
the matrix product states formalism ) are invoked because the efficient classical simulation of _ general _ quantum systems can be a very difficult problem .thus , in spite of the general hardness of this simulation problem , it becomes tractable when restricted to the class of those systems with efficient ttn descriptions . in graph theoryan analogous situation occurs .while many interesting problems are hard to compute on general graphs , they become tractable for those classes of graphs which can be associated , through certain constructions , with tree structures .the simplest examples are of course the tree graphs themselves , which are in some sense the simplest instances of graphs ; and indeed , many difficult problems become efficiently solvable , or even trivial , on trees .however , this is far from the whole story . in graph theory onehas considered a variety of so called _ width parameters _ , which all measure , in different ways , how similar a graph is to a tree graph .examples are rank width , tree width , clique width , path width , and branch width .it has been shown that for families of graphs where a given width parameter is _ bounded _ , large classes of ( np)hard problems have efficient solutions . for example , the problem of deciding whether a graph is 3colorable , which is a np hard , is efficiently solvable when restricted to classes of graphs of bounded rank width .the graph theoretical results in this context are often very general and far reaching ; e.g. , it has been show that all graph problems which can be formulated in terms of a certain mathematical logic calculus , have efficient solutions when restricted to graphs of bounded rank width . we refer to ref . for an accessible treatment of these and related issues .thus , in certain aspects of both quantum information theory and graph theory there is a natural interest in using tree structures for the approximation of complex systems .moreover , there seems to be a strong parallel in the explicit constructions which are used in both fields .a striking example is obtained here , as the rank width of graphs exactly coincides with the measure on graph states . as a second example, it was found in ref . that the efficient contraction of large tensor network is directly related to the tree width of the underlying graphs .the present authors believe that the aforementioned parallel can significantly be exploited further .in this paper we have considered the possibility to classically simulate measurement based quantum computation .we have shown that all states with a bounded or logarithmically growing schmidt rank width can in fact be described efficiently , and moreover any one way quantum computation performed on such states can also be simulated efficiently .we have given an interpretation of the schmidt rank width , a measure that has its origin in graph theory , in terms of the optimal tree tensor network describing a state .we have also provided a constructive procedure how to obtain the optimal ttn , and discussed the requirements that this can be done efficiently . for graph states ,we have explicitly constructed the corresponding ttn , and provided an efficient algorithm to do this for any graph state where the underlying graph has bounded or logarithmically growing rank width .these results on efficient simulation complement recent findings on universality of states , in the sense that all states that are found to be non universal resources for mqc using the schmidt rank width criteria ( i.e. 
which have bounded schmidt rank width ) can also be simulated efficiently on a classical computer .the connection to complexity issues in graph theory , also highlighted in this paper , seems to provide future possibilities for a fruitful interchange of concepts and methods between the fields of quantum information and graph theory .this work was supported by the fwf , the european union ( qics , olaqui , scala ) , and the aw through project apart ( w.d . ) .we now compute the ttn description of w.r.t . the tree depicted in fig .[ subcubic ] , using the procedure outlined in theorem [ thm_ttn ] .consider the following schmidt decompositions of : [ c_6_schmidt ] |c_6&=&_i & = & _ j |^(2)_j_123 |^(2)_j_456 [ 123_456 ] + & = & _ k |^(3)_k_1234 |^(3)_k_56 .[ 1234_56]these decompositions are taken w.r.t .the bipartitions , and , respectively ; these correspond to the bipartitions , where runs over all inner edges of .all schmidt vectors in ( [ c_6_schmidt ] ) are normalized , and the are the square roots of the schmidt ranks of the corresponding bipartitions .we now show how the ttn description of w.r.t the tree is obtained , by applying the procedure presented in theorem [ thm_ttn ] .first , note that the depth of is equal to 3 .we start by considering the single inner vertex of depth 3 ; this is the vertex which has leaves 1 and 2 as lower vertices .we then compute the schmidt decomposition ( [ 12_3456 ] ) , corresponding to the bipartition which is obtained by deleting the upper edge of this vertex . in a second step ,we consider the single vertex in having depth 2 , and compute the corresponding schmidt decomposition ( [ 123_456 ] ) .moreover , we write |c_6&= & _ i |^(1)_i^(1)_i|c_6 + & = & _ i , j |^(1)_i^(1)_i|^(2)_j|^(2)_j,[intermediate](where we have omitted the subscripts of the schmidt vectors ) . finally , we consider the schmidt decomposition ( [ 1234_56 ] ) ( corresponding to the uper edge of the unique depth 1 vertex ) , and write it as |c_6&=&_k ( [ intermediate ] ) then shows that can be written as follows : [ c_6_ttn]|c_6= _ ijk |^(1)_i^(1)_i| ^(2)_j|^(3)_k^(3)_k| ^(2)_j .+ note that the states are defined on qubit 3 , for every and , and that the states are defined on qubit 4 , for every and .as for the schmidt coefficients , note that 2 & = & _ _ 2 ( \{1 , 2 } , \{3 , 4 , 5 , 6 } ) + & = & _ _ 2 ( \{1 , 2 , 3 } , \{4 , 5 , 6 } ) + & = & _ _ 2 ( \{1 , 2 , 3 , 4 } , \{5 , 6 } ) , and therefore ( using ( [ ctrk ] ) ) all the schmidt ranks of the above bipartitions are equal to . thus , the indices in eq .( [ c_6_ttn ] ) all run from 1 to , and we also have ^(1 ) = ^(2 ) = ^(3 ) = = 2.it will be convenient to write the indices as pairs of bits , and we will use the notations , , , where .we now consider the schmidt vectors w.r.t .the above bipartitions .we start with the bipartition . hereone finds that _ \{3,4,5,6 } ( |c_6c_6| ) = i.thus , a schmidt basis for the subset could simply be chosen to be the computational basis ; in other words , we take |^(1)_ab= |a|b|ab , defined on the qubits , for every . as for the bipartition , one can easily show that _\{4 , 5 , 6 } ( |c_6c_6| ) = ( i + _ z_x_z)and that , hence , the states [ lc ] |^(2)_cd=_z^ci_z^d|l_3form a valid schmidt basis , where and where is the linear cluster state on qubits , defined on the qubits . to compute the vectors , note that one has |^(2)_cd= 2^(2)_cd|c_6.therefore , we have to compute expressions of the form [ overlap0 ] [ ( l_3| _ z^ci_z^d)i ] |c_6,for every . 
to do so , we use that every -qubit graph state with adjacency can be written as [exp]|g= _ u\{0 , 1}^n ( -1)^q_g(u)|u , where is the -qubit computational basis and where q_g(u):= u^tu.one then finds that ( [ overlap0 ] ) is equal to ( omitting multiplicative constants ) [ overlap0 ] _ u , v , w ( _ x , y , z ( -1)^q_c_6(x , y , z , u , v , w ) + q_l_3(x , y , z ) + xc + zd)|uvw .+ straightforward algebra then shows that the power of in the above expression is equal to x(w+c ) + z(d+u ) + q_l_3(u , v , w).moreover , one has _x , y , z ( -1)^x(w+c ) + z(d+u ) = \ { the only remaining task is the computation of the states and . to compute the former of these states, it follows from the above that one has to compute , for every , overlaps of the form [ overlap1]^(1)_ab| ^(2)_cd&=&(a|b| i ) ( _ z^ci_z^d|l_3 ) + & = & ( -1)^ac a|b|_z^d|l_3 . using the expansion ( [ exp ] ) , it is then easy to show that ( [ overlap1 ] ) is equal to ( -1)^ac_v=0 ^ 1 ( -1)^q_l_3(a , b ,v)+dv |v , for every , and these states are defined on qubit 3 .a similar calculation can be performed to obtain ^(3)_ef| ^(2)_cd=_f , c(-1)^q_l_3(d , e , c)|d , for every , and these states are defined on qubit 4 .recalling the definition of , namely q_l_3(t_1 , t_2 , t_3):= t_1t_2 + t_2 t_3,for every , we recover expression ( [ c_6_ttn ] ) .note that one can easily check that ( [ c_6_ttn ] ) is correct , by summing out all indices : x_1 x_6|c_6&= & _ abcdef \{_a , x_1_b , x_2 ( -1)^ac+ ab + bx_3 + dx_3 .+ & & ._f , c_d , x_4(-1)^de + ec_e , x_5_f , x_6 } + & = & ( -1)^x_6x_1+x_1x_2+ + x_5x_6 + & = & ( -1)^q_c_6(x_1 , , x_6),where in the last equalitywe indeed obtain the correct computational basis expansion of .a. ekert and r. jozsa , phil .london 1998 , proceedings of royal society discussion meeting quantum computation : theory and experiment , november 1997 .u. schollwck , rev .* 77 * , 259 ( 2005 ) .m. hein _ et al ._ , proceedings of the international school of physics `` enrico fermi '' on `` quantum computers , algorithms and chaos '' , varenna , italy , july , 2005 ( to appear ) ; see also e - print : quant - ph/0602096 . to be precise , we have only considered bipartitions w.r.t ._ inner _ edges of the tree , whereas in ( [ d ] ) all edges are considered .however , bipartitions w.r.t open edges always have the form ( 1 party rest of the system ) , which in any reasonable situation can be disregarded .
we investigate for which resource states an efficient classical simulation of measurement based quantum computation is possible . we show that the _ schmidt rank width _ , a measure recently introduced to assess universality of resource states , also plays a crucial role in this context . we relate the schmidt rank width to the optimal description of states in terms of tree tensor networks , and show that an efficient classical simulation of measurement based quantum computation is possible for all states with logarithmically bounded schmidt rank width ( with respect to the system size ) . for graph states whose schmidt rank width scales in this way , we efficiently construct the optimal tree tensor network descriptions and provide several examples . we highlight parallels between the efficient description of complex systems in quantum information theory and in graph theory .
with the increase in the capacity of backbone networks , the failure of a single link or node can result in the loss of a significant amount of information , which may lead to loss of revenues or even catastrophic failures .network connections are therefore provisioned with the property that they can survive such edge and node failures .several techniques have been introduced in the literature to achieve such goal , where either extra resources are added or some of the available network resources are reserved as backup circuits .recovery from failures is also required to be agile in order to minimize the network outage time .this recovery usually involves two steps : fault diagnosis and connections rerouting .hence , the optimal network survivability problem is a multi - objective problem in terms of resource efficiency , operation cost , and agility .allowing network relay nodes to encode packets is a novel approach that has attracted much research work from both academia and industry with applications in enterprise networks , wireless communication and storage systems .this approach , which is known as network coding , offers benefits such as minimizing network delay , maximizing network capacity and enabling security and protection services , see and references therein . network coding allows the sender nodes to combine / encode the incoming packets into one outgoing packet .furthermore , the receiver nodes are allowed to decode those packets once they receive enough number of combinations . however , finding practical network topologies where network coding can be deployed is a challenging problem . in order to apply network coding on a network with a large number of nodes, one must ensure that the encoding and decoding operations are done correctly over binary and finite fields .there have been several applications for the edge disjoint paths ( edp ) and node disjoint paths ( ndp ) problems in the literature including network flow , traffic routing , load balancing and optimal network design . in both cases ( edge and vertex disjointness paths ) , deciding whether the pairs can be disjointedly connected is np - complete .a network protection scheme against a single link failure using network coding and reduced capacity is shown in .the scheme is extended to protect against multiple link failures as well as against a single node failure .a protection scheme protects the communication links and network traffic between a group of senders and receivers in a large network with several relay nodes .this scheme is based on what we call _ network protection codes _ ( npcs ) , which are defined in section [ sec : terminology ] .the encoding and decoding operations of such codes are defined in the case of binary and finite fields in . in this paper , we establish limits on network protection codes and investigate several network graphs where npc can be deployed .in addition , we construct graphs with minimum number of edges to facilitate npc deployment . this paper is organized as follows . in section [ sec : terminology ] we present the network model and essential definitions . in section [ sec : npc - minimumedges ] , we derive bounds on the minimum number of edges of graphs for npc , and construct graphs that meet these bounds in section [ sec : graphconstruction ] . 
section [ sec : k - connectedgraph ] presents limits on certain graphs that are applicable for npc deployment .in this section we present the network model , define briefly network protection codes , and then state the problem .further details can be found in .the network model is described as follows : let be a network represented by an abstract graph , where is the set of nodes and is set of undirected edges .let and be sets of independent sources and destinations , respectively .the set contains the relay , source , and destination nodes , respectively .the node can be a router , switch , or an end terminal depending on the network model and the transmission layer .let be a set of links carrying the data from the sources to the receivers .all connections have the same bandwidth , otherwise a connection with high bandwidth can be divided into multiple connections , each of which has a unit capacity .there are exactly connections . for simplicity ,we assume that the number of sources is less than or equal to the number of links . a sender with a high capacitycan divide its capacity into multiple unit capacities , each of which has its own link .in other words , where and , for some integer .the failure on a link may occur due to network circumstances such as a link replacement and overhead .we assume that the receiver is able to detect a failure and is able to use a protection strategy to recover it .we will use in the rest of the paper the terms edges and links interchangeably edge disjoint paths.,width=283,height=181 ] let us assume a network model with path failures in the working paths , i.e. , paths carrying data from source(s ) to receiver(s ) .one can define a _ network protection code _ npc which protects edge disjoint paths as shown in the systematic matrix in eq .( [ eq : gmultiple ] ) . in general, the systematic matrix defines the source nodes that will send encoded messages and source nodes that will send only plain message without encoding . in order to protect working paths, connections must carry plain data , and connections must carry encoded data .the generator matrix of the npc for multiple link failures is given by : \!,\!\!\!\!\end{aligned}\ ] ] where , and , see .the matrix can be rewritten as ,\end{aligned}\ ] ] where is the identity matrix and is the sub - matrix that defines the redundant data to be sent to a set of sources for the purpose of protection from multiple link failures , .the matrix is defined explicitly using maximum distance separable ( mds ) optimal codes such as reed - solomon ( rs ) codes .based on the above matrix , every source sends its own message to the receiver via the link .in addition , edge disjoint paths out of the edge disjoint paths will carry encoded data .[ defn : mfailurescode ] an $ ] _ network protection code _ ( npc ) is a k - t dimensional subspace of the space that is able to recover from edge disjoint path failures .the code protects working paths and is defined by the matrix described in eq .[ eq : gmultiple ] .we say that a network protection code ( npc ) is feasible / valid on a graph if the encoding and decoding operations can be achieved over the binary field or a finite field with elements . also , we ensure that the set of senders ( receivers ) are connected with each other .we define the feasibility conditions of npc , and we will look for graphs that satisfy these conditions [ def : npcfeasible ] let and be sets of source(s ) and receiver(s ) in a graph , as shown in fig . 
[fig : npaths ] .we say that the network protection code ( npc ) is feasible ( valid ) for edge disjoint connections ( paths ) from in to in , for , if between any two sources and in , there is a walk ( path ) .this means that the nodes in share a tree ; between any two receivers and in , there is a walk ( path ) .this means that the nodes in share a tree ; there are k edge disjoint paths from to , and the pairs are different edge disjoint paths for all .therefore , we say the graph is valid for npc deployment . by definition[ def : npcfeasible ] , there are edge disjoint paths in the graph from a set of senders to a set of receivers .this also includes the case in which a single source sends different messages through edge disjoint paths to receivers , and vice versa .the feasibility of npc guarantees that the encoding operations at the senders and decoding operations at the receivers can be achieved precisely . the max edge - disjoint paths ( edp ) problem can be defined as follows .let be an undirected graph represented by a set of nodes ( network switches , routers , hosts , etc . ) , and a set of edges ( network links , hops , single connection , etc . ) .assume all edges have the same unit distance , and they are alike regarding the type of connection that they represent .a path from a source node to a destination node in is represented by a set of edges in . put differently , * problem 1 . * given senders and receivers .we can define a commodity problem , aka , edge disjoint paths as follows . given a network with a set of nodes , and a set of links , provision the edge disjoint paths to guarantee the encoding and decoding operations of npc .provision the edge disjoint paths in .let be a set of commodities .the set of sources are connected with each other , and the set of receivers are also connected with each other as shown in definition [ def : npcfeasible ] .the set is realizable in if there exists mutually edge - disjoint paths from to for all .finding the set in a given arbitrary graph is an np - complete problem as it is similar to edge - disjoint menger s problem and unsplittable flow .* problem 2 . * given positive integers and , and an npc with working paths , find a -connected n - vertex graph having the smallest possible number of edges .this graph by construction must have edge disjoint paths and represents a network which satisfies npc .this problem will be addressed in section [ sec : graphconstruction ] .one might ask what the minimum number of edges on a graph is , in which network protection codes ( npc ) is feasible / valid as stated in definition [ def : npcfeasible ] . we will answer this question in two cases : ( i ) the set of sources and receivers are predetermined ( preselected from the network nodes ) , ( ii ) the sources and receivers are chosen arbitrarily .we consider the case of a single source and multiple receivers in an arbitrary graph with nodes .let be a connected graph representing a network with total nodes , among them a single source node and receiver nodes .assume a npc from the source node to the multiple receivers is applied .then , the minimum number of edges required to construct the graph is given by the graph contains a single source , receiver nodes , and relay nodes ( nodes that are not sources or receivers ) . 
to apply npc , we must have edge disjoint paths from the source to the receivers .also , all receivers must be connected by a tree with a minimum of edges .the remaining relay nodes in g can be connected with at least edges .therefore , the minimum number of nodes required to construct the graph g is given by let be a connected graph with nodes , and predetermined sources and receivers .then , the minimum number of edges required for predetermined edge - disjoint paths for a feasible npc solution on g is given by we proceed the proof by constructing the graph with a total number of nodes and sources ( receivers ) .there are sources that require a edges represented by a tree .there are receivers that require a edges represented by a tree .assume every source node is connected with a receiver node has an nodes in between for all .therefore , there are edges in every edge - disjoint path , and hence the number of edges from the sources to receivers is given by .assume an arbitrary node exists in the graph , then this node can be connected to a source ( receiver ) node or to another relay node . in either case , one edge is required to connect this node to at least one node in .hence , the number of edges required for all other relay nodes is given by .therefore , the total number of edges is given by in the previous lemma , we assume that the sources and receivers can be predetermined to minimize the number of edges on . in lemma [ lem : minedgesk ] , we assume that the sources and receivers can be chosen arbitrarily among the nodes of .[ lem : minedgesk ] let be a connected graph with nodes , and arbitrarily chosen sources and receivers .then , the minimum number of edges required for any edge - disjoint paths for a feasible npc solution on is given by in general , assume there are connection paths , and the source and destination nodes do not share direct connections .in this case , every source node in must be connected to some relay nodes which are not receivers ( destinations ) .therefore , every source node must have a node degree of .this agrement is also valid for any receiver node in .if we consider all nodes in the graph , hence the total minimum number of edges must be : the ceiling value comes from the fact that both and should not be odd .in this section , we look for certain graphs where npc is feasible according to definition [ def : npcfeasible ] .we will consider two cases : single source single receiver and single source multiple receivers .we derive bounds on the cases of and connectivity in a -connected graph , the definitions are states in the appendix section .whitney ( * ? ? ?* theorem 5.3.6 ) showed that the -connected graph must have edge disjoint paths between any two pair of nodes as shown in the following theorem .[ th : whitneytheorem ] a nontrivial graph is -connected if and only if for each pair of vertices , there are at least internally edge disjoint paths in . theorem [ th : whitneytheorem ] establishes conditions for edge disjoint paths in a -connected graph . 
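whitney's characterization can be checked directly on small topologies with standard max - flow routines . the sketch below ( ours ; it relies on the networkx functions edge_disjoint_paths and edge_connectivity available in recent networkx releases ) extracts a maximal set of pairwise edge disjoint paths between a chosen sender and receiver in the 3 - regular petersen graph and confirms that their number equals the minimum edge cut separating the pair . in the npc setting this gives a quick check of the edge disjoint path requirement of definition [ def : npcfeasible ] , before the source and receiver trees are verified .

```python
import networkx as nx

# the petersen graph is 3-regular and 3-edge-connected
G = nx.petersen_graph()
s, t = 0, 7            # a sender and a receiver

# whitney / menger: the number of pairwise edge-disjoint s-t paths equals the
# minimum number of edges whose removal separates s from t (a max-flow value)
paths = list(nx.edge_disjoint_paths(G, s, t))
print(len(paths))                       # 3
for p in paths:
    print(p)
print(nx.edge_connectivity(G, s, t))    # 3, the minimum s-t edge cut
print(nx.edge_connectivity(G))          # 3, the global edge connectivity
```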
in order to make npc feasible in a k - connected graph ,we require more two conditions according to definition [ def : npcfeasible ] : all receivers are connected with each others , as well as all source(s ) are connected with each other .[ lem : kconnectednpcfeasible ] let be a non - trivial graph with a source node and a receiver node .then , the npc has a feasible solution with at least edge disjoint paths * if and only if * is a -edge connected graph .first , we know that if is -edge connected , then for each pair and of vertices , the degree of each node must be at least .if not , then removing any number of edges less than will disconnect the graph , and this contradicts the k - edge connectivity assumption .each node connected with will be a starting path to or to another node in the graph .consequently , every node must have a degree of at least , and must have a path to .therefore , there are at least internally edge disjoint paths in .hence , npc is feasible by considering at least edge disjoint paths .assume that npc has a feasible(valid ) solution for , then there must exist edge disjoint paths in .then , for each pair of vertices and , there are at least internally edge disjoint paths such that , for each and non - adjacent nodes .therefore , the graph is -connected .let be a source in the network model that sends different data stream to receivers denoted by .we need to infer conditions for all receivers to be connected with each other , and there are edge disjoint paths from to . in this case , npc will be feasible in the abstract graph representing the network .[ lem:1smultipler ] let be a -edge connected graph with a hamiltonian cycle , and be any distinct nodes in .then , there is a path from to , for , such that the collection are internally edge disjoint paths , the nodes in the set are connected with each other . therefore , npc is feasible in the graph .the second condition ensures that there exists a tree in the graph which connects all nodes in without repeating edges from the edge disjoint paths from to .we look to establish conditions on regular graphs where it is possible to apply npc according to definition [ def : npcfeasible ] .let be a regular graph with minimum degree .then has a edge disjoint paths if and only if the min - cut separating a source from a sink is of at least . as shown in fig .[ fig : regulargnpc ] , the degree of each node is three and the min. cut separating the source node from the receivers is also three .however , we have the following negative result about npc feasibility in regular graphs. there are regular graphs with node degree k , in which npc is not feasible . a certain example to prove this lemma would be a graph of nodes , each of degree three , separated into two equally components connected with an edge , see fig . [ fig : regulargnpc ] . to receiver ndoes and receivers do not share a tree after removing the node . ]we will construct graphs with a minimum number of edges for given certain number of vertices and edge disjoint paths ( connections ) , in which npc can be deployed .let denote the minimum number of edges that a -connected graph on vertices must have .it is shown by f. harary in 1962 that one can construct a -connected graph on vertices that has exactly edges for .the construction begins with an -cycle graph , whose vertices are consecutively numbered clockwise .the proof of the following lemma is shown in ( * ? ? ?* proposition 5.2.5 . 
) .let be a -connected graph with nodes .then , the number of edges in is at least .that is , .* begin : * scatter the isolated nodes let .the construction of harary graph .+ from algorithm [ alg : hararynpc ] , one can ensure that there are edge disjoint paths between any two nodes ( one is sender and one is receiver ) . in addition , there are edge disjoint paths from any node , which acts as a source , and different nodes , which act as receivers .all nodes are connected together with a loop .therefore , npc can be deployed to such graphs .due to the fact that harary s graph is -connected ( * ? ? ?* theorem 5.2.6 . ) , then using our previous result , we can deduce that npc is feasible for such graphs .harary s graphs are optimal for the npc construction in the sense that they are -edge connected graphs with the fewest possible number of edges .in this paper , we proposed graph topologies for network protection using network coding .we derived bounds on the minimum number of edges and showed a method to construct optimal network graphs ._ network protection is much easier than human protection against failures .s. a. a. _ 11 s. a. aly and a. e. kamal .network protection codes against link failures using network coding . in _ proc .ieee globelcomm 08 , new orleans , la _ , december 1 - 4 , 2008 .arxiv:0809.1258v1 [ cs.it ] . s. a. aly and a. e. kamal .network protection codes : providing self - healing in autonomic networks using network coding . , submitted , 2009 .arxiv:0812.0972v1 [ cs.ni ] .m. blesa and c. blum .ant colony optimization from the maximum edge - disjoint paths problem ., pages 160169 , 2004 . c. fragouli and e. soljanin . . ,2(2):135269 , 2007 .j. gross and j. yellen . .crs press , 1999 .w. c. huffman and v. pless . .cambridge university press , cambridge , 2003 .macwilliams and n.j.a .amsterdam : north - holland , 1977 .j. vygen .-completeness of some edge - disjoint paths problems ., 61:8390 , 1995 .r. w. yeung , s .- y .r. li , n. cai , and z. zhang . .now publishers inc . ,dordrecth , the netherlands , 2006 . h. zeng and a. vukovic .the variant cycle - cover problem in fault detection and localization for mesh all - optical networks ., 14:111122 , 2007 .h. zhang , k. zhu , and b. mukkerjee .backup reprovisioning to remedy the effect of multiple link failures in wdm mesh networks . , 24:5767 , august 2006 .we assume that all graphs stated in this paper are undirected ( bi - directional edges ) unless stated otherwise .we define the edge - connectivity and node - connectivity of a graph as follows .given an undirected connected graph , an edge - cut in a graph is a set of edges such that its removal disconnects the graph .a node - cut in is a set of nodes such that its removal disconnects the graph .the edge - connectivity of a connected graph , denoted is the size of a smallest edge - cut . 
also , the node - connectivity of a connected graph , denoted is the minimum number of vertices whose removal can either disconnect the graph or reduce it to a one - node graph .the connectivity measures and are used in a quantified model of network survivability , which is the capacity of a network to retain connections among its nodes after some edges or nodes are removed .a graph is k - node connected if is connected and .also , a graph is -edge connected if is connected and every edge - cut has at least edges , .we define two internal connections ( paths ) between nodes and in a graph to be internally edge disjoint if they have no edge in common .this is also different from the node disjoint paths . throughout this paper ,a path from a starting node to an ending node is a walk in a graph , i.e. , it does not contain the same node or edge twice .
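With the connectivity notions above in hand, the graph construction of section [sec:graphconstruction] can be made concrete. The sketch below follows Harary's classical recipe for H_{k,n} rather than the algorithm listing in the text (which is garbled), so the exact steps are an assumption; the odd-k/odd-n sub-case is omitted for brevity:

```python
def harary_graph(k, n):
    """Classical Harary graph H_{k,n}: a k-connected graph on vertices
    0..n-1 (arranged on a cycle) with ceil(k*n/2) edges, the fewest possible.
    The odd-k / odd-n sub-case is omitted here for brevity."""
    if not 2 <= k < n:
        raise ValueError("need 2 <= k < n")
    edges = set()

    def add(u, v):
        u, v = u % n, v % n
        edges.add((min(u, v), max(u, v)))

    # join each vertex to its floor(k/2) nearest neighbours on either side of the cycle
    for i in range(n):
        for j in range(1, k // 2 + 1):
            add(i, i + j)

    if k % 2 == 1:
        if n % 2 != 0:
            raise NotImplementedError("odd k with odd n needs Harary's extra case")
        for i in range(n // 2):          # add the n/2 "diameter" edges
            add(i, i + n // 2)
    return sorted(edges)

if __name__ == "__main__":
    E = harary_graph(4, 8)
    print(len(E))    # 16 = ceil(4*8/2), the minimum edge count for 4-connectivity on 8 nodes
```

On small instances, the edge-disjoint-path counter from the earlier sketch can be used to verify that the resulting graph indeed has k edge-disjoint paths between every pair of nodes.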
link and node failures are two common and fundamental problems affecting operational networks. protection of communication networks against such failures is essential for maintaining network reliability and performance. network protection codes (npc) are proposed to protect operational networks against link and node failures. furthermore, the encoding and decoding operations of such codes are well developed over binary and finite fields. finding network topologies, practical scenarios, and limits on graphs applicable for npc is therefore of interest. in this paper, we establish limits on network protection design. we investigate several network graphs where npc can be deployed using network coding. furthermore, we construct graphs with the minimum number of edges suitable for the deployment of network protection codes.
the multi - moment concept underlying the cip method ( cubic - interpolated pseudo - particle or constrained interpolation profile) provides a general methodology to construct numerical schemes with great flexibility .one of the major outcome from the practice so far to implement the multi - moments in computational fluid dynamics is that we can build high order schemes on a relatively compact grid stencil using multi - moments , and these moments can be carried forward in time separately by completely different numerical approaches .some schemes have been developed for practical use based on via and sia ( surface - integrated average ) and on via and pv ( point value ) .the later is much more suitable for unstructured or other complex computational grids where a point - wise local riemann problem can be posed at any specified point to update the pv .it is found that increasing the number of the pvs is a simple way to get higher order schemes .we have devised and verified the schemes up to 4th order on 2d triangular unstructured grid for both scalar and system conservation laws by employing both via and pv moments . on the other hand , making use of the first derivative at the cell boundary as another moment has been ever used in the so - called cip - csl4(cip - conservative semi - lagrangian with 4th order polynomial ) advection scheme .we in this paper explore further the possibility to construct conservative cip / multi - moment formulation of arbitrary order over single cell using more derivative moments . the spatial reconstruction based on multi - momentsis described in section 2 .the numerical formulation for scalar hyperbolic conservation law is presented in section 3 .the extension to euler equations is discussed in section 4 .section 5 ends the paper with a few conclusion remarks .the essential point in high resolution scheme is how to reconstruct the interpolation function to find the numerical flux at the boundary of each grid cell . among the mostwidely used are , for example , the muscl scheme , the eno scheme and the weno scheme . in all of these schemesthe interpolation is based only on the cell - averaged values of the physical field to be reconstructed . in this section ,we describe a numerical interpolation that makes use of not only the volume - integrated average over each mesh cell but also the derivatives at the cell boundary .we call the present formulation the `` multi - moment '' reconstruction to distinguish it from the aforementioned ones which should be more properly refer to as the `` single - moment '' reconstruction .we consider a physical field variable over a one - dimensional domain divided into control volumes ( mesh cells ) ] , as well as the first - order derivative or gradient that is computed in terms of other independent moments , we can construct a ]th order polynomial is obtained if we specify with is computed from constraint conditions ( [ constraint1 ] ) , ( [ constraint2 ] ) and ( [ constraint3 ] ) .furthermore , a slope limiting can be imposed to to suppress the numerical oscillation ( see for details ) .we used a single - cell minmod limiter in this paper .[ fig:1 ] it is obvious that the reconstruction discussed above can have a order accuracy for smooth solutions .in this section , we consider the scalar conservative law as follows , where is the scalar state variable and is the flux function . 
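Before turning to the semi-discrete formulation, here is a generic way to realize the multi-moment reconstruction of section 2: collect the chosen moments (the cell-integrated average plus point values and/or derivatives at the cell boundaries) as linear constraints on the polynomial coefficients and solve the resulting small linear system. The sketch is illustrative only; the particular constraint set, cell coordinates, and the omitted slope limiter are assumptions, not the paper's constraints (constraint1)-(constraint3):

```python
import numpy as np
from math import factorial

def reconstruct(h, constraints):
    """Coefficients a_0..a_p of P(x) = sum_m a_m x^m on the cell [0, h],
    determined by a list of linear 'moment' constraints.

    Each constraint is one of
        ("avg", value)         : (1/h) * integral_0^h P dx       = value   (VIA)
        ("der", x0, q, value)  : d^q P / dx^q evaluated at x0     = value
                                 (q = 0 gives a boundary point value)
    The polynomial degree is len(constraints) - 1.
    """
    p = len(constraints) - 1
    A = np.zeros((p + 1, p + 1))
    b = np.zeros(p + 1)
    for row, c in enumerate(constraints):
        if c[0] == "avg":
            # (1/h) * integral_0^h x^m dx = h^m / (m + 1)
            A[row] = [h**m / (m + 1) for m in range(p + 1)]
            b[row] = c[1]
        else:
            _, x0, q, val = c
            for m in range(q, p + 1):
                A[row, m] = factorial(m) / factorial(m - q) * x0**(m - q)
            b[row] = val
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    # cubic on a unit cell from one VIA, two boundary point values and one boundary slope
    h = 1.0
    coeff = reconstruct(h, [("avg", 0.5),
                            ("der", 0.0, 0, 0.0),    # P(0)  = 0
                            ("der", h,   0, 1.0),    # P(h)  = 1
                            ("der", 0.0, 1, 0.0)])   # P'(0) = 0
    print(coeff)   # [0, 0, 3, -2], i.e. P(x) = 3x^2 - 2x^3
```

Adding further derivative moments at the boundaries simply appends rows to the same system, which is how the order of the reconstruction is raised on a single cell.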
assuming the hyperbolicity , we have a real characteristic velocity , .the governing equations for the derivative moments can be directly derived from ( [ scalar - eq ] ) as , where is the numerical flux function consistent to .it is observed from the reconstruction that the derivatives moments up to the order and flux are continuous .thus , we can update the derivative moments for by ( [ dv - eq1 ] ) with the spatial derivatives of the flux function directly computed from the derivative moments that readily defined and computed at the cell interface as when one advances the highest order derivative moment , is required , which , however , might not be continuous at the cell boundaries .we make use the the simple lax - friedrichs splitting in terms of the spatial derivatives of the flux function and the state variable as where is the largest value of the characteristic speed in the related region .the state variables and are computed from the multi - moment reconstructions ( [ interpol ] ) separately built for cells ] , i.e. the corresponding derivatives are it should be noted that we have used an assumption similar to in getting a homogeneous and linearized riemann problem for spatial derivatives for the state variable . in order to update the via moment ,we integrate ( [ scalar - eq ] ) over ] , which results in a finite volume formulation , given the derivative moments at cell boundaries , the numerical fluxes in the above equation are directly found . again , the runge - kutta method is used for time integration for all moments .a 1d shock tube test was computed to verify the present method for euler conservation laws .we include the numerical result of the 5th - order weno scheme as well for comparison .shown in fig.3 , the numerical results of the present scheme with different orders are quite competitive .the close - up plots for shock and contact discontinuity are given in fig.4 .both linear and non - linear discontinuities are well resolved with correct locations .it is observed that better resolution can be obtained by simply increasing the order of the derivative moments .a formulation that uses high order derivative moments has been suggested and tested .given all the derivative moments that are continuous at cell boundaries and updated separately , the resulting numerical formulation is still single - cell based and quite computationally efficient . in case that the derivative moments are defined and continuous at the cell boundary , the numerical fluxes can be computed directly as in the ido scheme , while for the spatial derivative higher than the continuous one , we simplify and cast it into a linearized derivative riemann problem .the present formulation is substantially different from the ader method where all the derivatives are discontinuous at cell boundaries , thus is more efficient .our numerical results show that the resolution of the scheme can be improved by simply increasing the order of the derivative moments involved . with the simple slope limiting , the numerical oscillation around the large gradient can be effectively suppressed .a. harten , b. engquist , s. osher and s. chakravarthy , j. comput . phys .* 71 * ( 1987 ) 231 .s. ii , m. shimuta and f. xiao , comput .phys . comm .* 173 * ( 2005 ) 17 .s. ii and f. xiao , j. comput .* 222 * ( 2007 ) 849 .g. jiang and c.w .shu , j. comput .* 126 * ( 1996 ) 202 .shu , siam j. sci .stat . comput .* 9 * ( 1988 ) 1073 . g. sod , j. comput* 27 * ( 1978 ) 1 .r. tanaka , t. nakamura and t. yabe , comput .phys . 
commun .* 126 * ( 2000 ) 232 .v.a.titarev and e.f.toro , j. sci .* 17 * ( 2002 ) 609 .v.a.titarev and e.f.toro , j. comput* 204 * ( 2005 ) 715 .e.f.toro and v.a.titarev , j. comput . phys .* 202 * ( 2005 ) 196 .b. van leer , j. comput . phys .* 32 * ( 1979 ) 101 .f. xiao , j. comput . phys . *195 * ( 2004 ) 629 .f. xiao , r. akoh and s. ii , j. comput .* 213 * ( 2006 ) 31 .f. xiao and t. yabe , j. comput . phys .* 170 * ( 2001 ) 498 .t. yabe and t. aoki , comput .* 66 * ( 1991 ) 219 .
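To tie together the ingredients of section 3, the sketch below implements a global Lax-Friedrichs splitting (assumed here in its standard form f± = (f(u) ± αu)/2, giving the interface flux F = (f(uL)+f(uR))/2 − α(uR−uL)/2), the flux-form finite-volume update of the VIA, and an explicit SSP Runge-Kutta step. For brevity it uses a plain piecewise-constant single-moment reconstruction, so it is not the multi-moment scheme itself; the grid, test problem and CFL number are illustrative:

```python
import numpy as np

def lf_flux(uL, uR, f, alpha):
    """Global Lax-Friedrichs flux, equivalent to f+(uL) + f-(uR) with
    f±(u) = 0.5*(f(u) ± alpha*u)."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)

def rhs(u, dx, f, alpha):
    """Flux-form finite-volume right-hand side for the cell averages,
    d(ubar_i)/dt = -(F_{i+1/2} - F_{i-1/2}) / dx, periodic boundaries."""
    uL = u                      # left state at interface i+1/2 (cell i)
    uR = np.roll(u, -1)         # right state at interface i+1/2 (cell i+1)
    F = lf_flux(uL, uR, f, alpha)
    return -(F - np.roll(F, 1)) / dx

def ssp_rk3(u, dt, dx, f, alpha):
    """Third-order strong-stability-preserving Runge-Kutta step."""
    u1 = u + dt * rhs(u, dx, f, alpha)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx, f, alpha))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx, f, alpha))

if __name__ == "__main__":
    # inviscid Burgers flux f(u) = u^2/2 on a periodic unit interval
    f = lambda u: 0.5 * u * u
    N, T = 200, 0.3
    dx = 1.0 / N
    x = (np.arange(N) + 0.5) * dx
    u = np.sin(2 * np.pi * x) + 1.5
    t = 0.0
    while t < T:
        alpha = np.max(np.abs(u))             # max characteristic speed |f'(u)| = |u|
        dt = min(0.4 * dx / alpha, T - t)
        u = ssp_rk3(u, dt, dx, f, alpha)
        t += dt
    print(u.min(), u.max())
```

In the full scheme the reconstruction step above would be replaced by the multi-moment interpolation, with the derivative moments at the interfaces advanced by their own evolution equations.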
this paper presents a general formulation of the cip / multi-moment finite volume method (cip/mm fvm) for arbitrary order of accuracy. a reconstruction of arbitrary order can be built on a single cell by adding extra derivative moments at the cell boundary. the volume-integrated average (via) is updated via a flux-form finite volume formulation, whereas the point-based derivative moments are computed from local derivative riemann problems by either direct interpolation or approximate riemann solvers. high order scheme, finite volume method, multi-moment, fluid dynamics, derivative riemann problem, conservation
magnetic field in the quiet solar photosphere , while ubiquitously present , is concentrated at the edges of convective cells .we refer the interested reader to for a comprehensive review of small - scale magnetism in the lower solar atmosphere .the convective flows expunge field from the interiors of granules and concentrate it as `` magnetic elements '' ( mes ) in the intergranular downdrafts .supergranular flows advect field to supergranular boundaries where it forms the magnetic network .gas motions are also expected to push field to the edges of cells of an intermediate , so - called mesogranular scale .this scale has indeed been observed in the positions of photospheric magnetic elements in internetwork areas . also observed `` voids '' in active network .however , no evidence has been presented that the observed cells outlined by mes correspond to mesogranular cells .mesogranules have been associated with so - called `` trees of fragmenting granules '' ( , called `` active granules '' by ) .tfgs consist of repeatedly splitting granules that originate from a single granule .they may live for several hours , much longer than the lifetime of an individual granule .flow fields derived from granular motions also show convergence at the borders of tfgs .one would thus expect mes to lie predominantly on tfg boundaries .an image sequence spanning several hours of moderately quiet sun was recorded using the solar optical telescope on the hinode spacecraft on march 30 , 2007 .the narrowband filter imager was used to record stokes i and v in the photospheric fe i line at 630.2 nm .more details can be found in .their study of magnetism in quiet - sun internetwork yielded the locations of strong concentrations of magnetic flux that we use in this analysis . to reduce noise , the fe i intensity data is initially filtered using an optimum filter , and averaged over 3 frames in time . to ensure proper segmentation and separation of granules ,the data is scaled up spatially by a factor of two .noise is further reduced through convolution with a 8-pixel gaussian .we identify granules by computing the curvature in each pixel in 4 directions .if the minimum curvature is positive , the pixel is labeled as part of a granule .the granule mask is then extended to values of curvature up to , following the cst algorithm .we apply erosion - dilation processing with a 3-pixel + -shaped kernel to remove very small features .finally , granules are grouped into families by following them in space and time .two granules are considered to be members of the same family if there is a path through the granule mask forward in time that connects them . at the start of the sequence ,all families consist of a single granule .as time progresses , some families die out , others grow , and new families appear .if two granules of different families merge , we keep the oldest family .the lifetime of tfgs is on the order of several hours , so one must expect to wait a similar amount of time before the tfg pattern is established . here , we will study a single frame taken about 5 hours after the start of the sequence .the segmentation of the granules and the subsequent grouping into families has been performed for several variations of the number of frames over which is averaged in time , the width of the gaussian , and the level to which the masks are extended to negative curvature values .the chosen parameters appear to be fairly robust , i.e. 
, the resulting pattern does not change much within some range of the chosen values for the parameters .the process is most sensitive to the amount that the mask is extended .too much extension may merge separate granules , causing larger tfgs , while too little extension results in very limited grouping .frame number 524 in the fe i sequence , recorded at 05:25:33 , is shown in fig . [fig : fig1 ] .it shows the granular pattern , with the borders of tfgs overlaid in black , and the positions of mes overlaid as white diamonds .the granular pattern is shaded in dark gray in network areas .the segmentation there is not good , because of the confusion between granules and bright points . as a result ,the tfg pattern there is not trustworthy .also , the analysis by did not identify mes in the network .this is not a problem for our analysis , because our interest is primarily in the internetwork . a closeup of the white box is shown in fig .[ fig : fig2 ] .many mes appear to lie preferentially near the borders of tfgs .the interiors of large tfgs are mostly devoid of mes .these preliminary results are encouraging .it thus seems highly likely that the cells described by , e.g. , and the `` voids '' in maps made with the hinode sp instrument found by correspond to mesogranules .further statistical study is required to quantify the relationship between the borders of tfgs and the positions of mes .while these preliminary results are encouraging , it must be verified that mes lie statistically closer to the borders of tfgs than , e.g. , to the border of randomly placed cells of similar size .mes that emerge in tfg interiors are expected to migrate to the borders in a matter of perhaps an hour , then along the borders of tfgs to the network . in this contextit is of interest to study the motions of mes in detail and to quantify the contribution of tfgs to the formation of the magnetic network and the diffusion of magnetic field ._ hinode _ is a japanese mission developed and launched by isas / jaxa , with naoj as domestic partner and nasa and stfc ( uk ) as international partners .it is operated by these agencies in co - operation with esa and nsc ( norway ) .
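A rough sketch of the curvature-based segmentation pipeline described above: Gaussian smoothing, a discrete curvature along four directions, cores where the minimum curvature is positive, extension of the cores down to a small negative curvature level, and an erosion-dilation (opening) with a "+"-shaped kernel. The parameter values, the sign convention taken for "curvature", and the synthetic test image are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_granules(img, sigma=8.0, extend_level=-0.001):
    """Curvature-based granule segmentation sketch (single frame)."""
    sm = ndimage.gaussian_filter(np.asarray(img, float), sigma)

    def curvature(shift):
        # minus the second difference along one direction, so that bright,
        # convex granule interiors get positive values (sign is an assumption)
        fwd = np.roll(sm, shift, axis=(0, 1))
        bwd = np.roll(sm, tuple(-s for s in shift), axis=(0, 1))
        return -(fwd - 2.0 * sm + bwd)

    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]       # x, y and the two diagonals
    min_curv = np.min([curvature(s) for s in directions], axis=0)

    core = min_curv > 0.0                                 # granule cores
    grow = min_curv > extend_level                        # allowed extension region
    mask = ndimage.binary_propagation(core, mask=grow)    # extend cores outward

    plus = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]], bool)
    mask = ndimage.binary_opening(mask, structure=plus)   # remove very small features

    labels, n_granules = ndimage.label(mask)
    return labels, n_granules

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 6)  # synthetic "granulation"
    labels, n = segment_granules(fake, sigma=3.0, extend_level=-0.0005)
    print("granules found:", n)
```

Grouping the labelled granules into families then amounts to linking labels that overlap from one frame to the next and taking the connected components of that linkage over time.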
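The statistical check proposed above (whether MEs lie closer to TFG borders than randomly placed points would) could be sketched with a distance transform of the border mask plus a Monte-Carlo comparison. The border mask, ME positions and test statistic below are toy placeholders, not the actual Hinode measurements:

```python
import numpy as np
from scipy import ndimage
from scipy.stats import ks_2samp

def distance_to_borders(border_mask):
    """Euclidean distance of every pixel to the nearest border pixel."""
    # distance_transform_edt measures the distance to the nearest zero pixel,
    # so feed it the complement of the border mask
    return ndimage.distance_transform_edt(~border_mask)

def me_vs_random(border_mask, me_rc, n_random=10000, seed=0):
    """Compare ME-to-border distances with those of uniformly random points."""
    dist = distance_to_borders(border_mask)
    d_me = dist[me_rc[:, 0], me_rc[:, 1]]
    rng = np.random.default_rng(seed)
    rr = rng.integers(0, border_mask.shape[0], n_random)
    rc = rng.integers(0, border_mask.shape[1], n_random)
    d_rand = dist[rr, rc]
    return d_me.mean(), d_rand.mean(), ks_2samp(d_me, d_rand)

if __name__ == "__main__":
    # toy border mask: a coarse grid standing in for mesogranular cell boundaries
    mask = np.zeros((200, 200), bool)
    mask[::25, :] = True
    mask[:, ::25] = True
    # toy MEs scattered near the grid lines
    rng = np.random.default_rng(1)
    base = rng.integers(0, 8, (50, 2)) * 25
    me = (base + rng.integers(-2, 3, (50, 2))) % 200
    print(me_vs_random(mask, me))
```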
we investigate the relation between trees of fragmenting granules ( tfgs ) and the locations of concentrated magnetic flux in internetwork areas . the former have previously been identified with mesogranulation . while a relationship has been suggested to exist between these features , no direct evidence has yet been provided . we present some preliminary results that show that concentrated magnetic flux indeed collects on the borders of tfgs .
transcriptional regulation plays a crucial role in many biological processes , ranging from intracellular metabolic and physiologic homeostasis to cellular differentiation and development . alterations of regulatory specificity are among the major sources of phenotypic diversity and evolutionary adaptation .transcriptional regulation is mainly controlled by a class of dna binding proteins known as transcription factors ( tfs ) .typically , a tf is able to recognize and bind to a specific set of similar sequences , centered around a consensus one .this set of sequences defines a binding motif , which can be synthetically represented in a position weight matrices ( pwm ) , fully describing the variability and the information content of the characteristic sequence pattern , under the assumption that each tf - dna base interaction is independent .the combined regulatory actions of the set of expressed tfs on their target genes in a particular cell type define the so - called _ regulatory network_. deciphering how these regulatory networks and their components evolved has been a crucial quest over the last decades .tfs are usually classified in families on the basis of the structural similarity of their dna - binding domains ( dbds ) , i.e. , the protein components that mediate the tf - dna interaction . however , this classification lacks the detailed information about the preferences of sequence binding , which is ultimately the feature that defines the set of potential target genes of a tf .currently , thanks to the remarkable progress in the characterization and classification of the human repertoire of tfs and their pwms , it is possible to introduce an organization of tfs into families based on their preferences of sequence binding ( here called `` motif families '' ) , and to address quantitatively the evolutionary origins of this organization .this is the main goal of the present paper .in fact , we propose a simple model , based on the birth - death - innovation ( bdi ) paradigm , to describe the evolution of tf binding preferences , and in particular to capture the mechanisms that shaped the tf distribution in motif families. the model will be used as a `` neutral scenario '' to identify the main evolutionary forces acting on the tf repertoire and on its regulatory strategies in complex eukaryotes . to this end , it is also important to characterize a few relevant features of the eukaryotic repertoire of tfs , which will be important ingredients of our model . in discussing this issue, a comparison between eukaryotes and prokaryotes is fairly instructive . _ the constant repertoire of dbds in the post - metazoan era points to _ cis - innovation _ as the dominant source of pwm evolution . 
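As a concrete illustration of the PWM representation described above (per-position base frequencies under the independence assumption, turned into log-odds scores), here is a minimal sketch; the aligned binding sites are hypothetical and not taken from any database:

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=0.5):
    """Position weight matrix (log-odds against a uniform background) from a
    set of aligned binding sites, assuming independence between positions."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)
    for s in sites:
        for j, b in enumerate(s):
            counts[BASES.index(b), j] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)

def score(pwm, seq):
    """Additive PWM score of a sequence with the same length as the motif."""
    return sum(pwm[BASES.index(b), j] for j, b in enumerate(seq))

if __name__ == "__main__":
    sites = ["TGACGT", "TGACGC", "TGATGT", "TGACGT", "AGACGT"]   # hypothetical sites
    pwm = pwm_from_sites(sites)
    print(score(pwm, "TGACGT"))   # the consensus scores highest
    print(score(pwm, "AAAAAA"))
```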
_+ in bdi models , the evolutionary mechanisms taken into consideration are birth ( duplication ) of a gene or a protein domain , death ( deletion ) , and innovation ( the acquisition of a new element ) .there are essentially two main ways in which an organism can acquire a tf with a new set of binding sequences , thus introducing in the dynamics of the pwms ( and thus of the motif families ) a completely new class .it can arise from duplication of an existing tf followed by a mutation - induced divergence of its binding preferences ( hereafter referred to as _ cis - innovation _ ) , or there can be a _ de novo _ creation or acquisition of a new dbd .however , this second possibility intuitively requires a much more complex scenario , involving the `` birth '' of a functional sequence from a non - coding background or an event of horizontal gene transfer , which seems to be extremely rare in higher eukaryotes .this observation can be substantiated by looking at the evolution of dbds .dbds have a different distribution among species in prokaryotes and eukaryotes .independent studies in bacteria , archaea , plants and animals have consistently shown that dbds are drawn from relatively small , anciently conserved repertoires . however , such conserved repertoires are superkingdom - specific , with prokaryotes and eukaryotes sharing only few dbds .prokaryotic conserved dbds are present in all the species , with no clearly distinguishable partition within the major bacterial phyla , probably due to the extensive use of horizontal gene transfer . on the other hand , the conservation patterns of dbds within the three main eukaryotic kingdoms ( metazoa , fungi and plants ) are much more well defined , thus allowing a rough reconstruction of their evolutionary history .a relative small portion of eukaryotic dbd families ( 29% according to ) are shared among all the three kingdoms , including homeobox families , zinc fingers , hlh and bzips .focusing on metazoa , there is evidence that most of the tfs classes originated at the dawn of the animal kingdom , with dbds highly conserved over the last 600 million years .therefore , two major bursts of dbd family expansion can be identified : the first when eukaryotes branched off from prokaryotes and the second at the common ancestral node between animals and fungi .in particular , the metazoa - specific wave of expansion and diversification of tfs is thought to have provided the basal toolkit for generating the metazoan multicellular condition through the process of development .following these observations , we shall assume the dbd repertoire of complex eukaryotes as essentially stable over the last 600 million years .therefore , it is reasonable to assume a low rate of de novo innovation in the post - metazoan era , as confirmed by the very few appearances of new dbds , while most of the innovation , and thus the regulatory network rewiring , seems to come from a duplication - divergence mechanism , or in other words from _ cis - innovation_. _ evolution of the set of tfs versus evolution of their binding preferences . _+ it is well known that there is a precise relation between the total number of tfs of an organism and the genome size . 
in prokaryotesthe number of tfs increases almost quadratically with the genome size , while this increase is much slower in eukaryotes ( power - law exponent ) .several models have been proposed to explain this intriguing relation .all these models essentially agree on the idea that the total number of tfs is related to the organism complexity , as roughly captured by its genome size .it can be argued that this value has been tuned to an optimal one to address in the most efficient way the regulatory needs of the organism .in fact , it has been observed that an upper bound must exist on the total number of tfs to ensure an optimal coding strategy in which misrecognition errors are minimized .since we aim to describe only the evolution of the tf regulatory strategies in complex eukaryotes , we shall assume that the mean number of tfs is essentially constant over time and stably close to the optimal value .in fact , the dynamics in which we are interested in is the evolution of the binding preferences of these tfs , which is presumably acting on a faster timescale with respect to the changes in the tf total number .this assumption of a separation of time scales is in line with the notion of punctuated equilibrium often implied in several evolutionary models : long period of stasis are punctuated by short bursts of evolutionary activity that involve radical alterations of the duplication and elimination rates . between these periods of drastic changes , the system seems to rapidly relax to equilibrium .the assumption of equilibrium naturally implies an approximate balance between the mechanisms generating an inflow and an outflow of genes . as a result, the dynamics of binding preferences will be described by a process of duplication , deletion and divergence , while the total number of tfs is essentially stable .given the evidence that de novo innovation is basically negligible in higher eukaryotes , the equlibrium condition translates in equal deletion and duplication rates .in fact , cis - innovation can only change the binding preferences of already existing genes . on the contrary ,bdi models usually studied in the context of protein domain evolution consider de novo innovation as the main source of innovation .this paper proposes a mathematical description belonging to the bdi class , but conveniently modified to model the evolution of tf pwms in higher eukaryotes .we will use a specific classification of tfs in motif families , i.e. grouping tfs with similar binding preferences , as a tool to understand the tf organization in terms of regulatory strategies . as we shall see, the model describes remarkably well , _ with only one free parameter _ , the global evolution of motif families , suggesting that the abundance of most of them have been only marginally modulated by selective pressure. 
indeed , the `` core '' of the distribution can be fairly well explained by a random stochastic process based on a scenario of neutral evolution .however , two macroscopic deviations from our random model can be identified , which can be associated to selective forces acting on the regulatory network .the detailed study of these deviations will allow us to shed some light on the complex evolution of the tf repertoire of higher eukaryotes .the organization of dbd families into more specific motif families will be also analyzed and compared with our null model predictions .few classes of tf dbds will be shown to differentiate in an `` anomalous '' way with respect to the apparently neutral average behavior , and these deviation find motivation in specific selective pressures already suggested to act on those tfs in independent studies .finally , a comparison between the motif family structure in different species suggests a general trend of increasing `` redundancy '' in tf binding preferences with organism complexity .large - scale studies have reported that tfs with similar dbd sequences tend to bind very similar dna sequences , while tfs with different dbds have clearly distinct specificities .more importantly , it has been shown that distinct tfs can be associated to a single pwm , which characterizes the binding preferences of the group .following , we propose a new classification of tfs based on pwms , rather than on dbds or on sequence homology .the result of this classification is an organization of tfs in what we call _ motif families _ , grouping together tfs with similar binding preferences ( i.e. , with the same pwm , see materials and methods ) . in order to obtain such motif families, we built a network where each gene is a vertex and an indirect edge connects two tfs if they are annotated with at least one common pwm .the network defined in this way is characterized by several disconnected components of high density , each of which defines a motif family .figure [ fig:1 ] shows a snapshot of these families .most of these components are cliques , i.e. groups of tfs with at least one pwm in common among all the members .isolated tfs were considered as motif families of size 1 .figure [ fig:2 ] reports the size distribution of these families .it is interesting to compare this classification with the usual one based on dbds .we verified that it never happens that different dbds are contained in the same motif family , while most of the dbds families are split in smaller more specific motif families .indeed , the classification in terms of motif families represents a finer partition of the tf repertoire with respect to the usual one based on dbds .moreover , the motif family classification is expected to be more closely related to the regulatory potential and thus to the functions of tfs .in fact , tfs with similar binding motifs show a high tendency to co - regulate the same target genes , and more generally they show sets of target genes with similar functional annotations .the empirical distribution of tfs in these motif families is supposed to be strictly related to the regulatory strategies of an organism , since changes in the pwms associated to a tf imply alterations of its regulatory ability .the size distribution of these motif families in figure [ fig:2 ] is the observable that we aim to explain in terms of a simple evolutionary model . 
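The motif-family construction just described — TFs as vertices, an edge whenever two TFs are annotated with at least one common PWM, families as the connected components — can be sketched directly; the annotation table below is a toy stand-in for the cis-BP data:

```python
from collections import defaultdict

def motif_families(tf_to_pwms):
    """Group TFs into motif families: connect two TFs whenever they share at
    least one PWM identifier and return the connected components of the
    resulting graph (isolated TFs become families of size 1)."""
    # adjacency via shared PWM identifiers
    pwm_to_tfs = defaultdict(set)
    for tf, pwms in tf_to_pwms.items():
        for p in pwms:
            pwm_to_tfs[p].add(tf)
    adj = defaultdict(set)
    for tfs in pwm_to_tfs.values():
        tfs = list(tfs)
        for i, u in enumerate(tfs):
            for v in tfs[i + 1:]:
                adj[u].add(v)
                adj[v].add(u)

    # connected components by depth-first search
    seen, families = set(), []
    for tf in tf_to_pwms:
        if tf in seen:
            continue
        stack, comp = [tf], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        families.append(sorted(comp))
    return families

if __name__ == "__main__":
    # toy annotation table (TF -> set of PWM ids); identifiers are made up
    toy = {"TF1": {"M1"}, "TF2": {"M1", "M2"}, "TF3": {"M2"},
           "TF4": {"M7"}, "TF5": {"M9"}}
    fams = motif_families(toy)
    print(fams, sorted(len(f) for f in fams))   # one family of size 3, two of size 1
```

Run on the full TF-to-PWM annotation table, the list of component sizes returned by this function is exactly the family-size histogram analysed in figure [fig:2].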
.the red - line is the best - fit model according to maximum likelihood estimation , which has a goodness - of - fit p - value .the model captures the general trend , but clearly underestimates the number of families of size 1 and does not predict the presence of the largest families . ]the model we propose belongs to the general class of birth - death - innovation models ( for a thorough introduction see ) .the focus of these models is on systems in which individual elements are grouped into families whose evolution is ruled by the dynamics of their individual members .these models typically include the elementary processes of family growth via element duplication ( gene duplication ) , element deletion as a result of inactivation or loss ( negative gene mutation ) , and innovation or emergence of a new family ( neutral / positive gene mutation ) .all these processes are assumed to be of markov type and the corresponding rates are assumed to be constant in time . applying this theoretical framework to the evolution of the tf repertoire and tf binding preferences in higher eukaryotes requires , as mentioned in the introduction , a specific set of observation - based ingredients : 1 .tfs evolve via gene duplication and divergence as in the typical bdi dynamicsthe repertoire of dbds in higher eukaryotes is remarkably conserved over the last 600 million years .this implies that _ cis - innovation _ is the driving force of tf evolution on the time scale of pwm evolution we are interested in .in fact , our model description focuses only on the `` late '' stage of tf evolution in metazoans , in which very few new dbds , and thus new motif families , are created de novo .3 . there is a separation between the time scale typical of the evolution of tf binding preferences and the time scale over which the number of tfs change significantly . this is essentially equivalent to the equilibrium hypothesis usually introduced in bdi models , which implies that the family distribution is at a stationary state and the number of tfs is approximately constant .based on these ingredients , we propose a null - model of the bdi type to describe the specific features of the motif family distribution of tfs in higher eukaryotes ( and in particular in the human case ) and then use this model to identify deviations from this neutral scenario . to introduce the model in more detail ,let us define as `` class '' the set of all families of size .let be the number of families in the -th class , be the total number of classes ( or the maximum size of a family ) , and the total number of elements , thus representing also the extreme value for .acting at the `` local '' level on individual elements , the evolutionary dynamics shapes `` globally '' the system relocating a family from class to class in case of duplication ( or to class in case of removal ) . typically , bdi models introduce innovation in the model only as a constant inflow in the class 1 due to de novo emergence of a new family ( increase of by 1 ) . as discussed above ,we propose a generalization of the model by introducing also cis - innovation , in which an element of a family in class mutates and gives rise to a new family .this results in the relocation of that element in class 1 and of its original family in class ( i.e. 
a decrease of and increase of and by 1 ) .let , , and be the rates of element birth , death , _de novo_-innovation and _ cis_-innovation respectively .solving the master equations at the steady state ( see the materials and methods section ) one finds : where .+ the corresponding probability distribution can be found straightforwardly by normalization : a few comments are in order at this point : * the normalized solution in equation [ eqx2 ] gives a one - parameter prediction about the size distribution of motif families .the functional dependence on is equivalent to the one that can be obtained with standard bdi models , i.e. , with de novo innovation as the only source of innovation .however , our generalized model suggests a different interpretation of the parameter .in fact , and thus its value depends on the rate of cis - innovation . *the steady state condition is , or equivalently the total number of elements is constant over time .this condition translates into the parameter constraint . *as previously discussed , we expect to be very small in our case ( i.e. , negligible de novo innovation ) , and accordingly we shall approximate in the following .we shall further verify `` a posteriori '' the validity of this approximation using an independent analysis on the evolution of tfs in different lineages ( see below ) . in this regime, the stationary condition simplifies to a balance between duplication and deletion rates , and .therefore , the deviation of from 1 allows to directly estimate the magnitude of with respect to , i.e. , the relevance of cis - innovation with respect to the birth / death rate . as we will see below, a comparison with the data in the human case supports a value of , thus highlighting the important role that cis - innovation had in the recent evolution of the eukayotic tf repertoire .moreover , within this approximation , also the family distribution in equation [ eqx ] can be written in a very simple and compact form : + f_i = n ( ) ^i = n ( 1- ) .[ eq1 ] * an analytical estimate of the number of classes in which the elements are organized when the dynamics reaches equilibrium can also be calculated as : + = _i^m ln(1- ) [ eq3 ] + this represents the neutral model prediction on the number of motif families given a set of tfs subjected to the described bdi dynamics .we compared the distribution predicted by our model , equation [ eqx2 ] , with the empirical tf organization in motif families obtained from the cis - bp database ( materials and methods ) .this comparison has been quantified by estimating a best fit value for with a maximum likelihood method and a p - value associated to the quality of the fit using a _ goodness - of - fit _ test based on the kolmogorov - smirnov statistics ( materials and methods ) .although the central part of the size distribution seems well captured by the theoretical model , a direct fit of the whole distribution gives very low p - values ( , see figure [ fig:2 ] ) .this poor p - value shows the presence of significative deviations with respect to our random null - model .these deviations can be easily identified looking at figure [ fig:2 ] .they are located at the two ends of the distribution and involve a few of the largest families and the smallest ones ( i.e. , families of size 1 ) . 
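A sketch of the fitting procedure: a maximum-likelihood estimate of the single parameter followed by a parametric-bootstrap Kolmogorov-Smirnov goodness-of-fit test (the bootstrap is needed because the parameter is estimated from the same data). The family-size law is written here as a plain geometric distribution P(i) = (1−ρ)ρ^(i−1), i ≥ 1, which is an assumption standing in for eq. [eqx2], whose exact form is not legible above; the data are synthetic:

```python
import numpy as np

def fit_rho(sizes):
    """MLE of rho for the assumed law P(i) = (1 - rho) * rho**(i - 1), i >= 1,
    for which the mean family size is 1 / (1 - rho)."""
    sizes = np.asarray(sizes, float)
    return 1.0 - 1.0 / sizes.mean()

def ks_stat(sizes, rho):
    """KS distance between the empirical CDF and the model CDF F(i) = 1 - rho**i."""
    sizes = np.sort(np.asarray(sizes))
    n = len(sizes)
    emp_hi = np.arange(1, n + 1) / n
    emp_lo = np.arange(0, n) / n
    model = 1.0 - rho ** sizes
    return max(np.abs(emp_hi - model).max(), np.abs(emp_lo - model).max())

def gof_pvalue(sizes, n_boot=2000, seed=0):
    """Parametric-bootstrap goodness-of-fit p-value, re-estimating rho on
    every synthetic sample (the same statistic calibrates its own null)."""
    rng = np.random.default_rng(seed)
    rho = fit_rho(sizes)
    d_obs = ks_stat(sizes, rho)
    n = len(sizes)
    d_boot = np.empty(n_boot)
    for b in range(n_boot):
        synth = rng.geometric(1.0 - rho, size=n)    # support 1, 2, 3, ...
        d_boot[b] = ks_stat(synth, fit_rho(synth))
    return rho, d_obs, (d_boot >= d_obs).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    toy_sizes = rng.geometric(0.45, size=300)       # synthetic family sizes
    print(gof_pvalue(toy_sizes))
```

Repeating the fit while progressively excluding the largest families and a fraction of the size-1 families reproduces the kind of acceptance scan shown in figure [fig:3].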
using the ks test and a p - value threshold for acceptance of 0.75 , we can identify in a quantitatively and consistent way the fraction ( about 25% ) of isolated tfs and the number ( three ) of the largest families which account for most of the deviations from the null model ( materials and methods and figure [ fig:3 ] ) .if we subtract from the whole distribution these two tails ( for a total of tfs , i.e. about 16% of the total number of tfs in analysis ) , we eventually find a remarkable agreement between the model predictions and experimental data ( , see figure [ fig:4 ] ) .therefore , the `` core '' of the distribution is well described by the exponential - like solution of eq .( [ eq1 ] ) , while deviations are due to few families that can be isolated and studied in detail .this suggests that the evolution of a large portion of the tf repertoire in higher eukaryotes was driven by a neutral stochastic process of the bdi type with only two exceptions : an excess of isolated tfs and three large families which on the contrary are characterized by a strong level of duplication without innovation .let us address in more detail these two deviations. indicates the threshold in size above which families are excluded from the sample . on y - axis indicates the number of families of size one excluded from the sample .an increase in or reduces the sample size in analysis by reducing the number of tfs considered . for each sample sizea goodness - of - fit test for the best - fit model was performed and the corresponding p - value is reported with the color code in the legend . considering a p - value of as the acceptance limit identifies as the size threshold at which the fit is acceptable .this corresponds to the exclusion of the three largest families .for such a threshold , the optimal values for the p - value are reached for values of in the range . ] .the line represents the prediction of our model with the best fit choice of the parameter , which turns out to fit very well the data contained in the reduced sample with a goodness of fit p - value .the best fit value does not differ substantially from the value that is obtained by fitting the whole empirical sample as in figure [ fig:2 ] . ]the fitting procedure allows us to obtain a rough estimate of the fraction of size 1 families which are not explained by our theoretical description .this number is in the range , i.e , in between 20% and 30% of the total number of size 1 families ( materials and methods and figure [ fig:3 ] ) .the emergence of a size 1 family in our model description can come from de novo innovation or from duplication of an existing tf , followed by a cis - innovation event that defines a new pwm .we argued that de novo innovation is negligible in our case of study , so we expect that most of the isolated tfs are the result of a previous duplication event . 
in this scenario , they should share their dbd at least with the tf they duplicated from , and we verified that indeed empirically this is the case for the majority of isolated tfs , thus supporting our model description .however , some isolated tfs have a dbd which is not shared with any other tf ( 12 in our sample ) or are characterized by a dbd which is classified as unknown ( 44 in our sample ) , so also potentially unique .the presence of these isolated tfs with unique dbds can be explained by the two following mechanisms ._ newly acquired dbds ._ a few of them are due to actual recent de novo innovation events , thus introducing new dbds in the last period of post - metazoan evolution .these `` recent '' tfs appear in our analysis most likely as size 1 families only because they had not time to enter into the duplication process .looking at the orthology maps we can rather easily identify these dbds and the corresponding tfs ( see supplementary material and below ) which turn out to be very few , thus supporting `` a posteriori '' our approximation ._ singleton genes ._ the majority of excess isolated tfs are most probably _ singleton genes _ for which duplication is peculiarly avoided .the existence of this class of genes has been recently proposed .they are supposed to be ancestral genes of prokaryotic origin , addressing basilar functions and requiring a fine - tuning of their abundances , thus making their duplication particularly detrimental .they would be the result of a selective pressure to avoid duplication , and thus , by definition , can not be explained by our neutral model . since singleton genes are not included in our model , they are good candidates to explain the excess of isolated tfs in figure [ fig:2 ] .to distinguish between putative singleton genes and recent genes in the motif families of size 1 , we analyzed their evolutionary origin .more specifically , we manually inspected the taxonomic profiles of these 56 tfs in the eggnog database : 16 of them have a putative origin at the last universal common ancestor ( luca ) , i.e. they are shared among bacteria , archaea and eukarya ; 25 are in common among all eukarya , 4 among opisthokonta , 3 among metazoa and 8 have a post - metazoan origin. 
therefore , at least 41 of these tfs have a very ancient origin ( luca + eukarya ) and could well be examples of `` singleton '' tfs , while 8 are instead of very recent origin ( post - metazoan , but 4 of them are shared only among euteleostomi ) and are thus likely to be `` recent '' tfs .these recent tfs constitute less than the of our sample , supporting `` a posteriori '' the approximation .to find additional evidence that these 41 ancient tfs can be bona fide `` singleton genes '' , we queried the ngc5.0 database , which provides information about the gene duplicability for a large set of cancer genes .14 of our putative singletons are present in this collection , and 12 of them show indeed no evidence of duplicability ( at 60% coverage ) , thus supporting their `` singleton '' nature .it is interesting to notice that the overall number of putative singletons ( 41 genes ) is compatible to the size of the deviation from the random null model ( 40 ) observed in our best fit tests .our analysis singles out also three over - expanded families .the over - expansion can be due to two parallel mechanisms : an enhanced rate of duplication and/or a decreased rate of cis - innovation .looking at the three over - expanded families , three very homogeneous groups of tfs can be recognized : the fox family ( size 41 ) , the hox family ( size 34 ) and another homeobox family ( size 25 ) .these three families are good examples of the two mechanisms mentioned above .the hox family contains tfs well known for their role in morphogenesis and animal body development . also tfs in the otherover - expanded homeobox family show enrichments for go annotations related to _morphogenesis _ , _ development _ and _ pattern specification _ , as reported in table [ tab : 1 ] .these two families may well represent cases of positive selection for duplication and subsequent fixation .due to their crucial role in morphogenesis , these tfs could have been under a strong conservation pressure that inhibited mutations leading to gene loss after duplication .the third family , which is the largests one , collects most of the fox ( forkhead box ) tfs present in the sample . tfs belonging to this family are known to be `` bispecific '' , i.e. they recognize two distinct dna sequences , and for this reason they play an important and peculiar role in the regulatory network of metazoans .while their over - expansion can be due to positive selection for functional reasons , their unique feature of bispecific binding could suggest that innovation is particularly difficult for these tfs .in fact , bispecifity is likely to impose stronger constraints , from a structural point of view , than those imposed on other tfs . in this perspective , it is interesting to stress the different distribution of forkhead and homeobox genes in motif families .almost all the forkhead genes are collected in this single large motif family , suggesting no cis - innovation events that would have moved some of these genes in families of other sizes .only 6 forkhead tfs are present in other motif families . 
on the other hand , homeobox genes , besides the two main families discussed above , are dispersed in several other motif families , thus are associated to a variety of pwms .this difference suggests that duplication of homeobox genes has been positively selected at a certain time point probably because of their crucial role in the development of multicellular organisms ( see table [ tab : 1 ] ) , but cis - innovation have progressively changed their binding preferences . on the other hand ,very few events of cis - innovation are associated to fox genes that indeed `` accumulated '' in a single motif family .these interpretations of the possible evolutionary origins of the over - expanded motif families will be addressed in more detail in the next section . [cols="<,^,>",options="header " , ] so far , we considered the `` global '' distribution of all tfs in motif families .however , the evolution of different dbd families can be considered as independent , and thus can be studied separately . in other words , we can focus on the dynamics of the splitting of each dbd family in smaller motif families through the process of cis - innovation followed by duplication and deletion .indeed , the dynamics of duplication and deletion can only expand or reduce a certain dbd family size , while cis - innovation can change the binding preferences of a tf , thus creating new motif families , but it is not expected to create a brand new dbd class .we provided evidence that the dbd repertoire has remained essentially constant on the evolutionary time scales here in analysis .therefore , our model can also be used to predict how , in a neutral scenario , each dbd family is expected to split in different motif families through the cis - innovation process .our model directly provides an analytical estimate of the average number of families in which a set of tfs is expected to be organized in equation ( [ eq3 ] ) .to evaluate also the possible variability of this neutral expectation , we ran simulations of the model for different system sizes , corresponding to the different numbers of tfs in the dbd families .the rates of duplication , deletion and cis - innovation are set by the fit of the global distribution of motif families in figure [ fig:2 ] ( ) and by the additional constraint . in a neutral scenario, we expect the rates to be the same for all dbd families .clearly , considering the dbd families separately significantly reduces the statistics that can be used , but we can take advantage of a parameter - free prediction of the model ( fitted on the whole dataset ) to compare with .figure [ fig:5 ] shows the numbers of motif families in which a dbd is divided , given its number of tfs .the analytical prediction ( dashed line ) and the results of model simulations ( shaded areas represent 1 and 3 standard deviations from the average simulated behaviour ) are compared with empirical data ( symbols ) . 
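A minimal realization of the simulations described above: starting from a single ancestral family of a given size (one DBD), balanced duplication/deletion plus cis-innovation events are applied at a fixed total number of elements, and the number of motif families at the end of the run is recorded. The Moran-like update scheme, the rate p_cis, and the run lengths are illustrative assumptions, not the exact protocol behind figure [fig:5], and the mapping between p_cis and the fitted parameter is not reproduced here:

```python
import random

def simulate_splitting(n_elements, p_cis=0.3, events_per_element=200, seed=0):
    """Starting from one family of n_elements, return the number of motif
    families after a run of balanced duplication/deletion and cis-innovation
    events at a fixed total number of elements."""
    rng = random.Random(seed)
    fams = [n_elements]                 # family sizes; the total is conserved

    def pick():
        # pick a family with probability proportional to its size
        r = rng.randrange(n_elements)
        acc = 0
        for idx, s in enumerate(fams):
            acc += s
            if r < acc:
                return idx

    for _ in range(events_per_element * n_elements):
        if rng.random() < p_cis:
            i = pick()                  # cis-innovation: one element founds a new family
            fams[i] -= 1
            fams.append(1)
        else:
            i, j = pick(), pick()       # a duplication in i balanced by a deletion in j
            fams[i] += 1
            fams[j] -= 1
        fams = [s for s in fams if s > 0]
    return len(fams)

if __name__ == "__main__":
    for n in (5, 20, 50, 100):
        reps = [simulate_splitting(n, seed=k) for k in range(5)]
        print(n, sum(reps) / len(reps))   # mean number of motif families per DBD size
```

Comparing an observed DBD's number of motif families against the mean and spread of many such runs is the kind of outlier test underlying figure [fig:5]: a forkhead-like DBD would sit well below the simulated mean, a highly diversified zinc-finger class well above it.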
while most of the splitting patterns of dbds families do not deviate significantly from the model prediction , three clear `` outliers '' can be observed .the forkhead dbd family shows a smaller than expected number of motif families .we have previously show that the largest motif family including most of the forkhead associated tfs is also over - expanded .more specifically , most of the forkhead tfs belongs to one single motif family .we speculated that the overexpansion could have been driven by structural constraints limiting the evolvability of the binding preferences .indeed , the small number of motif families in which this dbd class divides into can be indicative of a lower - than - average rate of cis - innovation .similarly , the homeobox dbd family appear to be partitioned in less motif families than expected , and as previously discussed , only some of these families appear to be over - expanded , probably for specific functional reasons .+ the other significant deviation from the model prediction concerns a zinc finger class of tfs , that appears to have greatly diversified the tf pwms .the corresponding motif families are not over - expanded , in fact they did not emerge as deviations in the previous analysis ( figure [ fig:2 ] ) .in fact , the histogram of their motif family sizes ( the analogous of figure [ fig:2 ] but restricted to this zinc finger tfs ) follows reasonably well our null model . however , the fitted parameter is well below the value obtained from the all set of tfs ( ) , thus confirming again that the rate of cis - innovation for this dbd family is higher than the average rate for all tfs .zinc finger tfs are known to be characterized by multiple tandem c2h2 zinc finger domains .such modularity enabled a rapid functional divergence among recently duplicated paralogs , as each domain in the protein can be gain , loss and mutate independently .this piece of evidence seems to further support a higher - than - average tendency of tfs in this dbd class to diversify their binding preferences . as given by eq .[ eq3 ] . in order to evaluate the fluctuations on the expectation, we simulated the evolution of dbd families , with starting size ranging from to , and .the two shaded areas correspond to 1 standard deviation and 3 standard deviations from the average .green diamond : zinc finger c2h2 family .cyan diamond : homeobox family .red diamond : forkhead family . ]this section addresses the differences in the motif family organization in different eukaryotic species .in particular , we focused on model species , which are expected to have well annotated tf repertoires .the same type of analysis presented in figure [ fig:2 ] was performed on the set of tfs of yeast and of three other species of increasing complexity in the animal lineage : _ c. elegans _ , _d. melanogaster _ and _ m. musculus_. figure [ fig:6 ] shows the histograms of the family size distributions and the corresponding fits with the null model in equation ( [ eqx2 ] ) . 
in all tested cases ,the motif families distribution follows the model functional form with an agreement comparable to the human case .however , there is a trend in the fitted parameter with complexity as measured by the number of tfs in the species ( or alternatively by the total number of genes ) .this trend is reported in figure [ fig:6 ] and it is sublinear in the investigated window of tf repertoires .the definition of indicates that this trend corresponds to a decrease rate of cis - innovation , with respect to the duplication rate , as the complexity of the organism increases .the value of intuitively represents the level of `` redundancy '' , i.e. , the tendency of tfs to keep the same binding preferences .indeed , for the limit value the distribution in equation ( [ eqx ] ) becomes a power - law distribution , thus with an increasing number of motif familes with a large number of tfs .figure [ fig:6 ] shows that this level of `` redundancy '' increases with the organims complexity .as we will argue in the discussion , this result can be linked to the known increased degeneracy of the tf binding sites in higher eukaryotes .a comparison of metazoan lineages shows that the ancestral metazoan genome included members of the bhlh , mef2 , fox , sox , t - box , ets , nuclear receptor , bzip and smad families , and a diversity of homeobox - containing classes , including antp , prd - like , pax , pou , lim - hd , six and tale .this implies that most of the human dbds originated at the dawn of the animal kingdom , before the divergence of contemporary animal lineages , providing a genetic toolkit conserved from the bilateria lineage on .once the main dbds were established , the discovery of new ones dropped down and the lineage speciation took advantage of a finer divergence dynamics at the regulatory level , by changing just the dna - sequence preferences of binding within the dbd . following this observation, this paper proposes a new way to organize transcription factors into families , which we call _ motif families _ , based on their pwms .it also introduces a simple one - parameter model of the bdi type to explain the distribution of tfs in these motif families .the novelty of this model with respect to standard bdi models is the introduction of what we called .cis - innovation accounts for a mutation - induced change of the binding preferences of an existing tf , thus implying a rewiring in the regulatory network but without the introduction of a new tf associated to a new pwm , which seems an extremely rare event on the time scales in analysis . using this simple theoretical description , we showed that the recent evolution of the majority ( more than 80% ) of the human tfs is rather well described by a neutral stochastic process of duplication , deletion and cis - innovation .furthermore we devised two main deviations from this neutral scenario : the overexpansion of three motif families and the `` freezing '' of a set of isolated tfs ( i.e. , size one motif families ) . 
the deviations which we observed , involving more or less 20% of the tfs ,seem to be due to opposite evolutionary pressures .the three over - expanded families are associated to an enhanced duplication rate and/or a decreased cis - innovation rate .the excess of isolated tfs seems instead to be mainly due to the inhibition of duplication for a specific set of ancient tfs , or `` singletons '' .the largest of the over - expanded families , collects most of the fox tfs ( forkhead box ) present in the tfs in analysis .the second and third over - expanded families contain tfs belonging to the homeobox family ( one of which contains most of the hox tfs ) .these last two families seems to represent examples of positive selection for duplication due to the crucial role their tfs play in morphogenesis and animal body development .indeed , a gene ontology analysis for their genes with respect to the entire set of tfs ( see materials and methods and table:[tab : 1 ] ) shows enrichments for go annotations related to _ morphogenesis _, _ development _ and _ pattern specification_. this result strongly supports the idea that the expansion of these families was positively selected for their crucial role in multicellularity evolution and animal speciation . .the overexpansion of fox genes can also be associated to positive selection because of their importance in the development of several metazoans , but probably coupled with a functional constraint related to the bispecificity of their dna binding limiting the rate of allowed cis - innovation .this low rate of cis - innovation was further confirmed by the low level of diversification of the forkehad dbd family into motif families presented in figure [ fig:5 ] .the analysis of the splitting of the dbd families into motif families also revealed a higher - than - average rate of diversification of the zinc finger dbd family . a result that seems compatible with the high level of `` evolvability '' of these tfs suggested in an independent study .we also detected an over - presence of families of size 1 , identifying a portion of tfs as deviation from the null model .we found a comparable number of tfs ( 41 ) that have a very ancient origin and no evidence of duplication , and thus are likely to be examples of the so - called _ singleton genes _ . according to the `` singleton hypothesis '' , these genes are indeed ancestral genes of prokaryotic origin , addressing basic functions and requiring a precise fine - tuning of their abundances . therefore , the effect of selection against duplication seems to be perfectly compatible with their proposed functional role .a major issue in the study of the evolution of regulatory networks is to identify those features of the network which can be in some way associated to the organism complexity .combinatorial regulation is a distinctive feature of complex eukaryotes . 
indeed ,prokaryotic and eukaryotic tfs use different binding strategies , with pwms of high and low information content respectively .this difference is related to the evolution of the combinatorial strategies of control , typical of higher eukaryotes , that can compensate the low information content of their tf binding sites by having specific combinations of tfs targeting the same promoter .this could have also been favoured by the widespread presence of transposable elements able to convey combinations of tf binding sites all over the genome .accordingly , we expect a tendency of eukaryotes of increasing complexity to increase their tf repertoire , as it can indeed be observed , to have a richer repertoire to implement combinatorial regulation. however , the increased degeneracy of tf pwms with the organism complexity can also have another relevant consequence .if the set of preferred binding sequences defining the motif family of a tf is loosely defined , it can include several possible sequences .thus , the mutation process is less likely to drive the tf away from its motif family .this would translate in a lower cis - innovation rate in our model for organisms with higher complexity , and this trend seems indeed to emerge from our comparison of the different motif family organization in different species ( figure [ fig:6 ] ) .complexity seems to be associate to the `` redundancy '' of the tf repertoire , i.e. , to the presence of large families of tfs which recognize the same binding sequences .it would be interesting to understand the consequences of this observation on the topology and function of the regulatory network .we took advantage of the catalog of inferred sequence binding preferences ( cis - bp database ) , which collects the specificities of a vast amount of tfs in several species .the pwms in this database were either directly derived from systematic protein binding microarray ( pbm ) experiments or inferred by overall dbd amino acid identity .furthermore , the cis - bp database gathers data from all the main existing databases ( such as transfac , jaspar and selex ) and several chip - seq experiments , which had been used for cross - validation . to construct the motif families ,we downloaded the pwms associated to each tf , considering both those obtained from experimental assays and the inferred ones . in this way, we obtained 4172 pwm unique identifiers ( pwd ids ) annotated to 906 different tfs .we define as `` class '' the set of all families of size . represents the number of families in the i - th class and be the total number of classes corresponding to the possible family sizes , with at most equal to the total number of elements .the evolution equations are : where , , and denote the birth , death , _ de novo _ innovation and _ _ cis-__innovation rates respectively . the model can be mapped in the simplest case of the bdi models discussed in with the substitution and . from the general solution discussed in , we obtain at steady state : where . if , following , we assume a balance between birth and death rates then and eq.([sol ] ) becomes : the deviation of from 1 allows to estimate the magnitude of with respect to . in the limit of ( ) the usual power - like behaviour of the standard dbi model is recovered . since we know , we shall assume and the solution of the model eq.([sol2 ] ) becomes a function only of . 
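The resulting one-parameter family-size law is, up to normalization, of the logarithmic-series form θ^i/i (made explicit in the next paragraph), so fitting it only requires the sample mean family size. The sketch below is a minimal illustration of that fit. The closed-form maximum-likelihood expression printed in the text below is garbled in this copy; the formula implemented here, θ̂ = 1 − exp(1/⟨i⟩ + W₋₁(−e^{−1/⟨i⟩}/⟨i⟩)), is my reconstruction from the log-series mean relation ⟨i⟩ = θ/[(1−θ)(−ln(1−θ))], cross-checked against a direct numerical solve. The toy sample is illustrative only.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def theta_mle_closed_form(sizes):
    """MLE of theta for the logarithmic-series distribution
    p_i = theta**i / (i * (-log(1 - theta))), i = 1, 2, ...
    For this one-parameter family the MLE equates the sample mean to the
    model mean theta / ((1 - theta) * (-log(1 - theta))); the closed form
    below (via the W_{-1} branch of the Lambert function) is a
    reconstruction of the garbled formula in the text.  Requires mean > 1.
    """
    m = np.mean(sizes)                       # mean motif-family size
    w = lambertw(-np.exp(-1.0 / m) / m, k=-1).real
    return 1.0 - np.exp(1.0 / m + w)

def theta_mle_numeric(sizes):
    """Same estimate, obtained by solving the mean equation numerically."""
    m = np.mean(sizes)
    f = lambda th: th / ((1 - th) * (-np.log(1 - th))) - m
    return brentq(f, 1e-9, 1 - 1e-9)

sizes = [1]*30 + [2]*10 + [3]*5 + [5]*2 + [12]   # toy family-size sample
print(theta_mle_closed_form(sizes), theta_mle_numeric(sizes))
```

For the actual analysis one would pass the observed motif-family sizes (the partition of the 906 TFs into families) in place of the toy sample.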
to perform a mle of the parameter , we must first move from the distribution of the number of families to a probability distribution .this is simply achieved by normalizing the .the normalization constant assumes a very simple form in the large limit : ^{-1 } \stackrel{m \to \infty}{= } [ -\ln(1-\theta)]^{-1},\ ] ] leading to the probability distribution : p_i= [ prob ] we show in the supplementary material that for our range of values of and the error induced by this approximation is negligible . the probability distribution in eq .[ prob ] is simple enough to allow an analytic determination of the mle for ( see the supplementary material for the detailed calculation ) , which turns out to be : _ mle=1-e^+w_-1(-e^- ) where is the mean size over the sample and w is the lambert function .we compared the empirical data with our model , defined by , following the strategy proposed in .more precisely we used the kolmogorov - smirnov ( ks ) statistic as a measure of the distance between the distribution of the empirical data and our model . in order to obtain an unbiased estimate for the p - value, we created a set of one thousand synthetic data samples with the same size of the empirical one , drawn from a distribution with the same value .for each synthetic sample , we computed the ks statistic relative to the best - fit law for that set and constructed the distribution of ks values .the p - values reported in the paper represent the fraction of the synthetic distances larger than the empirical one .we performed a gene ontology analysis on the genes belonging to the union of the three larger motif families using the overrepresentation test of the panther facility and selecting only the biological process ontology .we chose as a background for the test the entire data sample ( 906 tfs ) to eliminate annotations simply associated to generic regulatory functions of tfs .p - values were evaluated using the bonferroni correction .the work was partially supported by the compagnia san paolo grant genernet .we thank f.d . ciccarelli and m. cosentino lagomarsino for critical reading of the manuscript , and a. colliva , m. fumagalli and a. mazzolini for useful discussions .the authors declare no competing financial interests .gretchen bain , els c robanus maandag , david j izon , derk amsen , ada m kruisbeek , bennett c weintraub , ian krop , mark s schlissel , ann j feeney , marian van roon , et al .e2a proteins are required for proper b cell development and initiation of immunoglobulin gene rearrangements . , 79(5):885892 , 1994 .carlos d bustamante , adi fledel - alon , scott williamson , rasmus nielsen , melissa todd hubisz , stephen glanowski , david m tanenbaum , thomas j white , john j sninsky , ryan d hernandez , et al .natural selection on protein - coding genes in the human genome ., 437(7062):11531157 , 2005 .emmanuelle roulet , stphane busso , anamaria a camargo , andrew jg simpson , nicolas mermod , and philipp bucher .high - throughput selex sage method for quantitative modeling of transcription - factor binding sites ., 20(8):831835 , 2002 . 
matthewt weirauch , ally yang , mihai albu , atina g cote , alejandro montenegro - montero , philipp drewe , hamed s najafabadi , samuel a lambert , ishminder mann , kate cook , et al .determination and inference of eukaryotic transcription factor sequence specificity ., 158(6):14311443 , 2014 .michael j stanhope , andrei lupas , michael j italia , kristin k koretke , craig volker , and james r brown .phylogenetic analyses do not support horizontal gene transfers from bacteria to vertebrates ., 411(6840):940944 , 2001 .alastair crisp , chiara boschetti , malcolm perry , alan tunnacliffe , and gos micklem .expression of multiple horizontally acquired genes is a hallmark of both vertebrate and invertebrate genomes ., 16(50):101186 , 2015 .j l riechmann , j heard , g martin , l reuber , c - z jiang , j keddie , l adam , o pineda , oj ratcliffe , rr samaha , et al .arabidopsis transcription factors : genome - wide comparative analysis among eukaryotes ., 290(5499):21052110 , 2000 .claire larroux , graham n luke , peter koopman , daniel s rokhsar , sebastian m shimeld , and bernard m degnan .genesis and expansion of metazoan transcription factor gene classes . , 25(5):98096 , may 2008 .mansi srivastava , oleg simakov , jarrod chapman , bryony fahey , marie e a gauthier , therese mitros , gemma s richards , cecilia conaco , michael dacre , uffe hellsten , claire larroux , nicholas h putnam , mario stanke , maja adamska , aaron darling , sandie m degnan , todd h oakley , david c plachetzki , yufeng zhai , marcin adamski , andrew calcino , scott f cummins , david m goodstein , christina harris , daniel j jackson , sally p leys , shengqiang shu , ben j woodcroft , michel vervoort , kenneth s kosik , gerard manning , bernard m degnan , and daniel s rokhsar . the amphimedon queenslandica genome and the evolution of animal complexity ., 466(7307):7206 , aug 2010 .arttu jolma , jian yan , thomas whitington , jarkko toivonen , kazuhiro r nitta , pasi rastas , ekaterina morgunova , martin enge , mikko taipale , gonghong wei , et al .dna - binding specificities of human transcription factors . , 152(1):327339 , 2013 .marcus b. noyes , ryan g. christensen , atsuya wakabayashi , gary d. stormo , michael h. brodsky , and scot a. wolfe .analysis of homeodomain specificities allows the family - wide prediction of preferred recognition sites . ,133(7):12771289 , jun 2008 .michael f berger , gwenael badis , andrew r gehrke , shaheynoor talukder , anthony a philippakis , lourdes pea - castillo , trevis m alleyne , sanie mnaimneh , olga b botvinnik , esther t chan , faiqua khalid , wen zhang , daniel newburger , savina a jaeger , quaid d morris , martha l bulyk , and timothy r hughes .variation in homeodomain dna binding revealed by high - resolution analysis of sequence preferences ., 133(7):126676 , jun 2008 .metewo selase enuameh , yuna asriyan , adam richards , ryan g christensen , victoria l hall , majid kazemian , cong zhu , hannah pham , qiong cheng , charles blatti , jessie a brasefield , matthew d basciotta , jianhong ou , joseph c mcnulty , lihua j zhu , susan e celniker , saurabh sinha , gary d stormo , michael h brodsky , and scot a wolfe . global analysis of drosophila cys-his zinc finger proteins reveals a multitude of novel recognition motifs and binding determinants ., 23(6):92840 , jun 2013 .jaime huerta - cepas , damian szklarczyk , kristoffer forslund , helen cook , davide heller , mathias c. walter , thomas rattei , daniel r. 
mende , shinichi sunagawa , michael kuhn , lars juhl jensen , christian von mering , and peer bork . eggnog 4.5 : a hierarchical orthology framework with improved functional annotations for eukaryotic , prokaryotic and viral sequences . , 44(d1):d286d293 , jan 2016 .omer an , giovanni m. dallolio , thanos p. mourikis , and francesca d. ciccarelli .ncg 5.0 : updates of a manually curated repository of cancer genes and associated properties from cancer mutational screenings . ,44(d1):d992d999 , jan 2016 .so nakagawa , stephen s gisselbrecht , julia m rogers , daniel l hartl , and martha l bulyk .dna - binding specificity changes in the evolution of forkhead transcription factors . , 110(30):1234912354 , 2013 .joseph f ryan , patrick m burton , maureen e mazza , grace k kwong , james c mullikin , and john r finnerty .the cnidarian - bilaterian ancestor possessed at least 56 homeoboxes : evidence from the starlet sea anemone , nematostella vectensis ., 7(7):r64 , jan 2006 .nicholas h. putnam , mansi srivastava , uffe hellsten , bill dirks , jarrod chapman , asaf salamov , astrid terry , harris shapiro , erika lindquist , vladimir v. kapitonov , jerzy jurka , grigory genikhovich , igor v. grigoriev , susan m. lucas , robert e. steele , john r. finnerty , ulrich technau , mark q. martindale , and daniel s. rokhsar .sea anemone genome reveals ancestral eumetazoan gene repertoire and genomic organization ., 317(5834):8694 , jul 2007 .elena simionato , valrie ledent , gemma richards , morgane thomas - chollier , pierre kerner , david coornaert , bernard m degnan , and michel vervoort .origin and diversification of the basic helix - loop - helix gene family in metazoans : insights from comparative genomics ., 7(1):33 , jan 2007 .nicole king , m jody westbrook , susan l young , alan kuo , monika abedin , jarrod chapman , stephen fairclough , uffe hellsten , yoh isogai , ivica letunic , michael marr , david pincus , nicholas putnam , antonis rokas , kevin j wright , richard zuzow , william dirks , matthew good , david goodstein , derek lemons , wanqing li , jessica b lyons , andrea morris , scott nichols , daniel j richter , asaf salamov , j g i sequencing , peer bork , wendell a lim , gerard manning , w todd miller , william mcginnis , harris shapiro , robert tjian , igor v grigoriev , and daniel rokhsar .the genome of the choanoflagellate monosiga brevicollis and the origin of metazoans ., 451(7180):7838 , feb 2008 .alessandro testori , livia caizzi , santina cutrupi , olivier friard , michele de bortoli , davide cora , and michele caselle .the role of transposable elements in shaping the combinatorial interaction of transcription factors ., 13(1):400 , jan 2012 .volker matys , olga v kel - margoulis , ellen fricke , ines liebich , sigrid land , a barre - dirrie , ingmar reuter , d chekmenev , mathias krull , klaus hornischer , et al .transfac and its module transcompel : transcriptional gene regulation in eukaryotes ., 34(suppl 1):d108d110 , 2006 .anthony mathelier , xiaobei zhao , allen w. zhang , franois parcy , rebecca worsley - hunt , david j. arenillas , sorana buchman , chih - yu chen , alice chou , hans ienasescu , jonathan lim , casper shyr , ge tan , michelle zhou , boris lenhard , albin sandelin , and wyeth w. 
wasserman .jaspar 2014 : an extensively expanded and updated open - access database of transcription factor binding profiles ., 42(database issue):d142d147 , jan 2014 .arttu jolma , teemu kivioja , jarkko toivonen , lu cheng , gonghong wei , martin enge , mikko taipale , juan m vaquerizas , jian yan , mikko j sillanp , et al .multiplexed massively parallel selex for characterization of human transcription factor binding specificities . , 20(6):861873 , 2010 .
Transcription factors (TFs) exert their regulatory action by binding to DNA with specific sequence preferences. However, different TFs can partially share their binding sequences. This "redundancy" of binding defines a way of organizing TFs into "motif families" that goes beyond the usual classification based on protein structural similarities. Since the TF binding preferences ultimately define the target genes, the motif family organization carries information about the structure of transcriptional regulation as it has been shaped by evolution. Focusing on the human lineage, we show that a one-parameter evolutionary model of the birth-death-innovation type can explain the empirical repartition of TFs into motif families, thus identifying the relevant evolutionary forces at its origin. More importantly, the model allows us to pinpoint a few deviations in humans from the neutral scenario it assumes: three over-expanded families corresponding to HOX- and FOX-type genes, a set of "singleton" TFs for which duplication seems to be selected against, and a higher-than-average rate of diversification of the binding preferences of TFs with a zinc finger DNA-binding domain. Finally, a comparison of the TF motif family organization in different eukaryotic species suggests an increase in the redundancy of binding with organism complexity.
_ korteweg - de vries ( kdv ) equations _ are typical dispersive nonlinear partial differential equations ( pdes ) .zabusky and kruskal observed that kdv equation owns wave - like solutions which can retain their initial forms after collision with another wave .this led them to name these solitary wave solutions `` solitons '' .these special solutions were observed and investigated for the first time in 1834 by scott russell . later in 1895 , korteweg andde vries showed that the soliton could be expressed as a solution of a rather simple one - dimensional nonlinear pde describing small amplitude waves in a narrow and shallow channel of water : where is some constant , denotes the gravitational constant , is the density , the surface tension and denotes the surface displacement of the wave above the undisturbed water level .the equation can be written in non - dimensional , simplified form by the transformation : to obtain the usual kdv equation ( subscripts and denoting partial differentiations ) , with the soliton solution is given by the kdv equation has a broad range of applications : description of the asymptotic behaviour of small- but finite - amplitude shallow - water waves , hydromagnetic waves in a cold plasma , ion - acoustic waves , interfacial electrohydrodynamics , internal wave in the coastal ocean , water wave power stations , acoustic waves in an anharmonic crystal , or pressure pulse propagation in blood vessels . in this paper , we focus on the _ linearized kdv equation _ ( also known as generalized airy equation ) in one space dimension where stands for a source term and and are real constants such that and .recall that for and we recover the case considered by zheng , wen & han .although the pde looks very simple , it has a lot of applications , e.g. whitham used it for the modelling of the propagation of long waves in the shallow water equations , see also .we emphasize the fact that the restriction of the solution to equation to a finite interval is not periodic . thus concerning the numerical simulation, we can not use the fft method and we consider instead the equation set on an interval and supplemented with specially designed boundary conditions . since the linear pde is defined on an unbounded domain , one has to confine the unbounded domain in a numerical finite computational domain for simulation . a common used method in such situation consists in reducing the computational domain by introducing _ artificial boundary conditions_. such artificial boundary conditionsare constructed with the goal to approximate the exact solution on the whole domain restricted to the computational one .they are called _ absorbing boundary conditions ( abcs ) _ if they lead to a well - posed initial boundary value problem where some energy is absorbed at the boundary .if the approximate solution coincides on the computational domain with the exact solution on the whole domain , they are called _ transparent boundary conditions ( tbcs)_. see for a review on the techniques used to construct such transparent or artificial boundary conditions for the schrdinger equation .the linearity property of equation allows to use many analytical tools such as the laplace transform . 
using this tool , zheng , wen & han derived the exact tbcs for equation at fixed boundary points and then obtained an initial boundary value problem `` equivalent '' to the problem in the whole space domain .moreover , using a dual petrov - galerkin scheme the authors proposed a numerical approximation of this initial boundary value problem .thus the derivation in of the adapted boundary conditions is carried out at the continuous level and then discretized afterwards .recently , zhang , li and wu revisited the approach of zheng , wen & han and proposed a fast approximation of the exact tbcs based on pad approximation of the laplace - transformed tbcs . in this paperwe will follow a different strategy : we first discretize the equation with respect to time and space and then derive the suitable artificial boundary conditions for the fully discrete problem using the -transformation .the goal of this paper is therefore to derive analogous conditions of the transparent boundary conditions obtained by the authors in but in the fully discrete case .these discrete artificial boundary conditions are superior since they are by construction perfectly adapted to the used interior scheme and thus retain the stability properties of the underlying discretization method and theoretically do not produce any reflections when compared to the discrete whole space solution. however , there will be some small errors induced by the numerical root finding routine and the numerical inverse -transformation and also later due to the fast sum - of - exponentials approximation .let us finally remark that there exists also an alternative approach in this `` discrete spirit '' , namely to use discrete multiple scales , following the work of schoombie .the paper is organized as follow . in section[ continuous ] we use the ideas of zheng , wen & han to obtain the tbcs for the linearized kdv equation and we briefly recall the results given in for the special case and .in section [ fullydisc ] we present an appropriate space and time discretization and explain the procedure to derive the artificial boundary conditions for the purely discrete problem mimicking the ideas presented in section [ continuous ] . since exact abcs are too time - consuming , especially for higher dimensional problems , we propose in section [ s : expo ] to use a sum - of - exponentials approach , to speed up the ( approximate ) computation of the discrete convolutions at the boundaries . finally , in section [ num ] we present some numerical benchmark examples from the literature to illustrate our findings .the motivation for this section is twofold .first , we briefly recall from the literature the construction of tbcs for the 1d linearized kdv equation for the special case and and the well - posedness of the resulting initial boundary value problem .secondly , we extend the derivation of tbcs to the generalized case and ; these results will serve us as a guideline for the completely discrete case in section [ fullydisc ] . to do so, we consider the cauchy problem where ( for simplicity ) the initial function and the source term are assumed to be compactly supported in a finite computational interval ] , _ i.e. _ denoting by the laplace transform in time of the function , we obtain from the _ transformed exterior problems _ where , with , stands for the argument of the transformation , i.e. 
the dual time variable .the general solutions of the ode are given explicitly by where , , denote the roots of the ( depressed ) cubic equation the three solutions are given by where and [ theo : continuous ] the roots of the cubic equation possess the following _ separation property _ the proof of the theorem [ theo : continuous ] is given in appendix [ proof : theo : cont ] .+ this result is crucial for defining later the tbcs ; the _ separation _ property allows to separate the fundamental solutions into outgoing and incoming waves .+ [ cubiceq ] considering , as in , the case and we have {s},\qquad\lambda_2(s)=-\omega\sqrt[3]{s } , \qquad\lambda_3(s)=-\omega^2\sqrt[3]{s}.\ ] ] now using the decay condition , the general solution , the separation property and since solutions of have to belong to , we obtain which yields the following tbcs in the laplace - transformed space since , and are roots of the cubic equation we obtain immediately and hence the transformed left tbc can be rewritten solely in terms of now applying the inverse laplace transform to equations and we get where stands for the inverse laplace transform of and denotes the convolution operator .we emphasize that those boundary conditions strongly depend on and through the root .[ tbczheng][r22 ] considering , as in , the special case and we easily obtain from where with is the nonlocal - in - time fractional integral operator given by the _ riemann - liouville formula _ where is the gamma function .we refer to for more details .to summarize our findings so far , the derived initial boundary value problem reads , \label{tbc10 } \\u(0,x)=u_0(x ) , \qquad x \in [ a , b ] , \label{tbc11 } \\ u(t , a)-u_2\,\mathcal{l}^{-1}\biggl(\frac{\lambda_1(s)^2}{s}\biggr)*u_x(t , a)-u_2\,\mathcal{l}^{-1 } \biggl(\frac{\lambda_1(s)}{s}\biggr)*u_{xx}(t , a)=0,\label{tbc12 } \\u(t , b)-\mathcal{l}^{-1}\bigl(\frac{1}{\lambda_1(s)^2}\bigr ) * u_{xx}(t , b)=0,\label{tbc13 } \\u_x(t , b)-\mathcal{l}^{-1}\bigl(\frac{1}{\lambda_1(s)}\bigr)*u_{xx}(t , b)=0 . \label{tbc14}\end{gathered}\ ] ] note that a solution of can be regarded as the restriction on ] for the finite time ] given by with the temporal step size : we also define a uniform subdivision of $ ] given by with the spatial step size : we emphasize here that the temporal discretization must remain uniform due to the usage of the -transform to derive the discrete tbcs .on the other hand , the space discretization in the interior domain could have been non uniform . in the following ,we denote by the pointwise approximation of the solution .we will consider in the sequel two different numerical schemes based on trapezoidal rule in time ( semi discrete crank - nicolson approximation ) .the first one is the _ rightside crank - nicolson _ ( proposed by mengzhao ) ( r - cn ) scheme defined for and .it reads the second one is the _ centered crank - nicolson _ ( c - cn ) scheme which is used for the generalized linear korteweg - de vries equation where and .it reads here , the convection term is discretized in a centered way . 
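As a sanity check on the separation property stated in theorem [theo:continuous] above, the characteristic roots can be probed numerically. The sketch below assumes that the Laplace-transformed exterior problem reduces to the ODE U₂ w''' + U₁ w' + s w = 0 (the explicit equation is elided in this copy), so that the depressed cubic is U₂ λ³ + U₁ λ + s = 0. No root can cross the imaginary axis while Re(s) > 0, so exactly one root stays in the open left half-plane and two in the right half-plane, which is what yields one boundary condition at x = a and two at x = b.

```python
import numpy as np

def cubic_roots(s, U1=0.0, U2=1.0):
    """Roots of the characteristic cubic U2*lam**3 + U1*lam + s = 0.

    Assumes the Laplace-transformed exterior problem is the ODE
    U2*w''' + U1*w' + s*w = 0 (the explicit form is elided in this copy
    of the text), so that exterior solutions behave like exp(lam*x).
    """
    return np.roots([U2, 0.0, U1, s])

def check_separation(n_trials=1000, seed=0):
    """Numerically probe the separation property: for Re(s) > 0, one root
    lies in the open left half-plane and two in the right half-plane."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        s = complex(rng.uniform(1e-3, 10.0), rng.uniform(-10.0, 10.0))
        U1 = rng.uniform(-5.0, 5.0)
        lam = cubic_roots(s, U1=U1, U2=rng.uniform(0.1, 5.0))
        assert np.sum(lam.real < 0) == 1, (s, U1, lam)
    return True

print(check_separation())
```

The analogous check for the fully discrete schemes (figure [fig:branch]) uses the unit circle instead of the imaginary axis: one root of the scheme-dependent polynomial lies inside the unit disk and two outside.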
indeed , using simply an _upwind crank - nicholson _scheme for the first order term and ( r - cn ) scheme for the third order term leads to a strongly dissipative scheme .both schemes are absolutely stable and their truncation errors are respectively the stencil of the different scheme involves respectively 4 nodes for ( r - cn ) and 5 nodes for ( c - cn ) schemes .this structure will have a strong influence on the computation of the roots for the corresponding equation to at the discrete level .for the ( r - cn ) scheme , we will recover as in the continuous case a cubic equation , but a quartic equation for the ( c - cn ) scheme .the later case will turn out to be more difficult .let us first consider the ( r - cn ) scheme for the interior problem , i.e. with a spatial index such that .let us recall that this scheme is only valid in the case and .for this scheme , as in the continuous case , we will obtain one boundary condition at point and two boundary conditions at the right side which will involve the two nodes and , cf .the continuous abcs . in order to derive appropriate artificial boundary conditions , we follow the same procedure as in section [ continuous ] , but on a purely discrete level .first we apply the -transform with respect to the time index , which is the discrete analogue of the laplace transform in time , to the partial difference equation .we refer the reader to the appendix of for a proper definition of the -transform and its basic properties .the standard definition reads where is the convergence radius of the laurent series and .denoting by the -transform of the sequence we obtain from the homogeneous _ third order difference equation _ it is well - known that homogeneous difference equations with constant coefficients possess solutions of the power form , where solves the cubic equation with equation admits three fundamental solutions denoted here by , and that can be computed analytically or numerically up to a very high precision. thus the general solution of on the exterior domains is of the form let the three solutions of are if we consider , the roots may be discontinuous due to branch changes with respect to in the complex plane .one example of this phenomenom can be seen on left part of figure [ fig : branch ] where we plot the evolution of roots for and .on one side , we clearly see that branch changes occur . on the other side , we always have simultaneously one root inside the unit disk and two outside . instead of considering roots , it is more convenient to identify roots by continuity .we refer to these continuous roots as ( see right part of figure [ fig : branch ] ) .we therefore have the following theorem . [cols="^,^ " , ] , , to the equation for and ., scaledwidth=50.0% ] we prove here the result for stricly positive or negative velocity . in both cases ,we want to determine the domains , image of , by roots defined by since is simply connected , we just have to identify the boundary of each domain and a single point inside it to completely determined them .the boundaries are given by the image of , , and .we perform an asymptotic expansion with respect to .let us define , , and .let us consider . for ,the general expression for roots is for and we have \displaystyle \lambda_2(s)=-\left ( -\frac{\sqrt{3}}{2 } c+\frac{i}{2 } d\right ) - \left ( \frac{1}{2}c+i\frac{\sqrt{3}}{2 } d\right ) \frac{\varepsilon}{3a}+o(\varepsilon^2 ) , \\[2 mm ] \displaystyle \lambda_3(s)=id+c\frac{\varepsilon}{3a}+o(\varepsilon^2 ) . 
\end{array}\ ] ] it is easy to show that , , and are positive for any . clearly , we have , and .concerning , since and , domain is located on the left and above the complex curve .the conclusion for is similar and we conclude that domain is located on the left and below the complex curve .the domain is located on the right of the complex curve . for , we define and have .we therefore can write with .we obtain we can conclude that is located on the left of the complex curve , , is on the right of the complex curve , and is on the right of , .the situation is completely symetric if we consider .we therefore recover the figure [ fig : cubiceqcont ] and we have shown the _ separation property _ like in the previous case , we perform an asymptotic expansion and consider .let us define , , and .for any , we have , , and .the general expression for roots is for thus , the asymptotics of are given by \displaystyle \lambda_2(s)=-\left ( -\frac{\sqrt{3}}{2 } \tilde{c}+\frac{i}{2 } \tilde{d}\right ) - \left ( \frac{1}{2}\tilde{c}+i\frac{\sqrt{3}}{2 }\tilde{d}\right ) \frac{\varepsilon}{3a}+o(\varepsilon^2 ) , \\[2 mm ] \displaystyle \lambda_3(s)=i\tilde{d}+\tilde{c}\frac{\varepsilon}{3a}+o(\varepsilon^2 ) .\end{array}\ ] ] thanks to the signs of , , and , the conclusions can be drawn by a simliar study to the case and we have , and .we consider here the continuous roots of and ordered them thanks to the relation .we defined . if we note , , then since , we have and thus , instead of studying as functions of , we now consider them as functions of variable and try to identify the domains let us first show that can not belong to the unit circle .let us assume that there exists such that . since is a root of , then it leads to .we therefore obtain we have a contradiction since we had assume that .thus , can not belong to the unit circle . since we consider the continuous roots and simply connected , then or , the complementary of in . in order to findif lie inside or outside the unit circle , we just have to determine the domains for a single value of .next , we know that if we consider a third order algebraic equation then its roots satisfies therefore , , and satisfy the first relation implies .we therefore automatically have one root which lies inside the unit circle and one outside .it remains to understand the behavior of . in order to see the location of in the complex plane, we can therefore take any value of such that .if we take , we have thus , for any and we have shown the proof is similar to the previous one .let us consider a root and show that it can not be equal to .let us assume that .then since , this leads to but , .this is a contradiction and .+ we moreover have if we sort the roots as , the last equation leads to computing and for for example prove that and .we therefore have the discrete separation property of the roots .x. antoine , a. arnold , c. besse , m. ehrhardt , and a. schdle , _ a review of transparent and artificial boundary conditions techniques for linear and nonlinear schrdinger equations _phys . , 4 ( 2008 ) , 729 - 796 .h. gleeson , p. hammerton , d.t .papageorgiou , and j .-vanden - broeck , _ a new application of the korteweg - de vries benjamin - ono equation in interfacial electrohydrodynamics _ , phys .fluids 19 ( 2007 ) , 031703 d.j .korteweg , and g. 
de vries , _ on the change of form of long waves advancing in a rectangular canal , and on a new type of long stationary waves _, philosophical magazine 39 ( 1895 ) , 422 - 443 .kudryashov , and i.l .chernyavskii , _ nonlinear waves in fluid flow through a viscoelastic tube _ , fluid dynamics 41 ( 2006 ) , 49 - 62 . n.j .zabusky , and m.d .kruskal , _ interactions of solitons in a collisionless plasma and the recurrence of initial states _ ,( 1965 ) , 240 - 243 .n.j . zabusky , _ phenomena associated with the oscillations of a nonlinear model string _ , in mathematical models in physical sciences , s. drobot ( ed . ) prentice - hall , englewood cliffs , new jersey , 1963 .
We consider the derivation of continuous and fully discrete artificial boundary conditions for the linearized Korteweg-de Vries equation. We show that they can be obtained for any constant velocity and any dispersion coefficient. The discrete artificial boundary conditions are provided for two different numerical schemes. In both the continuous and the discrete case, the boundary conditions are nonlocal with respect to the time variable. We propose fast evaluations of the discrete convolutions. We present various numerical tests which show the effectiveness of the artificial boundary conditions.
recovery for jointly sparse signals concerns accurately estimating the non - zero component locations shared by a set of sparse signals based on a limited number of noisy linear observations . more specifically ,suppose that is a sequence of jointly sparse signals ( possibly under a sparsity - inducing basis instead of the canonical domain ) with a common support , which is the index set indicating the non - vanishing signal coordinates .this model is the same as the joint sparsity ( jsm-2 ) in .the observation model is linear : in , is the measurement matrix , the noisy data vector , and an additive noise . in most cases , the sparsity level and the number of observations is far less than , the dimension of the ambient space .this problem arises naturally in several signal processing areas such as compressive sensing candes2006uncertainty , donoho2006compressed , candes2005decoding , candes2008introcs , baraniuk2007compressivesensing , source localization willsky2005source , model2006signal , cevher2008distributed , cevher2009bayesian , sparse approximation and signal denoising .compressive sensing candes2006uncertainty , donoho2006compressed , candes2005decoding , a recently developed field exploiting the sparsity property of most natural signals , shows great promise to reduce signal sampling rate . in the classical setting of compressive sensing , only one snapshot is considered ; _i.e. _ , in . the goal is to recover a long vector with a small fraction of non - zero coordinates from the much shorter observation vector .since most natural signals are compressible under some basis and are well approximated by their representations mallat1999wavelet , this scheme , if properly justified , will reduce the necessary sampling rate beyond the limit set by nyquist and shannon baraniuk2007compressivesensing , candes2008introcs .surprisingly , for exact signals , if and the measurement matrix is generated randomly from , for example , a gaussian distribution , we can recover exactly in the noise - free setting by solving a linear programming task .besides , various methods have been designed for the noisy case . along with these algorithms ,rigorous theoretic analysis is provided to guarantee their effectiveness in terms of , for example , various -norms of the estimation error for .however , these results offer no guarantee that we can recover the support of a sparse signal correctly .the accurate recovery of signal support is crucial to compressive sensing both in theory and in practice . since for signal recoveryit is necessary to have , signal component values can be computed by solving a least squares problem once its support is obtained . therefore , support recovery is a stronger theoretic criterion than various -norms . in practice ,the success of compressive sensing in a variety of applications relies on its ability for correct support recovery because the non - zero component indices usually have significant physical meanings .the support of temporally or spatially sparse signals reveals the timing or location for important events such as anomalies .the indices for non - zero coordinates in the fourier domain indicate the harmonics existing in a signalborgnat2008timefrequency , which is critical for tasks such as spectrum sensing for cognitive radios . 
in compressed dna microarrays for bio - sensing ,the existence of certain target agents in the tested solution is reflected by the locations of non - vanishing coordinates , while the magnitudes are determined by their concentrationsbaraniuk2007dna , vikalo2007dna , parvaresh2008dna , vikalo2008dna . for compressive radar imaging, the sparsity constraints are usually imposed on the discretized time frequency domain .the distance and velocity of an object have a direct correspondence to its coordinate in the time - frequency domain .the magnitude determined by coefficients of reflection is of less physical significancebaraniuk2007radar , herman2008radar , herman2008highradar . in sparse linear regression , the recovered parameter support corresponds to the few factors that explain the data . in all these applications ,the support is physically more significant than the component values .our study of sparse support recovery is also motivated by the recent reformulation of the source localization problem as one of sparse spectrum estimation . in ,the authors transform the process of source localization using sensory arrays into the task of estimating the spectrum of a sparse signal by discretizing the parameter manifold .this method exhibits super - resolution in the estimation of direction of arrival ( doa ) compared with traditional techniques such as beamforming johnson1993array , capon , and music schmidt1986music , bienvenu1980music .since the basic model employed in willsky2005source applies to several other important problems in signal processing ( see and references therein ) , the principle is readily applicable to those cases .this idea is later generalized and extended to other source localization settings in model2006signal , cevher2008distributed , cevher2009bayesian .for source localization , the support of the sparse signal reveals the doa of sources .therefore , the recovery algorithm s ability of exact support recovery is key to the effectiveness of the method .we also note that usually multiple temporal snapshots are collected , which results in a jointly sparse signal sets as in .in addition , since is the number of sensors while is the number of temporal samples , it is far more expensive to increase than .the same comments apply to several other examples in the compressive sensing applications discussed in the previous paragraph , especially the compressed dna microarrays , spectrum sensing for cognitive radios , and compressive sensing radar imaging .the signal recovery problem with joint sparsity constraint duarte2005distributed , duarte2006dcs , fornasier2006,eldar2007continuous , also termed the multiple measurement vector ( mmv ) problemcotter2005inverse , chen2005mmv , chen2006mmv , tropp2006greedy , tropp2006convex , has been considered in a line of previous works .several algorithms , among them simultaneous orthogonal matching pursuit ( somp ) cotter2005inverse , tropp2006greedy , duarte2006dcs ; convex relaxation tropp2006convex ; ; and m - focuss , are proposed and analyzed , either numerically or theoretically .these algorithms are multiple - dimension extensions of their one - dimension counterparts .most performance measures of the algorithms are concerned with bounds on various norms of the difference between the true signals and their estimates or their closely related variants .the performance bounds usually involve the mutual coherence between the measurement matrix and the basis matrix under which the measured signals have a jointly sparse 
representation .however , with joint sparsity constraints , a natural measure of performance would be the model s potential for correctly identifying the true common support , and hence the algorithm s ability to achieve this potential .as part of their research , j. chen and x. huo derived , in a noiseless setting , sufficient conditions on the uniqueness of solutions to under and minimization . in cotter2005inverse , s. cotter _ et .al . _ numerically compared the probabilities of correctly identifying the common support by basic matching pursuit , orthogonal matching pursuit , focuss , and regularized focuss in the multiple - measurement setting with a range of snrs and different numbers of snapshots .the availability of multiple temporal samples offers serval advantages to the single - sample case . as suggested by the upper bound on the probability of error ,increasing the number of temporal samples drives the probability of error to zero exponentially fast as long as certain condition on the inconsistency property of the measurement matrix is satisfied .the probability of error is driven to zero by scaling the snr according to the signal dimension in , which is not very natural compared with increasing the samples , however .our results also show that under some conditions increasing temporal samples is usually equivalent to increasing the number of observations for a single snapshot .the later is generally much more expensive in practice .in addition , when there is considerable noise and the columns of the measurement matrix are normalized to one , it is necessary to have multiple temporal samples for accurate support recovery as discussed in section [ sec : lower ] and section [ sec : gaussian_nece ] .our work has several major differences compared to related work and , which also analyze the performance bounds on the probability of error for support recovery using information theoretic tools .the first difference is in the way the problem is modeled : in , the sparse signal is deterministic with known smallest absolute value of the non - zero components while we consider a random signal model .this leads to the second difference : we define the probability of error over the signal and noise distributions with the measurement matrix fixed ; in , the probability of error is taken over the noise , the gaussian measurement matrix and the signal support .most of the conclusions in this paper apply to general measurement matrices and we only restrict ourselves to the gaussian measurement matrix in section [ sec : gaussian ] .therefore , although we use a similar set of theoretical tools , the exact details of applying them are quiet different .in addition , we consider a multiple measurement model while only one temporal sample is available in . 
in particular , to get a vanishing probability of error , aeron _ et.al ._ require to scale the snr according to the signal dimension , which has a similar effect to having multiple temporal measurements in our paper .although the first two differences make it difficult to compare corresponding results in these two papers , we will make some heuristic comments in section [ sec : gaussian ] .the contribution of our work is threefold .first , we introduce a hypothesis - testing framework to study the performance for multiple support recovery .we employ well - known tools in statistics and information theory such as the chernoff bound and fano s inequality to derive both upper and lower bounds on the probability of error .the upper bound we derive is for the _ optimal _ decision rule , in contrast to performance analysis for specific sub - optimal reconstruction algorithmsgorodnitsky1997focuss , candes2007dantzig , tropp2007omp , dai2008subspace , needell2008cosamp .hence , the bound can be viewed as a measure of the measurement system s ability to correctly identify the true support .our bounds isolate important quantities that are crucial for system performance .since our analysis is based on measurement matrices with as few assumptions as possible , the results can be used as a guidance in system design .second , we apply these performance bounds to other more specific situations and derive necessary and sufficient conditions in terms of the system parameters to guarantee a vanishing probability of error .in particular , we study necessary conditions for accurate source localization by the mechanism proposed in willsky2005source . by restricting our attention to gaussian measurement matrices ,we derive a result parallel to those for classical compressive sensing , namely , the number of measurements that are sufficient for signal reconstruction .even if we adopt the probability of error as the performance criterion , we get the same bound on as in . however , our result suggests that generally it is impossible to obtain the true support accurately with only one snapshot when there is considerable noise .we also obtain a necessary condition showing that the term can not be dropped in compressive sensing .last but not least , in the course of studying the performance bounds we explore the eigenvalue structure of a fundamental matrix in support recovery hypothesis testing for both general measurement matrices and the gaussian measurement ensemble .these results are of independent interest .the paper is organized as follows . in section [ sec : modelandpre ] , we introduce the mathematical model and briefly review the fundamental ideas in hypothesis testing . section [ sec : upper ] is devoted to the derivation of upper bounds on the probability of error for general measurement matrices .we first derive an upper bound on the probability of error for the binary support recovery problem by employing the well - known chernoff bound in detection theory and extend it to multiple support recovery .we also study the effect of noise on system performance . 
in section [ sec : lower ] , an information theoretic lower bound is given by using the fano s inequality , and a necessary condition is shown for the doa problem considered in willsky2005source .we focus on the gaussian ensemble in section [ sec : gaussian ] .necessary and sufficient conditions on system parameters for accurate support recovery are given and their implications discussed .the paper is concluded in section [ sec : conclusion ] .we first introduce some notations used throughout this paper .suppose is a column vector .we denote by the support of , which is defined as the set of indices corresponding to the non - zero components of . for a matrix , denotes the index set of non - zero rows of . herethe underlying field can be assumed as or .we consider both real and complex cases simultaneously . for this purpose, we denote a constant or for the real or complex case , respectively .suppose is an index set .we denote by the number of elements in . for any column vector , is the vector in formed by the components of indicated by the index set ; for any matrix , denotes the submatrix formed by picking the rows of corresponding to indices in , while is the submatrix with columns from indicated by . if and are two index sets , then , the submatrix of with rows indicated by and columns indicated by .transpose of a vector or matrix is denoted by while conjugate transpose by . represents the kronecker product of two matrices . for a vector , is the diagonal matrix with the elements of in the diagonal .the identity matrix of dimension is .the trace of matrix is given by , the determinant by to denote the probability of an event and the expectation .the underlying probability space can be inferred from the context .gaussian distribution for a random vector in field with mean and covariance matrix is represented by .matrix variate gaussian distribution for with mean and covariance matrix , where and , is denoted by suppose are two positive sequences , means that .an alternative notation in this case is .we use to denote that there exists an and independent of such that for .similarly , means for .these simple but expedient notations introduced by g. h. hardy greatly simplify derivations .next , we introduce our mathematical model .suppose are jointly sparse signals with common support ; that is , only a few components of are non - zero and the indices corresponding to these non - zero components are the same for all .the common support has known size .we assume that the vectors formed by the non - zero components of follow _ i.i.d ._ .the measurement model is as follows: is the measurement matrix and the measurements .the additive noise is assumed to follow __ .note that assuming unit variance for signals loses no generality since only the ratio of signal variance to noise variance appears in all subsequence analyses . in this sense, we view as the signal - to - noise ratio ( snr ) .let and , be defined in a similar manner .then we write the model in the more compact matrix form : start our analysis for general measurement matrix .for an arbitrary measurement matrix , if every submatrix of is non - singular , we then call a measurement matrix . in this case , the corresponding linear system is said to have the _ unique representation property ( urp ) _ , the implication of which is discussed in .while most of our results apply to general non - degenerate measurement matrices , we need to impose more structure on the measurement matrices in order to obtain more profound results . 
in particular , we will consider gaussian measurement matrix whose elements are generated from __ .however , since our performance analysis is carried out by conditioning on a particular realization of , we still use non - bold except in section [ sec : gaussian ] . the role played by the variance of indistinguishable from that of a signal variance and hence can be combined to , the snr , by the note in the previous paragraph .we now consider two hypothesis - testing problems .the first one is a binary support recovery problem : results we obtain for binary binary support recovery offer insight into our second problem : the multiple support recovery .in the multiple support recovery problem we choose one among distinct candidate supports of , which is a multiple - hypothesis testing problem : we now briefly introduce the fundamentals of hypothesis testing . the following discussion is based mainly on . in a simple binary hypothesis test , the goal is to determine which of two candidate distributions is the true one that generates the data matrix ( or vector ) : there are two types of errors when one makes a choice based on the observed data .a _ false alarm _ corresponds to choosing when is true , while a _ miss _ happens by choosing when is true .the probabilities of these two types of errors are called the probability of a false alarm and the probability of a miss , which are denoted by . depending onwhether one knows the prior probabilities and and assigns losses to errors , different criteria can be employed to derive the optimal decision rule . in this paperwe adopt the probability of error with equal prior probabilities of and as the decision criterion ; that is , we try to find the optimal decision rule by minimizing optimal decision rule is then given by the _ likelihood ratio test _ : where is the natural logarithm function . the probability of error associated with the optimal decision rule , namely , the likelihood ratio test , is a measure of the best performance a system can achieve . in many cases of interest ,the simple binary hypothesis testing problem is derived from a signal - generation system .for example , in a digital communication system , hypotheses and correspond to the transmitter sending digit and , respectively , and the distributions of the observed data under the hypotheses are determined by the modulation method of the system .therefore , the minimal probability of error achieved by the likelihood ratio test is a measure of the performance of the modulation method . for the problem addressed in this paper , the minimal probability of error reflects the measurement matrix s ability to distinguish different signal supports .the chernoff bound is a well - known tight upper bound on the probability of error . in many cases , the optimumtest can be derived and implemented efficiently but an exact performance calculation is impossible . 
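To make the decision-theoretic setup concrete: under a candidate support S the columns of Y are i.i.d. zero-mean Gaussian with covariance A_S A_Sᵀ + σ²I (real case), so the minimum-probability-of-error rule with equal priors simply selects the support with the largest Gaussian log-likelihood. The brute-force sketch below illustrates this optimal but inefficient rule, whose error probability is what the following sections bound; all sizes in the demo are toy values, and the code is not meant as a practical recovery algorithm.

```python
import numpy as np
from itertools import combinations

def ml_support(Y, A, k, sigma2):
    """Exhaustive maximum-likelihood support detector for Y = A X + W,
    where X has i.i.d. N(0,1) entries on a common size-k support and
    W ~ N(0, sigma2*I).  Under a candidate support S the columns of Y are
    i.i.d. N(0, A_S A_S^T + sigma2*I), so each S is scored by its Gaussian
    log-likelihood.  Brute force: only for small n.
    """
    M, T = Y.shape
    G = Y @ Y.T
    best, best_score = None, -np.inf
    for S in combinations(range(A.shape[1]), k):
        Sigma = A[:, S] @ A[:, S].T + sigma2 * np.eye(M)
        _, logdet = np.linalg.slogdet(Sigma)
        score = -0.5 * (T * logdet + np.trace(np.linalg.solve(Sigma, G)))
        if score > best_score:
            best, best_score = set(S), score
    return best

# toy check of the detector with a random Gaussian measurement matrix
rng = np.random.default_rng(0)
n, M, k, T, sigma2 = 12, 6, 2, 20, 0.1
A = rng.standard_normal((M, n))
S_true = [2, 7]
X = np.zeros((n, T)); X[S_true] = rng.standard_normal((k, T))
Y = A @ X + np.sqrt(sigma2) * rng.standard_normal((M, T))
print(ml_support(Y, A, k, sigma2), "vs true", set(S_true))
```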
even if such an expression can be derived , it is too complicated to be of practical use .for this reason , sometimes a simple bound turns out to be more useful in many problems of practical importance .the chernoff bound , based on the moment generating function of the test statistic , provides an easy way to compute such a bound .define as the logarithm of the moment generating function of : ^{s}[p(\bs y|\hn)]^{1-s}d\bs y.\end{aligned}\]]then the chernoff bound states that \leq \exp [ \mu ( s ) ] , \label{chernoff_f } \\ p_{\mathrm{m } } & \leq & \exp [ \mu ( s_{m})]\leq \exp [ \mu ( s ) ] , \label{chernoff_m}\end{aligned}\]]and \leq \frac{1}{2}\exp [ \mu ( s)],\ ] ] where and .note that a refined argument gives the constant in instead of as obtained by direct application of and .we use these bounds to study the performance of the support recovery problem .we next extend to multiple - hypothesis testing the key elements of the binary hypothesis testing .the goal in a simple multiple - hypothesis testing problem is to make a choice among distributions based on the observations : the total probability of error as a decision criterion and assuming equal prior probabilities for all hypotheses , we obtain the optimal decision rule given by of the union bound and the chernoff bound shows that the total probability of error is bounded as follows : , 0\leq s\leq 1 , \label{bound_m}\end{aligned}\]]where % correlated , for example in the case of with _ i.i.d ._ elements , .we then take in the chernoff bounds , , and . whereas the bounds obtained in this way may not be the absolute best ones , they are still valid . as positive definite hermitian matrices , and can be simultaneously diagonalized by a unitary transformation .suppose that the eigenvalues of are and ] , then the probability of error for the full support recovery problem with and is bounded by * proof : * combining the bound in proposition [ thm_binarybound ] and equation , we have ^{-\kappa k_{\mathrm{d}}t/2 } \\ & \leq & \frac{1}{2l}\sum_{i=0}^{l-1}\sum_{\substack { j=1 \\j\neq i}}% ^{l-1}\left ( \frac{\bar{\lambda}}{4}\right ) ^{-\kappa k_{\mathrm{d}}t}.\end{aligned}\]]here depends on the supports and . for fixed , the number of supports that have a difference set with with cardinality is , using and and the summation formula for geometric series , we obtain ^{k_{\mathrm{d } } } \\ & \leq & \frac{1}{2}\frac{\frac{k\left ( n - k\right ) } { \left ( \bar{\lambda}% /4\right ) ^{\kappa t}}}{1-\frac{k\left ( n - k\right ) } { \left ( \bar{\lambda}% /4\right ) ^{\kappa t}}}.\ \ \ \ \blacksquare\end{aligned}\ ] ] we make several comments here .first , depends solely on the measurement matrix . 
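For zero-mean Gaussian hypotheses, which is the situation in the binary support recovery problem above (per-column covariances Σ_i = A_{S_i} A_{S_i}ᵀ + σ²I), the quantity μ(s) has a standard closed form, and the bound ½ exp(μ(s)) is tightened by minimizing over s ∈ (0, 1). The sketch below evaluates this numerically for the real case with T i.i.d. observation columns (the exponent then scales linearly in T, mirroring the κT factors in the text); it complements, rather than reproduces, the analytic bound of proposition [thm_binarybound].

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_mu(s, Sigma0, Sigma1):
    """log-moment-generating function mu(s) = log \int p1^s p0^{1-s} dy for
    two zero-mean Gaussian hypotheses y ~ N(0, Sigma0) and y ~ N(0, Sigma1).
    Standard closed form:
       mu(s) = -0.5*( log|s*Sigma1^{-1} + (1-s)*Sigma0^{-1}|
                      + s*log|Sigma1| + (1-s)*log|Sigma0| ).
    """
    _, ld0 = np.linalg.slogdet(Sigma0)
    _, ld1 = np.linalg.slogdet(Sigma1)
    mix = s * np.linalg.inv(Sigma1) + (1 - s) * np.linalg.inv(Sigma0)
    _, ldm = np.linalg.slogdet(mix)
    return -0.5 * (ldm + s * ld1 + (1 - s) * ld0)

def chernoff_bound(Sigma0, Sigma1, T=1):
    """0.5*exp(T*min_s mu(s)): Chernoff bound on the probability of error
    for T i.i.d. real observation columns and equal priors."""
    res = minimize_scalar(lambda s: chernoff_mu(s, Sigma0, Sigma1),
                          bounds=(1e-6, 1 - 1e-6), method="bounded")
    return 0.5 * np.exp(T * res.fun)

Sigma0 = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma1 = np.eye(2)
print(chernoff_bound(Sigma0, Sigma1, T=10))
```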
compared with the results in wainwright2007bound ,where the bounds involve the signal , we get more insight into what quantity of the measurement matrix is important in support recovery .this information is obtained by modelling the signals as gaussian random vectors .the quantity effectively characterizes system s ability to distinguish different supports .clearly , is related to the restricted isometry property ( rip ) , which guarantees stable sparse signal recovery in compressive sensing .we discuss the relationship between rip and for the special case with at the end of section [ sec : noiseeffectupper ] .however , a precise relationship for the general case is yet to be discovered .second , we observe that increasing the number of temporal samples plays two roles simultaneously in the measurement system .for one thing , it decreases the the threshold ^{\frac{1}{\kappa t}} ] for fixed and , increasing temporal samples can reduce the threshold only to a certain limit . for another, since the bound is proportional to , the probability of error turns to 0 exponentially fast as increases , as long as ^{\frac{1}{\kappa t}}c_{2}=\max_{s:|s|\leq k}\frac{1}{|s|}\sum_{\substack { 1\leq m\leq m \\ n\in s}}% from and . corollary [ noise_effect ] suggests that in the limiting case where there is no noise , is sufficient to recover a signal .this fact has been observed in .our result also shows that the optimal decision rule , which is unfortunately inefficient , is robust to noise .another extreme case is when the noise variance is very large .then from , the bounds in and are approximated by and .therefore , the convergence exponents for the bounds are proportional to the snr in this limiting case .the diagonal elements of , s , have clear meanings .since qr factorization is equivalent to the gram - schmidt orthogonalization procedure , is the distance of the first column of to the subspace spanned by the columns of ; is the distance of the second column of to the subspace spanned by the columns of plus the first column of , and so on .therefore , is a measure of how well the columns of can be expressed by the columns of , or , put another way , a measure of the incoherence between the columns of and .similarly , is an indicator of the incoherence of the entire matrix of order . to relate with the incoherence, we consider the case with and . by restricting our attention to matrices with _ unit _ columns, the above discussion implies that a better bound is achieved if the minimal distance of all pairs of column vectors of matrix is maximized .finding such a matrix is equivalent to finding a matrix with the inner product between columns as large as possible , since the distance between two unit vectors and is where is the inner product between and .for each integer , the rip constant is defined as the smallest number such that : direct computation shows that is equal to the minimum of the absolute values of the inner products between all pairs of columns of .hence , the requirements of finding the smallest that satisfies and maximizing coincide when . for general , milenkovic _ et.al ._ established a relationship between and via gergorin s disc theorem and discussed them as well as some coding theoretic issues in compressive sensing context .in this section , we derive an information theoretic lower bound on the probability of error for _ any _ decision rule in the multiple support recovery problem .the main tool is a variant of the well - known fano s inequality . 
in the variant ,the average probability of error in a multiple - hypothesis testing problem is bounded in terms of the kullback - leibler divergence .suppose that we have a random vector or matrix with possible densities .denote the average of the kullback - leibler divergence between any pair of densities by by fano s inequality , , the probability of error for _ any _ decision rule to identify the true density is lower bounded by since in the multiple support recovery problem , all the distributions involved are matrix variate gaussian distributions with zero mean and different variances , we now compute the kullback - leibler divergence between two matrix variate gaussian distributions .suppose , the kullback - leibler divergence has closed form expression : -\kappa t\log \frac{\left\vert \sigma _ { i}\right\vert } { \left\vert \sigma _ { j}\right\vert } \right ] \\ & = & \frac{1}{2}\kappa t\left[\func{tr}\left ( h_{i , j}-\eye_{m}\right ) + \log \frac{\left\vert \sigma _ { j}\right\vert } { \left\vert \sigma _ { i}\right\vert } \right],\end{aligned}\]]where .therefore , we obtain the average kullback - leibler divergence for the multiple support recovery problem as \\ & = & \frac{\kappa t}{2l^{2}}\sum_{s_i , s_j}\left[\mathrm{\func{tr}}(h_{i , j})-m% \right],\end{aligned}\]]where the terms all cancel out and . invoking the second part of proposition [ eig_bound ] , we get , the average kullback - leibler divergence is bounded by due to the symmetry of the right - hand side , it must be of the form , where is the frobenius norm .setting all gives therefore , we get using the mean expression for hypergeometric distribution : , we have , the probability of error is lower bounded by conclude with the following theorem : [ thm_bd_lower ] for multiple support recovery problem , the probability of error for any decision rule is lower bounded by each term in bound has clear meanings .the frobenius norm of measurement matrix is total gain of system .since the measured signal is , only a fraction of the gain plays a role in the measurement , and its average over all possible signals is .while an increase in signal energy enlarges the distances between signals , a penalty term is introduced because we now have more signals .the term is the total uncertainty or entropy of the support variable since we impose a uniform prior on it .as long as , increasing increases both the average gain exploited by the measurement system , and the entropy of the support variable .the overall effect , quite counterintuitively , is a decrease of the lower bound in .actually , the term involving , , is approximated by an increasing function with and the binary entropy function .the reason for the decrease of the bound is that the bound only involves the _ effective _ snr without regard to any inner structure of ( e.g. the incoherence ) and the effective snr increases with . to see this, we compute the effective snr as =\frac{\frac{k}{n}\|a\|_{\mathrm{f}}^2}{m\sigma^2} ] , we get a vanishing probability of error . in particular , under the assumption that , if , then } { \log \left [ k\log \frac{n}{k}\right ] } \leq \frac{\log n}{% \log \log n} ] and . ] with implies \left [ \begin{array}{c } l_{1}^{\dagger } \\-l_{2}^{\dagger } % \end{array}% \right ] o^{\dagger } .\]]again using the invariance property of inertia under congruence transformation , we focus on the leading principal minors of \left [ \begin{array}{c } l_{1}^{\dagger } \\-l_{2}^{\dagger } % \end{array}% \right ] . ]. 
then we have that is congruent to clearly is positive definite when is sufficiently large . hence ,when is large enough , we obtain corollary 4.3.3 of , we conclude that the eigenvalues of are greater than those of if sorted . from proposition eig_count , we know that has exactly positive eigenvalues , which are the only eigenvalues that could be greater than . since is arbitrary , we finally conclude that the positive eigenvalues of are greater than those of if sorted in the same way . for the second claim ,we need some notations and properties of symmetric and hermitian matrices . for any pair of symmetric ( or hermitian ) matrices and , means that is positive definite and means is nonnegative definite .note that if and are positive definite , then from corollary 7.7.4 of if and only if ; if then the eigenvalues of and satisfy , where denotes the largest eigenvalue of ; furthermore , implies that for any , square or rectangular .therefore , recall that from the definition of eigenvalues , the non - zero eigenvalues of and are the same for any matrices and .since we are interested only in the eigenvalues , a cyclic permutation in the matrix product on the previous inequality s right - hand side gives us until now we have shown that the sorted eigenvalues of are less than the corresponding ones of .the non - zero eigenvalues of is the same as the non - zero eigenvalues of . using the same fact again, we conclude that the non - zero eigenvalues of is the same as the non - zero eigenvalues of .therefore , we obtain that in particular , the eigenvalues of that are greater than are upper bounded by the corresponding ones of if they are both sorted ascendantly .hence , we get that the eigenvalues of that are greater than are less than those of .therefore , the conclusion of the second part of the theorem holds .we comment here that usually it is not true that only the inequality on eigenvalues holds .for arbitrary fixed supports , we have where can be written as a sum of independent squared standard gaussian random variables and is obtained by dropping of them .therefore , using the union bound we obtain \right\ } \\ & \leq & k_{\mathrm{d}}\pr \left\ { q_{l}\leq 2\kappa \sigma ^{2}\gamma \right\ } .\end{aligned}\]]since implies that , the mode of , when is sufficiently large , we have ^{\kappa ( m-2k)}}{% \gamma \left ( \kappa ( m-2k)\right ) } e^{-\kappa \sigma ^{2}\gamma } .\end{aligned}\]]the inequality says that when is large enough , \log \left [ \kappa ( m-2k)\right ] \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \kappa ( m-2k){\huge \ } } \\ & \leq & \exp \left\ { -c(m-2k)\right\ } , \end{aligned}\]]where .therefore , we have authors thank the anonymous referees for their careful and helpful comments .100 m. b. wakin , s. sarvotham , m. f. duarte , d. baron , and r. g. baraniuk , `` recovery of jointly sparse signals from few random projections , '' in _ proc .neural inform . processing systems _ ,vancouver , canada , dec .2005 , pp . 14351442 .e. cands , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .inf . theory _ ,52 , no . 2 , pp . 489509 , feb .2006 .d. malioutov , m. cetin , and a. willsky , `` a sparse signal reconstruction perspective for source localization with sensor arrays , '' _ ieee trans .signal process ._ , vol .53 , no . 8 , pp . 30103022 , aug .2005 .v. cevher , p. indyk , c. hegde , and r. g. 
baraniuk , `` recovery of clustered sparse signals from compressive measurements , '' in _ int .sampling theory and applications ( sampta 2009 ) _ , marseille , france , may 2009 , pp .1822 .p. borgnat and p. flandrin , `` time - frequency localization from sparsity constraints , '' in _ proc .acoustics , speech and signal processing ( icassp 2008 ) _ , las vegas , nv , apr . 2008 , pp . 37853788 .z. tian and g. giannakis , `` compressed sensing for wideband cognitive radios , '' in _ proc .acoustics , speech and signal processing ( icassp 2007 ) _ , honolulu , hi , apr . 2007 , pp .iv1357iv1360 .m. a. sheikh , s. sarvotham , o. milenkovic , and r. g. baraniuk , `` dna array decoding from nonlinear measurements by belief propagation , '' in _ proc .ieee workshop statistical signal processing ( ssp 2007 ) _ , madison , wi , aug . 2007 , pp .215219 .h. vikalo , f. parvaresh , and b. hassibi , `` on recovery of sparse signals in compressed dna microarrays , '' in _ proc .asilomar conf .signals , systems and computers ( acssc 2007 ) _ , pacific grove , ca , nov .2007 , pp . 693697 .f. parvaresh , h. vikalo , s. misra , and b. hassibi , `` recovering sparse signals using sparse measurement matrices in compressed dna microarrays , '' _ ieee j. sel .topics signal processing _ , vol . 2 , no . 3 , pp .275285 , jun . 2008 .h. vikalo , f. parvaresh , s. misra , and b. hassibi , `` sparse measurements , compressed sampling , and dna microarrays , '' in _ proc .acoustics , speech and signal processing ( icassp 2008 ) _ , las vegas , nv , apr .2008 , pp . 581584 .g. bienvenu and l. kopp , `` adaptivity to background noise spatial coherence for high resolution passive methods , '' in _ proc .acoustics , speech and signal processing ( icassp 1980 ) _ , vol . 5 , denver , co , apr .1980 , pp .307310 .p. stoica and a. nehorai , `` music , maximum likelihood , and cramr - rao bound : further results and comparisons , '' _ ieee trans .speech , signal process ._ , vol .38 , no .21402150 , dec . 1990 .m. duarte , s. sarvotham , d. baron , m. wakin , and r. baraniuk , `` distributed compressed sensing of jointly sparse signals , '' in _ proc .. signals , systems and computers ( acssc 2005 ) _ , pacific grove , ca , nov . 2005 , pp . 15371541 .m. duarte , m. wakin , d. baron , and r. baraniuk , `` universal distributed sensing via random projections , '' in _ int .information processing in sensor networks ( ipsn 2006 ) _ , nashville , tn , apr .2006 , pp . 177185 .m. mishali and y. eldar , `` the continuous joint sparsity prior for sparse representations : theory and applications , '' in _ ieee int . workshop computational advances in multi - sensor adaptive processing ( campsap 2007 ) _ , st . thomas , u.s .virgin islands , dec .2007 , pp . 125128 .s. cotter , b. rao , k. engan , and k. kreutz - delgado , `` sparse solutions to linear inverse problems with multiple measurement vectors , '' _ ieee trans .signal process ._ , vol .53 , no . 7 , pp . 24772488 , jul . 2005 .j. chen and x. huo , `` sparse representations for multiple measurement vectors ( mmv ) in an over - complete dictionary , '' in _ proc .acoustics , speech and signal processing ( icassp 2005 ) _ , philadelphia , pa , mar . 2005 , pp . 257260 .m. wainwright , `` information - theoretic bounds on sparsity recovery in the high - dimensional and noisy setting , '' in _ ieee int .information theory ( isit 2007 ) _ , nice , france , jun .2007 , pp . 961965 .o. milenkovic , h. pham , and w. 
dai , `` sublinear compressive sensing reconstruction via belief propagation decoding , '' in _ int .information theory ( isit 2009 ) _ , seoul , south korea , jul .2009 , pp .674678 .g. tang and a. nehorai , `` support recovery for source localization based on overcomplete signal representation , '' submitted to _ proc .acoustics , speech and signal processing ( icassp 2010)_. a. k. fletcher , s. rangan , v. k. goyal , and k. ramchandran , `` denoising by sparse approximation : error bounds based on rate - distortion theory , '' _ eurasip journal on applied signal processing _ , vol .2006 , pp .119 , 2006 . currently , he is a ph.d .candidate with the department of electrical and systems engineering , washington university , under the guidance of dr .arye nehorai .his research interests are in the area of compressive sensing , statistical signal processing , detection and estimation , and their applications .arye nehorai ( s80-m83-sm90-f94 ) earned his b.sc . and m.sc .degrees in electrical engineering from the technion israel institute of technology , haifa , israel , and the ph.d .degree in electrical engineering from stanford university , stanford , ca . from 1985to 1995 , he was a faculty member with the department of electrical engineering at yale university . in 1995, he became a full professor in the department of electrical engineering and computer science at the university of illinois at chicago ( uic ) . from 2000 to 2001 , he was chair of the electrical and computer engineering ( ece ) division , which then became a new department . in 2001, he was named university scholar of the university of illinois . in 2006 , he became chairman of the department of electrical and systems engineering at washington university in st .he is the inaugural holder of the eugene and martha lohman professorship and the director of the center for sensor signal and information processing ( cssip ) at wustl since 2006 .nehorai was editor - in - chief of the ieee transactions on signal processing from 2000 to 2002 . from 2003 to 2005 , he was vice president ( publications ) of the ieee signal processing society ( sps ) , chair of the publications board , member of the board of governors , and member of the executive committee of this society . from 2003 to 2006, he was the founding editor of the special columns on leadership reflections in the ieee signal processing magazine .he was co - recipient of the ieee sps 1989 senior award for best paper with p. stoica , coauthor of the 2003 young author best paper award , and co - recipient of the 2004 magazine paper award with a. dogandzic .he was elected distinguished lecturer of the ieee sps for the term 2004 to 2005 and received the 2006 ieee sps technical achievement award .he is the principal investigator of the new multidisciplinary university research initiative ( muri ) project entitled adaptive waveform diversity for full spectral dominance .he has been a fellow of the royal statistical society since 1996 .
the performance of recovering the common support of jointly sparse signals from their projections onto a lower-dimensional space is analyzed. support recovery is formulated as a hypothesis-testing problem. both upper and lower bounds on the probability of error are derived for general measurement matrices, using the chernoff bound and fano's inequality, respectively. the upper bound shows that the performance is determined by a quantity measuring the incoherence of the measurement matrix, while the lower bound reveals the importance of the total measurement gain. the lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (doa) estimation for a sparse-representation-based algorithm. when applied to gaussian measurement ensembles, these bounds give necessary and sufficient conditions for a vanishing probability of error for the majority of realizations of the measurement matrix. our results offer surprising insights into sparse signal recovery. for example, as far as support recovery is concerned, the well-known bound in compressive sensing with the gaussian measurement matrix is generally not sufficient unless the noise level is low. our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in compressive sensing and other application areas exploiting signal sparsity. chernoff bound, compressive sensing, fano's inequality, jointly sparse signals, multiple hypothesis testing, probability of error, support recovery
as a branch of non - equilibrium statistical physics , the study of transport phenomena seeks information on _ microscopic processes _ through measurement of _ macroscopic physical quantities _ associated with transport , such as a diffusion coefficient , generally through mean square displacements .an emblematic diffusion example is still the case of particles motion in a granular " medium .this problem of so - called brownian motion was explained by einstein .this is a typical case of normal statistics system and seems to be obeyed by an enormous number of examples , where displacements have arbitrary values but where very precise average values characterize the system .examples can be found in all kinds of scientific fields , from economics and finances , biology and chemistry to astronomy and physics .the interpretation of these stochastic processes is done using well known tools from statistics .these are basically grounded in two main laws : i ) the law of large numbers , stating that as you increase the number of trials the resulting averages and moments tend to the theoretical ones and ii ) the central limit theorem ( clt ) , which states that the accumulated action of a large number , , of equivalent ( same probability distribution , with expected value and variance ) , independent individual random processes results in a gaussian distribution with expected value and variance fully determined by those ( and ) of the generating distribution , no matter the _ shape _ of this generating distribution .this insensitivity to the details of the microscopic process explains why many physical , chemical , social , financial and so on phenomena lend themselves to this interpretation , so that it is commonly called these normal laws applied to diffusion phenomena lead to a mean square displacement varying linearly with time , i.e. after a large number of random kicks in all directions , a particle of coffee in a cup of milk ( or molecules of oxygen in the air , or a drunkard with no purpose ) will have covered a distance proportional to the square root of the time spent : the first transport phenomenon breaking this law that comes to mind is the ballistic movement where a particle , not suffering any collision ( and thus actually non diffusive ) , travels in a straight line with a speed , covering during a time the distance i.e. diffusion is actually ballistic as long as the observation time is shorter than the mean time between collisions .other transport phenomena break the normal laws more radically and one of the most recently observed is the object of this paper : the superdiffusion of light in resonant atomic vapours . 
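the two pillars just mentioned, the law of large numbers and the central limit theorem, together with the resulting linear growth of the mean square displacement, can be illustrated in a few lines of code. the following sketch is illustrative only; the die-like step distribution and the sample sizes are arbitrary assumptions. it sums many i.i.d. steps to show the gaussian limit and tracks a signed random walk to check that the mean square displacement grows like t rather than t^2.

```python
import numpy as np

rng = np.random.default_rng(1)

# i.i.d. steps with a decidedly non-Gaussian parent distribution (fair die: 1..6)
n_steps, n_walkers = 1000, 5000
steps = rng.integers(1, 7, size=(n_walkers, n_steps))

# Central limit theorem: the sample sums cluster around n*mu with variance n*var
sums = steps.sum(axis=1)
mu, var = 3.5, 35.0 / 12.0          # mean and variance of a single die roll
print("sum  : mean %.1f (expect %.1f), var %.1f (expect %.1f)"
      % (sums.mean(), n_steps * mu, sums.var(), n_steps * var))

# Normal diffusion: give each step a random sign and track the displacement.
# The mean-square displacement grows linearly with the number of steps (time).
signs = rng.choice([-1, 1], size=(n_walkers, n_steps))
x = np.cumsum(signs * steps, axis=1)
msd = (x**2).mean(axis=0)
t = np.arange(1, n_steps + 1)
slope = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0]
print("MSD ~ t^gamma with gamma = %.2f (normal diffusion: gamma = 1)" % slope)
```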
the diffusion problem we are discussing here is related to the _ dynamical _ behavior of the system .this dynamical problem has a long tail " signature , a characteristic it shares with another class of scale - free systems , the web and equivalent systems based on connectivity .the peculiar structure or topology of those systems leads to statistical distributions following power laws , resulting in unexpected behaviors such as the dominant role of a very few elements of the system .let us therefore distinguish these categories from the dynamical behavior of systems being discussed here : a dynamical system may , for example , _ evolve _ from normal to sub- or super - diffusive .focussing on the particular case of light diffusion , we will introduce here the fascinating concept of lvy flights , where the average response may become meaningless , because the concepts of mean value or variance are no longer well - defined .in order to better appreciate how abnormal superdiffusion is , let us here summarize the main results of normal ( gaussian ) statistics : let be a set of identically distributed , independent variables , with probability density , expected value and variance . according to the law of large numbers , a random sampling of the distribution will yield a sample sum a mean value and a variance a very interesting result of this statistical treatment of random events is that , regardless of the shape of most distributions , the distribution of sums , or of mean values , , of random samples , tend to be normal , with limiting gaussian distributions fully determined by the single event mean and the second moment , assuming they are finite .this is the deep , powerful statement of the clt . of single events drawn from the distribution , 1 ( dice rolls ) .the distributions ( b ) of samples of sums and ( c ) of samples of mean values tend to gaussian distributions as is increased , with mean and variance determined by expected value and variance of the parent distribution ( see text ) . for and : top , ; middle ; bottom .,width=302 ] , with , 0.,width=302 ] in figs .[ distributionsa ] and [ distributionsb ] are shown two examples of non - gaussian distributions of variables whose sums and averages tend to be normally distributed .the first one ( figure [ distributionsa ] ) , an example of a discrete uniform distribution , is the distribution of probability of rolling any value of a fair 6-sided die in a single roll .the second one ( figure [ distributionsb ] ) is an example of an asymmetric continuous distribution . in both examples , the sum and the mean value of drawsare given by equations ( [ sum ] ) and ( [ mean ] ) , respectively . and are the observed distributions of samples of and , respectively . in figure ( [ distributionsa ] ) and ( [ distributionsb ] )we can therefore see the clt at work : the means of the sample sum and sample mean tend to approach the expected values and , while their variance tend to the theoretical variances var and var , as the number of samples grows .this seemingly systematic convergence explains why so many real phenomena resulting from the action of many tiny random events with very diverse distributions can eventually be fitted by a gaussian distribution .steps of length drawn from the probability distribution .( a ) distribution of samples of sums ( equation ( [ sum ] ) ) for .( b ) 2d representation of one of these random walks .each of the steps of length occurs in a random direction to a new position .the initial position is ( 0,0 ) . 
in this specific plot ,the sum of step lengths is and the maximum step length is about 10 times larger than the mean one.,width=302 ] an additional example of non - normal distribution leading to normally distributed sums and mean - values is shown in figure [ normaldiff ] for .the 2-dimensional ( 2d ) isotropic random walk resulting from step lengths drawn from is shown in figure [ normaldiff](b ) as a visual confirmation of a diffusive , brownian - like behavior . assuming constant velocity , the mean - square displacement , with .let us now generalize these findings by focussing on probability distributions whose _ asymptotic decay _ can be approximated by a power - law : with .if , the first and second moments of the distribution , and , are finite and the distribution obeys the clt .this is what we observed for in the example given in figure [ normaldiff ] .however , if the distribution s asymptotic decay is _ slower _ than , i.e. if , then is _ no longer finite _ and the clt does not apply anymore . in the case , even the first moment diverges , meaning that these distributions do not exhibit a typical behavior , because the probability of very large values of ( far wings of the probability distribution ) is much - larger - than - normal .their repeated application ( large number of events ) , characterized by defined in equation ( [ sum ] ) , does not follow a gaussian distribution .these distributions obey , however , a generalised central limit theorem , according to which , if the distribution s asymptotic behavior follows equation ( [ asympt ] ) with , then the normalized probability densities of its sum and , consequently , of its mean value , follow a stable lvy law . , for .notice the log - log scales in ( a ) . in the specific plot in ( b ), the sum of the step lengths is , only approximately 7 times larger than the maximum step length , which is itself about 1500 times larger than the mean one : a few particularly long steps dominate this superdiffusive random walk.,width=302 ] in figure [ superdiff ] are shown the same features as in figure [ normaldiff ] , this time for a superdiffusive process , namely for the probability distribution of single steps ( ) .the distributions and ( not shown ) _ do not _ converge to a gaussian shape and the 2d isotropic random walk resulting from drawing the step lengths from does not look diffusive .instead , we can observe in figure [ superdiff](b ) long jumps characteristic of a lvy flight behavior . in this case , the mean - square displacement , with .systems expanding as under the action of stochastic processes are said to exhibit normal diffusion if and superdiffusion ( subdiffusion ) if ( ) .brownian motion is a paradigm of normal diffusion and an overwhelming majority of stochastic phenomena in all fields exhibits this behavior .much less numerous but being gradually uncovered are examples of systems showing abnormal diffusion .they are found in very diverse systems such as physiology , ecology , human mobility and virus propagation , finance and economy , physics , astrophysics , etc .while dynamical effects of superdiffusion can be observed and characterized in those systems , the fundamental mechanisms leading to lvy flights are in general not elucidated and remain the subject of intense study .for instance , broad distributions may appear as a consequence of the fundamental non linear dynamics of the system . 
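a minimal sketch of the superdiffusive walk described above can be obtained by drawing step lengths from a pareto law with tail exponent alpha < 2 and random directions. the exponent, sample size and diagnostics below are arbitrary illustrative assumptions, not the exact distribution used for the figures; the point is only that a handful of long jumps dominates the walk.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha = 1.0        # tail exponent: p(x) ~ x^-(1+alpha); alpha < 2 => diverging variance
n_steps = 10_000

# Pareto-distributed step lengths (x >= 1), drawn by inverse-transform sampling
u = 1.0 - rng.random(n_steps)          # uniform on (0, 1]
lengths = u ** (-1.0 / alpha)

# Isotropic 2D directions
theta = rng.uniform(0.0, 2.0 * np.pi, n_steps)
xy = np.cumsum(np.column_stack((lengths * np.cos(theta),
                                lengths * np.sin(theta))), axis=0)

print("largest single step / mean step   :", lengths.max() / lengths.mean())
print("largest single step / sum of steps:", lengths.max() / lengths.sum())
print("net displacement after %d steps   : %.1f" % (n_steps, np.hypot(*xy[-1])))
# For alpha < 2 a few steps dominate the walk; rerunning with, e.g., uniformly
# distributed step lengths gives ratios that shrink steadily as n grows.
```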
in ecology ,lvy flights seem to be associated to ( evolutionary ) optimization of search patterns in situations of scarcity ( food , mates ) , while brownian movements dominate in abundance contexts .as we will see later on , the physical origin of lvy flights of photons in resonant atomic vapours lies in the specific shapes of absorption and emission spectral distributions , which themselves arise from fundamental and kinetic properties of atoms .from now on , we will restrict ourselves to the description of the propagation of photons in material media , i.e. we restrict the discussion to particles of light .propagation of light in scattering media represents _ per se _ a unique and interesting system , because it is a very familiar and ubiquitous phenomenon , yet with huge practical interest , for example as our main source of knowledge about interstellar medium or in modern photonics applications .it has now gained an extra charm as a system likely to exhibit abnormal diffusion properties and , probably , the earliest recognized as such .light represents a versatile tool for studying transport phenomena and can be analysed in a variety of materials . in free space, a photon travels in a straight line with speed and thus covers a distance during a time interval , which is a ballistic movement according to equation ( [ ballistic ] ) . in a ( dilute )material medium , the photons are scattered by material constituents and a transition occurs from ballistic to diffusive behavior as the constituents density increases .the mean free path is the mean distance traveled by the photons between two scattering events , ( in photon scattering , dominant delays are usually the collisions times , much larger than the propagation time between two scattering events , as opposed to scattering of material particles , where the dominant time is the propagation one ) . if the typical dimensions of the medium are smaller than , the transport is still essentially ballistic , otherwise it is diffusive . in a _normal _ diffusive medium , the propagation of photons can be described as a random walk with a gaussian distribution of the step length between two scatterers .this distribution is characterized by its mean value and its second moment .the mean free path is inversely proportional to the density of scatterers , , and to the scattering cross section , .so far very few systems involving light propagation have been reported to exhibit non - normal behavior .one of them is a random amplifying medium ( ram ) , constituted of amplifying fiber segments embedded in a passive scattering bulk .the intensity distribution at the sample exit results from multiple scattering of the photons by the passive bulk as well as from successive passages through fiber segments , in which the photons are amplified .therefore , the longer the segment , the stronger the amplification .the distribution of lengths of the fiber - segments is intentionally tailored so as to yield a lvy - like intensity distribution of the ram emission .a second system , called lvy glass " by its designers , is an engineered solid material where the density of scattering medium ( titanium dioxide nanoparticles ) is modulated by non - scattering spheres of diameter - distribution tailored to yield a lvy - like transmission spatial distribution .both of these experimental observations of lvy flights of photons are based on synthetic , engineered , _ spatial _ inhomegeneities . 
as we shall see in the following sections , in spatially _ homogeneous _ resonant atomic vapours , the distance a photon will travel between two scattering eventsdepends not only on the scatterers density ( constant ) but also on the photon s frequency , so that the superdiffusive transport in these natural media originates in a _spectral _ inhomogeneity instead of a spatial one .radiation trapping is the name given to the phenomenon of resonant multi - scattering of light in atomic vapours .in such a process , incident photons are first absorbed by atoms in the gas medium because their frequency is close to that of an atomic transition , .the absorption of the photons is stronger at the center of the resonance , i.e. at , and , depending on the absorption spectral shape , decays more or less sharply on either side of the resonance ( ) .the excited atoms eventually decay radiatively to the ground state and the emitted photon can be absorbed by another atom and so on .the absorption - emission process can occur at very high rates ( typically a few 100 mhz at room temperature ) in resonant atomic vapours , so that many of these processes may take place before the photons leave the cell , resulting in a time much longer than the mean atomic lifetime for the radiation to leave the vapour volume .radiation trapping is thus a mechanism that needs to be accounted for in the understanding of light propagation in stellar media , discharge lamps , gas- , liquid- as well as solid - state laser media , atomic line filters , trapped cold atoms , collision and coherent processes in atomic vapours , optical pumping of alkali - metal vapours .the concern with radiation imprisonment goes back at least as far as the 1920s , when compton and milne described theoretically the diffusion of resonant light in absorbing media .kenty interpreted experimental measurements by zemansky by taking into account the redistribution of frequency that takes place between absorption and reemission of light and ascertained that abnormally long free paths are found to be of such importance as to enable resonance radiation to escape from a body of gas faster than has usually been supposed ... it is found that , for a gas container of infinite size , [ the diffusion coefficient , the average square free path , and the average free path ] are all infinite " . in other words ,the diffusion model , based on the finiteness of the mean free path and second moment , does not rigorously apply to the problem of radiation trapping in resonant media .a few years later , holstein proposed a description through an integro - differential equation , which still constitutes the starting point of most formal descriptions of radiation trapping .let us introduce a number of physical parameters useful to describe a single emission - absorption process in an isotropic medium .let be the probability that a photon of frequency travel a distance without being absorbed ( transmission through ) . should have the limit values i ) =1 and ii ) =0 .let now be the probability , per unit length , that the photon be absorbed . to determine the relationship between and , we write the transmission of the photon through a depth plus a thin slice of width from ( ) ( see figure [ fig1 ] ) as the probability it arrives at times the probability it survives an additional distance : .\ ] ] as , by definition , then this is the beer - lambert law , verifying assumptions i ) and ii ) above . 
)filled with scattering medium , to determine fraction of particles incident at and transmitted through a thin slice of thickness .,width=226 ] the step - size distribution for a given frequency is .it means that of all the photons of frequency created at position that have arrived at position , will not survive an additional slice of depth , so that is the probability that these photons be absorbed between and .again , we write the transmission of the photons through a depth plus a thin slice of width from ( see figure [ fig1 ] ) : from eqs .[ partialt ] , [ trans ] and [ transabs ] , it follows that this expression describes the single - path length distribution for photons of frequency propagating in a scattering medium characterized by the absorption spectrum . we can determine the frequency - dependent mean free path for the photons in the scattering medium as being the mean value of the step - length distribution : equation ( [ ellnu ] ) tells us that we can tune the mean free path of a monochromatic beam of photons at frequency simply by varying the absorption coefficient at this frequency .it is intuitively reasonable , as we may see in section [ specshapes ] , that is proportional to the atomic density , so that tuning of can be achieved through control of the atomic density .if the light incident on the scattering medium is not monochromatic but has instead a spectral distribution , the resulting path - length distribution corresponds to an averaging of over the ( emission ) spectrum : the moments of this distribution are given by : with the gamma function . in the special case of an incident laser beam , considered as monochromatic with frequency ,the photons spectral distribution is and the moments of the photons steps distribution , given by are finite , as long as .particularly , for , we again find the mean free path of equation ( [ ellnu ] ) .similarly , in a vapour illuminated by any far - from - resonance spectral distribution ( ) , the absorption spectral distribution can be considered constant ( ) so that , once again , the moments are finite and the diffusion of light is normal , as it is in any non - resonant , homogeneous scattering medium such as fog or diluted milk : on the other hand , the moments given by equation ( [ momentq ] ) can be shown to diverge for all the spectral lineshapes and usually associated with atomic transitions , specifically the doppler , lorentzian and voigt lineshapes .it means that the random walk in an atomic vapour for photons with any of these spectral distributions can not be characterized by a diffusion coefficient ( ) and , in some cases , not even by a mean free path , thus qualifying propagation of photons in a resonant vapour as anomalous diffusion .clearly therefore , the relationship between the emission and the absorption spectra is a key element of the diffusion regime of photons in the vapour .this relation may evolve in the medium , due to multi - scattering , and is regulated by processes of _ frequency redistribution_. 
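as a small numerical check of the single-step law just derived (an assumed sketch with an arbitrary absorption coefficient, not tied to a particular vapour), free paths sampled from the beer-lambert survival probability are exponentially distributed, with mean free path 1/k and finite low-order moments, consistent with the expressions above for a monochromatic beam.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monochromatic photons with absorption coefficient k(nu): the survival
# probability over a depth x is exp(-k x) (Beer-Lambert), so the free paths
# are exponentially distributed with mean free path 1/k.
k = 2.5                      # absorption probability per unit length (arbitrary units)
n_photons = 1_000_000

# Inverse-transform sampling of the free path from T(x) = exp(-k x)
paths = -np.log(1.0 - rng.random(n_photons)) / k

print("sampled mean free path :", paths.mean())       # ~ 1/k
print("expected mean free path:", 1.0 / k)
print("sampled <x^2>          :", (paths**2).mean())  # ~ 2/k^2 (finite: q!/k^q with q = 2)
print("expected <x^2>         :", 2.0 / k**2)
```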
in optically thin samples of cold atoms , neither atomic collisions nor residual atomic motion ( doppler broadening smaller than the natural line width ) are sufficient to significantly change the frequency of scattered photons before these escape the sample , so that no frequency redistribution takes place and the emission spectrum remains unchanged ( monochromatic laser beam for example ) , distinct from the atomic absorption spectrum .partial frequency redistribution ( pfr ) can be observed in stellar atmospheres and plasmas and in optically thick cold atomic samples , where the accumulation of tiny frequency redistribution due to residual motion of the atoms can become significant due either to a small mean free path ( resonant light ) or to large sample dimensions .in the complete frequency redistribution (cfr ) case , the frequencies of the scattered photons are fully decorrelated from the frequencies of the incident ones , for instance in situations where a high collision rate in the vapour destroys correlations between photon absorption and reemission events , i.e. where , with the lifetime of the excited state ( cfr in the atoms frame ) . in near - resonant thermal vapours ,the frequency of the emitted photon is statistically independent of the frequency of the absorbed one through averaging over the maxwellian velocity distribution ( due to the doppler effect ) .this ( multiple ) resonant scattering process very efficiently redistributes the photons frequencies , leading to cfr in the laboratory frame and to coincidence of the emission and absorption spectra ( ) .the single - step length distribution ( equation ( [ single ] ) ) is thus : and the mean free path of the photons in the vapour becomes said above , the propagation of light in a scattering medium can be described as a random walk of photons within the medium .when the medium is near - resonant , the propagation of light is perturbed by the radiation trapping process : a photon incident in the medium ( a laser photon for example ) can be repeatedly absorbed and re - emitted in the medium .the multiple scattering process is now equivalent to a random walk in real space with frequency - dependent steps length , so that the spectral distribution of the photons , aided by the frequency redistribution process , is the key element determining their diffusion regime . in the case of complete frequency redistribution ,the frequency of the emitted photon is totally independent of the frequency of the absorbed one and is exclusively determined by the transition s spectral shape , be it gaussian , lorentzian or voigt . 
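the effect of complete frequency redistribution can be mimicked with a short monte carlo sketch: at each scattering event a fresh detuning is drawn from a lorentzian line, and the step length is exponential with the corresponding absorption coefficient. the line shape, units and sample sizes below are illustrative assumptions; the point is only that the empirical moments fail to converge, as expected for a heavy-tailed step distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Complete frequency redistribution with a Lorentzian line: each scattering
# event draws a fresh detuning from the line profile, and the step length is
# exponential with rate k(nu) proportional to that same profile.
k0 = 1.0                                   # line-centre absorption coefficient (arbitrary units)
n = 2_000_000

detuning = rng.standard_cauchy(n)          # Lorentzian-distributed detunings (unit half-width)
k_nu = k0 / (1.0 + detuning**2)            # Lorentzian absorption profile
steps = -np.log(1.0 - rng.random(n)) / k_nu   # exponential free path at each frequency

# For a Lorentzian line the tail of P(x) decays roughly as x^(-3/2), so neither
# the mean nor the variance converges: the running averages keep growing as
# more samples are included.
for m in (10_000, 100_000, n):
    print("n = %8d   <x> = %10.2f   <x^2> = %14.1f"
          % (m, steps[:m].mean(), (steps[:m]**2).mean()))
```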
as aforementioned in section [ secprob ] and as we will show in the next section , the single - step length distribution decays asymptotically as ( ) more slowly than .this slow decay characterizes broad ( long tail ) steps distributions , for which all moments with diverge .thus , as first signalised by kenty and holstein , the assumption of a mean free path for the photons is not fulfilled in resonant vapours and a diffusion - like description of radiation trapping is not adequate .the relation with long - tailed distributions , studied in the 1930s by lvy , and the theoretical labeling of incoherent radiation trapping as more a system exhibiting superdiffusive behavior , were made by pereira __ and further studied in the early 2000 s .experimental confirmation followed a few years later .we may now focus on the spectral shapes more particularly associated with resonant light - atom interactions .they are given by : with the atomic density , the resonant cross - section for atoms at rest and the normalized spectral shape , specific of the light - atom interaction regime . , where is the wavelength and the ratio of degeneracy factors .the homogeneous lorentz line shape characterizes systems with homogeneous broadening , due , for example , to spontaneous radiative decay or collisions , and is given by : with , the photon s detuning in relation to the transition s frequency . has a full width at half maximum ( fwhm ) and an asymptotical decay i.e. with . as mentioned in section [ secprob ] , this characterizes a broad distribution with all moments diverging , i.e. not even a mean free path can be determined for the photon leaps . the doppler line shape , reflecting the velocity distribution of doppler shifts , is given by : with the reduced frequency . is the most probable speed of the atoms of mass at temperature and is boltzmann s constant .the fwhm of is .it can be shown that : ) ) and gaussian ( black , equation ( [ doppler ] ) ) shapes of same width ( fwhm ) .( b ) lorentzian ( blue , width ) , doppler ( black , width ) and resulting voigt ( red , equation ( [ voigt ] ) , parameter ) lineshapes for natural linewidth and doppler broadening , i.e. voigt parameter .,width=302 ] in figure [ figglv](a ) a doppler ( gaussian ) and a lorentzian lineshape are drawn with same amplitude and width in order to make easier the comparison between their wings .the lvy " behavior for the lorentz spectral shape ( ) is more dramatic than for the doppler one ( ) because the wings of the lorentzian distribution decrease more slowly than the doppler ones and therefore go farther : the probability that a photon be re - emitted with a very large detuning and therefore travel a very long distance before being absorbed , is higher in the lorentzian than in the doppler case .the voigt spectral distribution is a convolution of the two former ones : with the voigt parameter .it characterizes the interaction with light of atoms individually submitted to homogeneous , lorentzian decaying processes ( spontaneous emission , collisions , .. ) and moving around according to gaussian maxwell - boltzmann velocity distribution . as the wings of the doppler lineshapetend very rapidly to zero , the asymptotic decay of the overall voigt lineshape is the same as the lorentzian one and the steps distribution for a voigt profile is also a broad distribution with . 
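the statement that the voigt wings are lorentzian can be checked numerically. the sketch below uses scipy's voigt_profile routine (available in recent scipy releases) with arbitrary widths whose ratio is of the same order as the voigt parameter discussed here; the specific values are assumptions chosen for illustration.

```python
import numpy as np
from scipy.special import voigt_profile

# Voigt profile = convolution of a Gaussian (Doppler) and a Lorentzian
# (natural/collisional) line.  Far in the wings the Gaussian contribution is
# negligible and the Voigt profile decays like the Lorentzian, ~ 1/detuning^2.
sigma = 1.0          # Gaussian (Doppler) standard deviation, arbitrary units
gamma = 0.14         # Lorentzian half-width; ratio of the order of 0.1, as above

for x in (2.0, 5.0, 20.0, 100.0):
    v = voigt_profile(x, sigma, gamma)
    lorentz = gamma / (np.pi * (x**2 + gamma**2))
    gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    print("detuning %6.1f   voigt %.3e   lorentzian %.3e   gaussian %.3e"
          % (x, v, lorentz, gauss))
# Beyond a few Doppler widths the Voigt and Lorentzian columns coincide,
# which is why the step-length distribution inherits the Lorentzian tail.
```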
in figure [ figglv](b )are shown the lorentzian , doppler and voigt spectral distributions for a two - level atoms system of natural width ( excited state lifetime ) and doppler width .for example , this doppler broadening would be the amount suffered by a vapour of rubidium atoms at a temperature of approximately 1.5 k .the resulting parameter of the voigt distribution is thus of the order of 0.14 .the parameter basically compares the natural and the doppler line widths , and , respectively ( equation ( [ voigta ] ) ) . in what is called the doppler regime ,the doppler broadening of the lineshape is much larger than the homogeneous linewidth of the atomic transition and the lineshape tends to the doppler one ( small ) . as the parameter increases , the doppler broadening of the lineshape decreases in relation to the natural linewidth and the lineshape ultimately tends to the lorentz one ( cold atoms , for example ) .the configuration depicted in figure [ figglv](b ) corresponds to an intermediate case . as increases from to , the steps length distribution evolves from doppler- to lorentz - like , as can be seen in figure [ jumpsize ] .the probability distributions in figure [ jumpsize ] are actually graphed as functions of a dimensionless jump size , or opacity , . .dot blue : voigt , .dot - dash magenta : voigt , .,width=302 ]in cold atoms samples , the spectral lineshape is lorentzian with natural linewidth ( doppler broadening ) , but the scattering of photons is fundamentally elastic and frequency redistribution is insignificant , so that the hypothesis of lvy flights of photons in such a medium does not apply to a first approximation . up to now , the only atomic system in which photons lvy flights have been directly measured ( single - step length distribution with ) is a warm thermal vapour of rubidium atoms illuminated with a resonant laser .the experimental set - up is shown schematically in figure [ setup ] .photons with a spectral distribution are collimated and directed to a transparent cell containing an homogeneous atomic vapour of atoms .photons scattered by the atoms and escaping the cell through a given solid angle are detected by a ccd camera , which forms images of the spatial profile of the fluorescence in the cell .analysis of these images enables inference of the distribution of distances travelled by the incident photons in the scattering vapour .one major difficulty in this system , as in any multiple - scattering medium , is to infer the _ single - step _ distribution from the observed result of multiple scattering , because multiple scattering erases the information of the photon s initial position .let us detail a little the experimental aspects of this achievement as an illustration of the challenges faced when looking for signatures of microscopic processes in multiple scattering phenomena .obviously , the higher the atomic density , the smaller the spatial scale allowing observation of the asymptotic slope ( - ) ( see figure [ jumpsize ] ) .it may then seem tempting to work with a very high atomic density , but this would increase the mean number of scattering events for the photons before they leave the cell and are detected , blurring the single - step contribution .the adopted compromise consists in slightly increasing the atomic density , so as to guarantee a reasonable rate of scattering events on a reasonable scale and to correct for the potentially perturbative multiple - scattering effects .the order of magnitude of the resonant cross - section for 
alkali atoms is a few m .their natural linewidth is s . in a warm vapour( between 300 and 350 k ) , the doppler broadening of the atomic spectral response is s , i.e. the parameter ( equation ( [ voigta ] ) ) falls in the range of to . in this case , the asymptotic regime would be reached for optical densities above ( see figure [ jumpsize ] ) , i.e. in the required conditions of relatively low density ( a few atoms m ) , would require observation lengths larger than a few tens or hundreds of centimeters .the spatial scale available for experimental investigation is typically a few centimeters ( let us say , up to 10 cm , the length of a quite long optical cell ) , so what can actually be measured is the local slope ( see figure [ jumpsize ] , bottom ) , which tends to with increasing optical density . , with ( see ) , width=302 ] figure [ fluo ] shows an image of the fluorescence in the 7cm - long optical cell ( see figure [ setup ] ) , as detected by the ccd camera .a bright point in the image indicates the position where a photon has been scattered towards the collection optics and the ccd camera .the brighter the spot , the higher the fluorescence intensity at the corresponding position .a thin slice of the image , a few tens of pixels wide along the cell s axis , is selected : it corresponds to the volume directly illuminated by the incident photons and where all the first - order scattering events take place ( first scattering of an incident photon by an atom in the vapour ) . above and below the central slice, the photons have necessarily been scattered at least once in the vapor and contribute to the multiply scattered signal . in figure[ exp ] the intensity in the central slice is plotted as a function of the position along the cell s axis , after correction for the multiple scattering contribution ( see ) .the profile is plotted in log - log scale , in which a power law is displayed as a straight line .it is here well fitted by a power law , with . in any case, the local slope is always smaller than 3 , i.e. no normal diffusion model can describe the propagation of light in such a medium .another difficulty arises if the cfr condition is to be fulfilled ( if it is not , complex frequency redistribution functions have to be determined ) .the frequency distribution of the impinging photons should therefore be the same as the absorption spectrum of the atoms in the target cell , i.e. a voigt one with same parameter as the vapour .while this condition is not completely fulfilled in , an ingenious set of configurations ( basically , the incident photons are prepared so as to result from a growing number of frequency - redistributing scattering events ) allows the observation of the thermalisation of light , i.e. of the evolution from partially to almost completely frequency - redistributed incident photons , with a measured coefficient always , decreasing and tending to the expected value of .we have shown how the propagation of light in resonant atomic vapours can exhibit a superdiffusive behavior , characterized by rare , long jumps between two scattering events .when present , these lvy flights " dominate the dynamics of the system and may obliterate long sequences of apparently normal , gaussian - like behavior ( gaussian distribution of jump lengths around a mean value ) .the physical origin of these rare events is the fact that , due to frequency redistribution , a photon have a very small but finite probability of being scattered with a frequency far from resonance , i.e. 
of flying a very long distance in the medium , turned almost transparent for this photon .the non - negligible probability of these long jumps lifts " the tail of the jump - size distribution : such long - tail " distributions decay asymptotically as power laws , characteristic of scale - free phenomena .lvy flights occur for distributions decaying slower than , for which the second moment , key element of normal statistics , diverges .lvy flights have been observed in many other systems but emphasis has been placed on the atomic physics point of view , in which resonant atomic vapours represent the basic object of research .it is interesting that such a simple system , a paradigm of the most basic light - matter interaction mechanisms , still stirs curiosity and holds surprises .atomic vapours actually represent small , tunable table - top systems where , depending on the conditions ( temperature , atomic density ) , different diffusion regimes can be studied , as well as transitions between them .the limited spatial extension of experimental setups actually imposes a cutoff on the jump - length distribution .it has been shown that , although such a _ truncated lvy flight _ has finite variance , it may take a huge number of individual events for the resulting multi - scattering process to converge to a gaussian one .the measurement of the segment of the distribution of individual jump lengths that is experimentally accessible is not affected by this cutoff restriction and the number of scattering events in a typical atomic vapour cell or lamp is in general not sufficient to characterize the diffusion regime as a normal one .it may however affect the actual diffusion regime in _astrophysical samples_. however restricted the example treated in this article may seem , it illustrates a subject belonging to the much more general topics of nonlinear dynamics and non - normal statistics , at the frontier of current knowledge .the advances in theoretical tools and experimental techniques allow us to extend our comprehension of the physical world , leading to the exploration of qualitatively new concepts .these are likely to lead to improved control over the systems in question .99 einstein a. , 1905 , _ annalen der physik _ , * 322 * , 549 .brockmann d. and geisel t. , 2003 , _ phys ._ , * 90 * , 170601 .lvy p. , 1937, _ thorie de laddition des variables alatoires _ ( gauthier - villiers ) .stanley h.e ._ et al . _ , 1996 , _ physica a _ , * 224 * , 302 .west b.j . anddeering w. , 1994 , _ phys . rep ._ , * 246 * , 1 .dieterich p. , klages r. , preuss r. , and schwab a. , 2008 , _ pnas _ , * 105 * , 459viswanathan g.m . ,buldyrev s.v ., havlin s. , da luz m.g.e . , raposo e.p . andstanley h.e ., 1999 , _ nature _ , * 401 * , 911 .bartumeus f. , catalan j. , fulco u.l ., lyra m.l . , and viswanathan g.m . , 2002 ,_ , * 88 * , 097901 .nathan r. , katul g.g ., horn h.s . ,thomas s.m ., oren r. , avissar r. , pacala s.w . , and levin s.a ., 2001 , _ nature _ , * 418 * , 409 .ramos - fernandez g. , mateos j.l ., miramontes o. , cocho g. , larralde h. and ayala - orozco b. , 2004 , _ behav ._ , * 55 * , 223 . bartumeus f. , da luz m.g.e . , viswanathan g.m . , and catalan j. , 2005 , _ ecology _ , * 86 * ,viswanathan g.m . ,raposo e.p ., da luz m.g.e . , 2008 , __ , * 5 * , 133 . humphries n.e . , queiroz n. , dyer j.r.m . , pade n.g, musyl m.k . ,schaefer k.m ., fuller d.w ., brunnschweiler j.m . ,doyle t.k . ,houghton j.d.r . , hays g.c ., jones c.s . , noble l.r . ,wearmouth v.j ., southall e.j . 
multiple scattering is a process in which a particle is repeatedly deflected by other particles. in the overwhelming majority of cases, the ensuing random walk can successfully be described by gaussian, or normal, statistics. however, like a growing number of other apparently inoffensive systems, the diffusion of light in dilute atomic vapours eludes this familiar interpretation and exhibits superdiffusive behavior. as opposed to normal diffusion, in which the particle executes steps in random directions with lengths that vary only slightly around an average value (like a drunkard whose next move is unpredictable but certain to within a few tens of centimeters), superdiffusion is characterized by sudden, abnormally long steps (lévy flights) interrupting sequences of apparently regular jumps; although very rare, these long steps determine the whole dynamics of the system. the formal statistical tools to describe superdiffusion already exist and rely on stable, well-understood distributions. as scientists become aware of, and more familiar with, this non-orthodox way of interpreting random phenomena, new systems are discovered or re-interpreted as following lévy statistics. the propagation of light in resonant atomic vapours is one such system: it has been studied for decades and has only recently been shown to be the scene of lévy flights.
the field of vacuum arcs is very old. it is complicated by the large number of variables, their wide parameter ranges and the number of mechanisms involved, compared with the limited range of measurements that can be made on individual arcs. this work began with the goal of understanding the production of dark currents in rf cavities, where x-ray production would interfere with a planned particle-beam experiment. we found, however, that by looking cleanly at the field emission from asperities at the breakdown limit, we could directly study the environment of the pre-breakdown surface. the initial effort produced a model of the breakdown trigger based on coulomb explosions, and subsequent work has examined the properties of arcs, primarily by assuming that unipolar arcs are the dominant mechanism. we divide the problem of vacuum arcs into four parts: the trigger, plasma initiation, the plasma growth phase, and plasma damage of the metallic surface. at all stages the driving mechanism seems to be the plasma and surface electric fields; ohmic heating of the surface is required neither for the trigger nor for the plasma evolution. we believe that coulomb explosions and unipolar arcs can explain much of the behavior of these arcs. our measurements of dark currents and x rays from rf cavities showed that rf structures operate in a mode where the dark currents depend on the surface electric field through a very steep power law, independent of the state of conditioning, frequency or stored energy. when the fowler-nordheim currents are plotted in this way, we find that this behavior is characteristic of field emission from a surface at local fields on the order of gv/m. at such fields the electric tensile stress $\epsilon_0 E^2/2$ (with $\epsilon_0$ the permittivity of free space) is comparable to the tensile strength of the material, and the material is subject to fatigue failure driven by the rf oscillations. [figure caption fragment: ..., for a work function of 4 ev; (c) molecular dynamics simulation of maxwell stresses pulling copper asperities apart, which would trigger the arc.] while the conventional wisdom is that arc triggers are due to ohmic heating in the surface, asperities of the required dimensions have not been found, and arc triggers occur randomly, with no time to warm up. coulomb explosions, augmented by fatigue, do not significantly constrain the breakdown-site geometry. note that while the data show that the breakdown rate rises as a very high power of the surface field, this behavior is compatible with (a) ohmic heating, (b) failure due to fatigue stress and (c) electromigration, so it is inconclusive. the defining property of a vacuum arc is that it can occur in vacuum. many experiments over the years have shown that arcs can occur at very large gaps, implying a single-surface phenomenon. these conditions imply that field-emitted beams must ionize material fractured off the surface by electric tensile stress. our oopic pro simulations show that efficient ionization of material close to the cathode requires a total mass near the field emitter equivalent to about half a monolayer. this density of gas is sufficient to accommodate a very dense plasma. the ions produced are driven from their production point by space-charge fields, which produce a streaming, almost mono-energetic ion current that hits the surface, producing a variety of effects.
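to make the stress argument above concrete, here is a minimal python sketch; the macroscopic field of 50 mv/m, the enhancement factor of 150 at an asperity and the ~200 mpa tensile strength of copper are assumptions chosen for illustration, not values taken from the measurements. it simply compares the electrostatic tensile stress $\epsilon_0 E^2/2$ at the enhanced local field with the strength of the material.

```python
EPS0 = 8.854e-12          # permittivity of free space, F/m

def maxwell_tensile_stress(e_field_v_per_m):
    """Electrostatic tensile stress eps0 * E^2 / 2 acting on a conductor surface (Pa)."""
    return 0.5 * EPS0 * e_field_v_per_m**2

# Assumed numbers, for illustration only.
surface_field = 50e6       # macroscopic surface field, 50 MV/m
beta = 150.0               # assumed local field-enhancement factor at an asperity
tensile_strength_cu = 2e8  # ~200 MPa, rough tensile strength of copper

local_field = beta * surface_field
stress = maxwell_tensile_stress(local_field)
print(f"local field      : {local_field/1e9:.1f} GV/m")
print(f"Maxwell stress   : {stress/1e6:.0f} MPa")
print(f"tensile strength : {tensile_strength_cu/1e6:.0f} MPa")
print("stress exceeds strength:", stress > tensile_strength_cu)
```

with these assumed numbers the local field reaches a few gv/m and the electrostatic stress is of the same order as the tensile strength, which is the regime in which fatigue failure of the surface becomes plausible.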
once some plasma exists above the surface, the surface electric field is set by both the applied electric field and the field produced by the sheath. as the plasma density $n_e$ increases, simulations show that a number of things happen: 1) the sheath potential $\phi$ stays roughly constant, since this potential is a function of the overall net charge, which evidently remains constant; 2) the surface electric field increases roughly as $\phi/\lambda_D$, i.e. as $\sqrt{n_e}$ at fixed potential and temperature, where $\lambda_D$ is the debye length; 3) the increase in the surface electric field further increases the field emission current and the number of emitters. we find that the codes show exponentially increasing arc densities, and we believe that the arc eventually becomes a non-debye plasma. our model of the development of rf arcs is shown in fig. 2; we expect dc arcs to follow a similar path in parameter space. oopic pro simulations show that the sheath potential is on the order of 75 volts during field emission; however, this is evaluated for a small, roughly spherical plasma in contact with the surface and is due to the local ionization. this potential decreases in all directions, so the model should be consistent with the lower burn voltages seen in small-gap arcs. in accelerators there seem to be two limiting cases for arc interactions with their environment, which we have described as _killer_ arcs and _parasitic_ arcs: killer arcs are able to directly short out the driving potential and extinguish themselves, while parasitic arcs are unable to dispose of sufficient energy to perturb the driving field and can survive for as long as external potentials are available. we find that the growth time of the plasma is consistent with measurements we can make of fully formed arcs using x rays; however, the computer simulations stop before experimentally measurable currents are produced, so some extrapolation is required. surface damage by arcs can take many forms, depending on the parameters of the plasma and the properties of the surface. we find that the dominant interaction is self-sputtering by plasma ions streaming out of the plasma. as the plasma evolves it becomes denser and the debye length shortens, increasing the surface field, the field emission currents and the ion flux (fig. 1b). we have calculated the self-sputtering rates for high surface fields, high surface temperatures and varying grain orientation, with the results shown in fig. 2. although the temperature of the plasma is very low, the potential induced by ionization produces a very high plasma pressure for the high-density plasma.
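a minimal python sketch of the two scalings invoked above, with illustrative assumed numbers (the 75 v sheath potential quoted from the simulations, plus assumed electron and ion temperatures of 5 ev and 1 ev): at fixed sheath potential the debye length shrinks with density, so the estimated surface field $\sim \phi/\lambda_D$ grows roughly as $\sqrt{n_e}$, and even a low-temperature plasma exerts a large thermal pressure once it is dense.

```python
import math

EPS0 = 8.854e-12      # F/m, permittivity of free space
E_CHARGE = 1.602e-19  # C (also the number of joules per eV)

def debye_length(n_e, t_e_ev):
    """Electron Debye length for density n_e (m^-3) and temperature T_e (eV)."""
    return math.sqrt(EPS0 * t_e_ev * E_CHARGE / (n_e * E_CHARGE**2))

def plasma_pressure(n, t_ev):
    """Thermal pressure n * k_B * T of one species, with T given in eV (returns Pa)."""
    return n * t_ev * E_CHARGE

sheath_potential = 75.0  # V, the order quoted for the simulated sheath
t_e, t_i = 5.0, 1.0      # eV, assumed electron and ion temperatures

for n_e in (1e22, 1e24, 1e26):
    lam_d = debye_length(n_e, t_e)
    e_surface = sheath_potential / lam_d  # field ~ potential dropped over ~ one Debye length
    p = plasma_pressure(n_e, t_e) + plasma_pressure(n_e, t_i)
    print(f"n_e = {n_e:.0e} m^-3: lambda_D = {lam_d:.1e} m, "
          f"E_surf ~ {e_surface:.1e} V/m, p ~ {p/1e6:.2f} MPa")
```

the crude estimate reproduces the trend described in the text: gv/m-scale surface fields and mpa-scale pressures appear at high densities even though the plasma temperature is only a few ev.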
after the surface is melted, the plasma pressure $p = n k_B T$ pushes on the surface, the electrostatic field pulls on the plasma, and the surface tension tends to flatten the surface. the combination of a pulling (or pushing) force on the surface with the surface tension produces an instability of an initially flat surface, resulting in a spinodal decomposition: the formation of ripples on the surface. the dimensions of these ripples can then be used to estimate the parameters of the plasma. ripples can also be produced by an oblique flow of ions carrying momentum to the surface, in the same way as wind produces ocean waves. thus ripples of two types can be produced on a melted surface. the high plasma pressure can produce sufficient liquid motion to generate particulates with enough mass and volume to uniformly cover the surface with secondary breakdown sites. the intense flux of low-energy ions hitting the surface is the primary mechanism by which the plasma affects the surface. in addition to the deposited heat and pressure, self-sputtering helps to determine the evolution of the plasma by setting the atomic fluxes back into the plasma. anders has shown that self-sputtering coefficients significantly above 1 (we assume 10) can produce a self-sustaining arc. since the surface environment beneath the plasma is poorly understood, we have used molecular dynamics to estimate the self-sputtering coefficients as a function of surface temperature, surface electric field and grain orientation.
* the effect of temperature is the simplest. as the temperature approaches the melting point, the surface binding energy decreases and the sputtering yield increases. because surface atoms are more mobile slightly below the melting point, this increase occurs slightly below the bulk melting temperature, as shown in fig. 3b.
* the tensile stress induced by a strong electric field tends to pull atoms out of a surface. in field evaporation, fields on the order of gv/m are sufficient to pull individual atoms out of a polished surface. we find, using molecular dynamics, that fields on the order of 1-3 gv/m, which we expect beneath the plasma, are sufficient to significantly modify the low-temperature self-sputtering coefficient.
* surface grain orientation also seems to affect self-sputtering yields. recent studies of rf systems have shown surface damage that appears to depend on grain orientation. we have calculated the sensitivity of the self-sputtering coefficient to grain orientation and find that the effect can be quite significant at low energies.
although we, like others, do not find whiskers on our surfaces, we do find many cracked regions and many sharp-edged craters. fig. 3 shows an example of an array of cracks consistent with the surface cooling by about 1000 deg. when we model the field pattern at crack junctions, we find that the junctions have a field enhancement on the order of 100, which is consistent with the values we measured from dark currents. this enhancement would be multiplied by a factor determined by the local shape of the surface, so field enhancements as large as 1000 are in principle possible.
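a tiny numerical sketch of how the enhancement factors quoted above stack up; the applied field of 50 mv/m and the extra local-shape factors are assumptions for illustration, while the crack-junction enhancement of roughly 100 is the value quoted in the text. even a modest additional shape factor brings the local field into the gv/m range where the field-evaporation and tensile-stress arguments discussed earlier apply.

```python
applied_field = 50e6        # V/m, assumed macroscopic rf surface field
crack_junction_beta = 100   # field enhancement at crack junctions (order quoted above)

# An additional factor from the local shape of the surface multiplies the
# crack-junction enhancement; total enhancements of ~1000 are then plausible.
for shape_factor in (1, 3, 10):
    beta_total = crack_junction_beta * shape_factor
    local_field = beta_total * applied_field
    print(f"total beta = {beta_total:4d} -> local field ~ {local_field/1e9:.1f} GV/m")
```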
in this model the area of the emitters would be only a few nm$^2$, but a large number of them should be produced. this analysis is described in fig. 3. data showing large-area emitters can be accommodated by assuming that many emitters are detected. note that if the current is proportional to a very high power of the field, small systematic errors in measuring the electric field would produce much larger errors in the measured emitter area, particularly if a proper two-dimensional (enhancement, area) fit of the data were made. the conventional wisdom is that field emitters heat, vaporize and ionize through ohmic heating by the field emission currents. with corners or conical geometries, however, the field-emitting region is very small (on the order of nm) and the thermal diffusion volume on nanosecond timescales is much larger, so the heat is quickly removed and there is no significant temperature increase. this effectively rules out ohmic heating as a breakdown trigger in many cases. [figure caption fragment: ... dependence in many data sets.] we find that the spectrum of field enhancements for many different kinds of experiments has an exponential form for the density of sites. if we assume that the total number and spectrum of particulates (breakdown sites) are proportional to the energy in the arc, it is possible to argue that an equilibrium enhancement factor will be produced for a given system, with the equilibrium value determined by the energy of the arc. we have shown that this behavior is consistent with scaling laws such as the kilpatrick limit. arcing phenomena are a special case of plasma-surface interactions, a field under active study for many reasons. we believe that breakdown in high-gradient accelerators, surface defects from e-beam welding, laser ablation, tokamak rf limits, micrometeorite impacts, and some power-grid failure modes should share many common mechanisms, and it should be useful to identify and study them more systematically. the study of arcs is complicated because the number of mechanisms involved in any experiment usually exceeds the number of experimental variables. we believe that the basic assumptions of arc models should be carefully examined, as the conventional wisdom over-constrains the problem. for example, starting with dyke et al., the conventional wisdom has been that breakdown is initiated by ohmic heating of whiskers or asperities of comparable geometries. the whisker geometry is unique because the volumes in which ohmic heating and thermal diffusion occur are comparable; with all other geometries the heating must be much less, and a different mechanism of plasma production, such as coulomb explosions, more compatible with the shapes actually seen in sem images, seems to be required. there are a number of general questions. although arcing occurs in many different fields, it is interesting to see how much of arc behavior is common to all arcing applications. beyond that, there are a number of more specialized questions that may become accessible. how do non-debye plasmas interact with materials? what are the damage mechanisms? how does the presence of preexisting plasmas and strong external magnetic fields in three dimensions affect the triggering and evolution of an arc? how are the equilibrium field enhancements and the gradient limit affected by these variables? what is the effect of dense neutral gas on the arc? to what extent can surface treatment mitigate or modify gradient limits?
what determines the time structure of these arcs? these questions will test modeling. we have developed and modeled a picture of rf arcs in accelerators in which the basic mechanisms are coulomb explosions and unipolar arcs. one of the problems with this field, however, is that there frequently are alternative mechanisms that can produce satisfactory comparisons with a given set of data. we believe it is important both to increase the precision and self-consistency of the models and to extend them to the widest possible range of phenomena, as the most useful test of their value. the work at argonne is supported by the u.s. department of energy office of high energy physics under contract no. de-ac02-06ch11357. the work of tech-x personnel is funded by the department of energy under small business innovation research contract no. de-fg02-07er84833.

r. f. earhart, phil. mag. *1*, 147 (1901).
lord kelvin, phil. mag. *8*, 534 (1904); also _mathematical and physical papers, vol. vi, voltaic theory, radioactivity, electrions, navigation and tides, miscellaneous_, cambridge university press, cambridge (1911), p. 211.
d. alpert, d. a. lee, e. m. lyman, and h. e. tomaschke, j. vac. sci. technol. *1*, 35 (1964).
j. norem, v. wu, a. moretti, m. popovic, z. qian, l. ducas, y. torun, and n. solomey, phys. rev. stab *6*, 072001 (2003).
a. moretti, z. qian, j. norem, y. torun, d. li, and m. zisman, phys. rev. stab *8*, 072001 (2005).
z. insepov, j. h. norem, and a. hassanein, phys. rev. stab *7*, 122001 (2004).
a. hassanein, z. insepov, j. norem, a. moretti, z. qian, a. bross, y. torun, r. rimmer, d. li, m. zisman, d. n. seidman, and k. e. yoon, phys. rev. stab *9*, 062001 (2006).
z. insepov, j. norem, t. proslier, h. huang, s. veitzer, and s. mahalingam, arxiv:1003.1736.
a. e. robson and p. c. thonemann, proc. phys. soc. *73*, 508 (1959).
f. r. schwirzke, ieee trans. on plasma sci. *19*, 690 (1991).
a. anders, s. anders, m. a. gundersen, and a. m. martsinovskii, ieee trans. on plasma sci. *23*, 275 (1995).
d. l. bruhwiler, r. e. giacone, j. r. cary, j. p. verboncoeur, p. mardahl, e. esarey, w. p. leemans, and b. a. shadwick, phys. rev. stab *4*, 101302 (2001).
b. jüttner, m. lindmayer, and g. düning, j. phys. d *32*, 2537 (1999).
although vacuum arcs were first identified over 110 years ago, they are not yet well understood. we have developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way, arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. we use simple pic codes for modeling plasmas, molecular dynamics for modeling surface breakdown and surface damage, and mesoscale surface thermodynamics and finite-element electrostatic codes to evaluate surface properties. since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing environments (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and to concentrate on common mechanisms. while the mechanisms can be comparatively simple, modeling can be challenging.